Prosecution Insights
Last updated: April 19, 2026
Application No. 18/380,946

SYSTEMS AND METHODS FOR ACCESSING AND STORING SNAPSHOTS OF A REMOTE APPLICATION IN A DOCUMENT

Status: Non-Final OA (§103)
Filed: Oct 17, 2023
Examiner: KHAN, SHAHID K
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Palantir Technologies Inc.
OA Round: 5 (Non-Final)

Predictions: Grant Probability 74% (Favorable), rising to 90% with an examiner interview; 5-6 total OA rounds; 2y 11m to grant
Examiner Intelligence

Career Allow Rate: 74% (287 granted / 389 resolved), +18.8% vs TC avg (above average)
Interview Lift: +15.7% higher allowance among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 11m average prosecution; 31 applications currently pending
Career History: 420 total applications across all art units
Statute-Specific Performance

Statute | Rate  | vs TC avg
§101    | 10.0% | -30.0%
§102    | 16.5% | -23.5%
§103    | 55.7% | +15.7%
§112    | 15.2% | -24.8%

TC-average baselines are estimates; based on career data from 389 resolved cases.
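The headline figures above are mutually consistent and can be re-derived from the raw counts. A quick sanity check (the TC-average baselines are inferred by back-solving the reported deltas; they are not figures stated in the report):

```python
# Sanity-check of the examiner statistics reported above.
granted, resolved = 287, 389

# Career allowance rate: 287 / 389 = 73.8%, displayed as 74%.
allow_rate = 100 * granted / resolved
assert round(allow_rate) == 74

# The "+18.8% vs TC avg" delta implies a Tech Center baseline of about
# 73.8 - 18.8 = 55.0% (an inference, not a figure stated in the report).
print(f"allow rate {allow_rate:.1f}%, implied TC average {allow_rate - 18.8:.1f}%")

# Statute-specific rates and their reported deltas vs the TC average.
# Subtracting each delta recovers the implied per-statute TC baseline.
statute_stats = {"§101": (10.0, -30.0), "§102": (16.5, -23.5),
                 "§103": (55.7, +15.7), "§112": (15.2, -24.8)}
for statute, (rate, delta) in statute_stats.items():
    print(f"{statute}: {rate:.1f}% (implied TC avg {rate - delta:.1f}%)")
```

Notably, every statute-specific delta back-solves to the same 40.0% baseline, which suggests the underlying chart compared each rate against a single blended TC average rather than per-statute averages.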

Office Action (§103)
DETAILED ACTION

This communication is in response to the after-final amendment filed 2/3/26 in which claims 2, 11, and 20 were amended. Claims 2-21 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/3/26 has been entered.

Response to Arguments

Applicant argues: Applicant respectfully submits that the cited references do not teach or suggest at least some limitations cited in amended claim 2. For example, the Office Action asserts that Sterkel discloses "the artifact including an identifier of a software application providing the one or more data objects...the identifier of the software application configured to be used in a subsequent data request" (¶ 63: "When a user hovers over or makes a preliminary selection of the marker, the moving map engine may cause display of a name or summary of the content item represented by the marker. When a user touches, clicks on, or selects the content item [subsequent data request], the moving map engine may cause display of additional information about the content item, or part or all of the content item itself. 
The name information may be stored locally by the moving map engine, in a database that includes content item names, geographical locations, and storage locations, and the content item itself may be retrieved from a content server identified by the storage location for the content item [identifier of the software application]") (the Office Action at pages 6-7, emphasis in original document and added) and further discloses "transmitting the subsequent data request to the software application based at least in part on the identifier of the software application; (but see Sterkel ¶ 31 ("For example, server logic running on the client machine may use an identity of the selected item to query [subsequent data request] another machine that is onboard or offboard the vehicle, and utilize the results of the query to display additional information. Also, logic running on the other machine may acquire or retrieve requested content from storage on the other machine or from various network-connected storage locations."))" (id., at pages 7-8, emphasis in original document and added.) As such, the Office Action has recited different information between the first limitation and the second limitation on the teaching of "subsequent data request", and the Office Action skipped "the identifier of the software application" in the second limitation entirely. Additionally, the present application discloses "the artifact can include, for example, an identifier for the application 312 that provides the data objects 332" (para. [0046]) and "if, as a result of the manipulation, more data objects are to be displayed via interface 352a, interface module 354 can also provide a request for the additional data objects to application 312. Interface module 354 can also acquire an editing of the data objects (e.g., editing of the attribute(s) and/or propertie(s) of the data objects) via application data interface 352a, and synchronize the editing with server 310" (para. [0047]). Therefore, the cited references do not disclose at least "receiving a first input associated with at least one of the one or more data objects; retrieving the identifier of the software application from the artifact; transmitting the subsequent data request to the software application based at least in part on the identifier of the software application" as recited in amended claim 2.

Applicant's arguments with respect to claims 2, 11, and 20 have been fully considered but are moot because the Kapler and Sterkel references have been remapped in combination with a new reference to Harris to teach the claimed invention and the amended limitations. Examiner respectfully directs Applicant to the detailed rejection below for further explanation.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 2-9, 11-18, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Kapler (US 2006/0238538 A1; published Oct. 26, 2006) in view of Sterkel (US 2012/0232791 A1; published Sep. 13, 2012) and Harris (US 2014/0280103 A1; published Sep. 18, 2014).

Regarding claim 2, Kapler discloses [a] method comprising: accessing one or more data objects in a geographical area; (¶ 30 (“Referring to FIG. 3, user planning and interaction with the tool 12 is facilitated through two main components, namely the timeline data 16 and the visualization representation 10 (e.g. a 2D or 3D battlefield) [geographical area], such that navigation thought the timeline data 16 is synchronized with changes in the display of visual elements representing the data objects 14 [data objects] and chart data 20 in the visualization representation 10. Navigation of the timeline data 12 is facilitated, for example, through use of a time marker 18 (e.g. a slider control) moved in the context of a common temporal reference frame 19. The timeline data 16 includes a plurality of sequenced elements 17 (e.g. tasks, process step, actions, events, resources, or other time variant processes) overlapping in time as represented by the temporal reference frame 19, as further described below. Interdependencies between the sequenced elements 17 can be defined in the chart data 20, for example. The chart data 20 can include components such as but not limited to various icons 13 for use in representing the data objects 14 and descriptions/definitions 15 of the various data objects 14. The data objects 14 can include terrain 11 or other spatial data 11 (e.g. a process flow chart). 
The data objects 14 can also be used to represent routes/paths 9 shown on the terrain 11 and/or in the air above the terrain 11, as desired.”), ¶ 64 (“Referring to FIGS. 1, 2, and 10, an example operation 700 of the tool 12 is shown for coordinating display of synchronized spatial information and time-variant information on the visual interface 202 as the visual representation 10 of a multi-dimensional planned process. The method has the example steps of: step 702—access [accessing] the time-variant information from the data store 122 including timeline data 16 including at least two sequenced elements 17 having overlapping time spans with respect to the common temporal reference frame 19; step 704—access the spatial information from the data store 122 including a plurality of data elements 14 for representing visual elements for display in the visual representation 10 with respect to a reference surface 11, such that each of the visual elements are operatively coupled to at least one sequenced element 17 of the sequenced elements”)) rendering a first representation of an artifact including the one or more data objects at a first time, the artifact including the one or more data objects associated with the geographical region, the first representation of the artifact including a representation of the geographical region and one or more representations corresponding to the one or more data objects, (¶ 29 (“Referring again to FIG. 2, the tool 12 interacts via link 116 with a VI manager 112 (also known as a visualization renderer) of the system 100 for presenting the visual representation 10 on the visual interface 202, along with visual elements [first representation] representing the synchronized data objects 14 [an artifact including the one or more data objects at a first time], timeline data 16 and chart data 20. 
The tool 12 also interacts via link 118 with a data manager 114 of the system 100 to coordinate management of the data objects 14 and data 16,20 resident in the memory 102. The data manager 114 can receive requests for storing, retrieving, amending, or creating the objects 14 and the data 16,20 via the tool 12 and/or directly via link 120 from the VI manager 112, as driven by the user events 109 and/or predefined operation of the tool 12. The data objects 14 and the data 16,20 can be stored in a data store 122 accessible by the tool 12 and data manager 114. Accordingly, the tool 12 and managers 112, 114 coordinate the processing of data objects 14, data 16,20 and associated user events 109 with respect to the graphic content displayed on the visual interface 202.”), ¶ 47 (“The tool 12 uses a custom 3D engine 52 (in conjunction with the VI manager 112—see FIG. 2) for rendering highly accurate models of real-world terrain 11, for example, which are texture mapped with actual map and landsat data.”)) the artifact including…a first representation state corresponding to the first representation, (¶ 42 (“The real-world time, as depicted by the state of the data objects 14 [artifact] in the visualization representation 10, is indicated on the temporal reference frame 19 with a marker 18 that can be moved across the temporal reference frame 19 to show the progress of time, which is synchronized with the displayed state [first representation state] of the data objects 14 (preferably animated) as the marker 18 is scrolled from side to side. For example, the sequenced elements 17 shown to the left of the marker 18 occurred in the past, while sequenced elements 17 to the right have yet to occur. Users of the tool 12 can drag the marker 18 along the temporal reference frame 19 to view the sequenced elements 17 that occurred in the past or that have yet to occur. 
Doing so updates the animations of the data objects 14, associated with the sequenced elements 17, in the 3D visualization representation 10.”)). Kapler does not expressly disclose the artifact including an identifier [of a software application] providing the one or more data objects… the identifier [of the software application] configured to be used in a subsequent data request (but see Sterkel ¶ 57 (“The moving map engine causes display, on a map view displayed to a user in a browser, of graphical indications of content items associated with the location of the vehicle. A list of candidate content items may be stored on the moving map engine, along with, for each content item, a location or region of the content item and an indication [identifier] where the content item is stored. For example, the content item may be stored on a device that is onboard the vehicle, such as on a server running the moving map engine, or a device that is offboard the vehicle. Other candidate content items may be drawn from social networking sites such as Facebook or Twitter [software application providing the one or more data objects].”), ¶ 63 (“When a user hovers over or makes a preliminary selection of the marker, the moving map engine may cause display of a name or summary of the content item represented by the marker. When a user touches, clicks on, or selects the content item, the moving map engine may cause display of additional information about the content item, or part or all of the content item itself. The name information may be stored locally by the moving map engine, in a database that includes content item names, geographical locations, and storage locations, and the content item itself may be retrieved from a content server identified by the storage location for the content item [identifier]. 
The name information may be sent to the browser client in association with marker locations of content items that may be displayed on the map view by the browser client.”)) receiving a first input associated with at least one of the one or more data objects; (but see Sterkel ¶ 64 (“For example, users may select a content marker to see a name or summary of the content, and further select the content marker to retrieve the content associated with the marker.”)) transmitting the subsequent data request to the software application based at least in part on the identifier [of the software application]; (but see Sterkel ¶ 31 (“For example, server logic running on the client machine may use an identity of the selected item to query [subsequent data request] another machine that is onboard or offboard the vehicle, and utilize the results of the query to display additional information. Also, logic running on the other machine may acquire or retrieve requested content from storage on the other machine or from various network-connected storage locations.”)) rendering a second representation of the artifact including an updated representation of the at least one of the one or more data objects based at least in part on the first input and a response to the subsequent data request; (but see Sterkel ¶ 31 (“The retrieved content [response to the subsequent data request] may then be returned to the client machine, for display to the user. The additional information may be added to the map view or may be displayed, concurrently or non-concurrently with the map view, in a separate window. For example, selection of a pin may cause display of the name of the pin, a summary of information described by content item(s) represented by the pin, partial content from the content item(s) represented by the pin, or full content of the content item(s) represented by the pin [second representation of the artifact]. Selection of the pin may also run one or more programs. 
The additional information may also be provided in non-display formats such as audio. For example, selection of the pin may trigger playing an audio clip related to a location that was marked by the pin.”)) generating a second representation state corresponding to the second representation; (but see Sterkel ¶ 65 (“In one embodiment, user-selection of the marker causes the content to be loaded above, below, next to, or on a layer on top of the moving map [second representation state]. User-selection of the marker may also cause an application to load customized content. The content may be displayed in the same window as the moving map or in another window. In one example, the content is displayed on a sidebar or in a frame above or below the moving map. In another example, the content appears in a popup window next to the marker.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kapler to incorporate the teachings of Sterkel to generate a map view that covers a bounded region and includes a graphical representation of the user’s vehicle at the location of the vehicle on the map and that includes graphical representations of content items that relate to the bounded region, at least because doing so would provide a user with geographically relevant data based on the user’s location. See Sterkel ¶ 26. Kapler and Sterkel do not expressly disclose that the content identifier is an identifier of the software application (but see Harris ¶ 29 (“The content providers may include, for example, social media platforms (e.g., FACEBOOK, TWITTER, INSTAGRAM, FLICKR, etc.), online knowledge databases, and/or other providers that can distribute content that may be relevant to a geo-location.”); Harris ¶ 81 (“Referring to FIG. 5, interface 500 may include a geofeed bounded by a geo-location defined by polygon 501. 
The geofeed may include content items 510-515 whose locations reside within the geo-location defined by polygon 501. Individual content items 510-515 may be associated with a geotag that has been obtained from one or more geotag sources. Geotag sources may include, for example, a GPS-enabled device (e.g., smartphone), a user input (e.g., the content creator manually inputting a geo-location when creating a social media post), a content provider (e.g., the content provider creating geotag data using various techniques as apparent to those of ordinary skill in the art), a user profile (e.g., the “home” location of the content creator), and/or location prediction module 113 (e.g., geotagged by crawling hyperlinks within content, automatic correlation, and/or other ways as discussed herein with respect to location prediction module 113).”); Harris ¶ 82 (“As illustrated, when content item 510 is selected (e.g., moused over, clicked, touched, or otherwise interacted with), interface 500 may cause geotag detail element 510A to appear. Geotag detail element 510A may include information related to the geotag associated with content item 510 such as, for example, a geotag source (e.g., GPS-enabled device) that provided the geotag and/or a confidence level associated with the geotag.”) (Harris teaches that the content provider of geotagged data is included in the geotag information)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kapler and Sterkel to incorporate the teachings of Harris to include the software application source of geotagged data in the data objects taught by Kapler, at least because doing so would enable using an identity of the selected item to query another machine and utilize the results of the query to display additional information. 
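Stripped of citation detail, the limitation this exchange turns on is a routing question: per the application's paragraphs [0046]-[0047], the artifact itself stores an identifier of the software application that supplied the data objects, and a later user input triggers a request routed by that identifier. A minimal illustrative sketch of that data flow (all names here, such as Artifact, APP_REGISTRY, and on_first_input, are hypothetical and not drawn from the application or the references):

```python
# Illustrative sketch only; not code from the application or the prior art.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """Snapshot of data objects plus the identifier of the providing app."""
    app_id: str                     # identifier of the software application
    data_objects: list = field(default_factory=list)
    representation_states: list = field(default_factory=list)

# Hypothetical registry resolving application identifiers to query endpoints.
APP_REGISTRY = {"app-312": lambda query: [{"object": query, "detail": "updated"}]}

def on_first_input(artifact: Artifact, selected: str) -> list:
    """First input on a data object: retrieve the app identifier from the
    artifact, transmit the subsequent data request to that application,
    and record a new representation state from the response."""
    app = APP_REGISTRY[artifact.app_id]   # retrieve identifier from artifact
    response = app(selected)              # subsequent data request by app ID
    artifact.representation_states.append(response)
    return response

artifact = Artifact(app_id="app-312", data_objects=["unit-1"])
print(on_first_input(artifact, "unit-1"))
# prints [{'object': 'unit-1', 'detail': 'updated'}]
```

The contrast Applicant drew, and that Harris is now cited to bridge, is that in Sterkel the later retrieval is keyed to a per-content-item storage location rather than to an application identifier carried in the artifact itself.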
Kapler further discloses receiving a second input associated with the artifact; and ¶ 46 (“It is recognized that the module 304 can be used to update the display of the pane 50 and corresponding values 48, where the pane 50 can display properties for one or more sequenced elements 17. For example, the pane 50 can display only those sequenced element(s) 17 selected, the pane 50 can display the properties of all sequenced elements 17 shown in the timeline data 16 contained within the temporal reference frame 19 (shown in the bar 46), the pane 50 can display the properties of any sequenced element(s) 17 not shown in the timeline data 16 contained within the temporal reference frame 19 (shown in the bar 46), or a combination thereof. Double-clicking on the task in the timeline data 16 can also prompt the user to set the start and end times, as desired. Further, it is recognized that the display of the timeline data 16 can be simplified by selectively removing (or adding) the battlefield units from the timeline data 16 that the user is not interested in.”) rendering a third representation of the artifact based at least in part on the second input; (see figure 8 (each route or event associated with the visual representation has properties, including a start time, stop time, location, latitude, longitude, etc. that are updated as route or event is updated)) wherein at least a part of the method is performed by one or more processors (¶ 67 (“The logical operations of the described systems, apparatus, and methods are implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine modules within one or more computer systems.”)). Claim 11 is an apparatus claim corresponding to claim 2 and is similarly rejected. Regarding claim 3, Kapler, in view of Sterkel and Harris, discloses the invention of claim 2 as discussed above. 
Kapler further discloses wherein the first input includes an indication of changing location of the at least one of the one or more data objects (¶ 54 (“In one embodiment, units and tasks may be associated by using the association module 312 to drag one icon (representing data objects 14) on to another, thus linking the two icons. For navigable maneuver tasks, the ink path is also dragged on to the task, defining the route for the battlefield unit to follow. These associated battlefield units and ink paths appear as children of the task in its properties pane 50 (see FIG. 8). Each task may only have one unit associated with it, as desired. In a further embodiment, the module 312 can also be used to facilitate storing of associations done by dragging units directly to the sequenced elements 17 in the timeline data 16. For example, tasks can be dragged directly to the unit's line/row of the timeline data 16 to associate the new task with the unit. Further, users can reassign tasks to different units by dragging the selected task from one unit to another displayed in the timeline data 16.”)). Claim 12 is an apparatus claim corresponding to claim 3 and is similarly rejected. Regarding claim 4, Kapler, in view of Sterkel and Harris, discloses the invention of claim 2 as discussed above. Kapler further discloses wherein the geographical area is a first geographical area, wherein the second input includes changing the artifact to be associated with a second geographical area that is different from the first geographical area (¶ 47 (“Users can use the I/O 108 to pan and rotate around the terrain 11 and zoom in and out to view varying amounts of space.”) (zooming in and out of the space causes the geographic area covered by the displayed terrain to decrease or increase)). Claim 13 is an apparatus claim corresponding to claim 4 and is similarly rejected. Regarding claim 5, Kapler, in view of Sterkel and Harris, discloses the invention of claim 2 as discussed above. 
Kapler further discloses wherein the second input includes an indication of recalling the first representation state, wherein the rendering a third representation of the artifact includes rendering the first representation of the artifact (¶ 30 (“Navigation of the timeline data 12 is facilitated, for example, through use of a time marker 18 (e.g. a slider control) moved in the context of a common temporal reference frame 19.”) (manipulating the slider control allows moving back and forth temporally and updating the map accordingly)). Claim 14 is an apparatus claim corresponding to claim 5 and is similarly rejected. Regarding claim 6, Kapler, in view of Sterkel and Harris, discloses the invention of claim 5 as discussed above. Kapler further discloses wherein the second input includes an input to a button indicating backward (¶ 30 (“Navigation of the timeline data 12 is facilitated, for example, through use of a time marker 18 (e.g. a slider control) moved in the context of a common temporal reference frame 19.”) (manipulating the slider control allows moving back and forth temporally and updating the map accordingly)). Claim 15 is an apparatus claim corresponding to claim 6 and is similarly rejected. Regarding claim 7, Kapler, in view of Sterkel and Harris, discloses the invention of claim 2 as discussed above. Kapler further discloses wherein the one or more data objects are one or more first data objects, wherein the second input includes an indication of adding one or more second data objects, wherein the rendering a third representation of the artifact includes rendering one or more representations corresponding to the one or more second data objects (¶ 46 (“It is recognized that the module 304 can be used to update the display of the pane 50 and corresponding values 48, where the pane 50 can display properties for one or more sequenced elements 17. 
For example, the pane 50 can display only those sequenced element(s) 17 selected, the pane 50 can display the properties of all sequenced elements 17 shown in the timeline data 16 contained within the temporal reference frame 19 (shown in the bar 46), the pane 50 can display the properties of any sequenced element(s) 17 not shown in the timeline data 16 contained within the temporal reference frame 19 (shown in the bar 46), or a combination thereof. Double-clicking on the task in the timeline data 16 can also prompt the user to set the start and end times, as desired. Further, it is recognized that the display of the timeline data 16 can be simplified by selectively removing (or adding) the battlefield units from the timeline data 16 that the user is not interested in.”)). Claim 16 is an apparatus claim corresponding to claim 7 and is similarly rejected. Regarding claim 8, Kapler, in view of Sterkel and Harris, discloses the invention of claim 7 as discussed above. Kapler further discloses wherein the second input includes an indication of acquiring additional data objects (¶ 46 (“Further, it is recognized that the display of the timeline data 16 can be simplified by selectively removing (or adding) the battlefield units from the timeline data 16 that the user is not interested in.”)). Claim 17 is an apparatus claim corresponding to claim 8 and is similarly rejected. Regarding claim 9, Kapler, in view of Sterkel and Harris, discloses the invention of claim 2 as discussed above. Kapler further discloses wherein at least one selected from a group consisting of the first representation, the second representation, and the third representation includes a representation within a map (see FIG. 3 (displaying battlefield map)). Claim 18 is an apparatus claim corresponding to claim 9 and is similarly rejected. Regarding claim 20, Kapler discloses [a] method comprising: accessing one or more data objects in a geographical area; (¶ 30 (“Referring to FIG. 
3, user planning and interaction with the tool 12 is facilitated through two main components, namely the timeline data 16 and the visualization representation 10 (e.g. a 2D or 3D battlefield) [geographical area], such that navigation thought the timeline data 16 is synchronized with changes in the display of visual elements representing the data objects 14 [data objects] and chart data 20 in the visualization representation 10. Navigation of the timeline data 12 is facilitated, for example, through use of a time marker 18 (e.g. a slider control) moved in the context of a common temporal reference frame 19. The timeline data 16 includes a plurality of sequenced elements 17 (e.g. tasks, process step, actions, events, resources, or other time variant processes) overlapping in time as represented by the temporal reference frame 19, as further described below. Interdependencies between the sequenced elements 17 can be defined in the chart data 20, for example. The chart data 20 can include components such as but not limited to various icons 13 for use in representing the data objects 14 and descriptions/definitions 15 of the various data objects 14. The data objects 14 can include terrain 11 or other spatial data 11 (e.g. a process flow chart). The data objects 14 can also be used to represent routes/paths 9 shown on the terrain 11 and/or in the air above the terrain 11, as desired.”), ¶ 64 (“Referring to FIGS. 1, 2, and 10, an example operation 700 of the tool 12 is shown for coordinating display of synchronized spatial information and time-variant information on the visual interface 202 as the visual representation 10 of a multi-dimensional planned process. 
The method has the example steps of: step 702—access [accessing] the time-variant information from the data store 122 including timeline data 16 including at least two sequenced elements 17 having overlapping time spans with respect to the common temporal reference frame 19; step 704—access the spatial information from the data store 122 including a plurality of data elements 14 for representing visual elements for display in the visual representation 10 with respect to a reference surface 11, such that each of the visual elements are operatively coupled to at least one sequenced element 17 of the sequenced elements”)) rendering a first representation of an artifact including the one or more data objects at a first time, the artifact including the one or more data objects associated with the geographical region, the first representation of the artifact including a representation of the geographical region and one or more representations corresponding to the one or more data objects, (¶ 29 (“Referring again to FIG. 2, the tool 12 interacts via link 116 with a VI manager 112 (also known as a visualization renderer) of the system 100 for presenting the visual representation 10 on the visual interface 202, along with visual elements representing the synchronized data objects 14 [first representation of an artifact including the one or more data objects at a first time], timeline data 16 and chart data 20. The tool 12 also interacts via link 118 with a data manager 114 of the system 100 to coordinate management of the data objects 14 and data 16,20 resident in the memory 102. The data manager 114 can receive requests for storing, retrieving, amending, or creating the objects 14 and the data 16,20 via the tool 12 and/or directly via link 120 from the VI manager 112, as driven by the user events 109 and/or predefined operation of the tool 12. The data objects 14 and the data 16,20 can be stored in a data store 122 accessible by the tool 12 and data manager 114. 
Accordingly, the tool 12 and managers 112, 114 coordinate the processing of data objects 14, data 16,20 and associated user events 109 with respect to the graphic content displayed on the visual interface 202.”), ¶ 47 (“The tool 12 uses a custom 3D engine 52 (in conjunction with the VI manager 112—see FIG. 2) for rendering highly accurate models of real-world terrain 11, for example, which are texture mapped with actual map and landsat data.”)) the artifact including…a first representation state corresponding to the first representation; (¶ 42 (“The real-world time, as depicted by the state of the data objects 14 in the visualization representation 10, is indicated on the temporal reference frame 19 with a marker 18 that can be moved across the temporal reference frame 19 to show the progress of time, which is synchronized with the displayed state [first representation state] of the data objects 14 (preferably animated) as the marker 18 is scrolled from side to side. For example, the sequenced elements 17 shown to the left of the marker 18 occurred in the past, while sequenced elements 17 to the right have yet to occur. Users of the tool 12 can drag the marker 18 along the temporal reference frame 19 to view the sequenced elements 17 that occurred in the past or that have yet to occur. Doing so updates the animations of the data objects 14, associated with the sequenced elements 17, in the 3D visualization representation 10.”)). Kapler does not expressly disclose the artifact including an identifier [of a software application] providing the one or more data objects…the identifier [of the software application] configured to be used in a subsequent data request (but see Sterkel ¶ 57 (“The moving map engine causes display, on a map view displayed to a user in a browser, of graphical indications of content items associated with the location of the vehicle. 
A list of candidate content items may be stored on the moving map engine, along with, for each content item, a location or region of the content item and an indication [identifier] where the content item is stored. For example, the content item may be stored on a device that is onboard the vehicle, such as on a server running the moving map engine, or a device that is offboard the vehicle. Other candidate content items may be drawn from social networking sites such as Facebook or Twitter [software application providing the one or more data objects].”), ¶ 63 (“When a user hovers over or makes a preliminary selection of the marker, the moving map engine may cause display of a name or summary of the content item represented by the marker. When a user touches, clicks on, or selects the content item, the moving map engine may cause display of additional information about the content item, or part or all of the content item itself. The name information may be stored locally by the moving map engine, in a database that includes content item names, geographical locations, and storage locations, and the content item itself may be retrieved from a content server identified by the storage location for the content item [identifier]. 
The name information may be sent to the browser client in association with marker locations of content items that may be displayed on the map view by the browser client.”)) receiving a first input associated with at least one of the one or more data objects; (but see Sterkel ¶ 64 (“For example, users may select a content marker to see a name or summary of the content, and further select the content marker to retrieve the content associated with the marker.”)) transmitting the subsequent data request to the software application based at least in part on the identifier [of the software application]; (but see Sterkel ¶ 31 (“For example, server logic running on the client machine may use an identity of the selected item to query [subsequent data request] another machine that is onboard or offboard the vehicle, and utilize the results of the query to display additional information. Also, logic running on the other machine may acquire or retrieve requested content from storage on the other machine or from various network-connected storage locations.”)) rendering a second representation of the artifact including an updated representation of the at least one of the one or more data objects based at least in part on the first input and a response to the subsequent data request; (but see Sterkel ¶ 31 (“The retrieved content [response to the subsequent data request] may then be returned to the client machine, for display to the user. The additional information may be added to the map view or may be displayed, concurrently or non-concurrently with the map view, in a separate window. For example, selection of a pin may cause display of the name of the pin, a summary of information described by content item(s) represented by the pin, partial content from the content item(s) represented by the pin, or full content of the content item(s) represented by the pin [second representation of the artifact]. Selection of the pin may also run one or more programs. 
The additional information may also be provided in non-display formats such as audio. For example, selection of the pin may trigger playing an audio clip related to a location that was marked by the pin.”)) generating a second representation state corresponding to the second representation; (but see Sterkel ¶ 65 (“In one embodiment, user-selection of the marker causes the content to be loaded above, below, next to, or on a layer on top of the moving map [second representation state]. User-selection of the marker may also cause an application to load customized content. The content may be displayed in the same window as the moving map or in another window. In one example, the content is displayed on a sidebar or in a frame above or below the moving map. In another example, the content appears in a popup window next to the marker.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kapler to incorporate the teachings of Sterkel to generate a map view that covers a bounded region and includes a graphical representation of the user’s vehicle at the location of the vehicle on the map and that includes graphical representations of content items that relate to the bounded region, at least because doing so would provide a user with geographically relevant data based on the user’s location. See Sterkel ¶ 26. Kapler and Sterkel do not expressly disclose that the content identifier is an identifier of the software application (but see Harris ¶ 29 (“The content providers may include, for example, social media platforms (e.g., FACEBOOK, TWITTER, INSTAGRAM, FLICKR, etc.), online knowledge databases, and/or other providers that can distribute content that may be relevant to a geo-location.”); Harris ¶ 81 (“Referring to FIG. 5, interface 500 may include a geofeed bounded by a geo-location defined by polygon 501. 
The geofeed may include content items 510-515 whose locations reside within the geo-location defined by polygon 501. Individual content items 510-515 may be associated with a geotag that has been obtained from one or more geotag sources. Geotag sources may include, for example, a GPS-enabled device (e.g., smartphone), a user input (e.g., the content creator manually inputting a geo-location when creating a social media post), a content provider (e.g., the content provider creating geotag data using various techniques as apparent to those of ordinary skill in the art), a user profile (e.g., the “home” location of the content creator), and/or location prediction module 113 (e.g., geotagged by crawling hyperlinks within content, automatic correlation, and/or other ways as discussed herein with respect to location prediction module 113).”); Harris ¶ 82 (“As illustrated, when content item 510 is selected (e.g., moused over, clicked, touched, or otherwise interacted with), interface 500 may cause geotag detail element 510A to appear. Geotag detail element 510A may include information related to the geotag associated with content item 510 such as, for example, a geotag source (e.g., GPS-enabled device) that provided the geotag and/or a confidence level associated with the geotag.”) (Harris teaches that the content provider of geotagged data is included in the geotag information)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kapler and Sterkel to incorporate the teachings of Harris to include the software application source of geotagged data in the data objects taught by Kapler, at least because doing so would enable using an identity of the selected item to query another machine and utilize the results of the query to display additional information. 
Kapler further discloses receiving a second input associated with the artifact; and (¶ 46 (“It is recognized that the module 304 can be used to update the display of the pane 50 and corresponding values 48, where the pane 50 can display properties for one or more sequenced elements 17. For example, the pane 50 can display only those sequenced element(s) 17 selected, the pane 50 can display the properties of all sequenced elements 17 shown in the timeline data 16 contained within the temporal reference frame 19 (shown in the bar 46), the pane 50 can display the properties of any sequenced element(s) 17 not shown in the timeline data 16 contained within the temporal reference frame 19 (shown in the bar 46), or a combination thereof. Double-clicking on the task in the timeline data 16 can also prompt the user to set the start and end times, as desired. Further, it is recognized that the display of the timeline data 16 can be simplified by selectively removing (or adding) the battlefield units from the timeline data 16 that the user is not interested in.”)) rendering a third representation of the artifact based at least in part on the second input; (see figure 8 (each route or event associated with the visual representation has properties, including a start time, stop time, location, latitude, longitude, etc. that are updated as route or event is updated)) wherein the first input includes an indication of changing location of the at least one of the one or more data objects; (¶ 54 (“In one embodiment, units and tasks may be associated by using the association module 312 to drag one icon (representing data objects 14) on to another, thus linking the two icons. For navigable maneuver tasks, the ink path is also dragged on to the task, defining the route for the battlefield unit to follow. These associated battlefield units and ink paths appear as children of the task in its properties pane 50 (see FIG. 8). Each task may only have one unit associated with it, as desired. 
In a further embodiment, the module 312 can also be used to facilitate storing of associations done by dragging units directly to the sequenced elements 17 in the timeline data 16. For example, tasks can be dragged directly to the unit's line/row of the timeline data 16 to associate the new task with the unit. Further, users can reassign tasks to different units by dragging the selected task from one unit to another displayed in the timeline data 16.”)) wherein at least a part of the method is performed by one or more processors (¶ 67 (“The logical operations of the described systems, apparatus, and methods are implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine modules within one or more computer systems.”)). Regarding claim 21, Kapler, in view of Sterkel and Harris, discloses the invention of claim 20 as discussed above. Kapler further discloses wherein the geographical area is a first geographical area, wherein the second input includes changing the artifact to be associated with a second geographical area that is different from the first geographical area; (¶ 47 (“Users can use the I/O 108 to pan and rotate around the terrain 11 and zoom in and out to view varying amounts of space.”) (zooming in and out of the space causes the geographic area covered by the displayed terrain to decrease or increase)) wherein the second input includes an indication of recalling the first representation state, wherein the rendering a third representation of the artifact includes rendering the first representation of the artifact (¶ 30 (“Navigation of the timeline data 12 is facilitated, for example, through use of a time marker 18 (e.g. a slider control) moved in the context of a common temporal reference frame 19.”) (manipulating the slider control allows moving back and forth temporally and updating the map accordingly)). Claims 10 and 19 are rejected under 35 U.S.C. 
103 as being unpatentable over Kapler, Sterkel, and Harris as applied to claims 2 and 11 above, and further in view of Johnson (US 2008/0307498 A1; published Dec. 11, 2008). Regarding claim 10, Kapler, in view of Sterkel and Harris, discloses the invention of claim 2 as discussed above. Kapler further discloses wherein the one or more data objects are associated with one or more properties, (see FIG. 8). Kapler does not expressly disclose wherein at least one property of the one or more properties is associated with an access control status, wherein at least one selected from a group consisting of the first representation, the second representation, and the third representation is based on the access control status. However, Johnson teaches “A number of geospatial attributes or parameters associated with GIS data are used to filter requests for geo-visualization of the data and to determine whether the request is subject to a restriction.” Abstract. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kapler to associate restriction attributes to objects on the battlefield terrain, at least because doing so would enable generating a personalized map representation without compromising data security. See Johnson ¶ 7. Claim 19 is an apparatus claim corresponding to claim 10 and is similarly rejected. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Adam, Nabil R., Basit Shafiq, and Robin Staffin. "Spatial computing and social media in the context of disaster management." IEEE Intelligent Systems 27.6 (2012): 90-96. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHID KHAN whose telephone number is (571)270-0419. The examiner can normally be reached M-F, 9-5 EST. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed, can be reached at (571)272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SHAHID K KHAN/Primary Examiner, Art Unit 2146
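The limitation chain this rejection maps onto Sterkel and Harris — an artifact that stores an identifier of the software application providing its data objects, a user input that triggers a subsequent data request routed via that identifier, and a re-render that produces a new representation state — can be sketched as follows. This is a minimal illustration of the claimed flow, not code from any cited reference; every name here is hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical registry mapping application identifiers to data sources.
SOURCES = {
    "social-feed": lambda obj_id: {"id": obj_id, "detail": "full content"},
}

@dataclass
class Artifact:
    source_app_id: str                       # identifier of the providing application
    data_objects: list = field(default_factory=list)
    representation_states: list = field(default_factory=list)

    def render(self):
        # Each rendering snapshots a representation state on the artifact.
        state = {"objects": list(self.data_objects)}
        self.representation_states.append(state)
        return state

    def on_select(self, obj_id):
        # Subsequent data request, routed via the stored application identifier.
        response = SOURCES[self.source_app_id](obj_id)
        self.data_objects = [
            response if o["id"] == obj_id else o for o in self.data_objects
        ]
        return self.render()

artifact = Artifact("social-feed", [{"id": "pin-1"}, {"id": "pin-2"}])
first = artifact.render()            # first representation and its state
second = artifact.on_select("pin-1") # second representation after the data request
```

The point of contention in prosecution is precisely the `source_app_id` field: the examiner reads Sterkel's stored content-server location, combined with Harris's content-provider geotag data, as supplying it.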

Prosecution Timeline

Oct 17, 2023
Application Filed
Jun 28, 2024
Non-Final Rejection — §103
Sep 16, 2024
Interview Requested
Sep 26, 2024
Examiner Interview Summary
Sep 26, 2024
Applicant Interview (Telephonic)
Sep 27, 2024
Response Filed
Nov 16, 2024
Final Rejection — §103
Nov 30, 2024
Interview Requested
Dec 13, 2024
Applicant Interview (Telephonic)
Dec 15, 2024
Examiner Interview Summary
Jan 28, 2025
Response after Non-Final Action
Feb 21, 2025
Request for Continued Examination
Feb 24, 2025
Response after Non-Final Action
Mar 07, 2025
Non-Final Rejection — §103
Jun 09, 2025
Interview Requested
Jun 16, 2025
Examiner Interview Summary
Jun 16, 2025
Applicant Interview (Telephonic)
Jul 14, 2025
Response Filed
Oct 08, 2025
Final Rejection — §103
Nov 21, 2025
Interview Requested
Dec 03, 2025
Applicant Interview (Telephonic)
Dec 04, 2025
Examiner Interview Summary
Jan 05, 2026
Response after Non-Final Action
Feb 03, 2026
Request for Continued Examination
Feb 10, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591768
DEEP LEARNING ACCELERATION WITH MIXED PRECISION
2y 5m to grant Granted Mar 31, 2026
Patent 12579516
System and Method for Organizing and Designing Comment
2y 5m to grant Granted Mar 17, 2026
Patent 12566813
SYSTEMS AND METHODS FOR RENDERING INTERACTIVE WEB PAGES
2y 5m to grant Granted Mar 03, 2026
Patent 12547298
Display Method and Electronic Device
2y 5m to grant Granted Feb 10, 2026
Patent 12530916
MULTIMODAL MULTITASK MACHINE LEARNING SYSTEM FOR DOCUMENT INTELLIGENCE TASKS
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
74%
Grant Probability
90%
With Interview (+15.7%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 389 resolved cases by this examiner. Grant probability derived from career allow rate.
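The headline figures above are internally consistent under a simple additive model. A quick check, assuming the "With Interview" projection is just the career allow rate plus the +15.7-point interview lift (the report does not state its exact formula):

```python
# Figures taken from this report's Examiner Intelligence panel.
granted, resolved = 287, 389

career_pct = round(granted / resolved * 100)   # 287/389 is about 73.8, shown as 74%

# Assumed derivation: career rate plus the interview lift, in percentage points.
interview_lift_pts = 15.7
with_interview_pct = round(career_pct + interview_lift_pts)   # 74 + 15.7 rounds to 90

print(career_pct, with_interview_pct)
```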
