Prosecution Insights
Last updated: April 19, 2026
Application No. 18/214,776

ASYMMETRICAL XR NAVIGATION FOR AUGMENTING OBJECTS OF INTEREST IN EXTENDED REALITY STREAMING

Status: Non-Final Office Action (§103)
Filed: Jun 27, 2023
Examiner: LE, MICHAEL
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Adeia Guides Inc.
OA Round: 3 (Non-Final)

Outlook: Favorable
Grant probability: 66% (88% with an examiner interview)
Expected OA rounds: 3-4
Estimated time to grant: 3 years 3 months

Examiner Intelligence

Career allowance rate: 66% (568 granted of 864 resolved), +3.7% vs. Tech Center average
Interview lift: +22.1% higher allowance rate among resolved cases that included an examiner interview
Typical timeline: 3 years 3 months average prosecution; 61 applications currently pending
Career history: 925 total applications across all art units
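
The headline figures above are simple ratios. The short sketch below shows how a career allowance rate and an interview lift of this kind are typically computed; the granted/resolved totals are taken from this page, but the with/without-interview split is an assumed example, since the per-case data behind the +22.1% figure is not shown here.

```python
# Illustrative arithmetic only. The interview/no-interview split is assumed for the
# example and is NOT taken from the examiner's actual file history.
granted, resolved = 568, 864
career_allow_rate = granted / resolved            # ~0.657, reported above as 66%

# Hypothetical split of the 864 resolved cases, used to show one common definition
# of "interview lift": allowance rate with an interview minus the rate without one.
with_interview = {"granted": 258, "resolved": 324}      # assumed numbers
without_interview = {"granted": 310, "resolved": 540}   # assumed numbers

rate_with = with_interview["granted"] / with_interview["resolved"]
rate_without = without_interview["granted"] / without_interview["resolved"]
interview_lift = rate_with - rate_without

print(f"career allowance rate: {career_allow_rate:.1%}")   # 65.7%
print(f"interview lift: {interview_lift:+.1%}")             # roughly +22%
```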

Statute-Specific Performance

§101: 12.4% (-27.6% vs. Tech Center average)
§103: 52.7% (+12.7% vs. Tech Center average)
§102: 13.4% (-26.6% vs. Tech Center average)
§112: 15.9% (-24.1% vs. Tech Center average)
Tech Center averages are estimates. Figures based on career data from 864 resolved cases.

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status 1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Continued Examination Under 37 CFR 1.114 2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2015 has been entered. Claim Rejections - 35 USC § 103 3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. 4. Claims 1-4, 6, 11-14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Dryer et al., (“Dryer”) [US-2023/0368458-A1] in view of Bhushan et al. (“Bhushan”) [US-12,112,435-B1], further in view of Buzdar et al. (“Buzdar”) [US-12,272,007-B2] Regarding claim 1, Dryer discloses a method (Dryer- ¶0006, at least discloses displaying, via the display generation component, a first user interface, wherein the first user interface concurrently includes: a representation of a field of view of one or more cameras, the representation of the field of view including a first view of a physical environment that corresponds to a first viewpoint of a user in the physical environment, and a preview of a three-dimensional model of the physical environment) comprising: generating for display on a client device a graphical representation of the 3D location (Dryer- Fig. 
5E and ¶0207, at least disclose device 100 [a client device] performs edge detection and surface detection (e.g., plane detection and/or detection of curved surfaces) in the first portion of the physical environment based on the captured image and/or depth data; and as edge(s) and surfaces are detected and characterized in the first portion of the physical environment, device 100 displays respective graphical representations of the detected edges and/or surfaces in user interface 522 […] graphical object 571 (e.g., a line, and/or a linear graphical object) is displayed at a location that corresponds to a detected edge between wall 530 and floor 540 [a graphical representation of a 3D location]; graphical object 572 (e.g., a line, and/or a linear graphical object) is displayed at a location that corresponds to a detected edge between wall 530 and ceiling 538; graphical object 574 (e.g., a line, and/or a linear graphical object) is displayed at a location that corresponds to a detected edge between wall 530 and wall 532; and graphical object 576 (e.g., a line, and/or a linear graphical object) is displayed at a location that corresponds to a detected edge between wall 532 and floor 540 [a graphical representation of a 3D location]); receiving a first input, via the client device, identifying a portion of the graphical representation (Dryer- Fig. 5F and ¶0208-0209, at least disclose as more edges and/or surfaces are detected in the first portion of the physical environment, additional graphical objects (e.g., graphical object 578, and graphical object 580) are displayed at the respective locations of the detected edges and/or surfaces [identifying a portion of the graphical representation] […] edges and/or surfaces of cabinet 548 have been detected but the cabinet has not been recognized, and device 100 displays graphical object 580 at the location of the detected cabinet 548 (e.g., including displaying segments 580-1, 580-2, 580-3, and 580-4 at the locations of the detected edges) to convey the spatial characteristics that have been estimated for the detected edges and/or surfaces of cabinet 548; Figs. 5K-5L and ¶0236-0237, at least disclose In FIG. 5K, while the scanning and modeling of the second portion of the physical environment is ongoing and while camera view 24 and preview 568 are being updated with graphical objects, […] as shown in FIG. 5K, detecting the start of the input includes detecting contact 616 at a location on touch screen 220 that corresponds to a portion of the partially completed three-dimensional model in preview 568. In FIG. 5K, device 100 further detects movement of contact 616 in a first direction across touch screen 220 (e.g., a swipe input or a drag input on the partially completed model in preview 568 to the right) […] In FIG. 5L, in response to detecting the input that includes the movement in the first direction (e.g., in response to detecting the swipe input or drag input on the partially completed model in preview 568 in the first direction) [receiving a first input, via the client device]) and requesting information for the portion of the graphical representation (Dryer- Fig. 5O and ¶0239, at least disclose In FIG. 
5O, in response to detecting the user input that corresponds to a request to rescale the partially completed model in preview 568, the partially completed model of room 520 is enlarged); in response to receiving the input (As discussed above): accessing a mapping between objects in the graphical representation of the 3D location and 3D objects in the 3D location (Dryer- Figs. 5L-5M shows a mapping between objects floor lamp 556 of graphical object 598 [objects in the graphical representation of the 3D location] and representation 556″ of floor lamp 556 [3D objects in the 3D location]; ¶0234, at least discloses In FIG. 5L, as the scan and modeling [accessing a mapping] of the second portion of the physical environment continue, scan and modeling of floor lamp 556 is completed, and a final state of graphical object 598 is displayed [objects in the graphical representation] to indicate the spatial characteristics of floor lamp 556. In addition, representation 556″ [3D objects] of floor lamp 556 is added to the partially completed model of room 520 in preview 568 to a position to the right of representation 550″ of TV stand 550 [3D objects in the 3D location]); based on the mapping, identifying a correspondence between a graphical object in the portion of the graphical representation and a 3D object in the 3D location (Dryer- Figs. 5L-5M shows a mapping between objects floor lamp 556 of graphical object 598 [objects in the graphical representation of the 3D location] and representation 556″ of floor lamp 556 [3D objects in the 3D location]; ¶0234, at least discloses In FIG. 5L, as the scan and modeling of the second portion of the physical environment continue, scan and modeling of floor lamp 556 is completed, and a final state of graphical object 598 is displayed [a graphical object in the portion of the graphical representation] to indicate the spatial characteristics of floor lamp 556. In addition, representation 556″ [3D objects] of floor lamp 556 is added to the partially completed model of room 520 in preview 568 to a position to the right of representation 550″ of TV stand 550 [identifying a correspondence between a graphical object in the portion of the graphical representation and a 3D object in the 3D location]); and transmitting a signal to cause the XR device at the 3D location to generate for display an augmentation for the 3D object in the 3D location (Dryer- Figs. 5X, 5Y, 5Z shows an enlarged three-dimensional model 634 of room 520 that has been generated based on the completed scan and modeling of room 520; ¶0136, at least discloses GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display [transmitting a signal]; ¶0259-0263, at least disclose In FIG. 5X, in response to detecting the completion of the scanning and modeling of the entire room (e.g., all four walls and its interior, or another set of required structural elements and/or nonstructural elements), device 100 ceases to display the partially completed three-dimensional model of room 520 and displays an enlarged three-dimensional model 634 of room 520 that has been generated based on the completed scan and modeling of room 520. In some embodiments, the completed three-dimensional model 634 of room 520 is displayed [display an augmentation for the 3D object in the 3D location] in a user interface 636 that does not include camera view 524 […] In FIG. 
5Y, in response to detecting the input that includes the movement in the first direction (e.g., in response to detecting the swipe input or drag input on the completed model 634 in user interface in the first direction), device 100 moves the completed three-dimensional model 634 in a first manner in accordance with the input (e.g., rotating and/or translating the completed model 634 in the first direction); ¶0273, at least discloses the computer system (e.g., device 100, device 300, or another computer system described herein) displays (652), via the display generation component, a first user interface (e.g., a scan user interface that is displayed to show progress of an initial scan of a physical environment to build a three-dimensional model of the physical environment, a camera user interface, and/or a user interface that is displayed in response to a user's request to perform a scan of a physical environment or to start an augmented reality session in a physical environment), wherein the first user interface concurrently includes (e.g., in an overlaying manner, or an adjacent manner): a representation of a field of view of one or more cameras (e.g., images or video of a live feed from the camera(s), or a view of the physical environment through a transparent or semitransparent display), the representation of the field of view including a first view of a physical environment that corresponds to a first viewpoint of a user in the physical environment (e.g., the first viewpoint of the user corresponds to a direction, position and/or vantage point from which the physical environment is being viewed by the user either via a head mounted XR device [an extended reality (XR) device] or via a handheld device such as a smartphone or tablet that displays a representation of the field of view of the one or more cameras on a display of the handheld device for a handheld device); receiving, from the XR device, the requested information, wherein the requested information is generated based at least in part on the augmentation (Dryer- ¶0239, at least discloses In FIG. 5O, in response to detecting the user input that corresponds to a request to rescale the partially completed model in preview 568, the partially completed model of room 520 is enlarged; ¶0259-0263, at least disclose In FIG. 5X, in response to detecting the completion of the scanning and modeling of the entire room (e.g., all four walls and its interior, or another set of required structural elements and/or nonstructural elements), device 100 ceases to display the partially completed three-dimensional model of room 520 and displays an enlarged three-dimensional model 634 of room 520 that has been generated based on the completed scan and modeling of room 520. In some embodiments, the completed three-dimensional model 634 of room 520 is displayed [at least in part on the augmentation] in a user interface 636 that does not include camera view 524 […] In FIG. 
5Y, in response to detecting the input that includes the movement in the first direction (e.g., in response to detecting the swipe input or drag input on the completed model 634 in user interface in the first direction), device 100 moves the completed three-dimensional model 634 in a first manner in accordance with the input (e.g., rotating and/or translating the completed model 634 in the first direction); ¶0273, at least discloses the computer system (e.g., device 100, device 300, or another computer system described herein) displays (652), via the display generation component, a first user interface (e.g., a scan user interface that is displayed to show progress of an initial scan of a physical environment to build a three-dimensional model of the physical environment, a camera user interface, and/or a user interface that is displayed in response to a user's request to perform a scan of a physical environment or to start an augmented reality session in a physical environment), wherein the first user interface concurrently includes (e.g., in an overlaying manner, or an adjacent manner): a representation of a field of view of one or more cameras (e.g., images or video of a live feed from the camera(s), or a view of the physical environment through a transparent or semitransparent display), the representation of the field of view including a first view of a physical environment that corresponds to a first viewpoint of a user in the physical environment (e.g., the first viewpoint of the user corresponds to a direction, position and/or vantage point from which the physical environment is being viewed by the user either via a head mounted XR device [an extended reality (XR) device] or via a handheld device such as a smartphone or tablet that displays a representation of the field of view of the one or more cameras on a display of the handheld device for a handheld device); generating for display on the client device the requested information (Dryer- ¶0239, at least discloses In FIG. 5O, in response to detecting the user input that corresponds to a request to rescale the partially completed model in preview 568, the partially completed model of room 520 is enlarged); and storing the requested information in non-transitory memory (Dryer- ¶0067, at least discloses graphics module 132 stores data representing graphics to be used; ¶0098, at least discloses In conjunction with touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, and/or delete a still image or video from memory 102). 
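
For readers skimming the claim 1 analysis above, a minimal sketch of the kind of flow the claim recites may be easier to follow than the citation-by-citation mapping: a remote client selects a portion of a graphical representation, a stored mapping resolves that selection to a 3D object at the physical location, the on-site XR device is signaled to display an augmentation, and the information it returns is cached for a later request from a second client. Every name, class, and data structure below is a hypothetical illustration, not the applicant's implementation and not anything disclosed by Dryer, Bhushan, or Buzdar.

```python
# Hypothetical sketch of the claim-1 style flow discussed above (illustrative only).

class FakeXRDevice:
    """Stand-in for the on-site XR device; real signaling and transport are out of scope."""
    def show_augmentation(self, object_id: str) -> None:
        print(f"[xr-device] highlighting {object_id}")

    def capture_info(self, object_id: str) -> dict:
        return {"object": object_id, "status": "augmented"}

# Mapping between graphical objects in the client's representation and 3D objects
# tracked at the physical location (all identifiers are made up).
GRAPHICAL_TO_3D = {"outline_cabinet": "cabinet_548", "outline_lamp": "floor_lamp_556"}
INFO_CACHE: dict[str, dict] = {}

def handle_selection(graphical_id: str, device: FakeXRDevice) -> dict:
    object_3d = GRAPHICAL_TO_3D[graphical_id]   # identify the correspondence via the mapping
    device.show_augmentation(object_3d)         # "transmit a signal" to the XR device
    info = device.capture_info(object_3d)       # requested info generated using the augmentation
    INFO_CACHE[graphical_id] = info             # store in memory for a second client's request
    return info

def handle_second_request(graphical_id: str) -> dict:
    return INFO_CACHE[graphical_id]             # retrieve the stored info without re-augmenting

if __name__ == "__main__":
    first = handle_selection("outline_lamp", FakeXRDevice())
    second = handle_second_request("outline_lamp")
    assert first == second
```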
Dryer does not explicitly disclose receiving, from an extended reality (XR) device at a 3D location, a 3D representation of the 3D location, the XR device being operated at the 3D location; generating for display on a client device a graphical representation of the 3D location based on the 3D representation of the 3D location, the client device being operated at a location other than the 3D location; storing the requested information in non-transitory memory for future display; and in response to receiving a second input, via a second client device, identifying the same portion of the graphical representation and requesting the same information for the portion of the graphical representation: retrieving the stored information from the non-transitory memory; and generating for display on the second client device the retrieved information. However, Bhushan discloses receiving, from an extended reality (XR) device at a 3D location, a 3D representation of the 3D location, the XR device being operated at the 3D location (Bhushan- Fig. 7 and column 43, lines 43-59, at least disclose host device 106(1) [an extended reality (XR) device], one or more remote devices 106, data processing service 702, remote storage 704, tunnel bridge 706, host extended reality (XR) environment 710, remote XR environment 730, coupled to remote device 106(2) and remote environment 740 coupled to remote device 106(3); column 44, line 59 to column 45, line 5, at least disclose host device 106(1) may provide updates to the imaging data (e.g., depth sensor data and image sensor data) associated with real-world environments. For example, host device 106(1) may update the 2D surface data and the 3D depth data associated with a real-world environment [3D location] by re-scanning the real-world environments ( e.g., re-scanning every 10 seconds) using imaging sensor 726 and depth sensor 724 […] host device 106(1) may acquire new depth sensor data from depth sensor 724 and image sensor data from imaging sensor 726 in response to the user input; Fig. 10C and column 57, line 64 to column 58, line 5, at least disclose As shown in FIG. 10C, view 1020 presents remote XR environment portion 1032. View 1020 includes remote XR environment portion 952 and rendered asset 1034 [a 3D representation of the 3D location]. Remote XR. environment portion 1032 corresponds to a view of remote XR environment 730 based on a position of remote device 106(2). Remote XR environment 730 renders the XR stream, corresponding to the scene scanned by host device 106(1) [an extended reality (XR) device at a 3D location], as an adaptable 3D representation of the physical space [a 3D representation of the 3D location]); generating for display on a client device a graphical representation of the 3D location based on the 3D representation of the 3D location (Bhushan- Fig. 10D and column 58, lines 45-53, at least disclose As shown by FIG. 10D, view 1040 displays remote XR environment portion 1032 at a later time during the remote collaboration session. View 1040 [display on a client device] includes remote XR environment portion 1032, asset 1034, host device avatar 1042, pin 1044, and map 1046 [display on a client device]. 
During the remote collaboration session, the remote user may implement one or more collaboration tools in order to navigate through remote XR environment 730 and/or interact with portions of the adaptable 3D representation of the physical space [3D representation of the 3D location]), the client device being operated at a location other than the 3D location (Bhushan- column 42, lines 53-55, at least discloses The sensor(s) 620 may include location sensors enable the client device 106 to determine the physical location and/or orientation of the client device 106 [the client device being operated at a location other than the 3D location]; column 45, lines 1-4, at least discloses host device 106(1) may be triggered to attempt a rescan at periodic intervals (e.g., a setting to attempt a rescan every 20 seconds, every 5 minutes. etc.), in response to a change in the location of host device 106(1), and/or in response to actions taken by remote device 106(2) or 106(3) (e.g., receiving a message requesting a re-scan); column 47, lines 17-14, at least discloses host device 106(1) may include, without limitation, smartphones, tablet computers, handheld computers, wearable devices, laptop computers, desktop computers, servers, portable media players, gaming devices, an Apple TV® devices, and so forth. It is noted that “remote” in this context means located at a different location, relative to host device 106(1)); It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Dryer to incorporate the teachings of Bhushan, and apply the client device being operated at a location other than the 3D location of the host device into the Dryer’s teachings for receiving, from an extended reality (XR) device at a 3D location, a 3D representation of the 3D location, the XR device being operated at the 3D location; generating for display on a client device a graphical representation of the 3D location based on the 3D representation of the 3D location, the client device being operated at a location other than the 3D location. Doing so would enable one user to capture three-dimensional (3D) data associated with a real-world environment and share a stream of the 3D data with remote participants who are using remote devices. 
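
The Bhushan combination turns on a host device at the physical site periodically rescanning the environment and streaming the 3D capture to remote participants. A rough sketch of that host-side loop is below; the fixed rescan interval echoes Bhushan's "every 10 seconds" example quoted above, but the scan and publish helpers are placeholders, not Bhushan's actual implementation.

```python
# Illustrative host-side capture loop only; every helper here is a placeholder.
import time

RESCAN_INTERVAL_S = 10  # Bhushan's cited example of re-scanning every 10 seconds

def scan_environment() -> dict:
    """Placeholder for acquiring depth-sensor and image-sensor data of the room."""
    return {"depth": [], "images": [], "timestamp": time.time()}

def publish_to_remote_clients(snapshot: dict) -> None:
    """Placeholder for streaming the updated 3D representation to remote devices."""
    print(f"streaming update captured at {snapshot['timestamp']:.0f}")

def host_capture_loop(iterations: int = 3, interval_s: float = RESCAN_INTERVAL_S) -> None:
    for _ in range(iterations):
        publish_to_remote_clients(scan_environment())
        time.sleep(interval_s)  # periodic rescan; a real system might also rescan on
                                # host movement or on a request from a remote device

if __name__ == "__main__":
    host_capture_loop(iterations=1, interval_s=0)
```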
The prior art does not explicitly disclose, but Buzdar discloses storing the requested information in non-transitory memory for future display (Buzdar- col 2, lines 18-19, at least discloses when the users access the same AR experience again at a future time; col 2, lines 37-44, at least discloses The disclosed techniques receive a request to resume the AR experience after the AR experience has been terminated and, in response to receiving the request to resume the AR experience, access the data that was stored prior to termination of the AR experience to generate a display of the AR experience that depicts the one or more AR elements at a particular position within a second image; col 12, lines 34-39, at least discloses In response to receiving the request to resume the AR experience, the AR element positioning system 224 can access the data that was stored prior to termination of the AR experience to generate a display of the AR experience that depicts the one or more AR elements at a particular position within a second image); and in response to receiving a second input, via a second client device, identifying the same portion of the graphical representation and requesting the same information for the portion of the graphical representation (Buzdar- col 2, lines 50-60, at least discloses The graphical user interface presents a list of AR elements associated with the AR experience and includes a first option associated with a first AR element of the list of AR elements to cause location data of the first AR element to be stored after termination of the AR experience in response to selection of the first option. The graphical user interface also includes a second option associated with a second AR element of the list of AR elements to prevent storage of location data of the second AR element after termination of the AR experience in response to selection of the second option; col 3, lines 61-67, at least discloses The contextual and/or location data can be used by the AR experience that is resumed to display the AR element at the same location and in the same context as the AR element was previously displayed by the AR experience on another device or at an earlier point in time before the AR experience was closed or terminated; col 8, lines 49-51, at least discloses the media overlay may include text, a graphical element, or image that can be overlaid on top of a photograph taken by the client device 102; col 19, line 66 to col 20, line 5, at least disclose the user interface can receive a user request to re-launch the AR experience (on the same client device 102 or on another client device 102) and can resume display of the AR experience such that the AR elements are re-presented with the same context and/or at the same particular display position as previously displayed in the AR experience; col 21, lines 22-27, at least discloses The location data or contextual data is later used by the same client device 102 or a different client device to re-present the given AR object or element at the same location within the display of the AR experience and/or with the same contextual information (e.g., the same modifications applied to the AR object or element)): retrieving the stored information from the non-transitory memory (Buzdar- col 18, lines 40-44, at least discloses message video payload 408: video data, captured by a camera component or retrieved from a memory component of the client device 102, and that is included in the message 400. 
Video data for a sent or received message 400 may be stored in the video table 304; col 23, lines 35-39, at least discloses In response to identifying persistently stored data that is associated with the user identifier and the identifier of the first AR experience, the AR asset generation module 520 retrieves the persistently stored data; col 32, lines 12-13, at least discloses A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information)); and generating for display on the second client device the retrieved information (Buzdar- col 3, lines 61-67, at least discloses The contextual and/or location data can be used by the AR experience that is resumed to display the AR element at the same location and in the same context as the AR element was previously displayed by the AR experience on another device or at an earlier point in time before the AR experience was closed or terminated; col 19, line 66 to col 20, line 5, at least disclose the user interface can receive a user request to re-launch the AR experience (on the same client device 102 or on another client device 102) and can resume display of the AR experience such that the AR elements are re-presented with the same context and/or at the same particular display position as previously displayed in the AR experience). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Dryer/Bhushan to incorporate the teachings of Buzdar, and apply identifying and requesting the same portion of the graphical representation into the Dryer/Bhushan's teachings for storing the requested information in non-transitory memory for future display; and in response to receiving a second input, via a second client device, identifying the same portion of the graphical representation and requesting the same information for the portion of the graphical representation: retrieving the stored information from the non-transitory memory; and generating for display on the second client device the retrieved information. Doing so would increase the interest level users have in accessing the AR experiences again. Regarding claim 2, Dryer in view of Bhushan and Buzdar, discloses the method of claim 1, and further discloses wherein the graphical representation comprises a 2D graphical representation of the 3D location on the client device (Dryer- Figs. 
5G, 5H and ¶0213, at least disclose in response to detecting the completion of the detection and characterization of the edges of cabinet 548, the final state of graphical object 580 is displayed which is, optionally, a set of solid lines (e.g., a two-dimensional bounding box, a three-dimensional bounding box, or other types of outlines), have a uniform and lower luminance, have no feathering or reduced degree of feathering along all edges, and/or are uniformly opaque; ¶0229, at least discloses as detection and modeling of an object is completed, its corresponding representation (e.g., a three-dimensional representation, or a two-dimensional representation) is added to the partially completed three-dimensional model in preview 568); and wherein the identifying the correspondence further comprises, based on the mapping, identifying a correspondence between the graphical object in the portion of the 2D graphical representation and the 3D object in the 3D location (Dryer- Figs. 5G, 5H and ¶0213, at least disclose in response to detecting the completion of the detection and characterization of the edges of cabinet 548, the final state of graphical object 580 is displayed which is, optionally, a set of solid lines (e.g., a two-dimensional bounding box, a three-dimensional bounding box, or other types of outlines), have a uniform and lower luminance, have no feathering or reduced degree of feathering along all edges, and/or are uniformly opaque; ¶0229, at least discloses as detection and modeling of an object is completed, its corresponding representation (e.g., a three-dimensional representation, or a two-dimensional representation) is added to the partially completed three-dimensional model in preview 568; ¶0303, at least discloses the first representation of the first object includes a virtual outline of the first object, a two-dimensional or three-dimensional bounding box of the first object, and/or a translucent mask of the first object overlaid on a pass-through view of the first object in the representation of the field of view of the cameras (e.g., camera view, and/or a view through a transparent or semi-transparent display generation component)). Regarding claim 3, Dryer in view of Bhushan and Buzdar, discloses the method of claim 1, and further discloses wherein the graphical representation comprises a 3D graphical representation of the 3D location on the client device (Dryer- Figs. 5G, 5H and ¶0213, at least disclose in response to detecting the completion of the detection and characterization of the edges of cabinet 548, the final state of graphical object 580 is displayed which is, optionally, a set of solid lines (e.g., a two-dimensional bounding box, a three-dimensional bounding box, or other types of outlines), have a uniform and lower luminance, have no feathering or reduced degree of feathering along all edges, and/or are uniformly opaque; ¶0229, at least discloses as detection and modeling of an object is completed, its corresponding representation (e.g., a three-dimensional representation, or a two-dimensional representation) is added to the partially completed three-dimensional model in preview 568); and wherein the identifying the correspondence further comprises, based on the mapping, identifying a correspondence between the graphical object in the portion of the 3D graphical representation and the 3D object in the 3D location (Dryer- Figs. 
5G, 5H and ¶0213, at least disclose in response to detecting the completion of the detection and characterization of the edges of cabinet 548, the final state of graphical object 580 is displayed which is, optionally, a set of solid lines (e.g., a two-dimensional bounding box, a three-dimensional bounding box, or other types of outlines), have a uniform and lower luminance, have no feathering or reduced degree of feathering along all edges, and/or are uniformly opaque; ¶0229, at least discloses as detection and modeling of an object is completed, its corresponding representation (e.g., a three-dimensional representation, or a two-dimensional representation) is added to the partially completed three-dimensional model in preview 568; ¶0303, at least discloses the first representation of the first object includes a virtual outline of the first object, a two-dimensional or three-dimensional bounding box of the first object, and/or a translucent mask of the first object overlaid on a pass-through view of the first object in the representation of the field of view of the cameras (e.g., camera view, and/or a view through a transparent or semi-transparent display generation component). Regarding claim 4, Dryer in view of Bhushan and Buzdar, discloses the method of claim 1, and further discloses wherein the generating for display the augmentation for the 3D object in the 3D location comprises generating for display (see Claim 1 rejection for detailed analysis) at least one of: an indicator that appears proximate to the 3D object (Dryer- Fig. 5K and ¶0233, at least disclose the non-spatial representation 596 of cabinet 548 and the non-spatial representation 612 of TV 560 are respectively displayed at locations of their corresponding objects, but both are turned to face toward the current viewpoint of the user; Fig. 5Q and ¶0240, at least disclose the guidance provided by object 604 and 606), highlighting that appears overlaid over the 3D object (Dryer- Fig. 5V and ¶0254, at least disclose graphical objects 630 is displayed at the location of boxes 562 overlaying camera view 524 in response to detection of one or more edges and/or surfaces of boxes 562. Graphical objects 630 are spatial representations that spatially indicate the spatial characteristics of boxes 562 in camera view 524), an overlay proximate to the 3D object (Dryer- Fig. 5V and ¶0254, at least disclose graphical objects 630 is displayed at the location of boxes 562 overlaying camera view 524 in response to detection of one or more edges and/or surfaces of boxes 562., and a messaging interface proximate to the 3D object (Dryer- Fig. 5J and ¶0224, at least disclose device 100 displays a prompt (e.g., banner 602, and/or another alert or notification) for the user to scan a missed spot in the presumably completed portion of the physical environment). Regarding claim 6, Dryer in view of Bhushan and Buzdar, discloses the method of claim 1, and discloses the method further comprising: creating on the XR device the graphical representation of the 3D location (Dryer- Fig. 
5E and ¶0207, at least disclose device 100 displays respective graphical representations of the detected edges and/or surfaces in user interface 522 […] graphical object 571 (e.g., a line, and/or a linear graphical object) is displayed at a location that corresponds to a detected edge between wall 530 and floor 540 [the graphical representation of the 3D location]; graphical object 572 (e.g., a line, and/or a linear graphical object) is displayed at a location that corresponds to a detected edge between wall 530 and ceiling 538; graphical object 574 (e.g., a line, and/or a linear graphical object) is displayed at a location that corresponds to a detected edge between wall 530 and wall 532; and graphical object 576 (e.g., a line, and/or a linear graphical object) is displayed at a location that corresponds to a detected edge between wall 532 and floor 540 [graphical representation of a 3D location]; ¶0273, at least discloses In method 650, the computer system (e.g., device 100, device 300, or another computer system described herein) displays (652), via the display generation component, a first user interface (e.g., a scan user interface that is displayed to show progress of an initial scan of a physical environment to build a three-dimensional model of the physical environment […] the representation of the field of view including a first view of a physical environment that corresponds to a first viewpoint of a user in the physical environment (e.g., the first viewpoint of the user corresponds to a direction, position and/or vantage point from which the physical environment is being viewed by the user either via a head mounted XR device or via a handheld device such as a smartphone or tablet that displays a representation of the field of view of the one or more cameras on a display of the handheld device for a handheld device); storing the graphical representation of the 3D location in memory (Dryer- ¶0067, at least discloses graphics module 132 stores data representing graphics to be used; ¶0098, at least discloses In conjunction with touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, and/or delete a still image or video from memory 102.); receiving an input, via the client device, requesting the graphical representation of the 3D location (Dryer- Figs. 5K-5L and ¶0236-0237, at least disclose In FIG. 5K, while the scanning and modeling of the second portion of the physical environment is ongoing and while camera view 24 and preview 568 are being updated with graphical objects, […] as shown in FIG. 5K, detecting the start of the input includes detecting contact 616 at a location on touch screen 220 that corresponds to a portion of the partially completed three-dimensional model in preview 568. In FIG. 5K, device 100 further detects movement of contact 616 in a first direction across touch screen 220 (e.g., a swipe input or a drag input on the partially completed model in preview 568 to the right) […] In FIG. 
5L, in response to detecting the input that includes the movement in the first direction (e.g., in response to detecting the swipe input or drag input on the partially completed model in preview 568 in the first direction) [receiving an input, via the client device]); Fig. 5O and ¶0239, at least disclose In FIG. 5O, in response to detecting the user input that corresponds to a request to rescale the partially completed model in preview 568, the partially completed model of room 520 is enlarged); and in response to receiving the input requesting the graphical representation of the 3D location (As discussed above): retrieving the graphical representation of the 3D location from the memory (Buzdar- col 9, lines 16-20, at least discloses Once an augmented reality experience is selected, one or more images, videos, or augmented reality graphical elements are retrieved and presented as an overlay on top of the images or video captured by the client device 102; col 18, lines 40-44, at least discloses message video payload 408: video data, captured by a camera component or retrieved from a memory component of the client device 102 […] Video data for a sent or received message 400 may be stored in the video table 304; col 20, lines 31-39, at least discloses The AR experience development module 500 can include in the graphical user interface 600 an identifier of the AR experience bundle and a list of AR objects or AR elements 620 that are included in the AR experience bundle. The elements can include 2D meshes, 3D meshes, videos, audio files, image files, and/or machine learning models; col 24, lines 41-46, at least discloses the client device 102 stores the placement, location, and/or contextual information as persistently stored data in association with the user and the AR element 720. The placement, location, and/or contextual information can include an object classifier of the first real-world object 710, features of the first real-world object 710, 2D or 3D coordinates of the first real-world object 710 and/or the AR element 720, latitude and longitude coordinates or GPS coordinates of the first real-world object 710, and so forth); and providing the graphical representation of the 3D location to the client device (Dryer- Figs. 5G, 5H and ¶0213, at least disclose in response to detecting the completion of the detection and characterization of the edges of cabinet 548, the final state of graphical object 580 is displayed which is, optionally, a set of solid lines (e.g., a two-dimensional bounding box, a three-dimensional bounding box, or other types of outlines), have a uniform and lower luminance, have no feathering or reduced degree of feathering along all edges, and/or are uniformly opaque; ¶0229, at least discloses as detection and modeling of an object is completed, its corresponding representation (e.g., a three-dimensional representation, or a two-dimensional representation) is added to the partially completed three-dimensional model in preview 568)). It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Dryer/Bhushan to incorporate the teachings of Buzdar, and apply graphical elements are retrieved from a memory into the Dryer/Bhushan’s teachings for retrieving the graphical representation of the 3D location from the memory; and providing the graphical representation of the 3D location to the client device. The same motivation that was utilized in the rejection of claim 1 applies equally to this claim. 
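
Claim 6 and the Buzdar citations above both revolve around persisting a representation or placement data so it can be retrieved later, possibly on a different client device. The toy sketch below shows that store-and-resume pattern; the storage layout, file naming, and function names are assumptions for illustration only and are not Buzdar's disclosed design.

```python
# Toy store-and-resume pattern (assumed layout; not Buzdar's actual data model).
# Placement data is persisted under (user, experience) so the AR experience can be
# resumed later on the same or a different client device.
import json
from pathlib import Path

STORE = Path("ar_state")  # stand-in for persistent, non-transitory storage

def save_state(user_id: str, experience_id: str, state: dict) -> None:
    STORE.mkdir(exist_ok=True)
    (STORE / f"{user_id}_{experience_id}.json").write_text(json.dumps(state))

def resume_state(user_id: str, experience_id: str) -> dict | None:
    path = STORE / f"{user_id}_{experience_id}.json"
    return json.loads(path.read_text()) if path.exists() else None

if __name__ == "__main__":
    save_state("user1", "room_tour", {"lamp_556": {"xyz": [1.2, 0.0, 3.4], "style": "pin"}})
    # Later, possibly from a second client device:
    print(resume_state("user1", "room_tour"))
```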
The system of claims 11-14 and 16 are similar in scope to the functions performed by the method of claims 1-4 and 6 and therefore claims 11-14 and 16 are rejected under the same rationale. Regarding claim 11, Dryer in view of Bhushan and Buzdar, discloses a system (Dryer- Fig. 1A and ¶0037, at least disclose portable multifunction device 100 with touch-sensitive display system 112; Fig. 3A shows device 300; ¶0273, at least disclose the computer system (e.g., device 100, device 300, or another computer system described herein) displays (652), via the display generation component […] the physical environment is being viewed by the user either via a head mounted XR device or via a handheld device such as a smartphone or tablet that displays a representation of the field of view of the one or more cameras on a display of the handheld device for a handheld device) comprising: non-transitory memory ((Dryer- Fig. 1A and ¶0037, at least disclose Device 100 includes memory 102 (which optionally includes one or more computer readable storage mediums), memory controller 122); and control circuitry (Dryer- Figs. 1A, 3A show different circuitry) configured to: receive, from an extended reality (XR) device at a 3D location, a 3D representation of the 3D location, the XR device being operated at the 3D location (see Claim 1 rejection for detailed analysis); generate for display on a client device (Dryer- Figs. 1A, 3A show device 100, device 300) a graphical representation of the 3D location based on the 3D representation of the 3D location, the client device being operated at a location other than the 3D location (see Claim 1 rejection for detailed analysis); receive a first input, via the client device (Dryer- Fig. 1A and ¶0037, at least disclose Device 100 includes memory 102 (which optionally includes one or more computer readable storage mediums), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external port 124.), identifying a portion of the graphical representation and requesting information for the portion of the graphical representation (see Claim 1 rejection for detailed analysis); in response to receiving the first input (see Claim 1 rejection for detailed analysis): access a mapping between objects in the graphical representation of the 3D location and 3D objects in the 3D location (see Claim 1 rejection for detailed analysis); based on the mapping, identify a correspondence between a graphical object in the portion of the graphical representation and a 3D object in the 3D location (see Claim 1 rejection for detailed analysis); and transmit a signal to cause the XR device at the 3D location to generate for display an augmentation for the 3D object in the 3D location (see Claim 1 rejection for detailed analysis); receive, from the XR device, the requested information, wherein the requested information is generated based at least in part on the augmentation (see Claim 1 rejection for detailed analysis); generate for display on the client device the requested information (see Claim 1 rejection for detailed analysis); and store the requested information in the non-transitory memory for future display (see Claim 1 rejection for detailed analysis); and in response to receiving a second input, via a second client device, identifying the same portion of the graphical representation and requesting the same information 
for the portion of the graphical representation (see Claim 1 rejection for detailed analysis): retrieving the stored information from the non-transitory memory(see Claim 1 rejection for detailed analysis) ; and generating for display on the second client device the retrieved information (see Claim 1 rejection for detailed analysis). 5. Claims 5, 8-9, 15 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Dryer in view of Bhushan, further in view of Buzdar, still further in view of Fan et al. (“Fan”) [US-8,818,768-B1] Regarding claim 5, Dryer in view of Bhushan and Buzdar, discloses the method of claim 1, and further discloses wherein the identifying correspondence between an object in the portion of the graphical representation and the 3D object in the 3D location (see Claim 1 rejection for detailed analysis) and does not clearly disclose, but Fan discloses the method is performed by a server not local to the client device or the XR device (Fan- Fig. 12 and col 13, lines 46-48, at least disclose System 1200 includes a client 1202 coupled to a GIS server 1224 via one or more networks 1244, such as the Internet; col 14, lines 9-25, at least discloses User constraint module 1212 may receive at least one constraint, input by a user, for a two-dimensional photographic images from the set of two-dimensional photographic images received from GIS server 1224 […] Each constraint indicates that a position on the two-dimensional photographic image corresponds to a position on the three-dimensional model). It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Dryer/Bhushan/Buzdar to incorporate the teachings of Fan, and apply the GIS server into the Dryer/Bhushan/Buzdar’s teachings in order the identifying correspondence between an object in the portion of the graphical representation and the 3D object in the 3D location is performed by a server not local to the client device or the XR device. Doing so would allow the photographic images of the building may be texture mapped to the three-dimensional model to create a more realistic rendering of the building. Regarding claim 8, Dryer in view of Bhushan and Buzdar, discloses the method of claim 1, and discloses the method further comprising: identifying a correspondence between a graphical object and a 3D object in the 3D location (Dryer- Figs. 5L-5M shows a mapping between objects floor lamp 556 of graphical object 598 [objects in the graphical representation of the 3D location] and representation 556″ of floor lamp 556 [3D objects in the 3D location]; ¶0234, at least discloses In FIG. 5L, as the scan and modeling of the second portion of the physical environment continue, scan and modeling of floor lamp 556 is completed, and a final state of graphical object 598 is displayed [a graphical object] to indicate the spatial characteristics of floor lamp 556. In addition, representation 556″ [3D objects] of floor lamp 556 is added to the partially completed model of room 520 in preview 568 to a position to the right of representation 550″ of TV stand 550 [, identifying a correspondence between a graphical object and a 3D object in the 3D location]); and transmitting a second signal to cause the XR device at the 3D location to generate for display the augmentation for the 3D object in the 3D location (Dryer- Figs. 
5X, 5Y, 5Z shows an enlarged three-dimensional model 634 of room 520 that has been generated based on the completed scan and modeling of room 520; ¶0045, at least discloses The one or more input controllers 160 receive/send electrical signals [transmitting a second signal] from/to other input or control devices 116; ¶0136, at least discloses GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display [transmitting a signal]; ¶0259-0263, at least disclose device 100 ceases to display the partially completed three-dimensional model of room 520 and displays an enlarged three-dimensional model 634 of room 520 that has been generated based on the completed scan and modeling of room 520 […] the completed three-dimensional model 634 of room 520 is displayed [display an augmentation for the 3D object in the 3D location] in a user interface 636 that does not include camera view 524 […] In FIG. 5Y, in response to detecting the input that includes the movement in the first direction (e.g., in response to detecting the swipe input or drag input on the completed model 634 in user interface in the first direction), device 100 moves the completed three-dimensional model 634 in a first manner in accordance with the input (e.g., rotating and/or translating the completed model 634 in the first direction); ¶0273, at least discloses the computer system (e.g., device 100, device 300, or another computer system described herein) displays (652), via the display generation component, a first user interface (e.g., a scan user interface that is displayed to show progress of an initial scan of a physical environment to build a three-dimensional model of the physical environment, a camera user interface, and/or a user interface that is displayed in response to a user's request to perform a scan of a physical environment or to start an augmented reality session in a physical environment) […] the representation of the field of view including a first view of a physical environment that corresponds to a first viewpoint of a user in the physical environment (e.g., the first viewpoint of the user corresponds to a direction, position and/or vantage point from which the physical environment is being viewed by the user either via a head mounted XR device [the XR device] or via a handheld device such as a smartphone or tablet that displays a representation of the field of view of the one or more cameras on a display of the handheld device for a handheld device). The prior art does not clearly disclose, but Fan discloses receiving a set of coordinates, via the client device, on a portion of the graphical representation (Fan- col 3, line 66 to col 4, line 11, at least disclose Diagram 200 shows a three-dimensional model 202 and multiple photographic images 216 and 206 of a building. Images 216 and 206 were captured from cameras having different perspectives, as illustrated by camera 214 and 204 […] a user may input constraints on images 216 and 206, such as constraints 218 and 208, and those constraints may be used to determine the geometry of three-dimensional model 200. 
The geometry of three-dimensional model 202 may be specified by a set of geometric parameters, representing, for example, a position of an origin point (e.g., x, y, and z coordinates) [set of coordinates], a scale (e.g., height and width), an orientation (e.g., pan, tilt, and roll)); based on the set of coordinates, identifying a correspondence between a graphical object at the set of coordinates (Fan- col 3, line 66 to col 4, line 11, at least disclose Diagram 200 shows a three-dimensional model 202 and multiple photographic images 216 and 206 of a building. Images 216 and 206 were captured from cameras having different perspectives, as illustrated by camera 214 and 204 […] a user may input constraints on images 216 and 206, such as constraints 218 and 208, and those constraints may be used to determine the geometry of three-dimensional model 200. The geometry of three-dimensional model 202 may be specified by a set of geometric parameters, representing, for example, a position of an origin point (e.g., x, y, and z coordinates) [set of coordinates], a scale (e.g., height and width), an orientation (e.g., pan, tilt, and roll); Fig. 2 and col 5, lines 37-48, at least disclose Once three-dimensional model 202 is added, a user may add a first constraint 240. Constraint 240 maps a position on photographic image 216 to a position 242 on three-dimensional model 202. When the user adds constraint 240, three-dimensional model 202 is translated such that position 242 would appear as a location on photographic image 216 defined by constraint 240. Similarly, if a user moved constraint 240 on photographic image 216, three-dimensional model 202 would be translated such that position 242 follows constraint 240). It would have been obvious to one of ordinary in the art before the effective filing date of the claimed invention to have modified Dryer/Bhushan/Buzdar to incorporate the teachings of Fan, and apply the coordinates into the Dryer/Bhushan/Buzdar’s teachings for receiving a set of coordinates, via the client device, on a portion of the graphical representation; based on the set of coordinates, identifying a correspondence between a graphical object at the set of coordinates and a 3D object in the 3D location; and transmitting a second signal to cause the XR device at the 3D location to generate for display the augmentation for the 3D object in the 3D location. Doing so would allow the photographic images of the building may be texture mapped to the three-dimensional model to create a more realistic rendering of the building. Regarding claim 9, Dryer in view of Bhushan, Buzdar and Fan, discloses the method of claim 8, and further discloses wherein the set of coordinates is received by at least one of: a mouse click, a selection, a touch on a touchscreen, and a point of interest based on eye gaze (Dryer- ¶0138, at least discloses handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input-devices, not all of which are initiated on touch screens. 
For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc., on touch-pads; pen stylus inputs; inputs based on real-time analysis of video images obtained by one or more cameras; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized; Fan- col 3, line 64 to col 4, line 11, at least disclose FIG. 2 shows a diagram 200 illustrating creating a three-dimensional model from user selections in two-dimensional images […] The geometry of three-dimensional model 202 may be specified by a set of geometric parameters, representing, for example, a position of an origin point (e.g., x, y, and z coordinates)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Dryer/Bhushan/Buzdar to incorporate the teachings of Fan, and apply the user selections into the Dryer/Bhushan/Buzdar's teachings so that the set of coordinates is received by at least one of: a mouse click, a selection, a touch on a touchscreen, and a point of interest based on eye gaze. The same motivation that was utilized in the rejection of claim 8 applies equally to this claim. The system of claims 15, 18-19 are similar in scope to the functions performed by the method of claims 5, 8-9 and therefore claims 15 and 18-19 are rejected under the same rationale. 6. Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Dryer in view of Bhushan, further in view of Buzdar, still further in view of Fan, still further in view of Ryu et al., (“Ryu”) [US-2017/0154204-A1] Regarding claim 10, Dryer in view of Bhushan, Buzdar and Fan, discloses the method of claim 8, and further discloses wherein: the receiving the set of coordinates further comprises receiving a plurality of sets of coordinates from a plurality of client devices (Dryer- ¶0033, at least discloses Computer systems for augmented and/or virtual reality include electronic devices [client devices] that produce augmented and/or virtual reality environments. Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices; Fan- col 3, line 66 to col 4, line 11, at least disclose Diagram 200 shows a three-dimensional model 202 and multiple photographic images 216 and 206 of a building. Images 216 and 206 were captured from cameras having different perspectives, as illustrated by camera 214 and 204 […] a user may input constraints on images 216 and 206, such as constraints 218 and 208, and those constraints may be used to determine the geometry of three-dimensional model 200. The geometry of three-dimensional model 202 may be specified by a set of geometric parameters, representing, for example, a position of an origin point (e.g., x, y, and z coordinates) [set of coordinates], a scale (e.g., height and width), an orientation (e.g., pan, tilt, and roll)); determining a coordinate of interest to the plurality of sets of coordinates (Fan- col 3, line 66 to col 4, line 11, at least disclose Diagram 200 shows a three-dimensional model 202 and multiple photographic images 216 and 206 of a building. 
Images 216 and 206 were captured from cameras having different perspectives, as illustrated by camera 214 and 204 […] a user may input constraints on images 216 and 206, such as constraints 218 and 208, and those constraints may be used to determine the geometry of three-dimensional model 200. The geometry of three-dimensional model 202 may be specified by a set of geometric parameters, representing, for example, a position of an origin point (e.g., x, y, and z coordinates) [set of coordinates], a scale (e.g., height and width), an orientation (e.g., pan, tilt, and roll)); based on the coordinate of interest, identifying a correspondence between a graphical object at the coordinate of interest and a 3D object in the 3D location (Dryer- Figs. 5L-5M show a mapping between graphical object 598, which corresponds to floor lamp 556 [objects in the graphical representation of the 3D location], and representation 556″ of floor lamp 556 [3D objects in the 3D location]; ¶0234, at least discloses In FIG. 5L, as the scan and modeling of the second portion of the physical environment continue, scan and modeling of floor lamp 556 is completed, and a final state of graphical object 598 is displayed [a graphical object] to indicate the spatial characteristics of floor lamp 556. In addition, representation 556″ [3D objects] of floor lamp 556 is added to the partially completed model of room 520 in preview 568 to a position to the right of representation 550″ of TV stand 550 [, identifying a correspondence between a graphical object and a 3D object in the 3D location]; Fan- col 3, line 66 to col 4, line 11, at least disclose Diagram 200 shows a three-dimensional model 202 and multiple photographic images 216 and 206 of a building. Images 216 and 206 were captured from cameras having different perspectives, as illustrated by camera 214 and 204 […] a user may input constraints on images 216 and 206, such as constraints 218 and 208, and those constraints may be used to determine the geometry of three-dimensional model 200. The geometry of three-dimensional model 202 may be specified by a set of geometric parameters, representing, for example, a position of an origin point (e.g., x, y, and z coordinates) [set of coordinates], a scale (e.g., height and width), an orientation (e.g., pan, tilt, and roll)); and transmitting a third signal to cause the XR device at the 3D location to generate for display the augmentation for the 3D object in the 3D location (Dryer- Figs. 5X, 5Y, 5Z show an enlarged three-dimensional model 634 of room 520 that has been generated based on the completed scan and modeling of room 520; ¶0045, at least discloses The one or more input controllers 160 receive/send electrical signals [transmitting a third signal] from/to other input or control devices 116; ¶0136, at least discloses GUI updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display [transmitting a signal]; ¶0259-0263, at least disclose device 100 ceases to display the partially completed three-dimensional model of room 520 and displays an enlarged three-dimensional model 634 of room 520 that has been generated based on the completed scan and modeling of room 520 […] the completed three-dimensional model 634 of room 520 is displayed [display an augmentation for the 3D object in the 3D location] in a user interface 636 that does not include camera view 524 […] In FIG.
5Y, in response to detecting the input that includes the movement in the first direction (e.g., in response to detecting the swipe input or drag input on the completed model 634 in user interface in the first direction), device 100 moves the completed three-dimensional model 634 in a first manner in accordance with the input (e.g., rotating and/or translating the completed model 634 in the first direction); ¶0273, at least discloses the computer system (e.g., device 100, device 300, or another computer system described herein) displays (652), via the display generation component, a first user interface (e.g., a scan user interface that is displayed to show progress of an initial scan of a physical environment to build a three-dimensional model of the physical environment, a camera user interface, and/or a user interface that is displayed in response to a user's request to perform a scan of a physical environment or to start an augmented reality session in a physical environment) […] the representation of the field of view including a first view of a physical environment that corresponds to a first viewpoint of a user in the physical environment (e.g., the first viewpoint of the user corresponds to a direction, position and/or vantage point from which the physical environment is being viewed by the user either via a head mounted XR device [the XR device] or via a handheld device such as a smartphone or tablet that displays a representation of the field of view of the one or more cameras on a display of the handheld device).

The prior art does not clearly disclose, but Ryu discloses, determining a coordinate of interest based on applying a mathematical model (Ryu- ¶0063-0064, at least disclose By Equation (3) and the pairs of 2D and 3D coordinates, the problem to find 3D geometric correspondence is transformed into a problem to estimate parameters of a mathematical model from a set of observed data which contains outliers. RANSAC or other parameterization techniques such as Hough transform may be used).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Dryer/Bhushan/Buzdar/Fan to incorporate the teachings of Ryu, and apply the mathematical model into the Dryer/Bhushan/Buzdar/Fan's teachings for determining a coordinate of interest based on applying a mathematical model to the plurality of sets of coordinates. Doing so would ensure that the virtual objects are placed on the object in a realistic perspective that matches the perspective in the image, so that the scene with the inserted objects looks realistic to a person viewing the image.

System claim 20 is similar in scope to the functions performed by method claim 10, and therefore claim 20 is rejected under the same rationale.

Conclusion

7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. These references are recited on the attached PTO-892 form.

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE whose telephone number is (571) 272-5330. The examiner can normally be reached 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL LE/
Primary Examiner, Art Unit 2614
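For readers unfamiliar with the RANSAC technique the examiner cites from Ryu, the sketch below illustrates how a consensus "coordinate of interest" could be estimated from multiple client selections that contain outliers. This is a minimal illustration only, assuming hypothetical function and variable names; it is not the applicant's claimed method or the implementation disclosed in any cited reference.

```python
# Illustrative RANSAC-style consensus over coordinate selections from multiple
# client devices (hypothetical helper; not any cited reference's implementation).
import random

def coordinate_of_interest(selections, threshold=10.0, iterations=100):
    """Pick a consensus (x, y) from noisy selections, tolerating outliers.

    selections: list of (x, y) tuples received from client devices.
    threshold:  max distance for a selection to count as an inlier.
    """
    best_inliers = []
    for _ in range(iterations):
        cx, cy = random.choice(selections)  # candidate model: a single point
        inliers = [(x, y) for x, y in selections
                   if (x - cx) ** 2 + (y - cy) ** 2 <= threshold ** 2]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refine: average the inliers supporting the best candidate.
    n = len(best_inliers)
    return (sum(x for x, _ in best_inliers) / n,
            sum(y for _, y in best_inliers) / n)

# Example: three clients select near the same object; one selection is an outlier.
taps = [(101, 202), (99, 198), (103, 201), (400, 50)]
print(coordinate_of_interest(taps))  # roughly (101.0, 200.3)
```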

Prosecution Timeline

Jun 27, 2023: Application Filed
May 15, 2025: Non-Final Rejection — §103
Aug 19, 2025: Response Filed
Sep 30, 2025: Final Rejection — §103
Dec 10, 2025: Request for Continued Examination
Jan 08, 2026: Response after Non-Final Action
Jan 23, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579211: AUTOMATED SHIFTING OF WEB PAGES BETWEEN DIFFERENT USER DEVICES (2y 5m to grant; granted Mar 17, 2026)
Patent 12579738: INFORMATION PRESENTING METHOD, SYSTEM THEREOF, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM (2y 5m to grant; granted Mar 17, 2026)
Patent 12579072: GRAPHICS PROCESSOR REGISTER FILE INCLUDING A LOW ENERGY PORTION AND A HIGH CAPACITY PORTION (2y 5m to grant; granted Mar 17, 2026)
Patent 12573094: COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM (2y 5m to grant; granted Mar 10, 2026)
Patent 12558788: SYSTEM AND METHOD FOR REAL-TIME ANIMATION INTERACTIVE EDITING (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 88% (+22.1%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 864 resolved cases by this examiner. Grant probability derived from career allow rate.
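The headline projections can be reproduced directly from the examiner statistics reported earlier on this page. The sketch below shows the arithmetic, assuming (as the 66% and 88% figures suggest) that the interview lift is applied as additive percentage points; the variable names are illustrative only.

```python
# Reproducing the projection figures from the examiner's career data.
# Assumption: the interview lift is applied as additive percentage points,
# which is consistent with the 66% baseline and 88% with-interview figures shown.
granted, resolved = 568, 864
career_allow_rate = granted / resolved                # 0.657 -> displayed as 66%
interview_lift = 0.221                                # +22.1 points when an interview is held
with_interview = career_allow_rate + interview_lift   # 0.878 -> displayed as 88%

print(f"Baseline grant probability: {career_allow_rate:.0%}")
print(f"With interview:             {with_interview:.0%}")
```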
