Prosecution Insights
Last updated: April 18, 2026
Application No. 18/503,041

DIGITAL ASSISTANT PLACEMENT IN EXTENDED REALITY

Non-Final OA: §102, §103
Filed: Nov 06, 2023
Examiner: ORR, HENRY W
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 50% (Moderate)
OA Rounds: 1-2
To Grant: 3y 10m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 50% (230 granted / 456 resolved; -4.6% vs TC avg)
Interview Lift: +37.2% for resolved cases with an interview (strong)
Typical Timeline: 3y 10m average prosecution; 29 applications currently pending
Career History: 485 total applications across all art units
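These headline figures are simple ratios of the counts shown above. A minimal sketch of the presumed arithmetic, assuming the interview lift is a percentage-point gap between with-interview and without-interview allowance rates (the report does not state the exact definition):

```python
# Presumed derivation of the Examiner Intelligence figures (illustrative only).
granted, resolved = 230, 456                 # counts shown in the report
career_allow_rate = granted / resolved       # ~0.504, displayed as 50%

# The report shows 88% allowance with an interview and a "+37.2%" lift.
# Assumption: the lift is a percentage-point gap between the two cohorts.
allow_with_interview = 0.88                  # from the report
interview_lift = 0.372                       # from the report
allow_without_interview = allow_with_interview - interview_lift  # ~0.508 implied

print(f"career allow rate: {career_allow_rate:.1%}")
print(f"implied allowance without interview: {allow_without_interview:.1%}")
```

The cohort sizes behind the 88% figure are not disclosed, so the split is left as rates rather than counts.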

Statute-Specific Performance

§101: 6.8% (-33.2% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)
Tech Center averages are estimates; figures based on career data from 456 resolved cases.
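The "vs TC avg" deltas can be back-solved to recover the implied Tech Center baselines. A small sketch; the report does not say exactly what the per-statute percentage measures (presumably how often this examiner's office actions raise each statute), so treat that label as an assumption:

```python
# Back-solving the implied Tech Center baselines from the displayed deltas.
# The per-statute figures are taken from the report; their exact meaning is assumed.
examiner_rate = {"101": 0.068, "103": 0.534, "102": 0.194, "112": 0.151}
delta_vs_tc = {"101": -0.332, "103": 0.134, "102": -0.206, "112": -0.249}

implied_tc_avg = {s: examiner_rate[s] - delta_vs_tc[s] for s in examiner_rate}
for statute, avg in implied_tc_avg.items():
    print(f"§{statute}: implied TC average ~ {avg:.1%}")
# Each delta resolves to an implied baseline of roughly 40%, consistent with the
# note above that the Tech Center average is an estimate rather than per-statute data.
```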

Office Action

§102 §103
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION
1. This action is responsive to application communication filed on 11/6/2023.
2. Claims 1-20 are pending in the case.
3. Claims 1, 19 and 20 are independent claims.

Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 8-14 and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gorur et al. (hereinafter “Gorur”), U.S. Published Application No. 20190026936 A1.

Claim 1: Gorur teaches A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to: (e.g., storage of mobile device see Figure 2) while providing an extended reality (XR) environment: receive a first input corresponding to a request to initiate a digital assistant; (e.g., spoken word input “open the virtual assistant” to request a digital assistant par. 73; Referring back to FIG. 3, mobile device 200 can receive user input that invokes generation and presentation of an item of virtual content, such as a virtual assistant, at a corresponding placement position within the visible portion of augmented reality environment (e.g., in block 304). The received user input may include an utterance spoken by the user (e.g., “open the virtual assistant”), which is captured by a microphone incorporated within mobile device 200.) in response to receiving the first input, initiate a first instance of a digital assistant session; in accordance with initiating the first instance of the digital assistant session, display a digital assistant indicator at a first location in the XR environment; (e.g., in response to spoken input, displaying virtual assistant at a position within the XR environment par. 73; In response to the established request, mobile device 200 can perform any of the exemplary processes described herein to generate the virtual assistant, determine a placement position for the virtual assistant that conforms to any constraints that may be imposed by the augmented reality environment, and insert the virtual assistant into the augmented reality environment at the placement position.) while providing a first view of the XR environment, dismiss the first instance of the digital assistant session, including ceasing to display the digital assistant indicator at the first location in the first view; (e.g., in response to spoken input “remove the virtual assistant”, dismiss the display of the virtual assistant par. 113; In one example, the spoken input corresponds to a spoken command to revoke the virtual assistant (e.g., “remove the virtual assistant”), and operations module 244 can perform any of the processes described above to remove the virtual assistant from the augmented reality environment.)
and after dismissing the first instance of the digital assistant session: receive a second input corresponding to a request to initiate the digital assistant; (e.g., in response to spoken input and/or gesture, displaying virtual assistant at a position within the XR environment Examiner notes that invoking and dismissing virtual assistant may be repeated par. 73; In response to the established request, mobile device 200 can perform any of the exemplary processes described herein to generate the virtual assistant, determine a placement position for the virtual assistant that conforms to any constraints that may be imposed by the augmented reality environment, and insert the virtual assistant into the augmented reality environment at the placement position. Par. 75; The subject matter is not limited to spoken input that invokes the placement of an item of virtual content within the augmented reality environment, when detected by mobile device 200. In other examples, digital camera 212 of mobile device 102 can capture digital image data that includes a gestural input provided by the user to mobile device 200, such as a hand gesture or a pointing motion. ) and in response to receiving the second input and while providing a second view of the XR environment, initiate a second instance of a digital assistant session, including: (e.g., requesting to display virtual assistant after the user changes their orientation with respect to XR environment results in a second view initiating a digital assistant session par. 77; The orientation of mobile device 200 establishes a vector 418 that specifies a direction in which the user of mobile device 200 faces while accessing the augmented reality environment. In other instances (not illustrated in FIG. 4B), mobile device 200 or XR computing system 130 can determine the orientation of the user based on an orientation of at least a portion of the user's body (e.g., the user's head or eyes) with respect to the augmented reality environment or a portion of mobile device 200, such as a surface of display unit 208.) in accordance with a determination that a difference between the first view and the second view satisfies a set of criteria, displaying the digital assistant indicator at the first location in the second view. (e.g., determining that the first view has no appropriate placement position and the previous view has an appropriate placement position (i.e., difference between the first view and second view satisfies a set of criteria), displaying the virtual assistant in the second view par. 106; Position determination module 170 may perform any of the operations described herein to determine that each candidate position of the virtual assistant represents a hazard (e.g., associated with an infinite value of c.sub.cons), and thus, position determination module 170 is constrained from placing the virtual assistant within the visible portion of the augmented reality environment. par. 107; Based on this determination, position determination module 170 can access stored data identifying portions of the augmented reality environment previously visible to user 601 (e.g., as maintained by storage media 132 of XR computing system 130 or storage media 211 of mobile device 200). Par. 108; Further, in some instances, position determination module 170 may determine that no previously visible or adjacently disposed portion of the augmented reality environment is suitable for the placement of the virtual assistant.) 
In response to this determination, position determination module 170 can generate an error signal that causes mobile device 200 or XR computing system 130 to maintain the invisibility of the virtual assistant within the augmented reality environment, e.g., during a specified future time period or until detection of an additional gestural or spoken input by user 601.) Claim 2 depends on claim 1: Gorur teaches wherein initiating the first instance of the digital assistant session includes displaying the digital assistant indicator at a second location in the XR environment, wherein the second location is different from the first location. (e.g., moving or placement of virtual assistant from one location to another par. 75; The subject matter is not limited to spoken input that invokes the placement of an item of virtual content within the augmented reality environment, when detected by mobile device 200. In other examples, digital camera 212 of mobile device 102 can capture digital image data that includes a gestural input provided by the user to mobile device 200, such as a hand gesture or a pointing motion. par. 112; Then the mobile device 200 or XR computing system 130 can identify a portion of the augmented reality environment that is visible to the user of mobile device 200 based on the newly changed device state, and perform any of the processes described herein to move the virtual assistant to a position to the portion of the augmented reality environment that is visible to the user.) Claim 3 depends on claim 2: Gorur teaches wherein the second location is a default location. (e.g., when no appropriate location is determined for the current view, display at a secondary location based on a previous view. Examiner considers any secondary location to be a “default” location when the primary location is not determined to be appropriate. par. 106; Position determination module 170 may perform any of the operations described herein to determine that each candidate position of the virtual assistant represents a hazard (e.g., associated with an infinite value of c.sub.cons), and thus, position determination module 170 is constrained from placing the virtual assistant within the visible portion of the augmented reality environment. par. 107; Based on this determination, position determination module 170 can access stored data identifying portions of the augmented reality environment previously visible to user 601 (e.g., as maintained by storage media 132 of XR computing system 130 or storage media 211 of mobile device 200).) Claim 4 depends on claim 3: Gorur teaches wherein the default location corresponds to a current front-facing direction of the electronic device. ( e.g., determining a secondary location (i.e., default location) within previous visible portion corresponding to a direction of the mobile device par. 77; The orientation of mobile device 200 establishes a vector 418 that specifies a direction in which the user of mobile device 200 faces while accessing the augmented reality environment. Par. 85; Users 601 and 602 may be disposed within visible portion 600 at corresponding positions, and may be oriented to face in directions specified by respective ones of vectors 601A and 602A. par. 
107; Position determination module 170 can then perform any of the operations described herein to determine an appropriate placement position for the virtual assistant within the previously visible portion of the augmented reality environment, or alternatively, within the adjacently disposed portion of the augmented reality environment. Claim 5 depends on claim 3: Gorur teaches wherein the default location is located a predetermined distance away from the electronic device. (e.g., determining either the distance or displacement for any secondary location based on previous visible portions par. 89; FIG. 6B shows the spatial relationship between a user 601 and a candidate virtual assistant 604A. The user 601 and the candidate virtual assistant 604A are separated by a distance d.sub.au. In FIG. 6C, c.sub.angle characterizes a viewing angle of the virtual assistant at the candidate position relative to the user at the determined position as a function of the angle between V.sub.a and V.sub.u. In FIG. 6D, c.sub.dis characterizes a displacement between the candidate position of candidate virtual assistant 604A and determined position of the user 601. Par. 93; As illustrated in FIG. 6B, position determination module 170 can compute a displacement 614 between user 601 and candidate virtual assistant 604A in the augmented reality environment. Position determination module 170 can also determine the corresponding value of c.sub.dis for user 601 and the candidate position associated with candidate virtual assistant 604A based on a predetermined, empirical relationship 624 shown in FIG. 6D. As illustrated in FIG. 6D, the value of c.sub.dis is at a minimum for displacements of approximately five to six feet, and is at a maximum for displacements approaching zero or approaching, and exceeding, fifteen feet. ) Claim 6 depends on claim 2: Gorur teaches wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the electronic device to: before dismissing the first instance of the digital assistant session: receive a third input corresponding to a request to move the digital assistant indicator from the second location, wherein the digital assistant indicator is displayed at the first location in response to receiving the third input. (e.g., moving or placement of virtual assistant from one location to another based on spoken input and gesture par. 75; The subject matter is not limited to spoken input that invokes the placement of an item of virtual content within the augmented reality environment, when detected by mobile device 200. In other examples, digital camera 212 of mobile device 102 can capture digital image data that includes a gestural input provided by the user to mobile device 200, such as a hand gesture or a pointing motion. par. 112; Then the mobile device 200 or XR computing system 130 can identify a portion of the augmented reality environment that is visible to the user of mobile device 200 based on the newly changed device state, and perform any of the processes described herein to move the virtual assistant to a position to the portion of the augmented reality environment that is visible to the user. Par. 117; User 601 may perform a pointing motion 1002 within the real-world environment in response to content accessed within the extended reality environment. 
As described above, the extended reality environment may include an augmented reality environment that enables user 601 to explore the train shed of Pennsylvania Station, and user 101 may perform pointing motion 1002 in an attempt to obtain additional information from the virtual assistant 1028 regarding the steam locomotive on the platform. ) Claim 8 depends on claim 1: Gorur teaches wherein: the first view depicts the XR environment from a first perspective corresponding to a first direction; (e.g., view based on vector that specifies a direction par. 77; The orientation of mobile device 200 establishes a vector 418 that specifies a direction in which the user of mobile device 200 faces while accessing the augmented reality environment.) the second view depicts the XR environment from a second perspective corresponding to a second direction; (e.g., view based on vector that specifies a direction par. 77; The orientation of mobile device 200 establishes a vector 418 that specifies a direction in which the user of mobile device 200 faces while accessing the augmented reality environment.) and determining that the difference between the first view and the second view satisfies the set of criteria includes determining that a difference between the first direction and the second direction is less than a threshold difference. (e.g., change in the device state considers vectors that specify direction Therefore, change of device state exceeding a predetermined threshold implies a threshold difference between first and second direction specified by the vectors par. 77; The position of mobile device 200 can be represented as one or more latitude, longitude, or altitude values. As illustrated in FIG. 4B, the orientation of mobile device 200 can be represented by a value of a roll (e.g., an angle about a longitudinal axis 414), a value of a pitch (e.g., an angle about a transverse axis 416), and a value of a yaw (e.g., an angle about vertical axis 412). The orientation of mobile device 200 establishes a vector 418 that specifies a direction in which the user of mobile device 200 faces while accessing the augmented reality environment. Par. 91; Position determination module 170 can compute a difference, in degrees, between direction vectors 608 and 606, and can determine the corresponding value of c.sub.angle for user 601 and the candidate position associated with candidate virtual assistant 604A based on a predetermined, empirical relationship 622 shown in FIG. 6C. par. 111; A change in the device state may trigger the repositioning of the virtual assistant or modify objects or users within the visible portion of the augmented reality environment (e.g., to include new objects, etc.). For example, the change in the device state may trigger the repositioning of the virtual assistant when a magnitude of the change exceeds a predetermined threshold amount.) Claim 9 depends on claim 1: Gorur teaches wherein: the first view depicts first content of the XR environment; the second view depicts second content of the XR environment; (e.g., first view before repositioning and second view of content after repositioning par. 111; Mobile device 200 or XR computing system 130 can perform operations that determine whether a change in a device state, such as a change in the position or orientation of mobile device 200, triggers a repositioning of the virtual assistant within the visible portion of the augmented reality environment (e.g., in block 322). 
A change in the device state may trigger the repositioning of the virtual assistant or modify objects or users within the visible portion of the augmented reality environment (e.g., to include new objects, etc.). For example, the change in the device state may trigger the repositioning of the virtual assistant when a magnitude of the change exceeds a predetermined threshold amount.) and determining that the difference between the first view and the second view satisfies the set of criteria includes determining that the second content includes at least a threshold amount of the first content. (e.g., second view of AR content including virtual assistant (i.e., at least threshold amount of the first content) does not reposition until a magnitude of the device state change exceeds a predetermined amount par. 111; A change in the device state may trigger the repositioning of the virtual assistant or modify objects or users within the visible portion of the augmented reality environment (e.g., to include new objects, etc.). For example, the change in the device state may trigger the repositioning of the virtual assistant when a magnitude of the change exceeds a predetermined threshold amount.)

Claim 10 depends on claim 1: Gorur teaches wherein: the electronic device is at a fourth location while providing the first view; the electronic device is at a fifth location while providing the second view; (e.g., tracking position of mobile device determines location of the device while providing first or second view par. 77; The position of mobile device 200 can be represented as one or more latitude, longitude, or altitude values. As illustrated in FIG. 4B, the orientation of mobile device 200 can be represented by a value of a roll (e.g., an angle about a longitudinal axis 414), a value of a pitch (e.g., an angle about a transverse axis 416), and a value of a yaw (e.g., an angle about vertical axis 412). The orientation of mobile device 200 establishes a vector 418 that specifies a direction in which the user of mobile device 200 faces while accessing the augmented reality environment.) and determining that the difference between the first view and the second view satisfies the set of criteria includes determining that the fourth location and the fifth location are within a threshold distance of each other. (e.g., if virtual assistant is not repositioned due to magnitude of the change not exceeding a predetermined threshold amount, it is implied that the fourth and fifth location are within a threshold distance of each other par. 111; A change in the device state may trigger the repositioning of the virtual assistant or modify objects or users within the visible portion of the augmented reality environment (e.g., to include new objects, etc.). For example, the change in the device state may trigger the repositioning of the virtual assistant when a magnitude of the change exceeds a predetermined threshold amount.)

Claim 11 depends on claim 1: Gorur teaches wherein: the electronic device is at a sixth location while providing the second view; (e.g., tracking position of mobile device determines location of the device while providing first or second view par. 77; The position of mobile device 200 can be represented as one or more latitude, longitude, or altitude values. As illustrated in FIG.
4B, the orientation of mobile device 200 can be represented by a value of a roll (e.g., an angle about a longitudinal axis 414), a value of a pitch (e.g., an angle about a transverse axis 416), and a value of a yaw (e.g., an angle about vertical axis 412). The orientation of mobile device 200 establishes a vector 418 that specifies a direction in which the user of mobile device 200 faces while accessing the augmented reality environment.) and determining that the difference between the first view and the second view satisfies the set of criteria includes determining that the sixth location and the first location are within a second threshold distance of each other. (e.g., if virtual assistant is not repositioned due to magnitude of the change not exceeding a predetermined threshold amount, it is implied that the sixth and first location are within a threshold distance of each other par. 111; A change in the device state may trigger the repositioning of the virtual assistant or modify objects or users within the visible portion of the augmented reality environment (e.g., to include new objects, etc.). For example, the change in the device state may trigger the repositioning of the virtual assistant when a magnitude of the change exceeds a predetermined threshold amount.)

Claim 12 depends on claim 1: Gorur teaches wherein: determining that the difference between the first view and the second view satisfies the set of criteria includes determining that the second view depicts the first location. (e.g., if virtual assistant is not repositioned due to magnitude of the change not exceeding a predetermined threshold amount, it is implied that the second view depicts the first location of the virtual assistant that is not repositioned. par. 111; A change in the device state may trigger the repositioning of the virtual assistant or modify objects or users within the visible portion of the augmented reality environment (e.g., to include new objects, etc.). For example, the change in the device state may trigger the repositioning of the virtual assistant when a magnitude of the change exceeds a predetermined threshold amount.)

Claim 13 depends on claim 1: Gorur teaches wherein: the first input includes a spoken trigger for initiating the digital assistant. (e.g., user's utterance of “open the virtual assistant.” par. 69; For example, the generated textual data may correspond to the user's utterance of “open the virtual assistant.” Operations module 244 can associate the utterance with a request by the user to invoke the virtual assistant and dispose the virtual assistant at a placement position that conforms to constraints imposed by the extended reality environment)

Claim 14 depends on claim 1: Gorur teaches wherein the one or more programs further comprise instructions, which when executed by the one or more processors, cause the electronic device to: before dismissing the first instance of the digital assistant session: receive a natural language input; and concurrently display a response affordance with the digital assistant indicator, wherein the response affordance corresponds to a response, generated by the digital assistant, based on the natural language input. (e.g., displaying locomotive information (i.e., response affordance) with virtual indicator in response to spoken input and gesture par.
20; The mobile device can apply speech recognition tools or natural-language processing algorithms to the captured utterance to determine a context of the spoken utterance and perform additional operations corresponding to the determined context. For example, the additional operations can include presentation of additional elements of digital content corresponding to the user's navigation through the augmented reality environment. Other examples include processes that obtain information from one or more computing systems in response to a query spoken by the user. par. 113; In response to the establishment of the virtual assistant within the visible portion of the augmented reality environment, the user of mobile device 200 may interact with the virtual assistant and provide spoken or gestural input to mobile device 102 specifying commands, requests, or queries. In response to the spoken input, mobile device 102 can perform any of the processes described above to parse the spoken query and obtain textual data corresponding to a command, request, or query associated with the spoken input, and to perform operations corresponding to the textual data. Par. 117; As described above, the extended reality environment may include an augmented reality environment that enables user 601 to explore the train shed of Pennsylvania Station, and user 101 may perform pointing motion 1002 in an attempt to obtain additional information from the virtual assistant 1028 regarding the steam locomotive on the platform.) Claim 16 depends on claim 14: Gorur teaches wherein: displaying the response affordance includes orienting the response affordance to face the electronic device. (e.g., modify the position of the item of virtual content to face eyewear of user par. 31; The extended reality computing system or the mobile device can also modify the position of the item of virtual content within the extended reality environment in response to a change in the state of the mobile device or the display unit (e.g., a change in the position or orientation of the mobile device or display unit within the extended reality environment). Par. 33; In other examples, mobile devices 102 and 104 include augmented reality eyewear (e.g., glasses) that include one or more lenses for displaying graphical content (such as augmented reality information layers) over real-world objects that are viewable through such lenses to establish an augmented reality environment. par. 46; Position determination module 170 provides a means for determining a position for placement of the item of virtual content in the extended reality environment at least partially based on the determined position and orientation of the user.) 
Claim 17 depends on claim 14: Gorur teaches wherein: initiating the first instance of the digital assistant session includes displaying the digital assistant indicator at a seventh location different from the first location; before dismissing the first instance of the digital assistant session, the response affordance is displayed at an eighth location above the seventh location; and the one or more programs further comprise instructions, which when executed by the one or more processors, cause the electronic device to: before dismissing the first instance of the digital assistant session: receive a fourth input corresponding to a request to move the digital assistant indicator from the seventh location to the first location, wherein the digital assistant indicator is displayed at the first location in response to receiving the fourth input; and in response to receiving the fourth input, display the response affordance at a ninth location above the first location. (e.g., repositioning virtual assistant and additional information from virtual assistant to new locations based on pointing gesture par. 117; User 601 may perform a pointing motion 1002 within the real-world environment in response to content accessed within the extended reality environment. As described above, the extended reality environment may include an augmented reality environment that enables user 601 to explore the train shed of Pennsylvania Station, and user 101 may perform pointing motion 1002 in an attempt to obtain additional information from the virtual assistant 1028 regarding the steam locomotive on the platform. par. 118; Further, operations module 244, when executed by processor 218, can obtain data indicative of the detected pointing motion 1002. Based on portions of gesture library 234, operations module 244 can determine that pointing motion 1002 corresponds to a request to reposition virtual assistant 1028 and obtain additional information characterizing an object associated with pointing motion 1002, e.g., the steam locomotive.) Claim 18 depends on claim 14: Gorur teaches wherein: dismissing the first instance of the digital assistant session includes ceasing to display the response affordance. (e.g., in response to spoken input “remove the virtual assistant, dismiss the display of the virtual assistant Examiner considers the virtual assistant and additional information from the virtual assistant to be tied together, during the “remove” command par. 113; In one example, the spoken input corresponds to a spoken command to revoke the virtual assistant (e.g., “remove the virtual assistant”), and operations module 244 can perform any of the processes described above to remove the virtual assistant from the augmented reality environment. Par. 114; Mobile device 200 can also perform any of the processes described above to generate graphical content or audio content (including synthesized speech) that represents the obtained information, and to present the generated graphical or audio content through the virtual assistant's interaction with the user. Par. 115; The virtual assistant may be disposed on a platform adjacent to one or more cars of a train, and may be providing synthesized studio content outlining the history of the station and the Pennsylvania Railroad. Par. 
125; In other instances, XR computing system 130 can generate the animated representation of virtual assistant 1028, along with the graphical or audio content representative of the information describing the identified object using any of the processes described herein, and can transmit the animated representation and the graphical or audio content across network 120 to mobile device 200. ) Independent Claim 19: Claim 19 is substantially encompassed in claim 1, therefore, Examiner relies on the same rationale set forth in claim 1 to reject claim 19. Independent Claim 20: Claim 20 is substantially encompassed in claim 1, therefore, Examiner relies on the same rationale set forth in claim 1 to reject claim 20. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Gorur as cited above, in view of Lee et al. (hereinafter “Lee”), U.S. Patent No. 11270672 B2. Claim 7 depends on claim 1: Gorur fails to expressly teach wherein initiating the second instance of the digital assistant session further includes: in accordance with a determination that the difference between the first view and the second view does not satisfy the set of criteria: displaying the digital assistant indicator at a third location in the second view. However, Lee teaches wherein initiating the second instance of the digital assistant session further includes: in accordance with a determination that the difference between the first view and the second view does not satisfy the set of criteria: displaying the digital assistant indicator at a third location in the second view. (e.g., if first view and second view do not comprise the same default position (i.e., both first and second views do not satisfy a criteria of having the same default position) , the virtual assistant will no longer be world locked to the default position but device-locked, appearing in a virtual window as a third location within the second view (col. 3 line 32; wherein initiating the second instance of the digital assistant session further includes: in accordance with a determination that the difference between the first view and the second view does not satisfy the set of criteria: displaying the digital assistant indicator at a third location in the second view. Col. 6 line 45; FIG. 9 shows an example method 900 of displaying a virtual assistant. 
Method 900 includes, at 902, displaying on a see-through display of an augmented reality display device (e.g. augmented reality display device 100) a virtual assistant associated with a location in a real-world environment, such that the virtual assistant is world-locked. Method 900 further includes, at 904, detecting a change in a field of view of the see-through display of the augmented reality display device 100. When the virtual assistant is determined to be out of the field of view after the change, method 900 comprises, at 906, displaying a representation of the virtual assistant in a virtual window on the see-through display. The virtual window may be associated with a location on the see-through display, such that its position is fixed relative to the see-through display (device-locked).) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual assistant placement determination method as taught by Gorur to include displaying a device-locked virtual assistant when the views no longer comprise a default position as taught by Lee, with a reasonable expectation of success, to provide the benefit of allowing a user to move and improving the virtual assistant's transition back into view in an effort to avoid a disruptive user experience (see Lee; col. 2 lines 20-36).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Gorur as cited above, in view of Ballard et al. (hereinafter “Ballard”), U.S. Patent No. 9779517 B2.

Claim 15 depends on claim 14: Gorur fails to expressly teach wherein: the digital assistant indicator is world locked; and the response affordance is world locked. However, Ballard teaches wherein: the digital assistant indicator is world locked; and the response affordance is world locked. (e.g., assigning “world locked” attribute to AR content col. 11 line 43; In some embodiments, the software application of the AR device may present an option to the user to enter an AR content “locked mode” where the AR content will appear to remain at its repositioned location relative to a certain reference user view vector, v. In locked mode, movements affecting the position of AR device 501 relative to coordinate system x, y, z may be duplicated in rendering the AR content 502.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the AR content as taught by Gorur to include a “world locked” feature as taught by Ballard with a reasonable expectation of success, to provide the benefit of allowing the user to move freely within an environment without modifying or losing the relative position, or of allowing the user to “carry” the AR content to a new user location while maintaining the desired relative positioning of the AR content (see Ballard; col. 11 lines 55-67, col. 12 lines 1-5).

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ebstyne; Michael John et al. US 20150317831 A1 See abstract; The virtual objects transition between having a position that is body-locked and a position that is world-locked based on various transition events.
Hoover; Paul et al. US 20200218074 A1 Par. 136; As another example, if the threshold distance is smaller than the vertical size of the screen, the AR system may not begin to orient the virtual screen 1430 until the virtual screen 1430 gets much closer to the wall.
In various implementations, the threshold distance can be set by the user or can be set to default values (e.g., such as the size of the object). The threshold distance may change based on the user's experience with the AR system. For example, if the user finds that small threshold distance leads to rapid reorientation of the virtual objects and is distracting, the user may reset the threshold distance to be larger so that the reorientation occurs more gradually over a greater distance. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY ORR whose telephone number is (571)270-1308. The examiner can normally be reached 9AM-5PM EST M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571)272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HENRY ORR/ Primary Examiner, Art Unit 2172
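For orientation, the placement logic that the rejection maps between the claims and the cited art can be summarized in a few lines. This is a schematic sketch of the claim language as recited (claims 1 and 7-12), not Gorur's, Lee's, or the applicant's actual implementation; the type names, threshold values, and the conjunctive combination of criteria are illustrative assumptions:

```python
import math
from dataclasses import dataclass

@dataclass
class View:
    """Schematic stand-in for a 'view of the XR environment' as recited in the claims."""
    direction_deg: float        # viewing direction (claim 8)
    device_xyz: tuple           # device location while providing the view (claims 10-11)
    visible_locations: frozenset  # locations depicted in the view (claim 12)

# The claims recite only "a threshold"; these numbers are illustrative.
DIRECTION_THRESHOLD_DEG = 30.0
DISTANCE_THRESHOLD = 2.0

def difference_satisfies_criteria(first: View, second: View, first_location) -> bool:
    # One reading of the recited "set of criteria": small change in viewing
    # direction, device has not moved far, and the prior indicator location
    # is still depicted in the second view.
    small_turn = abs(first.direction_deg - second.direction_deg) < DIRECTION_THRESHOLD_DEG
    device_nearby = math.dist(first.device_xyz, second.device_xyz) < DISTANCE_THRESHOLD
    still_depicted = first_location in second.visible_locations
    return small_turn and device_nearby and still_depicted

def place_indicator_on_reinvocation(first: View, second: View, first_location, fallback_location):
    # Claim 1: on the second invocation, redisplay the indicator at the first
    # location when the view difference satisfies the criteria.
    # Claim 7 (rejected over Gorur in view of Lee): otherwise display it at a
    # different, third location in the second view.
    if difference_satisfies_criteria(first, second, first_location):
        return first_location
    return fallback_location

# Example: a small head turn with the device held roughly in place keeps the
# indicator at its prior spot.
v1 = View(direction_deg=0.0, device_xyz=(0.0, 0.0, 0.0), visible_locations=frozenset({"desk"}))
v2 = View(direction_deg=12.0, device_xyz=(0.3, 0.0, 0.0), visible_locations=frozenset({"desk", "window"}))
print(place_indicator_on_reinvocation(v1, v2, "desk", "default_spot"))  # -> "desk"
```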

Prosecution Timeline

Nov 06, 2023: Application Filed
Dec 04, 2024: Response after Non-Final Action
Jan 02, 2026: Non-Final Rejection — §102, §103
Mar 10, 2026: Examiner Interview Summary
Mar 10, 2026: Applicant Interview (Telephonic)
Mar 30, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578851: SYSTEMS, METHODS, AND GRAPHICAL USER INTERFACES FOR GENERATING SHORT RUN CONTROL CHARTS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572268: ACCELERATED SCROLLING AND SELECTION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561589: SYSTEM AND METHOD FOR INDUSTRIAL AUTOMATION RULES ENGINE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12547304: INFORMATION PROCESSING SYSTEM, METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR DISPLAYING ENLARGEED IMAGE CORRESPONDING TO A FILE IMAGE (granted Feb 10, 2026; 2y 5m to grant)
Patent 12530968: MAP-BASED EMERGENCY CALL MANAGEMENT AND DISPATCH (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 50%
With Interview: 88% (+37.2%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 456 resolved cases by this examiner. Grant probability derived from career allow rate.
