Prosecution Insights
Last updated: April 19, 2026
Application No. 18/738,452

Coordination of Interactions of Virtual Objects

Non-Final OA (§103, §DP)
Filed
Jun 10, 2024
Examiner
TSWEI, YU-JANG
Art Unit
2614
Tech Center
2600 — Communications
Assignee
Meta Platforms Technologies, LLC
OA Round
1 (Non-Final)
Grant Probability: 84% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (above average; 376 granted / 447 resolved; +22.1% vs TC avg)
Interview Lift: +17.0% (resolved cases with interview)
Typical Timeline: 2y 5m avg prosecution (44 currently pending)
Career History: 491 total applications (across all art units)

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 66.4% (+26.4% vs TC avg)
§102: 5.6% (-34.4% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 447 resolved cases

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 3, 4, 5, 6, 7, 9, 10, (12, 13), 14, 15, 16, 17, 18, 6, 7, 9, 10, 19 of App. No. 17/670,946 (now US Patent 12067688 B2). Although the claims at issue are not identical, they are not patentably distinct from each other because they claim the same subject matter and limitations, as explained below. Claim 1 is determined to be obvious in light of claim 1 of 17/670,946 (now US Patent 12067688 B2) for having similar limitations, as shown below.
Instant Application Claim 1: A method comprising: obtaining, by an application in control of an artificial reality environment, an indication of a first virtual object; registering, in relation to a second virtual object, one or more events associated with the first virtual object, by: providing, based on the obtained indication of the first virtual object, identifications of one or more virtual objects, in the artificial reality environment, including the first virtual object; and receiving a request for the registration, in relation to the second virtual object, of the one or more events associated with the first virtual object; and providing, based on the registration and in relation to the second virtual object, a notification of an event, wherein the event indicates A) a change in position of the first virtual object relative to the second virtual object and/or B) an interaction of the first virtual object with the second virtual object; wherein, based on the notification of the event, an internal state for the second virtual object is modified and a display property for the second virtual object is updated.

17/670,946 Claim 1: A method of coordinating interactions between multiple virtual objects in an artificial reality environment, the method comprising: receiving, by a shell application in control of the artificial reality environment, an indication of a first virtual object, the first virtual object comprising first properties, wherein the artificial reality environment is set in a real-world environment; registering, with the shell application, the first properties of the first virtual object; receiving, by the shell application, one or more queries from a second virtual object; in response to the one or more queries, responding to the second virtual object with: identified features of the real-world environment in which the artificial reality environment is set, and identifications of one or more virtual objects, in the artificial reality environment, including the first virtual object and first properties of the first virtual object, the first properties comprising an anchor point and a view state, wherein the second virtual object uses the identification of the first virtual object to register for events related to the first virtual object; and identifying that an event related to the first virtual object, for which the second virtual object is registered, occurred and, in response, notifying the second virtual object of the event, wherein the event indicates A) a change in position of the first virtual object relative to the second virtual object and/or B) an interaction of the first virtual object with the second virtual object, and wherein the second virtual object invokes a rule, of the second virtual object, based on notification of the event, thereby causing the second virtual object to modify its internal state and update a display property relative to the event.

Claim 2 is determined to be obvious in light of claim 2 of 17/670,946 (now US Patent 12067688 B2) for having similar limitations.

Instant Application Claim 2: 2. The method of claim 1, wherein the first virtual object is created by a first application and the second virtual object is created by a second application that is different from the first application.

17/670,946 Claim 2: 2. The method of claim 1, wherein the first virtual object is created by a first entity and the second virtual object is created by a second entity that is different from the first entity.

Claim 3 is determined to be obvious in light of claim 3 of 17/670,946 (now US Patent 12067688 B2) for having similar limitations.

Instant Application Claim 3: 3. The method of claim 1, wherein, based on the notification, the second virtual object sends one or more queries, from the second virtual object to the first virtual object, requesting properties of the first virtual object.

17/670,946 Claim 3: 3. The method of claim 1, wherein the second virtual object sends one or more second queries from the second virtual object to the first virtual object requesting second properties of the first virtual object.

Claim 4 is determined to be obvious in light of claim 4 of 17/670,946 (now US Patent 12067688 B2) for having similar limitations.

Instant Application Claim 4: 4. The method of claim 3, wherein the one or more queries, from the second virtual object to the first virtual object, are sent via an API provided by the first virtual object.

17/670,946 Claim 4: 4. The method of claim 3, wherein the one or more second queries from the second virtual object to the first virtual object are via an API provided by the first virtual object.

Claim 5 is determined to be obvious in light of claim 5 of 17/670,946 (now US Patent 12067688 B2) for having similar limitations.

Instant Application Claim 5: 5. The method of claim 1, wherein the request for the registration is received via an API provided by the application.

17/670,946 Claim 5: 5. The method of claim 1, wherein the receiving the one or more queries from the second virtual object is via an API provided by the shell application.

Claim 6 is determined to be obvious in light of claim 6 of 17/670,946 (now US Patent 12067688 B2) for having similar limitations.

Instant Application Claim 6: 6. The method of claim 1, wherein the indication of the first virtual object includes properties of the first virtual object comprising one or more of weight, mass, collision volume, friction, or material.

17/670,946 Claim 6: 6. The method of claim 1, wherein the first properties further include physical properties of the first virtual object comprising one or more of weight, mass, collision volume, friction or material.

Claim 7 is determined to be obvious in light of claim 7 of 17/670,946 (now US Patent 12067688 B2) for having similar limitations.

Instant Application Claim 7: 7. The method of claim 1, wherein the indication of the first virtual object includes properties of the first virtual object comprising interaction rights specifying what aspects of the first virtual object can be accessed or changed.

17/670,946 Claim 7: 7. The method of claim 1, wherein the first properties further include interaction rights specifying what aspects of the first virtual object the second virtual object can access or change.

Claim 8 is determined to be obvious in light of claim 9 of 17/670,946 (now US Patent 12067688 B2) for having similar limitations.

Instant Application Claim 8: 8. The method of claim 1, wherein the display property for the second virtual object, before being updated, causes the second virtual object to be shown in a maximized view state where the second virtual object is represented as a 3D model; and wherein the updated display property for the second virtual object causes the second virtual object to be shown in: a minimized view state where the second virtual object is represented as an icon or a vertical surface view state where the second virtual object is represented as a flat panel.

17/670,946 Claim 9: 9. The method of claim 1, wherein the view state comprises one or more of: a minimized view state where the first virtual object is represented as an icon for the first virtual object, a vertical surface view state where the first virtual object is represented as a flat panel, or a maximized view state where the first virtual object is represented as a 3D model view state.

Claim 9 is determined to be obvious in light of claim 10 of 17/670,946 (now US Patent 12067688 B2) for having similar limitations.

Instant Application Claim 9: 9. The method of claim 1, wherein the event indicates the change in position of the first virtual object in relation to the second virtual object.

17/670,946 Claim 10: 10. The method of claim 1, wherein the event indicates the change in position of the first virtual object in relation to the second virtual object.

Claim 10 is determined to be obvious in light of claims 12 and 13 of 17/670,946 (now US Patent 12067688 B2) for having similar limitations.

Instant Application Claim 10: 10. The method of claim 1 further comprising: saving a virtual scene comprising saving, for each of the one or more virtual objects, a corresponding state and/or anchor position; wherein the saved virtual scene is loaded by an artificial reality device by recalling each of the one or more virtual objects and setting its saved state and/or anchor position.

17/670,946 Claims 12 and 13: 12. The method of claim 1, further comprising saving a virtual scene comprising saving, for each of the one or more virtual objects, a corresponding state and/or anchor position. 13. The method of claim 12, wherein the saved virtual scene is loaded by an artificial reality device by recalling each of the one or more virtual objects and setting its saved state and/or anchor position.

Claims 11-19 recite limitations similar in scope to those of Claims 1-9, but as a computer-readable storage medium, and are determined to be obvious in light of claims 8-14 of 17/670,946 (now US Patent 12067688 B2), which recite limitations similar in scope to those of Claims 14-18 and 6-10 of 17/752,331 (now US Patent 12039658 B1), for the same reasons described above for Claims 1-9.
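The registration-and-notification mechanism compared above (an application in control of the AR environment identifies objects, accepts registrations from a second object for events on a first object, and notifies the registrant so it can update its internal state and display property) can be sketched as a small broker. All class, method, and event names below are hypothetical illustrations of the claim language, not code from either application:

```python
from dataclasses import dataclass


@dataclass
class VirtualObject:
    """Hypothetical virtual object with an internal state and a display property."""
    name: str
    state: str = "idle"
    display: str = "3d_model"  # maximized view state, represented as a 3D model

    def on_event(self, event: str) -> None:
        # Per instant claim 1: on notification, the internal state is modified
        # and a display property is updated. The event-to-view-state mapping is
        # invented for illustration (cf. claims 8-9: icon = minimized view
        # state, flat panel = vertical surface view state).
        self.state = event
        if event == "moved_away":
            self.display = "icon"
        elif event == "docked_to_wall":
            self.display = "flat_panel"


class ShellApp:
    """Hypothetical application in control of the artificial reality environment."""

    def __init__(self) -> None:
        self.objects: dict[str, VirtualObject] = {}
        # (watched object name, event type) -> registered listener objects
        self.registrations: dict[tuple[str, str], list[VirtualObject]] = {}

    def add_object(self, obj: VirtualObject) -> None:
        # "obtaining ... an indication of a first virtual object"
        self.objects[obj.name] = obj

    def list_objects(self) -> list[str]:
        # "providing ... identifications of one or more virtual objects"
        return list(self.objects)

    def register(self, listener: VirtualObject, target: str, event: str) -> None:
        # "receiving a request for the registration, in relation to the
        # second virtual object, of the one or more events"
        self.registrations.setdefault((target, event), []).append(listener)

    def emit(self, target: str, event: str) -> None:
        # "providing, based on the registration ... a notification of an event"
        for listener in self.registrations.get((target, event), []):
            listener.on_event(event)


shell = ShellApp()
first, second = VirtualObject("first"), VirtualObject("second")
shell.add_object(first)
shell.add_object(second)
assert "first" in shell.list_objects()         # identification step
shell.register(second, "first", "moved_away")  # registration step
shell.emit("first", "moved_away")              # notification step
print(second.state, second.display)            # -> moved_away icon
```

Note how the sketch makes the distinction the examiner draws visible: Ross's status indications alone supply the notification-and-update half, while the register/list_objects handshake corresponds to the registration limitations mapped to Agarawala.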
Claim 20 recites limitations similar in scope to those of Claim 1, but as a computer-readable storage medium, and is determined to be obvious in light of claim 19 of 17/670,946 (now US Patent 12067688 B2), which recites limitations similar in scope to those of Claim 19 of 17/752,331 (now US Patent 12039658 B1), for the same reasons described above for Claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-13, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ross et al. (US 20180160105 A1, hereinafter Ross), in view of Agarawala et al. (US 20210165557 A1, hereinafter Agarawala).

Regarding Claim 20, Ross teaches a computing system (Ross, Fig. 3, Element 300, electronic device) comprising: one or more processors (Ross, Fig. 3, Element 390, Processor); and one or more memories (Ross, Fig. 3, Element 380, Memory) storing instructions that, when executed by the one or more processors, cause the computing system to (Ross, Paragraph [0005], "The computing device may include a memory storing executable instructions, and a processor configured to execute the instructions.
Execution of the instructions may cause the computing device to”): obtain, by an application in control of an artificial reality environment, an indication of a first virtual object (Ross, Paragraph [0016], "A user immersed in a 3D augmented and/or virtual reality environment <read on artificial reality environment>; [0018], "a virtual reality (VR) application may generate a virtual environment or a virtual world."; [0044], "the non-VR event interface 14 may receive indications of events that have occurred or a status update with respect to one or more of the non-VR applications 18"; [0049], "Execution of the instructions may cause the computing device to receive, from a non-virtual reality application, a non-virtual reality event notification"); [[ register, in relation to a second virtual object, one or more events associated with the first virtual object]] by: providing, based on the obtained indication of the first virtual object, identifications of one or more virtual objects, in the artificial reality environment, including the first virtual object (Ross, Paragraph [0032], "non-textual indication, which may include one or more virtual objects or virtual graphical objects may be provided or displayed to indicate a status of a non-VR application 18." [0033], "a characteristic... of the non-textual indication e.g., a number or size or location of virtual objects that are displayed in the virtual environment... 
may be adjusted to indicate a status of the non-virtual reality application."); [[ and receiving a request for the registration, in relation to the second virtual object, of the one or more events associated with the first virtual object ]] and provide, based on the registration and in relation to the second virtual object, a notification of an event, wherein the event indicates A) a change in position of the first virtual object relative to the second virtual object and/or B) an interaction of the first virtual object with the second virtual object (Ross, Paragraph [0049], "Execution of the instructions may cause the computing device to receive, from a non-virtual reality application, a non-virtual reality event notification, and provide, in a virtual environment based on the non-virtual reality event notification, a non-textual indication of a status of the non-virtual reality application, wherein a characteristic of the non-textual indication is adjusted to indicate the status of the non-virtual reality application."; [0036], "A position of a virtual object may be varied within the display or the VR environment 17 to indicate a status of a non-VR application. For example, sun 26 may drop lower and lower in the sky, to indicate a setting sun, as time becomes closer to an upcoming scheduled meeting or appointment within calendar application, or the position of the sun 26 may vary or be adjusted to indicate a health status of the user e.g., sun rises up in the sky as blood pressure increases, and lowers in the sky as blood pressure lowers"); wherein, based on the notification of the event, an internal state for the second virtual object is modified and a display property for the second virtual object is updated (Ross, Paragraph [0033], "a characteristic of the non-textual indication e.g., a number or size or location of virtual objects that are displayed in the virtual environment... 
may be adjusted to indicate a status of the non-virtual reality application."; [0035], "A size of the virtual objects displayed may be varied as well to indicate a status <read on internal state > of a non-VR application. For example, the size of a cloud 22 may be adjusted or varied to indicate a different status of the non-VR application 18.";[0037], "For example, a volume of a sound may be adjusted, or a brightness or color <read on display property> of a virtual object may be adjusted or may change to indicate a status or change in status of a non-VR application"). But Ross does not explicitly disclose register, in relation to a second virtual object, one or more events associated with the first virtual object. However, Agarawala teaches register, in relation to a second virtual object, one or more events associated with the first virtual object (Agarawala, Abstract, "building an augmented reality (AR) meeting space comprising structured data received from a plurality of apps operating on a mobile device of a user. The structured data is translated into a three-dimensional representation of the structured data corresponding to each of the plurality of apps"; Paragraph [0011], "With a simple swipe-up gesture with their thumb or other finger, the user may move the image from the context of the 2D mobile phone screen... into the physical environment." 
(register/move read on placing into AR environment in relation to objects); by providing, based on the obtained indication of the first virtual object, identifications of one or more virtual objects, in the artificial reality environment, including the first virtual object (Agarawala, Paragraph [0228], "Digital objects 6110 may include visual displays of webpages, documents, images, videos, or any other multimedia <read on virtual objects>"; [0150], "the AR system described herein may include an Internet or WWW loader device or component that operates across a plurality of devices that safely downloads images, audio, data streams feeds, HTML and other data from the Internet or other networks") and receiving a request for the registration, in relation to the second virtual object, of the one or more events associated with the first virtual object (Agarawala, Paragraph [0160], "In 4530, at least a representation of the data may be retrieved from the computing device. For example, in FIG. 28, after receive a request to move data 214 into a virtual or augmented workspace, AR VR cloud system 206 may request or retrieve the data or a representation of the data 214 from computing device 216"). Ross and Agarawala are analogous since both are dealing with virtual/augmented reality environments that process application events/notifications and display virtual objects with updated properties based on status changes and user interactions. Ross provided a way of receiving non-VR event notifications via an event interface, providing non-textual virtual object indications in VR environments where characteristics are adjusted to represent event statuses, resulting in internal state modifications and display property updates.
Agarawala provided a way of obtaining indications from apps operating on mobile devices and translating structured data into 3D virtual object representations registered/placed in AR meeting spaces relative to other objects via user gestures, thereby providing object identifications in immersive AR environments. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the AR registration via app indications and object identifications taught by Agarawala into the modified invention of Ross such that virtual objects from applications register events in the artificial reality environment by providing identifications and receiving registration requests relative to other objects. The motivation is to enable seamless bridging of 2D app data into shared 3D AR/VR spaces for comprehensive event handling and notifications, allowing users to view and interact with application-sourced objects in immersive environments while maintaining awareness of events across multiple applications, as discussed by Agarawala in Paragraphs [0008]-[0011].

Regarding Claim 11, it recites limitations similar in scope to the limitations of Claim 20, and the combination of Ross and Agarawala teaches all the limitations as discussed for Claim 20. Ross further discloses that these features can be implemented on a computer-readable storage medium (Ross, Paragraph [0099], "Implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device … a computer-readable storage medium can be configured to store instructions that when executed cause a processor (e.g., a processor at a host device, a processor at a client device) to perform a process").
Regarding Claim 12, the combination of Ross and Agarawala teaches the invention in Claim 11. The combination further teaches wherein the first virtual object is created by a first application and the second virtual object is created by a second application that is different from the first application (Ross, Paragraph [0049], "operation 30 includes receiving, from a non-virtual reality application, a non-virtual reality event notification. Operation 32 includes providing, in a virtual environment based on the non-virtual reality event notification, a non-textual indication of a status of the non-virtual reality application").

Regarding Claim 13, the combination of Ross and Agarawala teaches the invention in Claim 11. The combination further teaches wherein, based on the notification, [[the second virtual object sends one or more queries, from the second virtual object to the first virtual object, requesting properties of the first virtual object]] (Ross, Paragraph [0048], "The VR application 12 may receive the event notification from the non-VR event interface 14 ... The VR application 12 or the computing system 10 ... may then display the selected virtual object type, with the selected characteristic ..." (VR application coordinating status updates)). Ross does not explicitly disclose the second virtual object sends one or more queries, from the second virtual object to the first virtual object, requesting properties of the first virtual object. However, Agarawala teaches the second virtual object sends one or more queries, from the second virtual object to the first virtual object, requesting properties of the first virtual object (Agarawala, Paragraph [0167], "In 4620, a second user device connected to the workspace is identified. For example, in FIG. 41, the AR cloud platform may identify that user 1 is accessing a shared AR workspace with user 2 and that user 3 is accessing a shared VR workspace with user 2, all of whom have access to or are seeing display element C." (shared workspace objects coordinating via platform read on second object querying first object's properties); Paragraph [0150], "the AR system described herein may include an Internet or WWW loader device or component that operates across a plurality of devices that safely downloads images, audio, data streams feeds, HTML and other data from the Internet or other networks." (cross-device data requests read on queries between objects)). As explained in the rejection of Claim 11 above, the obviousness rationale for incorporating the AR registration via app indications and object identifications of Agarawala into Ross is provided above.

Regarding Claim 17, the combination of Ross and Agarawala teaches the invention in Claim 11. The combination further teaches wherein the indication of the first virtual object includes properties of the first virtual object comprising interaction rights specifying what aspects of the first virtual object can be accessed or changed (Agarawala, Paragraph [0127], "permissions with regards to their ability to manipulate or see certain display elements. [0128] In an embodiment, the display elements may correspond to an app, image, file, webpage, or other data that is stored or is operating on an underlying device"). Agarawala and Ross are analogous since both of them are dealing with presenting and updating user-visible virtual/augmented reality objects based on events and user interactions. Ross provided a way of providing user-visible indications/visual adjustments in a virtual environment based on event notifications. Agarawala provided a way of building an augmented reality (AR) meeting space, mapping application content into 3D representations, and enabling interaction with content/objects in a workspace.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate permission/role-based access control (interaction rights) for content and objects in an AR/VR workspace taught by Agarawala into the modified invention of Ross such that the primary event/notification framework can include interaction rights that specify which aspects of a virtual object can be accessed or changed. The motivation is to restrict cross-user functionality based on authorized permissions, roles, or document ownership, as discussed by Agarawala in Paragraph [0199].

Regarding Claim 18, the combination of Ross and Agarawala teaches the invention in Claim 11. The combination further teaches wherein the display property for the second virtual object, before being updated, causes the second virtual object to be shown in a maximized view state where the second virtual object is represented as a 3D model; and wherein the updated display property for the second virtual object causes the second virtual object to be shown in: a minimized view state where the second virtual object is represented as an icon or a vertical surface view state where the second virtual object is represented as a flat panel (Agarawala, Paragraph [0278], "in another embodiment, AR user 2 may be provided with a dollhouse view of the meeting. In another embodiment ... may be arranged more vertically on wall B so they all fit on digital canvas 6310. [0279] In another embodiment, location B may be selected as primary room"). Agarawala and Ross are analogous since both of them are dealing with presenting and updating user-visible virtual/augmented reality objects based on events and user interactions. Ross provided a way of providing user-visible indications/visual adjustments in a virtual environment based on event notifications.
Agarawala provided a way of building an augmented reality (AR) meeting space, mapping application content into 3D representations, and enabling interaction with content objects in a workspace. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate re-rendering and changing the visual prominence/presentation of application representations in an AR meeting space based on updates, as taught by Agarawala, into the modified invention of Ross such that the primary system can update display properties to switch between maximized (3D model) and minimized/icon or flat-panel views in response to events/updates. The motivation is to provide visual modifications and prominence changes to indicate updated content and support user attention management, as discussed by Agarawala in Paragraph [0199].

Regarding Claim 19, the combination of Ross and Agarawala teaches the invention in Claim 11. The combination further teaches wherein the event indicates the change in position of the first virtual object in relation to the second virtual object (Ross, Paragraph [0049], "according to FIG. 18, operation 30 includes receiving, from a non-virtual reality application, a non-virtual reality event notification. Operation 32 includes providing, in a virtual environment based on the non-vir..."; it is noted that Ross's event notification relates to events associated with the non-VR application and is used to update what is displayed in the VR environment with respect to that application/object).

Regarding Claim 1, it recites limitations similar in scope to the limitations of Claim 20 but as a method, and the combination of Ross and Agarawala teaches all the limitations of Claim 20. Therefore, it is rejected under the same rationale. Regarding Claim 2, it recites limitations similar in scope to the limitations of Claim 12 and is therefore rejected under the same rationale.
Regarding Claim 3, it recites limitations similar in scope to the limitations of Claim 13 and therefore is rejected under the same rationale. Regarding Claim 7, it recites limitations similar in scope to the limitations of Claim 17 and therefore is rejected under the same rationale. Regarding Claim 8, it recites limitations similar in scope to the limitations of Claim 18 and therefore is rejected under the same rationale. Regarding Claim 9, it recites limitations similar in scope to the limitations of Claim 19 and therefore is rejected under the same rationale. Regarding Claim 10, the combination of Ross and Agarawala teaches the invention in Claim 1. The combination further teaches saving a virtual scene comprising saving, for each of the one or more virtual objects, a corresponding state and/or anchor position; wherein the saved virtual scene is loaded by an artificial reality device by recalling each of the one or more virtual objects and setting its saved state and/or anchor position. Claim(s) 4, 5, 14, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ross et al. (US 20180160105 A1, hereinafter Ross), in view of Agarawala et al. (US 20210165557 A1, hereinafter Agarawala) as applied to Claims 3, 1, 13, and 11 above, respectively, and further in view of Tate-Gans et al. (US 20190197785 A1, hereinafter Tate-Gans). Regarding Claim 14, the combination of Ross and Agarawala teaches the invention as recited in Claim 13. The combination does not explicitly teach, but Tate-Gans teaches, wherein the one or more queries, from the second virtual object to the first virtual object, are sent via an API provided by the first virtual object (Tate-Gans, Paragraph [0070], "Prisms may provide an API for key-value data storage"; Paragraph [0062], "applications 140 do not know where their volumes are placed in the landscape---only that they exist. In some embodiments, applications may request one or more Prisms"; Paragraph [0115], "Multiple applications may render to the Universe via the Prisms 113, with process boundaries separating the Prisms 113", where each Prism contains virtual content from different applications that need to communicate through defined interfaces). Tate-Gans and Ross are analogous since both are dealing with artificial reality environments where multiple applications control and display different virtual objects/content and where these applications and virtual objects need to communicate and interact with each other. Ross provided a way of displaying virtual objects from different applications (VR application and non-VR applications) in a virtual reality environment where the virtual objects can indicate status information from their respective applications. Tate-Gans provided a way for virtual content from different applications to communicate with each other through application programming interfaces (APIs), where each application provides APIs for other applications to send queries and request information about virtual objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the API-based inter-application communication technique taught by Tate-Gans into the modified invention of Ross such that queries from the second virtual object to the first virtual object are sent via an API provided by the first virtual object. The motivation is to provide a standardized and efficient communication mechanism between virtual objects controlled by different applications in the artificial reality environment, enabling virtual objects to query properties and information from other virtual objects in a structured manner while maintaining process boundaries and security between applications. Regarding Claim 15, the combination of Ross and Agarawala teaches the invention as recited in Claim 11.
The combination does not explicitly teach, but Tate-Gans teaches, wherein the request for the registration is received via an API provided by the application (Tate-Gans, Paragraph [0062], "In some embodiments, each application 140 making use of the Universe service to render 3D content...may be required to first register a listener with the Universe. This listener may be used to inform the application 140 of creation and destruction of rendering Prisms"; Paragraph [0069], "each Prism 113 may be exposed to the application 140 via a volume listener interface with methods for accessing properties of the Prism 113 and registering content in a scene graph sub-tree"; Paragraph [0070], "Prisms may provide an API for key-value data storage"). Tate-Gans and Ross are analogous since both are dealing with artificial reality environments where applications control virtual objects and need mechanisms for applications to register and communicate with a controlling application to manage interactions between virtual objects from different applications. Ross provided a way of displaying virtual objects from different applications in a virtual reality environment where applications can generate and control virtual objects. Tate-Gans provided a way for applications to register listeners with a Universe application through defined interfaces, where Prisms are exposed to applications via listener interfaces with methods for registering content, and where Prisms provide APIs for accessing and storing data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the API-based registration mechanism taught by Tate-Gans into the modified invention of Ross such that the request for registration of events associated with virtual objects is received via an API provided by the application in control of the artificial reality environment.
The motivation is to provide a standardized interface for applications to register for event notifications related to virtual objects, enabling efficient and organized communication between the controlling application and other applications managing virtual objects, thereby facilitating proper event handling and inter-application coordination in the artificial reality environment. Regarding Claim 4, it recites limitations similar in scope to the limitations of Claim 14 and therefore is rejected under the same rationale. Regarding Claim 5, it recites limitations similar in scope to the limitations of Claim 15 and therefore is rejected under the same rationale. Claim(s) 6 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ross et al. (US 20180160105 A1, hereinafter Ross), in view of Agarawala et al. (US 20210165557 A1, hereinafter Agarawala) as applied to Claims 1 and 11 above, respectively, and further in view of McCulloch et al. (US 20130286004 A1, hereinafter McCulloch). Regarding Claim 16, the combination of Ross and Agarawala teaches the invention in Claim 11. The combination does not explicitly disclose, but McCulloch teaches (McCulloch, Paragraph [0005], "a physics engine is executed by the one or more processors for simulating the collision based on a physics model representing one or more physical properties of the real object, a physics model representing one or more physical properties of the virtual object"; Paragraph [0119], "A frictional force may be determined based on a coefficient of friction for a surface type of material. A spring force may be determined if a modulus of elasticity of an object satisfies criteria to be significant"; FIG. 4B, "Physics Parameters Data Set 352" including "mass range 353", "coefficient of friction 359" <read on properties comprising mass and friction>; FIG. 4C, "Object Physical Properties Data Set 320N" including "Physics Parameters Data Set 396N" <read on virtual object physical properties data>; Paragraph [0107], "Each of the persistent particle proxies represents the impact of a single small force normal to the side of the virtual basketball object" <read on collision volume for virtual object>; Paragraph [0109], "Each particle may be assigned one or more physics parameter based on the physics parameters data set 396N determined for the real object. For example, the total mass <read on mass property> assigned to the object may be subdivided among the particles"; Paragraph, "the particles for the glass table have a material type <read on material property> and tensile strength for glass"). McCulloch and Ross are analogous since both of them are dealing with managing virtual objects in augmented reality or virtual reality environments where visual objects display properties and respond to events and interactions. Ross provided a way of representing non-VR applications as virtual objects (birds, clouds, sun) in a VR environment with visual characteristics (number, size, position, color) that update based on application events and status changes, where virtual objects are tracked in three-dimensional space. McCulloch provided a way of representing virtual objects in an augmented reality display device system with physics models that include physical properties such as mass, friction coefficients, collision volumes represented by particle proxies, and material types that enable realistic collision simulation and interaction detection between virtual and real objects.
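As a reading aid, the physics-model properties the rejection cites from McCulloch (mass, coefficient of friction, a collision volume, material type) can be sketched as a simple per-object data structure. The field names and values below are illustrative assumptions, not McCulloch's actual data set; only the Coulomb friction relation F = mu * N reflects the kind of surface-dependent frictional force McCulloch's Paragraph [0119] describes:

```python
from dataclasses import dataclass


@dataclass
class PhysicsParameters:
    """Illustrative analogue of a per-object physics parameters data set:
    physical properties a physics engine could use for collision simulation."""
    mass_kg: float
    friction_coefficient: float   # dimensionless coefficient of friction
    material_type: str            # e.g. "glass" for the glass-table example
    collision_radius_m: float     # crude stand-in for a particle-proxy volume

    def friction_force(self, normal_force_n: float) -> float:
        """Coulomb friction: F = mu * N, in newtons."""
        return self.friction_coefficient * normal_force_n


# Hypothetical values for McCulloch's glass-table example.
glass_table = PhysicsParameters(
    mass_kg=20.0,
    friction_coefficient=0.5,
    material_type="glass",
    collision_radius_m=0.6,
)
print(glass_table.friction_force(100.0))  # 50.0
```

The sketch is only meant to make concrete what "properties comprising mass and friction" look like as data attached to a virtual object, per the claim mapping above.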
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the virtual object physical properties and physics modeling taught by McCulloch into the modified invention of Ross such that the virtual objects representing applications (birds for email, clouds for social media, sun for e-health) include physical properties like weight, mass, collision volumes, friction coefficients, and material characteristics that define how the virtual objects behave during position changes and interactions. Regarding Claim 6, it recites limitations similar in scope to the limitations of Claim 16 and therefore is rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20170330378 A1, SYSTEM AND METHOD FOR MANAGING INTERACTIVE VIRTUAL FRAMES FOR VIRTUAL OBJECTS IN A VIRTUAL ENVIRONMENT
US 20210090341 A1, AUTOMATIC PROJECTION TYPE SELECTION IN AN ARTIFICIAL REALITY ENVIRONMENT
US 20200258315 A1, SYSTEM AND METHODS FOR MATING VIRTUAL OBJECTS TO REAL-WORLD ENVIRONMENTS
US 20210026441 A1, Virtual Object Control Of A Physical Device and/or Physical Device Control of A Virtual Object
US 9418479 B1, Quasi-virtual objects in an augmented reality environment
US 20020033845 A1, Object positioning and display in virtual environments
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI whose telephone number is (571) 272-6669. The examiner can normally be reached 8:30am-5:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached on (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /YuJang Tswei/Primary Examiner, Art Unit 2614

Prosecution Timeline

Jun 10, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579805
AUGMENTED, VIRTUAL AND MIXED-REALITY CONTENT SELECTION & DISPLAY FOR TRAVEL
2y 5m to grant · Granted Mar 17, 2026
Patent 12579838
Perspective Distortion Correction on Faces
2y 5m to grant · Granted Mar 17, 2026
Patent 12567213
COMPUTER VISION AND ARTIFICIAL INTELLIGENCE METHOD TO OPTIMIZE OVERLAY PLACEMENT IN EXTENDED REALITY
2y 5m to grant · Granted Mar 03, 2026
Patent 12567189
RELATIONAL LOSS FOR ENHANCING TEXT-BASED STYLE TRANSFER
2y 5m to grant · Granted Mar 03, 2026
Patent 12561930
PARAMETRIC EYEBROW REPRESENTATION AND ENROLLMENT FROM IMAGE INPUT
2y 5m to grant · Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+17.0%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 447 resolved cases by this examiner. Grant probability derived from career allow rate.
