Prosecution Insights
Last updated: April 19, 2026
Application No. 18/674,193

SYSTEMS AND METHODS FOR VIRTUAL ASSISTANTS IN VIRTUAL REALITY MEETINGS

Non-Final OA: §102, §103
Filed: May 24, 2024
Examiner: MCCOY, AIDAN WILLIAM
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Meta Platforms Inc.
OA Round: 1 (Non-Final)
Grant Probability: 50% (Moderate)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 1 granted / 2 resolved; -12.0% vs TC avg)
Interview Lift: +100.0% (strong; allow rate with vs. without an interview, among resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 25 applications currently pending
Career History: 27 total applications across all art units
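The headline figures above are simple ratios over the examiner's resolved cases. As an illustration only, the sketch below shows one way such metrics could be computed from per-case disposition records; the record format, field names, the lift definition, and the Tech Center baseline value are hypothetical assumptions, not the tool's actual data model.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool          # True if the application issued as a patent
    had_interview: bool    # True if an examiner interview was held

# Hypothetical disposition records consistent with "1 granted / 2 resolved".
cases = [
    ResolvedCase(granted=True, had_interview=True),
    ResolvedCase(granted=False, had_interview=False),
]

TC_AVG_ALLOW_RATE = 0.62   # assumed Tech Center 2600 baseline, chosen only to illustrate the -12.0% delta

def allow_rate(subset):
    """Share of granted cases in a subset (None if the subset is empty)."""
    return sum(c.granted for c in subset) / len(subset) if subset else None

career_allow_rate = allow_rate(cases)                                      # 0.50
with_interview = allow_rate([c for c in cases if c.had_interview])         # 1.00
without_interview = allow_rate([c for c in cases if not c.had_interview])  # 0.00
delta_vs_tc = career_allow_rate - TC_AVG_ALLOW_RATE                        # -0.12

# One possible definition of "interview lift": relative change versus the career rate.
interview_lift = (with_interview - career_allow_rate) / career_allow_rate  # +1.00, i.e. +100%

print(f"Career allow rate: {career_allow_rate:.0%} ({delta_vs_tc:+.1%} vs TC avg)")
print(f"Allow rate with interview: {with_interview:.0%}, without: {without_interview:.0%}")
print(f"Interview lift: {interview_lift:+.0%}")
```

With only two resolved cases, these ratios are dominated by small-sample noise, which is why the dashboard flags the sample size alongside each figure.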

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 52.9% (+12.9% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 22.4% (-17.6% vs TC avg)
Tech Center averages shown for comparison are estimates. Based on career data from 2 resolved cases.

Office Action

Rejection bases: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 10 is objected to because of the following informalities: grammatical error, "a physical action performed a participant within the virtual environment". Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 6-7, 11-13, 17-18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by S. Noll et al., "Autonomous agents in collaborative virtual environments," Proceedings, IEEE 8th International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE'99), Stanford, CA, USA, 1999, pp. 208-215, doi: 10.1109/ENABL.1999.80520 (hereinafter "Noll").

Regarding claim 1, Noll teaches a computer-implemented method comprising: identifying a meeting in a virtual reality environment that comprises a plurality of participants (Introduction; section 3, paragraph 4, beginning "This becomes ever more important in a CVW"); monitoring, by an artificial intelligence (AI) agent, the meeting in the virtual reality environment (Introduction - "agents, which stem from the field of Artificial Intelligence (AI)"; section 3.2, subsection "handling objects" - "The underlying agents have to keep track of these user actions"; section 4, "information about participants" and following paragraph); detecting, by the AI agent while monitoring the meeting, a trigger behavior by at least one of the participants (GuideAgent; section 4, "information about participants" and following paragraph) that correlates to an action within capabilities of the AI agent (Fig. 2, Fig. 3 descriptions); and altering, by the AI agent, the virtual reality environment by performing the action correlated to the trigger behavior (section 3.2, subsection "control"; section 4, subsection "giving a tour"; section 5.3).

Regarding claim 2, Noll describes the computer-implemented method of claim 1, wherein the trigger behavior comprises speech (Fig. 2, Fig. 3 descriptions – "participant issues a message requesting a special service").

Regarding claim 6, Noll teaches the computer-implemented method of claim 1, wherein the action comprises generating a three-dimensional model within the virtual environment (section 3.2, subsections "creation", "administration", "mobile agents").
Noll describes agents serving the purpose of generating graphical objects, analogous to an action of generating a three-dimensional model within the virtual environment. Additionally, Noll describes an example situation analogous to the claimed trigger-action relationship wherein the action comprises generating a three-dimensional model: a user contacts a receptionist agent to ask for assistance (the trigger), and the receptionist contacts a guide agent to aid the user. The guide agent is represented as a three-dimensional model; therefore, the generation of this guide agent in response to a user's request can be considered analogous to the trigger-and-action behavior involving generating a three-dimensional model.

Regarding claim 7, Noll teaches the computer-implemented method of claim 1, wherein the action comprises modifying a three-dimensional model within the virtual environment (section 3.2, subsections "modification", "administration"). Noll describes modification of a user's model in the three-dimensional environment as being controlled by the agent, which is analogous to the action comprising modifying a three-dimensional model in the virtual environment.

Regarding claim 11, Noll teaches the computer-implemented method of claim 1, further comprising displaying a three-dimensional model that represents the AI agent within the virtual reality environment (section 4, subsection "An Example: The GuideAgent" - "The guide fulfills all properties of the above mentioned agent-object pair: it has its own distinct representation by the use of a special avatar and an underlying agent to control this graphical representation.").

Apparatus claims 12, 13, 17, and 18 are drawn to the method claimed in claims 1, 2, 6, and 7. Therefore, apparatus claims 12, 13, 17, and 18 correspond to method claims 1, 2, 6, and 7, and are rejected for the same reasons of anticipation as used above. CRM claim 20 is drawn to the method claimed in claim 1. Therefore, CRM claim 20 corresponds to method claim 1, and is rejected for the same reasons of anticipation as used above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Noll in view of Zhe Sun, Qixuan Liang, Meng Wang, and Zhenliang Zhang, "Neighbor-Environment Observer: An Intelligent Agent for Immersive Working Companionship," Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23), Association for Computing Machinery, New York, NY, USA, Article 112, 1–14, https://doi.org/10.1145/3586183.3606728 (hereinafter "Sun").

Regarding claim 3, Noll teaches the computer-implemented method of claim 1. Noll fails to teach wherein the trigger behavior comprises physical movement within the virtual reality environment.
However, Sun teaches wherein the trigger behavior comprises physical movement within the virtual reality environment (sections 2.1, 2.2, 4.3.2, 4.4; Figs. 1-4). Sun describes tracking movement of objects and of the user in order to understand the user's demands, and using that understanding to determine whether the user's immersion in the virtual reality environment is broken or maintained. This can further be used to enhance immersion, for example by rendering a ringing phone in the virtual environment, or by other actions in the virtual environment. In other words, when a physical action is detected that would lead the user to break immersion in the virtual reality environment, an action is taken to remedy the break of immersion. This detection, which causes another action, can be considered analogous to a trigger. Sun is considered analogous to the claimed invention as it is in the same field of virtual reality. Therefore, it would have been obvious to one of ordinary skill in the art to combine the teachings of Sun with Noll in order to improve immersion in a virtual environment.

Apparatus claim 14 is drawn to the method claimed in claim 3. Therefore, apparatus claim 14 corresponds to method claim 3, and is rejected for the same reasons of obviousness as used above.

Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Noll in view of A. N. V. González, K. Kapalo, S. Koh, R. Sottilare, P. Garrity and J. J. Laviola, "A Comparison of Desktop and Augmented Reality Scenario Based Training Authoring Tools," 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 2019, pp. 1199-1200, doi: 10.1109/VR.2019.8797973 (hereinafter "Gonzalez").

Regarding claim 4, Noll teaches the computer-implemented method of claim 1. Noll fails to teach wherein: the trigger behavior comprises a reference to a digital file; and the action comprises displaying a user interface within the virtual reality environment to one or more of the participants that comprises an option to open the digital file. However, Gonzalez teaches wherein: the trigger behavior comprises a reference to a digital file (section 3, paragraph 1; Figs. 2, 6, and 7); and the action comprises displaying a user interface within the virtual reality environment to one or more of the participants (Figs. 6-8) that comprises an option to open the digital file (sections 5.1, 5.3.2; Figs. 6-8). Gonzalez describes a system for interacting with objects in an augmented reality environment. In this environment a user may select an object that has a reference to a digital file; in the example of a phone in the environment, the user can set or change the ring tone associated with the phone, a process that involves displaying a user interface. Additionally, the user can define an interaction with the object that opens the digital file; in the case of the phone, the user can determine when the ring tone is played. Gonzalez is considered analogous to the claimed invention as it is in the same field of virtual reality. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Gonzalez with Noll in order to improve the user experience of building a three-dimensional scene (section 7.4, "usability of the authoring tools").

Apparatus claim 15 is drawn to the method claimed in claim 4.
Therefore, apparatus claim 15 corresponds to method claim 4, and is rejected for the same reasons of obviousness as used above.

Claims 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Noll in view of Gonzalez, and further in view of Hassan (US 20230205904 A1).

Regarding claim 5, Noll in view of Gonzalez teaches the computer-implemented method of claim 4, wherein displaying the user interface within the virtual environment comprises: identifying a subset of the participants with permissions to view the digital file (section 4, subsection "An Example: The GuideAgent," paragraph 1 – the "DoorAgent" provides various types of access control). Noll describes providing access to certain areas of the virtual environment based on a user's permissions. It would have been obvious to apply the teachings of Gonzalez with Noll and to apply this permission structure to a digital file instead of the area of the virtual environment present in Noll, for the same motivation as that of claim 4. Noll in view of Gonzalez fails to teach displaying the user interface to the subset of the participants with the permissions; and avoiding displaying the user interface to the participants not in the subset. However, Hassan teaches displaying the user interface to the subset of the participants with the permissions; and avoiding displaying the user interface to the participants not in the subset (paragraph [0104]). Hassan describes a method of access control for a user interface, displaying it only to the users who have permission to access it. Hassan is considered analogous to the claimed invention as it is in the same field of multiuser computational systems. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Hassan with Noll in view of Gonzalez to improve security and privacy.

Apparatus claim 16 is drawn to the method claimed in claim 5. Therefore, apparatus claim 16 corresponds to method claim 5, and is rejected for the same reasons of obviousness as used above.

Claims 8 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Noll in view of Haijun Xia, Tony Wang, Aditya Gunturu, Peiling Jiang, William Duan, and Xiaoshuo Yao, "CrossTalk: Intelligent Substrates for Language-Oriented Interaction in Video-Based Communication and Collaboration," Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23), Association for Computing Machinery, New York, NY, USA, Article 60, 1–16, https://doi.org/10.1145/3586183.3606773 (hereinafter "Xia").

Regarding claim 8, Noll teaches the computer-implemented method of claim 1. Noll fails to teach wherein: monitoring the meeting comprises creating a transcript of the meeting; and the action comprises creating and displaying a summary of at least a portion of the transcript of the meeting. However, Xia teaches wherein: monitoring the meeting comprises creating a transcript of the meeting (section 5.1.1; section 6, paragraph 2; Fig. 4); and the action comprises creating and displaying a summary of at least a portion of the transcript of the meeting (sections 6.1 and 6.3; Figs. 4-9). Xia describes a system which provides a real-time transcript of a meeting. This transcript is used to suggest and display content relevant to the conversation, and this content can be considered a summary of at least a portion of the transcript of the meeting.
Xia is considered analogous to the claimed invention as it is in the same field of virtual communication. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Xia with Noll in order to enhance the functionality of Noll and provide "a more fluid and flexible communication and collaboration experience" (Xia, Abstract).

Apparatus claim 19 is drawn to the method claimed in claim 8. Therefore, apparatus claim 19 corresponds to method claim 8, and is rejected for the same reasons of obviousness as used above.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Noll in view of Xia, and further in view of P. Qi, Z. Huang, Y. Sun and H. Luo, "A Knowledge Graph-Based Abstractive Model Integrating Semantic and Structural Information for Summarizing Chinese Meetings," 2022 IEEE 25th International Conference on Computer Supported Cooperative Work in Design (CSCWD), Hangzhou, China, 2022, pp. 746-751, doi: 10.1109/CSCWD54268.2022.9776298 (hereinafter "Qi"), and Asthana (US 2022/0109585 A1).

Regarding claim 9, Noll in view of Xia teaches the computer-implemented method of claim 8. Noll in view of Xia fail to teach wherein creating the summary comprises: identifying a job category of a participant in the meeting; and tailoring the summary to the job category of the participant. However, Qi teaches wherein creating the summary comprises identifying a job category of a participant in the meeting (section 2, "Word-level transformer and role-level transformer" - "for a participant U_i and his spoken transcripts {S_1, S_2, ..., S_K}, we input the corresponding semantic representation {VecS_1, VecS_2, ..., VecS_K} to the transformer model to learn U_i's role information M_Ui^role"). Qi describes a system which provides a summary of a meeting, taking into account the various roles of the participants; in doing so, Qi identifies the roles of the various participants. Qi is considered analogous to the claimed invention as it is in the same field of meeting recap systems. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Qi with Noll in view of Xia to improve the participants' understanding. Noll in view of Xia and Qi fail to explicitly teach tailoring the summary to the job category of the participant. However, Asthana teaches tailoring the summary to the job category of the participant (paragraph [0014] – "Embodiments of the present invention also recognize that efficiency may be gained by customizing meeting summaries to participants based on criteria such as preferences and/or job roles"). Asthana is considered analogous to the claimed invention as it is in the same field of meeting recap systems. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Asthana with Noll in view of Xia and Qi in order to improve the efficiency of the summarization system.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Noll in view of Xia, and further in view of N. Kern, B. Schiele, H. Junker, P. Lukowicz and G. Troster, "Wearable sensing to annotate meeting recordings," Proceedings of the Sixth International Symposium on Wearable Computers, Seattle, WA, USA, 2002, pp. 186-193, doi: 10.1109/ISWC.2002.1167247 (hereinafter "Kern").

Regarding claim 10, Noll in view of Xia teaches the computer-implemented method of claim 8.
Xia further teaches wherein creating the summary comprises: detecting a physical action performed a participant within the virtual environment (section 10.4). Noll in view of Xia fail to teach annotating the transcript of the meeting with a description of the physical action. However, Kern teaches annotating the transcript of the meeting with a description of the physical action (Introduction, paragraph 4; section 3, subsection "Using Wearable Sensors to Annotate Meetings"; section 6.3; Fig. 8). Kern describes annotating a meeting transcript using wearable sensors which can track an attendee's physical actions such as head movement, posture change, and more. Kern is considered analogous to the claimed invention as it is in the same field of software-enhanced meeting systems. Therefore, it would have been obvious to one of ordinary skill in the art to incorporate the teachings of Kern with Noll in view of Xia to utilize physical actions (as suggested by Xia, section 10.4) in order to improve a user's recollection of meeting events (Kern, Introduction).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
G. Tur et al., "The CALO Meeting Assistant System," IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 6, pp. 1601-1611, Aug. 2010, doi: 10.1109/TASL.2009.2038810.
LAFRENIERE (WO 2024227132 A1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aidan W McCoy, whose telephone number is (571) 272-5935. The examiner can normally be reached 8:00 AM-5:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AIDAN W MCCOY/
Examiner, Art Unit 2611

/TAMMY PAIGE GODDARD/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

May 24, 2024
Application Filed
Jan 29, 2026
Non-Final Rejection — §102, §103 (current)

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 50%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 2 resolved cases by this examiner. Grant probability is derived from the examiner's career allow rate.
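The relationship between the 50% baseline and the 99% "with interview" figure is consistent with scaling the career allow rate by the observed interview lift and capping the result. A minimal sketch of that kind of projection is below; the cap value and the exact adjustment formula are assumptions for illustration, not the tool's published methodology.

```python
def projected_grant_probability(base_rate: float,
                                interview_lift: float = 0.0,
                                cap: float = 0.99) -> float:
    """Scale a base grant probability by a relative interview lift, capped below certainty.

    base_rate      -- examiner's career allow rate (e.g. 0.50)
    interview_lift -- relative lift observed when an interview is held (e.g. 1.00 for +100%)
    cap            -- ceiling so the projection is never displayed as 100%
    """
    return min(base_rate * (1.0 + interview_lift), cap)

baseline = projected_grant_probability(0.50)                              # 0.50, shown as "50%"
with_interview = projected_grant_probability(0.50, interview_lift=1.00)   # capped at 0.99, shown as "99%"
print(f"{baseline:.0%} baseline, {with_interview:.0%} with interview")
```

Capping at 99% is a common presentation choice for small-sample projections, since two resolved cases cannot support a literal 100% estimate.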
