Prosecution Insights
Last updated: April 19, 2026
Application No. 18/806,845

DEFINING AND MODIFYING CONTEXT AWARE POLICIES WITH AN EDITING TOOL IN EXTENDED REALITY SYSTEMS

Status: Non-Final OA (§103)
Filed: Aug 16, 2024
Examiner: PATEL, JITESH
Art Unit: 2612
Tech Center: 2600 — Communications
Assignee: Meta Platforms Technologies, LLC
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 78% (312 granted / 398 resolved; +16.4% vs TC avg) — above average
Interview Lift: +12.4% for resolved cases with interview (moderate lift)
Avg Prosecution: 2y 2m typical timeline; 14 applications currently pending
Total Applications: 412 across all art units

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Based on career data from 398 resolved cases; deltas are measured against the Tech Center average estimate.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Drouin et al. (US 20180322706 A1).

Regarding claim 1, Drouin discloses a head-mounted device (Drouin fig. 2) comprising: a frame configured to be mounted on a head of a user (Drouin fig. 2); a pair of lenses coupled to the frame (Drouin fig. 2 - 222); a speaker configured to output audio content (Drouin [0098], “acoustic components (e.g., speakers)”); a microphone configured to receive audio input (Drouin [0098], “input components (e.g., a microphone)”); a camera configured to capture image data (Drouin fig. 2 - 230); one or more processors (Drouin [0034], “a central processor 224”); and at least one memory storing instructions that, when executed by the one or more processors, cause the head-mounted device to perform operations (Drouin [0034], “a central processor 224 that may execute some of the operations and methods described herein (e.g., executing the client module 124).”) comprising: capturing the image data from the camera, wherein the image data is analyzed to obtain information about a plurality of real-world objects in a field of view of the camera (Drouin [0035], “the camera 230 captures or “sees” an angle of view of the real world based on the orientation of the HMD device 220 (e.g., similar to what the wearer 210 sees in the wearer 210's FOV when looking through the visor 222) … detecting types of objects near the wearer 210”); identifying, based on the information, a particular real-world object of the plurality of real-world objects (Drouin [0035], “The digital video from the camera device 230 may be analyzed to detect various trigger conditions, such as detecting types of objects near the wearer”); determining a context (Drouin [0021], “a cylindrical object may be determined to be a canned good based on its location within a kitchen (a context for a canned good) of the user”); determining an action based on the context (Drouin [0022], “the MR system determines that an application trigger implicates one of the nearby objects (e.g., as one of the example criteria) and otherwise meets all of the contextual criteria (based on a context) for the application trigger … the MR system initiates the contextual application identified by the application trigger (an action)”); and causing the action to be performed by the head-mounted device (Drouin [0022], “the application trigger may be configured to initiate various actions within the triggered application. For example, the example application trigger described above may be configured to initiate a recipe application that provides recipes in which one or more of the detected cooking ingredients are used. As such, the MR system provides context-based triggering (causing) of applications within the MR environment.”; [0050], “the MR environment 500 includes several recipe indicator objects 510A, 510B, 510C (collectively, objects 510) … the objects 510 are virtual objects presented by the HMD 220 (action performed by the head-mounted device)”).

Drouin does not expressly disclose determining a context based on at least one factor, the at least one factor comprising the identification of the particular real-world object. However, Drouin suggests determining a context based on at least one factor, the at least one factor comprising the identification of the particular real-world object (Drouin [0021], “The MR system may identify objects based on their size, shape, texture, location, and various visual markings that may appear on the object. For example, a cylindrical object may be determined to be a canned good based on its location within a kitchen of the user (“a cylindrical object based on its location within a kitchen of the user” is interpreted as reading on a factor that helps determine a context/type for the cylindrical object)”).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to utilize object shapes in an environment, as suggested by Drouin, as a factor to determine a context for objects. This would have been done to accurately determine the types of objects in an environment and to provide appropriate processing of those objects. See, for example, Drouin [0059].
Regarding claim 2, Drouin discloses the head-mounted device of claim 1, wherein the action comprises outputting specific audio content from the speaker (Drouin [0030], “each application trigger configuration includes one or more trigger conditions, application logic or interaction mechanics for how the application interacts with the user, and an asset bundle that may include, for example, 2D/3D visuals and audio (action comprises outputting specific audio content from the speaker) that are displayed to the user to provide an MR experience”).

Regarding claim 3, Drouin discloses the head-mounted device of claim 2, wherein the specific audio content comprises an instruction for performing a task associated with the context (Drouin [0039], “The client module 124 executes the app's rules (e.g., scripts that define interaction and rendering rules) and renders audio”; [0056], “the contextual application provides step-by-step cooking instructions to the user 102 via the MR system 100 … the pre-defined interaction action involves the user utilizing the physical can”).

Regarding claim 4, Drouin discloses the head-mounted device of claim 1, wherein the operations further comprise: receiving the audio input from the microphone, the audio input comprising speech (Drouin [0041], “Interaction rules define how the app responds to user actions. User actions may be, for example, voice commands”); and identifying a voice command based on at least a portion of the speech, wherein the at least one factor further comprises the voice command (Drouin [0041], “User actions may be, for example, voice commands (the at least one factor further comprises the voice command) … Responses to user actions may be, for example, … audio”; [0099], “identify a person (e.g., voice identification (a portion of speech) …”).
Regarding claim 5, Drouin discloses the head-mounted device of claim 1, further comprising: a display configured to present visual information to the user, wherein the action comprises presenting specific visual information on the display (Drouin [0039], “The client module 124 executes the app's rules (e.g., scripts that define interaction and rendering rules) and renders audio and visual assets accordingly (action comprises presenting specific visual information on the display)”).

Regarding claim 6, Drouin discloses the head-mounted device of claim 5, wherein the specific visual information comprises a virtual object presented in a mixed reality environment, and the virtual object is determined based on the context (Drouin [0055], “Each recipe card 610 is a virtual object, and may be displayed on or near the associated real-world object (e.g., can of soup …”).

Regarding claim 7, Drouin discloses the head-mounted device of claim 1, wherein the instructions comprise a virtual assistant application that determines the context based on an artificial intelligence engine and the at least one factor (Drouin [0059], “the detecting trigger conditions 320 includes an artificial intelligence agent (a virtual assistant application) that completes the image matching process using artificial intelligence (determines the context based on an artificial intelligence engine) for image recognition/matching … In example embodiments, the one or more attributes incorporated into the machine-learning techniques may include … shape (a factor) … context (e.g., in relation to other virtual objects in the real-world environment)”).

Regarding claim 11, Drouin discloses a computer-implemented method for operating a head-mounted device (Drouin [0018], “A mixed reality (MR) system and associated methods are described herein. The MR system is configured to discover and present contextual applications to a user within an MR environment. In an example embodiment, a user (e.g., a wearer of an HMD”), the method comprising: capturing image data from a camera disposed on the head-mounted device, wherein the image data is analyzed to obtain information about a plurality of real-world objects in a field of view of the camera (Drouin [0018], “a forward-facing camera configured to capture digital video or images of the real world around the user”; [0035], “The digital video from the camera device 230 may be analyzed to detect … detecting types of objects near the wearer”); identifying, based on the information, a particular real-world object of the plurality of real-world objects (Drouin [0035], “The digital video from the camera device 230 may be analyzed to detect various trigger conditions, such as detecting types of objects near the wearer”); determining a context (Drouin [0021], “a cylindrical object may be determined to be a canned good based on its location within a kitchen (a context for a canned good) of the user”); determining an action based on the context (Drouin [0022], “the MR system determines that an application trigger implicates one of the nearby objects (e.g., as one of the example criteria) and otherwise meets all of the contextual criteria (based on a context) for the application trigger … the MR system initiates the contextual application identified by the application trigger (an action)”); causing the action to be performed by the head-mounted device (Drouin [0022], “the application trigger may be configured to initiate various actions within the triggered application. For example, the example application trigger described above may be configured to initiate a recipe application that provides recipes in which one or more of the detected cooking ingredients are used. As such, the MR system provides context-based triggering (causing) of applications within the MR environment.”; [0050], “the MR environment 500 includes several recipe indicator objects 510A, 510B, 510C (collectively, objects 510) … the objects 510 are virtual objects presented by the HMD 220 (action performed by the head-mounted device)”), wherein the action comprises at least one of: outputting audio content from a speaker and/or presenting visual information on a display of the head-mounted device (Drouin [0030], “each application trigger configuration includes one or more trigger conditions, application logic or interaction mechanics for how the application interacts with the user, and an asset bundle that may include, for example, 2D/3D visuals and audio (action comprises outputting specific audio content from the speaker) that are displayed to the user to provide an MR experience”).

Drouin does not expressly disclose determining a context based on at least one factor, the at least one factor comprising the identification of the particular real-world object. However, Drouin suggests determining a context based on at least one factor, the at least one factor comprising the identification of the particular real-world object (Drouin [0021], “The MR system may identify objects based on their size, shape, texture, location, and various visual markings that may appear on the object. For example, a cylindrical object may be determined to be a canned good based on its location within a kitchen of the user (“a cylindrical object based on its location within a kitchen of the user” is interpreted as reading on a factor that helps determine a context/type for the cylindrical object)”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to utilize object shapes in an environment, as suggested by Drouin, as a factor to determine a context for objects.
This would have been done to accurately determine the types of objects in an environment and to provide appropriate processing of those objects. See, for example, Drouin [0059].

Regarding claim 12, Drouin discloses the computer-implemented method of claim 11, wherein the action comprises: outputting specific audio content from the speaker or presenting specific visual information on the display (Drouin [0030], “each application trigger configuration includes one or more trigger conditions, application logic or interaction mechanics for how the application interacts with the user, and an asset bundle that may include, for example, 2D/3D visuals and audio (action comprises outputting specific audio content from the speaker) that are displayed to the user to provide an MR experience”).

Regarding claim 13, Drouin discloses the computer-implemented method of claim 12, wherein the specific audio content comprises an instruction for performing a task associated with the context (Drouin [0039], “The client module 124 executes the app's rules (e.g., scripts that define interaction and rendering rules) and renders audio”; [0056], “the contextual application provides step-by-step cooking instructions to the user 102 via the MR system 100 … the pre-defined interaction action involves the user utilizing the physical can”).

Regarding claim 14, Drouin discloses the computer-implemented method of claim 11, wherein the operations further comprise: receiving audio input from a microphone, the audio input comprising speech (Drouin [0041], “Interaction rules define how the app responds to user actions. User actions may be, for example, voice commands”); and identifying a voice command based on at least a portion of the speech, wherein the at least one factor further comprises the voice command (Drouin [0041], “User actions may be, for example, voice commands (the at least one factor further comprises the voice command) … Responses to user actions may be, for example, … audio”; [0099], “identify a person (e.g., voice identification (a portion of speech) …”).

Regarding claim 15, Drouin discloses the computer-implemented method of claim 12, wherein the specific visual information comprises a virtual object presented in a mixed reality environment, and the virtual object is determined based on the context (Drouin [0022], “the example application trigger described above may be configured to initiate a recipe application that provides recipes in which one or more of the detected cooking ingredients are used. As such, the MR system provides context-based triggering (causing) of applications within the MR environment”; [0050], “the MR environment 500 includes several recipe indicator objects 510A, 510B, 510C (collectively, objects 510) … the objects 510 are virtual objects presented by the HMD 220”).

Regarding claim 16, Drouin discloses the computer-implemented method of claim 11, but does not disclose wherein the context is determined based on an artificial intelligence engine and the at least one factor, user information about the user is acquired by the artificial intelligence engine, and the at least one factor comprises the user information about the user. However, Wang discloses the context is determined based on an artificial intelligence engine and the at least one factor, user information about the user is acquired by the artificial intelligence engine, and the at least one factor comprises the user information about the user (Wang fig. 1; [0045], “enable the AI system to provide contextually relevant and personalized responses to users”; Wang [0056], “the AI system initially processes the data to extract significant information such as user intents, behaviors, and preferences (the artificial intelligence engine is configured to acquire user information about the user) … After the data has been processed, it is added to the OKB in a structured format … This format can include relevant contextual information such as the time and location of the interaction (the at least one factor comprises the user information about the user; user intent is also interpreted as reading on a factor that comprises the user information about the user)”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Drouin with Wang to utilize an AI system to determine and factor in user information. This would have been done to accurately determine user data which would be used to generate and output useful and relevant information for users.

Claim 17 recites one or more non-transitory computer-readable media which corresponds to the function performed by the head-mounted device of claim 1. As such, the mapping and rejection of claim 1 above is considered applicable to the one or more non-transitory computer-readable media of claim 17. Additionally, Drouin discloses one or more non-transitory computer-readable media storing computer-readable instructions that, when executed by at least one processing system, cause a system to execute operations (Drouin [0015], “a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the VR methodologies discussed herein.”).
Regarding claim 18, Drouin discloses the one or more non-transitory computer-readable media of claim 17, wherein the action comprises: outputting specific audio content from the speaker or presenting specific visual information on the display (Drouin [0030], “each application trigger configuration includes one or more trigger conditions, application logic or interaction mechanics for how the application interacts with the user, and an asset bundle that may include, for example, 2D/3D visuals and audio (action comprises outputting specific audio content from the speaker) that are displayed to the user to provide an MR experience”).

Regarding claim 19, Drouin discloses the one or more non-transitory computer-readable media of claim 18, wherein the specific audio content comprises an instruction for performing a task associated with the context (Drouin [0039], “The client module 124 executes the app's rules (e.g., scripts that define interaction and rendering rules) and renders audio”; [0056], “the contextual application provides step-by-step cooking instructions to the user 102 via the MR system 100 … the pre-defined interaction action involves the user utilizing the physical can”).

Regarding claim 20, Drouin discloses the one or more non-transitory computer-readable media of claim 17, wherein the operations further comprise: receiving audio input from a microphone, the audio input comprising speech (Drouin [0041], “Interaction rules define how the app responds to user actions. User actions may be, for example, voice commands”); and identifying a voice command based on at least a portion of the speech, wherein the at least one factor further comprises the voice command (Drouin [0041], “User actions may be, for example, voice commands (the at least one factor further comprises the voice command) … Responses to user actions may be, for example, … audio”; [0099], “identify a person (e.g., voice identification (a portion of speech) …”).

Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Drouin in view of Wang (US 20230245651 A1).

Regarding claim 8, Drouin discloses the head-mounted device of claim 7, but does not disclose wherein the artificial intelligence engine is part of a remote system, and the virtual assistant application communicates with the artificial intelligence engine over a network. However, Wang discloses the artificial intelligence engine is part of a remote system, and the virtual assistant application communicates with the artificial intelligence engine over a network (Wang fig. 1; [0041], “The intelligent system 100 also includes data storage 106, cloud-based server 107 (comprising a remote system), application programming interfaces (APIs) 108, and network 109.”; [0109], “AI applications 102 are specific implementations of the AI system designed to solve particular problems or perform specific tasks”; [0110], “AI applications include conversational AI agents 103, virtual assistants”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Drouin with Wang to provide a server-based AI assistant system. This would have been done to make the system versatile and highly intelligent and thereby improve user experience.

Regarding claim 9, Drouin discloses the head-mounted device of claim 7, but does not disclose wherein the artificial intelligence engine is configured to acquire user information about the user, and the at least one factor comprises the user information about the user.
However, Wang discloses the artificial intelligence engine is configured to acquire user information about the user, and the at least one factor comprises the user information about the user (Wang [0056], “the AI system initially processes the data to extract significant information such as user intents, behaviors, and preferences (the artificial intelligence engine is configured to acquire user information about the user) … After the data has been processed, it is added to the OKB in a structured format … This format can include relevant contextual information such as the time and location of the interaction (the at least one factor comprises the user information about the user; user intent is also interpreted as reading on a factor that comprises the user information about the user)”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Drouin with Wang to utilize an AI system to determine and factor in user information. This would have been done to accurately determine user data which would be used to generate and output useful and relevant information for users.

Regarding claim 10, Drouin discloses the head-mounted device of claim 7, but does not disclose wherein the virtual assistant application is configured to acquire user information about the user from a social-networking system, and the at least one factor comprises the user information about the user. However, Wang discloses wherein the virtual assistant application is configured to acquire user information about the user from a social-networking system, and the at least one factor comprises the user information about the user (Wang [0056], “the AI system initially processes the data to extract significant information such as user intents, behaviors, and preferences (the artificial intelligence engine is configured to acquire user information about the user) … After the data has been processed, it is added to the OKB in a structured format … This format can include relevant contextual information such as the time and location of the interaction (the at least one factor comprises the user information about the user; user intent is also interpreted as reading on a factor that comprises the user information about the user)”; [0070], “the AI system can utilize automated techniques to analyze data from diverse sources, such as social media (acquire user information about the user from a social-networking system)”; [0110], “AI applications include conversational AI agents 103, virtual assistants”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Drouin with Wang to enable an AI assistant to determine user information from social media and factor it in. This would have made Drouin more versatile by accurately determining user data from a wide variety of sources, which would be used to generate and output useful and relevant information for users.

Conclusion

See the notice of references cited (PTO-892) for prior art made of record, including art that is not relied upon but considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JITESH PATEL whose telephone number is (571) 270-3313. The examiner can normally be reached 8am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said A. Broome, can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JITESH PATEL/
Primary Examiner, Art Unit 2612

Prosecution Timeline

Aug 16, 2024 — Application Filed
Feb 21, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602866
DIGITAL TWIN AUTHORING AND EDITING ENVIRONMENT FOR CREATION OF AR/VR AND VIDEO INSTRUCTIONS FROM A SINGLE DEMONSTRATION
2y 5m to grant • Granted Apr 14, 2026
Patent 12597245
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant • Granted Apr 07, 2026
Patent 12586313
DAMAGE DETECTION FROM MULTI-VIEW VISUAL DATA
2y 5m to grant • Granted Mar 24, 2026
Patent 12579739
2D CONTROL OVER 3D VIRTUAL ENVIRONMENTS
2y 5m to grant • Granted Mar 17, 2026
Patent 12579765
DEFINING AND MODIFYING CONTEXT AWARE POLICIES WITH AN EDITING TOOL IN EXTENDED REALITY SYSTEMS
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 91% (+12.4%)
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 398 resolved cases by this examiner. Grant probability derived from career allow rate.
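The projections above follow from the examiner's career figures by simple arithmetic. As a sketch of how the derived numbers reconcile (the additive interview-lift model and the implied TC average are assumptions about the tool's method, not documented behavior):

```python
# Reconciling the dashboard's derived figures with the stated career data.
# Assumption: the "with interview" figure is the base rate plus the stated
# lift in percentage points, and the TC average is implied by the +16.4% delta.
granted, resolved = 312, 398

allow_rate = granted / resolved * 100        # career allow rate, percent
interview_lift = 12.4                        # stated lift, percentage points
with_interview = allow_rate + interview_lift # additive-lift assumption
tc_avg = allow_rate - 16.4                   # implied TC 2600 average

print(round(allow_rate, 1))    # 78.4 -> displayed as 78%
print(round(with_interview))   # 91
print(round(tc_avg, 1))        # 62.0
```

This matches the displayed 78% grant probability and 91% with-interview figure once rounded.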
