Prosecution Insights
Last updated: April 19, 2026
Application No. 17/934,898

Presenting Attention States Associated with Voice Commands for Assistant Systems

Status: Final Rejection (§103)
Filed: Sep 23, 2022
Examiner: TRACY JR., EDWARD
Art Unit: 2656
Tech Center: 2600 (Communications)
Assignee: Meta Platforms Inc.
OA Round: 4 (Final)

Grant Probability: 77% (Favorable); 99% with interview
Expected OA Rounds: 5-6
Time to Grant: 2y 10m

Examiner Intelligence

Career Allow Rate: 77% (81 granted / 105 resolved); +15.1% vs TC avg (above average)
Interview Lift: +35.7% in resolved cases with interview
Typical Timeline: 2y 10m avg prosecution; 26 applications currently pending
Career History: 131 total applications across all art units

Statute-Specific Performance

§101: 20.3% (-19.7% vs TC avg)
§103: 71.9% (+31.9% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 3.7% (-36.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 105 resolved cases.

Office Action

§103
DETAILED ACTION

Introduction

1. This Office action is in response to Applicant's submission filed on 11/26/2025. Claims 21-40 are pending in the application and have been examined.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

3. The information disclosure statement (IDS) submitted on 8/26/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

4. The amendment filed 8/11/2025 has been entered and fully considered. With regard to the rejection under 35 U.S.C. 103, the arguments are rendered moot by the new ground of rejection based on U.S. Pat. App. Pub. No. 20160163314 (Fujii et al.).

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6.
Claims 21, 24-26, 28-33, and 35-40 are rejected under 35 U.S.C. 103 as unpatentable over U.S. Pat. App. Pub. No. 20230230303 (Yang et al., hereinafter "Yang") in view of U.S. Pat. App. Pub. No. 20160163314 (Fujii et al., hereinafter "Fujii").

With regard to Claim 21, Yang describes:

A method, comprising: causing presentation, at a head-mounted XR device, of a first representation associated with an assistant system, the first representation indicating that the assistant system is interactable via a first set of voice commands; (Paragraph 42 describes that an avatar can be provided to a user using an augmented reality device.)

detecting one or more voice inputs from a user of the head-mounted XR device; (Paragraph 44 describes that the device receives voice inputs from a user.)

[[in accordance with a determination that the assistant system understood the one or more voice inputs from the user as being directed to the assistant system,]] causing presentation at the head-mounted XR device of a second representation of the assistant system, wherein the second representation is distinct from the first representation, [[and the second representation indicates the assistant system understood a portion of the one or more voice inputs;]] (Paragraphs 59 and 60 describe that the representations of the avatar may change based on detected events or user reactions.)

detecting one or more further voice inputs from the user; (Paragraph 44 describes that the device receives voice inputs from a user.)
and [[in accordance with another determination that the assistant system misunderstood the one or more further voice inputs,]] causing presentation at the head-mounted XR device of a third representation of the assistant system, wherein the third representation is distinct from the first and second representations, [[and the third representation indicates the assistant system misunderstood the one or more further voice inputs, and includes an identification of one or more valid voice inputs for the assistant system.]] (Paragraphs 59 and 60 describe that the representations of the avatar may change based on detected events or user reactions.)

Yang does not explicitly describe changing the representations based on whether or not the assistant system understood or misunderstood the voice inputs, or providing a list of one or more valid voice inputs if the command is misunderstood. However, Fujii describes that an assistant can determine when a user input is misunderstood (paragraph 52), save a list of possibly misunderstood commands (paragraph 75), and provide a list of valid commands when a command is misunderstood (paragraph 80). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the misunderstood user command detection and valid command list display as described by Fujii into the system of Yang to ensure the correction of misunderstandings, as described in paragraph 80 of Fujii.

With respect to Claim 24, Yang describes "the first representation, the second representation, and/or the third representation include an XR assistant avatar, (Paragraph 45 describes that the representation is an avatar with a face.)
and a respective form of the XR assistant avatar presented in one of the first representation, the second representation, and/or the third representation, respectively, is based on one or more of its voice, speech, emotion, tone, pitch, appearance, size, shape, clothing, orientation, position, depth, movement, gesture, facial expression, color, shading, outline, brightness, luminescence, transparency, or an icon associated with the XR assistant avatar." (Paragraph 45 describes that the facial expression can depend on the voice of the avatar.)

With respect to Claim 25, Yang describes "the XR assistant avatar has a first pose while the first representation is presented, and wherein the XR assistant avatar has a second pose, distinct from the first pose, while the second representation is presented." Paragraph 49 describes that the avatar's facial expression (pose) changes based on user input.

With respect to Claim 26, Yang does not explicitly describe this subject matter. However, Fujii describes "the third representation includes an indication that the voice input was not detected by the assistant system." Paragraph 52 of Fujii describes that the system does not activate any command if no command can be detected. The fact that no command has been activated would be apparent to the user. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the failure of detection of a command as described by Fujii into the system of Yang to detect and ensure the correction of misunderstandings, as described in paragraph 80 of Fujii.

With respect to Claims 28-33, storage medium Claim 28 and method Claim 21 are related as a storage medium programmed to perform the same method, with each claimed storage medium function corresponding to each claimed method step. Accordingly, Claims 28-33 are similarly rejected under the same rationale as applied above with respect to Claims 21-26.
With respect to Claims 35-40, system Claim 35 and method Claim 21 are related as a system programmed to perform the same method, with each claimed system function corresponding to each claimed method step. Accordingly, Claims 35-40 are similarly rejected under the same rationale as applied above with respect to Claims 21-26.

7. Claims 22, 23, 29, 30, 36, and 37 are rejected under 35 U.S.C. 103 as unpatentable over Yang in view of Fujii and further in view of U.S. Pat. App. Pub. No. 20220303703 (VanBlon et al., hereinafter "VanBlon").

With regard to Claim 22, Yang in view of Fujii does not explicitly describe this subject matter. However, VanBlon describes "before detecting the one or more voice inputs and the one or more further voice inputs, causing presentation, with the first representation, of an indication that a microphone in electronic communication with the head-mounted XR device is active for receiving voice commands from the user." Figures 4-6 illustrate that a display of the device can include an indication that a microphone is active. Further, paragraph 52 describes that the device may be a virtual reality headset. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the microphone active indicator as described by VanBlon into the system of Yang in view of Fujii to let the user know which microphone is active, as described in paragraph 71 of VanBlon.

With regard to Claim 23, Yang in view of Fujii does not explicitly describe this subject matter. However, VanBlon describes "the indication that the microphone is active is distinct from another portion of the first representation." Figure 5 shows that the microphone active indicator is separate from other portions, such as a portion used to switch microphones.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the separate microphone active indicator as described by VanBlon into the system of Yang in view of Fujii to separately let the user know which microphone is active, as described in paragraph 74 of VanBlon.

With respect to Claims 29 and 30, storage medium Claim 28 and method Claim 21 are related as a storage medium programmed to perform the same method, with each claimed storage medium function corresponding to each claimed method step. Accordingly, Claims 29 and 30 are similarly rejected under the same rationale as applied above with respect to Claims 22 and 23.

With respect to Claims 36 and 37, system Claim 35 and method Claim 21 are related as a system programmed to perform the same method, with each claimed system function corresponding to each claimed method step. Accordingly, Claims 36 and 37 are similarly rejected under the same rationale as applied above with respect to Claims 22 and 23.

8. Claims 27 and 34 are rejected under 35 U.S.C. 103 as unpatentable over Yang in view of Fujii and further in view of U.S. Pat. App. Pub. No. 20210303861 (Becorest et al., hereinafter "Becorest").

With regard to Claim 27, Yang describes "the first representation is presented via the head-mounted XR device." Paragraph 42 of Yang describes that the display device may be an augmented reality device. Yang in view of Fujii does not explicitly describe "presenting a proactive suggestion of voice commands for the user that are valid voice inputs for the assistant system." However, Becorest describes an avatar-guided system. Paragraph 69 of Becorest specifically describes that an avatar system can display suggested commands to a user.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the suggested commands as described by Becorest into the system of Yang in view of Fujii to facilitate user interaction, as described in paragraph 69 of Becorest.

With respect to Claim 34, storage medium Claim 28 and method Claim 21 are related as a storage medium programmed to perform the same method, with each claimed storage medium function corresponding to each claimed method step. Accordingly, Claim 34 is similarly rejected under the same rationale as applied above with respect to Claim 27.

Conclusion

9. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Pat. App. Pub. No. 20070036117 (Taube et al.) also describes a voice activated system that provides a list of valid commands if a voice command is misunderstood.

10. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

11.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD TRACY, whose telephone number is (571) 272-8332. The examiner can normally be reached Monday-Friday, 9 AM-5 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWARD TRACY JR./
Examiner, Art Unit 2656

/ANDREW C FLANDERS/
Supervisory Patent Examiner, Art Unit 2655

Prosecution Timeline

Sep 23, 2022: Application Filed
Sep 30, 2024: Non-Final Rejection (§103)
Jan 06, 2025: Interview Requested
Jan 17, 2025: Applicant Interview (Telephonic); Examiner Interview Summary
Feb 21, 2025: Response Filed
Mar 06, 2025: Final Rejection (§103)
Aug 11, 2025: Request for Continued Examination; Applicant Interview (Telephonic); Examiner Interview Summary
Aug 12, 2025: Response after Non-Final Action
Aug 23, 2025: Non-Final Rejection (§103)
Nov 25, 2025: Applicant Interview (Telephonic); Examiner Interview Summary
Nov 26, 2025: Response Filed
Mar 19, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566969: METHOD AND APPARATUS FOR TRAINING MACHINE READING COMPREHENSION MODEL, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561524: TRAINING MACHINE LEARNING MODELS TO AUTOMATICALLY DETECT AND CORRECT CONTEXTUAL AND LOGICAL ERRORS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12548552: DYNAMIC LANGUAGE SELECTION OF AN AI VOICE ASSISTANCE SYSTEM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12548554: SYSTEM AND METHOD FOR ACTIVE LEARNING BASED MULTILINGUAL SEMANTIC PARSER (granted Feb 10, 2026; 2y 5m to grant)
Patent 12536374: METHOD FOR CONSTRUCTING SENTIMENT CLASSIFICATION MODEL BASED ON METAPHOR IDENTIFICATION (granted Jan 27, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 77% (99% with interview; +35.7% lift)
Median Time to Grant: 2y 10m
PTA Risk: High

Based on 105 resolved cases by this examiner. Grant probability derived from career allow rate.
