Prosecution Insights
Last updated: April 19, 2026
Application No. 18/126,717

ADAPTING AUTOMATED ASSISTANT BASED ON DETECTED MOUTH MOVEMENT AND/OR GAZE

Status: Final Rejection (§103)
Filed: Mar 27, 2023
Examiner: NUNEZ, JORDANY
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 4 (Final)

Grant Probability: 60% (Moderate)
OA Rounds: 5-6
To Grant: 4y 0m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 60% (284 granted / 474 resolved), +4.9% vs TC avg
Interview Lift: +33.1% (strong), comparing resolved cases with an interview against those without
Typical Timeline: 4y 0m average prosecution; 8 applications currently pending
Career History: 482 total applications across all art units
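The headline figures above are simple ratios. A minimal sketch of how they are likely derived, assuming the "+33.1% interview lift" is an additive bump over the career rate (the page does not say how the lift is computed, and the variable names below are hypothetical):

```python
# Hypothetical reconstruction of the dashboard's headline examiner metrics.
granted = 284
resolved = 474

career_allow_rate = granted / resolved             # 0.599... -> shown as 60%
tc_average_estimate = 0.550                        # implied by the "+4.9% vs TC avg" delta
delta_vs_tc = career_allow_rate - tc_average_estimate

interview_lift = 0.331                             # "+33.1% interview lift"
with_interview = career_allow_rate + interview_lift  # ~0.93 -> shown as "93% With Interview"

print(f"Career allow rate:   {career_allow_rate:.1%}")
print(f"Delta vs TC average: {delta_vs_tc:+.1%}")
print(f"With interview:      {with_interview:.0%}")
```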

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 474 resolved cases
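One detail worth noting: subtracting each reported delta from the examiner's rate lands at roughly 40% for every statute, which appears to be the single Tech Center average estimate used as the baseline. A quick check, using only the figures listed above (this is plain arithmetic, not additional data from the tool):

```python
# Examiner allow rate by statute and the reported delta vs the Tech Center average.
by_statute = {
    "§101": (0.109, -0.291),
    "§103": (0.575, +0.175),
    "§102": (0.183, -0.217),
    "§112": (0.063, -0.337),
}

for statute, (examiner_rate, delta) in by_statute.items():
    implied_tc_average = examiner_rate - delta   # comes out to ~40% in every row
    print(f"{statute}: examiner {examiner_rate:.1%}, implied TC average {implied_tc_average:.1%}")
```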

Office Action

§103
Continued Examination Under 37 CFR 1.114

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Note: In order to better show what is and is not taught by the references, Examiner shows some words underlined. Words that are underlined indicate teachings of the cited reference, and may not specifically be claimed.

Claims 1, 5-9, 20, 24-28 are rejected under 35 U.S.C. 103 as being unpatentable over Vasilieff et al. (US20130021459, hereinafter Vasilieff) in view of DeVaul et al. (US20150138333, hereinafter DeVaul).

As to claims 1, 20: Vasilieff shows a method, and corresponding device, implemented by one or more processors of a client device that facilitates touch-free interaction between a user and an automated assistant, the method comprising: receiving a stream of image frames that are based on output from one or more cameras of the client device (¶ [0056], [0059]) (e.g., A processor 1202 receives image data via a camera 1204; monitoring an image feed of a user interacting with the computing device); processing the image frames of the stream using at least one trained machine learning model stored locally on the client device to monitor for occurrence of both: a gaze of the user that is directed toward the client device (¶ [0060]) (e.g., detect face of the user looking at the computing device; the user looking at the computing device can include a user looking at a specific region of the computing device or graphical user interface of the computing device), and movement of a mouth of the user that is indicative of the user speaking (¶ [0057], [0004]) (e.g., The face recognizer 1208 and mouth recognizer 1210 process the image data and provide a trigger or notification to the processor 1202 when a face directed to the camera with a moving mouth is detected. Upon receiving a trigger or notification, the processor 1202 passes audio data received via the microphone 1206 to a speech processor 1212 or to a speech processing service 1216 via a network 1214, which provide speech recognition results to the processor 1202; the systems can attempt to determine where the user's speech starts and stops); initially detecting, based on the monitoring, occurrence of both: the gaze of the user, and the movement of the mouth of the user (¶ [0057], [0004]) (e.g., The face recognizer 1208 and mouth recognizer 1210 process the image data and provide a trigger or notification to the processor 1202 when a face directed to the camera with a moving mouth is detected. Upon receiving a trigger or notification, the processor 1202 passes audio data received via the microphone 1206 to a speech processor 1212 or to a speech processing service 1216 via a network 1214, which provide speech recognition results to the processor 1202; the systems can attempt to determine where the user's speech starts and stops); and in response to determining there is the continued occurrence of both the gaze of the user and the movement of the mouth of the user: transmitting sensor data, from one or more sensors of the client device, to one or more remote automated assistant components (¶ [0060]-[0062]) (e.g., identifying an audio start and end event, and based on the audio start event, the system 100 initiates processing of a received audio signal (1306). Processing the audio signal can include performing speech recognition of the received audio signal. Processing the audio signal can occur on a second device separate from the computing device that receives the audio signal.).

Vasilieff fails to specifically show: and in response to initially detecting the occurrence of both the gaze of the user and the movement of the mouth of the user: rendering a human perceptible cue; and determining, based on the monitoring and subsequent to rendering the human perceptible cue, whether there is continued occurrence of both: the gaze of the user, and the movement of the mouth of the user; wherein no sensor data, from the one or more sensors, is transmitted between rendering of the human perceptible cue and prior to determining that there is the continued occurrence of both the gaze of the user and the movement of the mouth of the user.

In the same field of invention, DeVaul teaches: agent interfaces for interactive electronics that support social cues. DeVaul further teaches: and in response to initially detecting the occurrence of the gaze of the user (¶ [0106]) (e.g., an anthropomorphic device may detect a social cue. Detecting the social cue may involve the camera detecting a gaze of a user directed toward the anthropomorphic device): rendering a human perceptible cue (¶ [0108], [0081]) (e.g., in response to detecting the social cue, the anthropomorphic device may transition from the sleep mode to the active mode; while transitioning from the sleep mode to the active mode the anthropomorphic device 402 may greet the detected user, perhaps addressing the user by name and/or asking the user if he or she would like any assistance); and determining, based on the monitoring and subsequent to rendering the human perceptible cue, whether there is continued occurrence of both: the gaze of the user, and the movement of the mouth of the user (¶ [0109]) (e.g., while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may receive an audio signal via the microphone); and in response to determining there is the continued occurrence of both the gaze of the user and the movement of the mouth of the user: transmitting sensor data, from one or more sensors of the client device, to one or more remote automated assistant components (¶ [0110]) (e.g., based on receiving the audio signal while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may (i) transmit a media device command to a media device); wherein no sensor data, from the one or more sensors, is transmitted between rendering of the human perceptible cue and prior to determining that there is the continued occurrence of both the gaze of the user and the movement of the mouth of the user (¶ [0109], [0110]) (e.g., while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may receive an audio signal via the microphone; based on receiving the audio signal while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may (i) transmit a media device command to a media device).

In a different embodiment, DeVaul further teaches: in response to initially detecting the occurrence of both the gaze of the user and the movement of the mouth of the user (¶ [0110]) (e.g., based on receiving the audio signal while the gaze is directed toward the anthropomorphic device,): rendering a human perceptible cue (¶ [0110], [0112]) (e.g., based on receiving the audio signal while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may …(ii) provide an acknowledgement of the audio signal, wherein the media device command is based on the audio signal; the acknowledgment may involve the anthropomorphic device producing a sound via the speaker).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Vasilieff and DeVaul before the effective filing date of the invention, to have combined the teachings of DeVaul with the method, and corresponding device, as taught by Vasilieff. One would have been motivated to make such combination because a way to make it less daunting or complex for a user to use new media technologies would have been obtained and desired, as expressly taught by DeVaul (¶ [0117]).
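An aside on the claim language itself: the gating that independent claims 1 and 20 recite, and that the rejection maps onto Vasilieff and DeVaul, is easier to follow as control flow. The sketch below is purely illustrative; the model, sensors, render_cue, and transmit helpers are hypothetical placeholders and nothing here comes from the cited references or from the application's own code.

```python
# Illustrative sketch of the gating recited in claims 1 and 20 (hypothetical helpers).
def gated_assistant_invocation(frames, model, sensors, render_cue, transmit):
    """Transmit sensor data off-device only after gaze + mouth movement persist past a cue."""
    frames = iter(frames)

    # Monitor the camera stream with the locally stored model until both signals occur.
    for frame in frames:
        if model.detects_gaze(frame) and model.detects_mouth_movement(frame):
            render_cue()      # human-perceptible cue; no sensor data is transmitted yet
            break
    else:
        return False          # stream ended without an initial detection

    # Subsequent to the cue, check for continued occurrence of both signals.
    confirmation = next(frames, None)
    if (confirmation is not None
            and model.detects_gaze(confirmation)
            and model.detects_mouth_movement(confirmation)):
        transmit(sensors.read())   # only now send audio / image frames to remote components
        return True
    return False                   # signals did not continue; nothing is transmitted
```

The dispute captured in the Response to Arguments below concerns the middle step: whether DeVaul's cue is rendered before the continued-occurrence check, or only after the audio signal has already been received.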
As to claims 5, 24: DeVaul further teaches: further comprising: in response to determining there is not the continued occurrence of both the gaze of the user and the movement of the mouth of the user: preventing transmitting of the sensor data to the one or more remote automated assistant components (¶ [0109], [0110]) (e.g., while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may receive an audio signal via the microphone; based on receiving the audio signal while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may (i) transmit a media device command to a media device). One would have been motivated to make such combination because a way to make it less daunting or complex for a user to use new media technologies would have been obtained and desired, as expressly taught by DeVaul (¶ [0117]).

As to claims 6, 25: DeVaul further teaches: wherein the human perceptible cue comprises audible output and wherein rendering the human perceptible cue comprises: rendering the audible output via a speaker of the client device (¶ [0108], [0081]) (e.g., in response to detecting the social cue, the anthropomorphic device may transition from the sleep mode to the active mode; while transitioning from the sleep mode to the active mode the anthropomorphic device 402 may greet the detected user, perhaps addressing the user by name and/or asking the user if he or she would like any assistance). One would have been motivated to make such combination because a way to make it less daunting or complex for a user to use new media technologies would have been obtained and desired, as expressly taught by DeVaul (¶ [0117]).

As to claims 7, 26: DeVaul further teaches: wherein the audible output comprises a spoken output from the automated assistant (¶ [0108], [0081]) (e.g., in response to detecting the social cue, the anthropomorphic device may transition from the sleep mode to the active mode; while transitioning from the sleep mode to the active mode the anthropomorphic device 402 may greet the detected user, perhaps addressing the user by name and/or asking the user if he or she would like any assistance). One would have been motivated to make such combination because a way to make it less daunting or complex for a user to use new media technologies would have been obtained and desired, as expressly taught by DeVaul (¶ [0117]).

As to claims 8, 27: DeVaul further teaches: wherein the human perceptible cue comprises a visual output and wherein rendering the human perceptible cue comprises: rendering the visual output via a visual display of the client device (¶ [0087]) (e.g., anthropomorphic device 402 may acknowledge reception and/or acceptance of the voice command. This acknowledgement may take various forms, such as an audio signal (e.g., a spoken word or phrase, a beep, and/or a tone) and/or a visual signal (e.g., anthropomorphic device 402 may nod and/or display a light).). One would have been motivated to make such combination because a way to make it less daunting or complex for a user to use new media technologies would have been obtained and desired, as expressly taught by DeVaul (¶ [0117]).

As to claims 9, 28: DeVaul further teaches: wherein the visual output is a symbol (¶ [0087]) (e.g., anthropomorphic device 402 may acknowledge reception and/or acceptance of the voice command. This acknowledgement may take various forms, such as an audio signal (e.g., a spoken word or phrase, a beep, and/or a tone) and/or a visual signal (e.g., anthropomorphic device 402 may nod and/or display a light).). One would have been motivated to make such combination because a way to make it less daunting or complex for a user to use new media technologies would have been obtained and desired, as expressly taught by DeVaul (¶ [0117]).

Claims 2, 3, 21, 22 are rejected under 35 U.S.C. 103 as being unpatentable over Vasilieff et al. (US20130021459, hereinafter Vasilieff) in view of DeVaul et al. (US20150138333, hereinafter DeVaul), further in view of Divakaran et al. (US20170160813, hereinafter Divakaran).

As to claims 2, 21: Vasilieff and DeVaul show a method, and corresponding device, substantially as claimed, as specified above. Vasilieff and DeVaul fail to specifically show: wherein the sensor data, transmitted to the one or more remote automated assistant components, comprises the image frames or additional image frames that are based on additional output from the one or more cameras. In the same field of invention, Divakaran teaches: Virtual personal assistant with integrated object recognition. Divakaran further teaches: wherein the sensor data, transmitted to the one or more remote automated assistant components, comprises the image frames or additional image frames that are based on additional output from the one or more cameras (¶ [0071], [0040]) (e.g., the virtual personal assistant system 400 can include a software and/or hardware interface that enables a device to access (e.g., over a network) a virtual personal assistant system 400 running “in the cloud.” A multi-modal virtual personal assistant can also accept visual input, including video or still images, and determine information such as facial expressions, gestures, and iris biometrics (e.g., characteristics of a person's eyes)). Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Vasilieff, DeVaul, and Divakaran before the effective filing date of the invention, to have combined the teachings of Divakaran with the method, and corresponding device, as taught by Vasilieff and DeVaul. One would have been motivated to make such combination because a way to enable a virtual personal assistant to determine a person's intent, as well as the person's emotional, mental or cognitive state, to engage in an interactive dialog would have been obtained and desired, as expressly taught by Divakaran (¶ [0002]).

As to claims 3, 22: Vasilieff and DeVaul show a method, and corresponding device, substantially as claimed, as specified above. Vasilieff and DeVaul fail to specifically show: wherein the sensor data, transmitted to the one or more remote automated assistant components, comprises audio data that is based on output from one or more microphones of the client device. In the same field of invention, Divakaran teaches: Virtual personal assistant with integrated object recognition. Divakaran further teaches: wherein the sensor data, transmitted to the one or more remote automated assistant components, comprises audio data that is based on output from one or more microphones of the client device (¶ [0071], [0040]) (e.g., the virtual personal assistant system 400 can include a software and/or hardware interface that enables a device to access (e.g., over a network) a virtual personal assistant system 400 running “in the cloud.” A multi-modal virtual personal assistant can accept audio input, including natural language and non-verbal sounds such as grunts or laughter.). Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Vasilieff, DeVaul and Divakaran before the effective filing date of the invention, to have combined the teachings of Divakaran with the method, and corresponding device, as taught by Vasilieff and DeVaul. One would have been motivated to make such combination because a way to enable a virtual personal assistant to determine a person's intent, as well as the person's emotional, mental or cognitive state, to engage in an interactive dialog would have been obtained and desired, as expressly taught by Divakaran (¶ [0002]).

Claims 4, 23 are rejected under 35 U.S.C. 103 as being unpatentable over Vasilieff et al. (US20130021459, hereinafter Vasilieff) in view of DeVaul et al. (US20150138333, hereinafter DeVaul), further in view of Scanlon (US20200286484).

As to claims 4, 23: Vasilieff and DeVaul show a method, and corresponding device, substantially as claimed, as specified above. Vasilieff and DeVaul fail to specifically show: wherein the sensor data, transmitted to the one or more remote automated assistant components, comprises buffered sensor data buffered prior to subsequently detecting the continued occurrence of both the gaze of the user and the movement of the mouth of the user. In the same field of invention, Scanlon teaches: a method for speech detection. Scanlon further teaches: wherein the sensor data, transmitted to the one or more remote automated assistant components, comprises buffered sensor data buffered prior to subsequently detecting the continued occurrence of both the gaze of the user and the movement of the mouth of the user (¶ [0090], [0092], [0020]) (e.g., after the user is prompted to speak, the system starts to record audio in a circular buffer, step 66. In this way, the system will have a few seconds of past audio continually in the buffer after the prompt occurs. Any delay arising from a delay in face detection 54 or gaze detection 64 can be compensated for, by retrieving the audio from the buffer, step 68, and using the retrieved audio to populate the start of the audio recording and the speech processing; FIG. 5 shows a method similar to that of FIG. 4, but with the inclusion of an additional verification condition after face detection step 54 and in parallel with the gaze detection step 64. In step 70, the mouth analysis movement function verifies whether the user's mouth is moving; The method can be implemented in a distributed fashion, with different elements of the system, responsible for different functionality, provided in different devices. A common implementation is to have face detection and audio capture local to a single user device, with audio being streamed to a remote system for recording and processing).
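As an illustrative aside before the examiner's combination rationale: the Scanlon feature relied on for claims 4 and 23 is an audio pre-roll, where audio is continuously written into a circular buffer so that, once the gaze and mouth conditions are confirmed, the seconds captured just before confirmation can be prepended to what is transmitted. A minimal sketch of such a buffer follows; it is a hypothetical illustration, not Scanlon's implementation, and the capture and detection calls in the usage comments are placeholders.

```python
from collections import deque

class AudioPreRollBuffer:
    """Keep only the most recent audio chunks so pre-trigger speech is not lost."""

    def __init__(self, max_chunks: int = 50):
        # A deque with maxlen acts as a circular buffer: the oldest chunk falls off.
        self._ring = deque(maxlen=max_chunks)

    def push(self, chunk: bytes) -> None:
        self._ring.append(chunk)

    def drain(self) -> bytes:
        """Return the buffered audio (oldest first) and clear the buffer."""
        data = b"".join(self._ring)
        self._ring.clear()
        return data


# Usage sketch: buffer continuously, then prepend the pre-roll once both
# conditions are confirmed (read_audio_chunk, gaze_and_mouth_confirmed, and
# transmit are hypothetical).
# ring = AudioPreRollBuffer()
# while capturing:
#     ring.push(read_audio_chunk())
#     if gaze_and_mouth_confirmed():
#         transmit(ring.drain() + read_audio_chunk())
```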
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Vasilieff, DeVaul and Scanlon before the effective filing date of the invention, to have combined the teachings of Scanlon with the method, and corresponding device, as taught by Vasilieff and DeVaul. One would have been motivated to make such combination because a more effective way to determine when the user is trying to use the system would have been obtained and desired, as expressly taught by Scanlon (¶ [0004]).

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Response to Arguments

Applicant's arguments have been fully considered but are not persuasive. Examiner reiterates that references to specific columns, figures or lines should not be limiting in any way. The entire reference provides disclosure related to the claimed invention.

Applicant argues: The Office Action, pp. 9-10, alleges that the following features of independent claim 1, as well as similar features of independent claim 20, are rendered obvious by paragraph [0109] of DeVaul: "determining, based on the monitoring and subsequent to rendering the human perceptible cue, whether there is continued occurrence of both: the gaze of the user, and the movement of the mouth of the user". The cited portions of DeVaul fail to teach or suggest that DeVaul's "gaze" and "audio signal" are detected "subsequent to rendering the human perceptible cue", as set forth in independent claim 1. Accordingly, the cited portions of DeVaul fail to teach or suggest "determining whether there is continued occurrence of both: the gaze of the user, and the movement of the mouth of the user" where the "continued occurrence" is "subsequent to rendering the human perceptible cue", as set forth in independent claim 1.

Examiner disagrees. The cited portions of DeVaul are relied upon to show that one of ordinary skill in the art, having the teachings of Vasilieff and DeVaul before the effective filing date of the invention, would have combined the teachings of DeVaul with the method, and corresponding device, as taught by Vasilieff to show "determining whether there is continued occurrence of both: the gaze of the user, and the movement of the mouth of the user" where the "continued occurrence" is "subsequent to rendering the human perceptible cue." For example, DeVaul teaches: in response to initially detecting the occurrence of the gaze of the user (¶ [0106]) (e.g., an anthropomorphic device may detect a social cue. Detecting the social cue may involve the camera detecting a gaze of a user directed toward the anthropomorphic device): rendering a human perceptible cue (¶ [0108], [0081]) (e.g., in response to detecting the social cue, the anthropomorphic device may transition from the sleep mode to the active mode; while transitioning from the sleep mode to the active mode the anthropomorphic device 402 may greet the detected user, perhaps addressing the user by name and/or asking the user if he or she would like any assistance); and determining, based on the monitoring and subsequent to rendering the human perceptible cue, whether there is continued occurrence of both: the gaze of the user, and the movement of the mouth of the user (¶ [0109]) (e.g., while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may receive an audio signal via the microphone); and in response to determining there is the continued occurrence of both the gaze of the user and the movement of the mouth of the user: transmitting sensor data, from one or more sensors of the client device, to one or more remote automated assistant components (¶ [0110]) (e.g., based on receiving the audio signal while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may (i) transmit a media device command to a media device); wherein no sensor data, from the one or more sensors, is transmitted between rendering of the human perceptible cue and prior to determining that there is the continued occurrence of both the gaze of the user and the movement of the mouth of the user (¶ [0109], [0110]) (e.g., while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may receive an audio signal via the microphone; based on receiving the audio signal while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may (i) transmit a media device command to a media device). In a different embodiment, DeVaul further teaches: in response to initially detecting the occurrence of both the gaze of the user and the movement of the mouth of the user (¶ [0110]) (e.g., based on receiving the audio signal while the gaze is directed toward the anthropomorphic device,): rendering a human perceptible cue (¶ [0110], [0112]) (e.g., based on receiving the audio signal while the gaze is directed toward the anthropomorphic device, the anthropomorphic device may …(ii) provide an acknowledgement of the audio signal, wherein the media device command is based on the audio signal; the acknowledgment may involve the anthropomorphic device producing a sound via the speaker). One of ordinary skill in the art, having the teachings of Vasilieff and DeVaul before the effective filing date of the invention, would have combined the teachings of DeVaul with the method, and corresponding device, as taught by Vasilieff to arrive at applicant's claimed invention because a way to make it less daunting or complex for a user to use new media technologies would have been obtained and desired, as expressly taught by DeVaul (¶ [0117]).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Rochford et al. [U.S. 20190019508], determining an area of an eye focus on the display, and associating the area of the eye focus with the object; receiving a verbal command and deriving a command based on a detected set of lip movements. Scheessele [U.S. 20150033130], a computing device detects a user viewing the computing device and outputs a cue if the user is detected to view the computing device. Rajendran [U.S. 2017026764], controlling the volume level of an audio signal in a vehicle includes the computer-implemented steps of detecting human speech within a passenger compartment of the vehicle.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jordany Núñez, whose telephone number is (571) 272-2753. The examiner can normally be reached M-F, 8:30 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at 571-272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JORDANY NUNEZ/
Primary Examiner, Art Unit 2177
3/4/2026

Prosecution Timeline

Mar 27, 2023
Application Filed
Aug 09, 2024
Non-Final Rejection — §103
Jan 03, 2025
Applicant Interview (Telephonic)
Jan 10, 2025
Response Filed
Jan 13, 2025
Examiner Interview Summary
Apr 19, 2025
Final Rejection — §103
Jun 24, 2025
Request for Continued Examination
Jun 30, 2025
Response after Non-Final Action
Aug 28, 2025
Non-Final Rejection — §103
Dec 03, 2025
Response Filed
Mar 04, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579455
ANALYZING MESSAGE FLOWS TO SELECT ACTION CLAUSE PATHS FOR USE IN MANAGEMENT OF INFORMATION TECHNOLOGY ASSETS
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12578835
Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12530430
Detecting a User's Outlier Days Using Data Sensed by the User's Electronic Devices
Granted Jan 20, 2026 • 2y 5m to grant
Patent 12481723
Intelligent Data Ranking System Based on Multi-Facet Intra and Inter-Data Correlation and Data Pattern Recognition
Granted Nov 25, 2025 • 2y 5m to grant
Patent 12430533
NEURAL NETWORK PROCESSING APPARATUS, NEURAL NETWORK PROCESSING METHOD, AND NEURAL NETWORK PROCESSING PROGRAM
Granted Sep 30, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 60%
With Interview: 93% (+33.1%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 474 resolved cases by this examiner. Grant probability derived from career allow rate.
