Prosecution Insights
Last updated: April 19, 2026
Application No. 18/646,064

INNER SPEECH SIGNAL DETECTION USING ONLINE LEARNING

Final Rejection (§103)
Filed: Apr 25, 2024
Examiner: WOZNIAK, JAMES S
Art Unit: 2655
Tech Center: 2600 (Communications)
Assignee: Snap Inc.
OA Round: 2 (Final)
Grant Probability: 59% (Moderate)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Grants 59% of resolved cases
Career Allow Rate: 59% (227 granted / 385 resolved; -3.0% vs TC avg)
Interview Lift: +40.1% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution (typical timeline): 3y 7m, with 42 applications currently pending
Total Applications (career history): 427 across all art units

Statute-Specific Performance

§101: 18.1% (-21.9% vs TC avg)
§103: 40.1% (+0.1% vs TC avg)
§102: 18.4% (-21.6% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Tech Center average is an estimate • Based on career data from 385 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In response to the Non-final Office Action from 12/9/2025, Applicant has filed an amendment on 1/15/2026. In this reply, Applicant has amended independent claims 1, 19, and 20 to further recite: the generation of a predicted speech output comprising one or more words, phrases, sentences, or phonemes; presenting, on a graphical user interface (GUI), the predicted speech output; receiving user input verifying whether the predicted speech output is correct or incorrect; associating the user input with the combination of signals as ground-truth information; and then using such predicted speech output and ground-truth information to update the collection of training data for retraining the machine learning model. A number of the dependent claims have also been amended while removing their previously claimed subject matter, claim 10 has been cancelled, and new claim 21 has been added.

Applicant also argues that the prior art of record fails to teach the limitations added via the instant amendment (Remarks, Pages 8-9). These arguments have been fully considered; however, they are moot with respect to the new grounds of rejection, necessitated by the amended claims and further in view of Kothari, et al. (U.S. PG Publication: 2024/0221718 A1).

Applicant argues that the instant amendments resolve the rejection of claims 13-17 under 35 U.S.C. 112(b) (Remarks, Page 8). In response to the correction of the antecedent basis issue in claim 13 via the instant amendment, the 35 U.S.C. 112(b) rejection is now moot and has been withdrawn.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-9, 13-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kothari, et al. (U.S. PG Publication: 2024/0221741 A1; hereinafter Kothari) in view of Kothari, et al. (U.S. PG Publication: 2024/0221718 A1; hereinafter Kothari2).
With respect to Claim 1, Kothari discloses: A method comprising: accessing a machine learning (ML) model that has been trained based on a collection of training data to detect presence of speech (Paragraph 0008- "analyze the signal using a trained machine learning model to determine whether the user is speaking;" Paragraph 0058- accessing stored machine learning models "process the signals to determine if the user is speaking silently or voiced and may determine one or more words or phrases from the signals;" see also Paragraphs 0064, 0080 (discussing various types of collected training data), and 0084); collecting, by a speech signal detection device, a combination of signals comprising electromyograph (EMG) data signals and one or more non-EMG data signals (sensors that capture and measure signals associated with speech including an EMG sensor and non-EMG sensors such as a microphone/IMU, Paragraph 0066; Fig. 3, Elements 311-313); processing the combination of signals by the ML model to predict presence of speech (signals are passed to the ML model to predict whether a user is speaking in a silent or voiced manner where the signals are from a combination of sensors (EMG and non-EMG), Paragraphs 0008, 0056, 0058, 0066, and 0084); updating the collection of training data based on the combination of signals and prediction made by the ML model ("generating training data" that is collected during spontaneous speech in continual training, Paragraph 0143; see also iterative training using the multi-modal gathered data discussed at Paragraphs 0158-0160); and retraining the ML model in an online learning approach using the updated collection of training data (continual updating/tuning of the ML model while the user is operating using the device "in their usual environment" (i.e., the approach is online), Paragraphs 0143 and 0158-0160).

Although Kothari does describe continual updating/tuning of the ML model while the device is in use in the user environment, Kothari does not teach the particular online approach set forth in amended claim 1.
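For orientation only, the following is a minimal sketch of the kind of collect/predict/update/retrain loop recited in claim 1 as mapped above. The feature statistics, window handling, classifier choice, and every identifier below are assumptions made for illustration, not an implementation taken from Kothari or from the application.

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np
from sklearn.linear_model import SGDClassifier


@dataclass
class TrainingPool:
    """Collection of training data that grows as the device is used."""
    features: List[np.ndarray] = field(default_factory=list)
    labels: List[int] = field(default_factory=list)


def featurize(emg: np.ndarray, imu: np.ndarray, audio: np.ndarray) -> np.ndarray:
    """Concatenate simple per-channel statistics from EMG and non-EMG signals."""
    def stats(x: np.ndarray) -> np.ndarray:
        x = np.atleast_2d(x)
        return np.concatenate([x.mean(axis=-1), x.std(axis=-1)])
    return np.concatenate([stats(emg), stats(imu), stats(audio)])


def online_step(model: SGDClassifier, pool: TrainingPool,
                emg: np.ndarray, imu: np.ndarray, audio: np.ndarray,
                retrain_every: int = 50) -> int:
    """Predict speech presence for one signal window using an already trained
    model, then fold the new observation back into the training pool."""
    x = featurize(emg, imu, audio)
    speech_predicted = int(model.predict(x[None, :])[0])

    # Update the collection of training data based on the combination of
    # signals and the model's prediction.
    pool.features.append(x)
    pool.labels.append(speech_predicted)

    # Retrain in an online fashion: here, a periodic partial_fit over the
    # most recently collected windows.
    if len(pool.features) % retrain_every == 0:
        X = np.stack(pool.features[-retrain_every:])
        y = np.array(pool.labels[-retrain_every:])
        model.partial_fit(X, y, classes=[0, 1])

    return speech_predicted
```

The references describe richer feature extraction and neural models; the scikit-learn classifier above is only a stand-in to show the loop structure.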
Kothari2, however, discloses a silent speech interface including the processing to: generate a predicted speech output comprising one or more words, phrases, sentences, or phonemes (Paragraphs 0093 and 0114- "the feedback signal may include text transcribing the user's speech that can be displayed at display 148"; note that such a predicted text transcription can correspond to a segmentation unit, e.g., a sentence, a phrase, a word, a syllable, etc., Paragraph 0116); presenting, on a graphical user interface (GUI), the predicted speech output (Paragraphs 0093 and 0114- "the feedback signal may include text transcribing the user's speech that can be displayed at display 148"); receiving user input verifying whether the predicted speech output is correct or incorrect and associating the user input with the combination of signals as ground-truth information (Paragraphs 0124-0125 and 0209-0210- "In some embodiments, based on the feedback signal, the user may determine that the system mis-heard what the user said (silently) and invoke the system to calibrate," and discussion of the user "correcting words," wherein such words are taken as correct or ground-truth information for training and include a combination of EMG and non-EMG signal data); and updating the collection of training data based on the combination of signals, the predicted speech output, and the ground-truth information (machine learning fine-tuning/re-training data collection is updated with "the audio signal and the EMG signal associated with one or more correcting words to be used as training samples" corresponding to the predicted transcription, Paragraphs 0067 and 0124-0125).

Kothari and Kothari2 are analogous art because they are from a similar field of endeavor in silent speech detection. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to utilize the particular approach to continual updating of the silent speech ML model taught by Kothari2 for the updating/tuning taught by Kothari to provide a predictable result of allowing for real-time feedback that allows a user to immediately understand whether the system has correctly recognized the user's silent speech and to tune the model to improve performance over time (Kothari2, Paragraphs 0003 and 0125).

With respect to Claim 2, Kothari further discloses: The method of claim 1, wherein the ML model is implemented by an individual device external to the speech signal detection device (various configurations of system components including an "external device" that implements a trained ML model outside of a speech detection device having sensors, Paragraphs 0054-0055; Fig. 2, Elements 210, 211, 220 and 221), the speech comprising inner speech, silent speech, or any other form of speech (silent or voiced speech, Paragraphs 0008, 0058, and 0086).

With respect to Claim 3, Kothari further discloses: The method of claim 2, further comprising: converting the combination of signals into a digital signature (additional processing where analog signals from the sensors are converted into a digital signature using an analog-to-digital converter (ADC), Paragraphs 0076 and 0083); and wirelessly transmitting the digital signature from the speech signal detection device to the individual device (after digital processing, signals from the sensor device are sent to the external device via a communication module that includes wireless modalities (Bluetooth, Wi-Fi, etc.), Paragraphs 0058-0059; Fig. 2, Elements 216 and 220).
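Returning to the amended claim-1 limitations mapped to Kothari2 above (predicted output presented on a GUI, user verification, ground-truth association), a minimal sketch follows. The console prompt stands in for a GUI, and every name here is a hypothetical illustration rather than code from either reference.

```python
from typing import List, Tuple

import numpy as np

# One training example: (combination of signals, predicted text, user-verified correct?)
TrainingExample = Tuple[np.ndarray, str, bool]


def verify_prediction(signals: np.ndarray, predicted_text: str,
                      training_data: List[TrainingExample]) -> bool:
    """Present the predicted speech output and record the user's verification."""
    print(f"Predicted: {predicted_text!r}")                    # GUI presentation stand-in
    answer = input("Is this correct? [y/n] ").strip().lower()
    is_correct = answer.startswith("y")

    # The user's input, associated with the same combination of signals,
    # becomes ground-truth information in the training collection.
    training_data.append((signals, predicted_text, is_correct))
    return is_correct
```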
With respect to Claim 4, Kothari further discloses: The method of claim 1, wherein the non-EMG data signals represent movement of certain muscles in a face and neck region, physical movements associated with inner speech, and muscle twitches (IMU sensor that "measures facial movements" including "muscle" strain and frequencies (i.e., a measurement of twitches) for "sub-vocalized" or "silent speech," Paragraphs 0031-0032 and 0038; see also "neck muscle movement" at Paragraphs 0047 and 0050-0051).

With respect to Claim 5, Kothari further discloses: The method of claim 1, wherein the non-EMG data signals comprise at least one of inertial measurement unit (IMU) movement or audio data (the system of Kothari features both types of non-EMG signal sensor data: IMU movement and audio data from a microphone, Paragraph 0038).

With respect to Claim 6, Kothari further discloses: The method of claim 1, wherein the non-EMG data signals are received from at least one of an array of biopotential sensors, motion sensors, sound sensors, or photonic sensors that are independent of the EMG data signals (generation of separate, non-EMG data signals for silent speech detection including an array of biopotential sensors (e.g., EEG sensors), motion sensors (e.g., IMUs filtered at different frequencies), and sound sensors (e.g., ultrasound, multiple individual microphones), Paragraphs 0038, 0056, and 0073).

With respect to Claim 8, Kothari2 discloses: The method of claim 1, wherein the ML model is trained in real time (real-time/low-latency feedback and "immediately provide" calibration/training, Paragraph 0067).

With respect to Claim 9, Kothari further discloses: The method of claim 1, wherein the combination of signals is collected during a first portion of a recording session in which input comprising inner speech for a word or phrase is received, further comprising: collecting, by the speech signal detection device, during a second portion of the recording session, an additional combination of signals comprising EMG data signals and one or more non-EMG data signals associated with inner speech for the word or phrase; and processing the additional combination of signals by the ML model to predict additional presence of inner speech (embodiment where there is a first portion of a recording session for input indicative of silent/subvocal/inner speech using fewer signals and a second portion of a recording session where a combination of signals from EMG sensors along with additional sensors (e.g., microphone, IMU, etc.) are relied upon and processed by a machine learning model to additionally detect words and phrases in the silent speech, Paragraphs 0031, 0077-0078, 0080 (describing a neural network trained to process EMG, IMUs, and microphones), 0081-0082, and 0087).

With respect to Claim 13, Kothari further discloses: The method of claim 1, further comprising: prompting a user to produce a signal for a set of inner speech words or phrases (training subject/user is presented with a prompt comprising phrases, Paragraphs 0139 and 0142); and triggering collection of the combination of signals representing the set of inner speech words or phrases in response to receiving input for initiating a recording session based on prompting of the user (responsive to the prompt, training data is collected for the silent speech words and/or phrases using a combination of sensors (EMG and others), Paragraphs 0031, 0139, and 0145-0146).
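As a rough illustration of the claim-13 style prompted collection just discussed, the sketch below prompts the user for a set of inner-speech phrases and triggers multi-sensor capture on the user's input. The sensor-reading helper, channel counts, and durations are hypothetical placeholders, not an interface from Kothari.

```python
import numpy as np


def read_sensors(duration_s: float, sample_rate: int = 1000) -> dict:
    """Hypothetical stand-in for synchronized EMG / IMU / microphone capture."""
    n = int(duration_s * sample_rate)
    return {
        "emg": np.zeros((8, n)),   # assumed 8 EMG channels
        "imu": np.zeros((6, n)),   # assumed 3-axis accelerometer + 3-axis gyroscope
        "audio": np.zeros(n),      # assumed single microphone channel
    }


def run_recording_session(prompts: list, duration_s: float = 2.0) -> list:
    """Prompt the user per phrase and trigger collection of the signal combination."""
    trials = []
    for phrase in prompts:
        input(f"Silently say: {phrase!r} (press Enter to start recording)")
        signals = read_sensors(duration_s)   # collection triggered by the user's input
        trials.append({"prompt": phrase, "signals": signals})
    return trials
```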
With respect to Claim 14, Kothari discloses: The method of claim 13, further comprising: forming a set of initial trials based on collecting multiple combinations of signals associated with production of inner speech for the set of inner speech words or phrases, wherein the collection of training data comprises the set of initial trials (initial training using various combinations of signals from different sensors (see "subset of modalities") for words or phrases in a silent speech domain, Paragraphs 0139 and 0159-0160).

With respect to Claim 15, Kothari further discloses: The method of claim 14, further comprising: after training the ML model using the collection of training data, presenting results comprising prediction of the presence of inner speech for an additional set of trials associated with a portion of the combination of signals (feedback that is presented to a training subject related to the prediction of silent speech that is used in additional training iterations "during training data collection," Paragraphs 0141 and 0159).

With respect to Claim 17, Kothari further discloses: The method of claim 1, further comprising: applying binary labels to portions of the combination of signals, wherein each binary label comprises either a positive label representing inner speech or a negative label representing signals other than inner speech; under-sampling portions of the combination of signals associated with negative labels; and over-sampling portions of the combination of signals associated with positive labels to balance the collection of training data (positive label that serves to "confirm the silent speech from the user" to move to an active state wherein a negative/non-silent speech label does not move to a second state, Paragraphs 0092-0094; in the positive/confirmed state the EMG portions of the combination of signals are sampled with a "high frequency" or oversampled, Paragraph 0095, wherein in the negative state those samples are undersampled at a "lower sampling rate" and do not proceed to the active state, Paragraphs 0095-0096; note that the "to balance" language is an intended result of a positively recited step, wherein the claimed step that is actually recited is addressed by the teachings of Kothari).

With respect to Claim 18, Kothari further discloses: The method of claim 1, wherein processing the combination of signals comprises: generating a plurality of buffers, each buffer having a specified length and containing a subset of the combination of signals (one or more components such as memory buffers to temporarily store signals recorded by the wearable device, Paragraph 0120; see also the discussion of buffer length, "memory buffers may store the last 5 seconds of recorded signals, the last 10 seconds of recorded signals, the last 20 seconds of recorded signals, the last 30 seconds of recorded, or the last minute of recorded signals," and note that these signals include the combination of signals, Paragraph 0066; Fig. 3, Elements 311-313); extracting features from each buffer using at least one of spectral features, temporal features, or muscle activation sequences over a specified time period; and generating a plurality of feature vectors based on the extracted features (recorded buffer signals are passed on/accessed for "processing and analysis" that includes feature extraction over the recorded periods with respect to at least muscle activation patterns, Paragraphs 0058, 0079, and 0132; a plurality of features that together comprise a vector is obtained via feature extraction, Paragraph 0079).

Claim 19 involves an embodiment of the method of claim 1 practiced using a system comprising at least one processor and at least one memory storing processor-executable instructions, and thus, is rejected under similar rationale. Furthermore, Kothari teaches system implementation of the method of claim 1 comprising at least one processor and at least one memory storing processor-executable instructions (Paragraphs 0162-0164).

Claim 20 involves an embodiment of the method of claim 1 practiced using processor-executable instructions stored on a non-transitory computer-readable storage medium, and thus, is rejected under similar rationale. Furthermore, Kothari teaches implementation of the method of claim 1 as a program stored on a non-transitory computer-readable storage medium (Paragraphs 0162-0164).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kothari, et al. in view of Kothari2, et al. and further in view of Garg, et al. (U.S. PG Publication: 2024/0221738 A1).

With respect to Claim 7, Kothari in view of Kothari2 teaches the silent speech detection method using machine learning model updating and a combination of EMG and non-EMG sensor signals, as applied to Claim 1. Kothari further discloses: the EMG communication device being positioned adjacent to and underneath a neck region (EMG communication device portion positioned adjacent to and underneath a neck region, Paragraph 0050 and see Fig. 1A, Element 121), and the EMG communication device comprising a plurality of electrodes configured to collect the combination of signals (electrodes "contact" this zone, Paragraphs 0050 and 0146). Kothari in view of Kothari2 only fails to teach that the EMG communication device is connected to an augmented reality (AR) headset. Garg, however, discloses that an EMG silent speech communication device may be connected to an AR headset/glasses (Paragraphs 0006 and 0094). Kothari, Kothari2, and Garg are analogous art because they are from a similar field of endeavor in silent speech detection. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to connect the AR headset taught by Garg to the EMG communication device taught by Kothari in view of Kothari2 to provide a predictable result of enabling the use of silent speech for different applications such as those that allow a user to see the result of silent speech processing (Garg, Paragraph 0094).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kothari, et al. in view of Kothari2, et al. and further in view of Thomaz, et al. (U.S. PG Publication: 2024/0420728 A1).

With respect to Claim 11, Kothari in view of Kothari2 teaches the silent speech detection method using machine learning model updating and a combination of EMG and non-EMG sensor signals, as applied to Claim 1.
Although Kothari teaches that a machine learning model can be a "convolutional neural network," Kothari in view of Kothari2 leaves out the structural details of such a network as set forth in claim 11. Thomaz, however, discloses: a convolutional neural network (CNN) comprising two convolutional two-dimensional (2D) layers with max pooling followed by two fully-connected layers (CNN comprising two 2D convolutional layers with "max-pooling" followed by plural (i.e., comprising 2) fully connected layers in a speech detector, Paragraph 0037). Kothari, Kothari2, and Thomaz are analogous art because they are from a similar field of endeavor in speech detection. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to utilize the known form of a CNN network structure taught by Thomaz in the CNN of Kothari in view of Kothari2 to provide a predictable result of enabling data processing (e.g., pattern detection, dimensionality reduction, making an overall prediction, etc.) that enables a CNN to make a prediction of silent speech detection.
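For reference, a minimal sketch of the claim-11 network shape discussed above: two 2D convolutional layers, each followed by max pooling, then two fully connected layers. The channel counts, kernel sizes, and the assumed 64x64 per-channel input size exist only so the layer dimensions line up; they are not taken from Thomaz or the application.

```python
import torch
import torch.nn as nn


class SilentSpeechCNN(nn.Module):
    """Two conv2d + max-pool stages followed by two fully connected layers."""

    def __init__(self, in_channels: int = 8, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),    # first fully connected layer
            nn.ReLU(),
            nn.Linear(128, n_classes),       # second fully connected layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, 64, 64), e.g. one spectrogram-like image per EMG channel
        return self.classifier(self.features(x))


model = SilentSpeechCNN()
logits = model(torch.zeros(1, 8, 64, 64))    # sanity check of the layer dimensions
```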
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Kothari, et al. in view of Kothari2, et al. and further in view of Wang, et al. ("Silent Speech Decoding Using Spectrogram Features Based on Neuromuscular Activities," 2020).

With respect to Claim 12, Kothari in view of Kothari2 teaches the silent speech detection method using machine learning model processing of EMG signals as applied to Claim 1. Kothari also discloses an LLM model as a type of transformer (Paragraph 0065). Kothari in view of Kothari2 does not teach the image-based processing of spectrograms by the machine learning model according to the approach set forth in claim 12. Wang, however, discloses: converting the EMG data signals into one or more time-frequency spectrograms, wherein each frequency band in the spectrograms represents a single feature (Abstract- "transforming the sEMG data into spectrograms that contain abundant information in time and frequency domains and are regarded as channel-interactive"; note that the absence of a spectrogram feature at a particular band (measured in Hz) or its presence over time relates to a spectrogram feature, Fig. 5; Sections 3.1-3.2, Page 5; Fig. 6b, features of six-channel sEMG); processing the one or more spectrograms by the ML model as image data (by relying upon spectrogram images, "the silent speech decoding becomes a video classification, explored by deep learning methods;" see ML decoding shown in Fig. 6); concatenating spectrograms from different EMG channels or different electrodes to form a composite image (use of multiple sEMG channels (e.g., six channels), Section 2.1, Pages 2-3, and Fig. 1 showing electrode sites, wherein the output channels are concatenated, Section 3.2, Page 5); and processing the composite image by the ML model ("the silent speech decoding becomes a video classification, explored by deep learning methods;" see ML decoding shown in Fig. 6 with neural network processing leading to a prediction output). Kothari, Kothari2, and Wang are analogous art because they are from a similar field of endeavor in speech detection. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to utilize the spectrogram-based detection taught by Wang in the silent speech detection taught by Kothari in view of Kothari2 to provide a predictable result of enabling multi-channel silent speech detection that takes spatial correlation into account (Wang, Section 1, Page 2).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Kothari, et al. in view of Kothari2, et al. and further in view of Benster, et al. (U.S. Patent: 12,198,698; having provisional application 63/611031).

With respect to Claim 16, Kothari in view of Kothari2 teaches the silent speech detection method using machine learning model processing of EMG signals, including training procedures, as applied to Claim 1. Kothari in view of Kothari2 does not teach that the training involves identical words or phrases presented in a first sequence and prompting the user to produce signals for the identical words or phrases in a second sequence different from the first sequence during a second training iteration. Benster, however, discloses: the set of inner speech words or phrases comprises a plurality of identical words or phrases presented in a first sequence during a first training iteration and the method further comprises prompting the user to produce signals for the plurality of identical words or phrases presented in a second sequence different from the first sequence during a second training iteration (see the discussion of a training protocol described in Col. 60, Line 31 - Col. 62, Line 13 involving prompting a user to repeat one or more sounds, words, phrases, sentences, or the like wherein the user repeats "similar sounding words or phrases" and "prompting a user to, say one or more variations of a sentence"; thus, the training process involves the repetition of similar utterances/sentences/phrases with variations in the repeated iterations). Kothari, Kothari2, and Benster are analogous art because they are from a similar field of endeavor in speech detection. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to utilize the training protocol taught by Benster in the silent speech detection taught by Kothari in view of Kothari2 to provide a predictable result of training a silent speech decoder for a particular speaking style of a user while accounting for variation for more robust silent speech detection.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Kothari, et al. in view of Kothari2, et al. and further in view of Rajbhandari, et al. (U.S. Patent: 11,151,986).

With respect to Claim 21, Kothari in view of Kothari2 teaches the silent speech detection method using machine learning model processing of EMG signals, including online/continual training procedures, as applied to Claim 1. Kothari in view of Kothari2 does not teach recency weighting based upon the temporal order in which corresponding speech was produced by a user, with higher weights going to more recent portions than to less recent portions, and using such weighting to retrain the ML model. Rajbhandari, however, discloses machine learning model training/updating where "older data may be weighted less than newer data" (Col. 19, Lines 1-5). Kothari, Kothari2, and Rajbhandari are analogous art because they are from a similar field of endeavor in speech machine learning model training. Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to utilize the recency weighting taught by Rajbhandari in the online ML silent speech training taught by Kothari in view of Kothari2 to provide a predictable result of better accounting for the decaying reliability of legacy training data.
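A minimal sketch of recency weighting of the kind at issue in claim 21, where older training portions are weighted less than newer ones. The exponential half-life decay and the scikit-learn classifier are assumptions chosen for brevity, not the scheme described by Rajbhandari or the application.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier


def recency_weights(timestamps: np.ndarray, half_life_s: float = 3600.0) -> np.ndarray:
    """Weight each sample by its age so more recent portions count more."""
    age_s = timestamps.max() - timestamps          # seconds since each portion was produced
    return np.power(0.5, age_s / half_life_s)      # weight halves every half_life_s seconds


def retrain_with_recency(X: np.ndarray, y: np.ndarray,
                         timestamps: np.ndarray) -> SGDClassifier:
    """Retrain with higher weights on more recently produced speech portions."""
    model = SGDClassifier()
    model.fit(X, y, sample_weight=recency_weights(timestamps))
    return model
```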
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Ayyad (U.S. PG Publication: 2023/0162719 A1)- user is presented with a displayed list of words in a GUI in training to retrain an EMG recognition deep learning model (Paragraph 0071).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES S WOZNIAK whose telephone number is (571) 272-7632. The examiner can normally be reached 7-3, off alternate Fridays. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant may use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at (571) 272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JAMES S. WOZNIAK
Primary Examiner
Art Unit 2655

/JAMES S WOZNIAK/
Primary Examiner, Art Unit 2655
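For quick reference on the reply deadline arithmetic above: assuming the Mar 12, 2026 date shown in the prosecution timeline below is the mailing date of this final action, the three-month shortened statutory period runs to Jun 12, 2026, and the absolute six-month deadline (with extensions under 37 CFR 1.136(a)) runs to Sep 12, 2026; filing a first reply by May 12, 2026 (two months) preserves the advisory-action timing safeguard described in the conclusion.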

Prosecution Timeline

Apr 25, 2024: Application Filed
Dec 04, 2025: Non-Final Rejection (§103)
Jan 15, 2026: Response Filed
Mar 12, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597422
SPEAKING PRACTICE SYSTEM WITH RELIABLE PRONUNCIATION EVALUATION
2y 5m to grant; granted Apr 07, 2026
Patent 12586569
Knowledge Distillation with Domain Mismatch For Speech Recognition
2y 5m to grant; granted Mar 24, 2026
Patent 12511476
CONCEPT-CONDITIONED AND PRETRAINED LANGUAGE MODELS BASED ON TIME SERIES TO FREE-FORM TEXT DESCRIPTION GENERATION
2y 5m to grant; granted Dec 30, 2025
Patent 12512100
AUTOMATED SEGMENTATION AND TRANSCRIPTION OF UNLABELED AUDIO SPEECH CORPUS
2y 5m to grant; granted Dec 30, 2025
Patent 12475882
METHOD AND SYSTEM FOR AUTOMATIC SPEECH RECOGNITION (ASR) USING MULTI-TASK LEARNED (MTL) EMBEDDINGS
2y 5m to grant; granted Nov 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 59%
With Interview (+40.1%): 99%
Median Time to Grant: 3y 7m
PTA Risk: Moderate
Based on 385 resolved cases by this examiner. Grant probability derived from career allow rate.
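The with-interview figure is consistent with adding the +40.1 percentage-point interview lift to the 59% base grant probability: 59% + 40.1% ≈ 99%.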
