Prosecution Insights
Last updated: April 19, 2026
Application No. 17/098,371

Artificial Intelligence Engine

Status: Non-Final OA (§103)
Filed: Nov 14, 2020
Examiner: MAIDO, MAGGIE T
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Chaoyang Semiconductor Jiangyin Technology Co. Ltd.
OA Round: 5 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 5-6
Median Time to Grant: 4y 3m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 64% (23 granted / 36 resolved; +8.9% vs TC avg)
Interview Lift: +20.7% for resolved cases with interview (strong lift; chart compares allow rates with vs. without an interview)
Avg Prosecution (typical timeline): 4y 3m (51 currently pending)
Total Applications (career history): 87, across all art units
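The headline figures in this section are simple ratios and offsets. Below is a minimal Python sketch of how they appear to be derived: the 23/36 counts, the +8.9% delta, and the +20.7% lift are taken from the cards above, while treating the "with interview" figure as base rate plus lift is an assumption about how this page combines them.

```python
# Minimal sketch of the Examiner Intelligence arithmetic (values from the cards above;
# the base-plus-lift combination for the interview estimate is an assumption).

granted, resolved = 23, 36                      # career outcomes shown above
allow_rate = granted / resolved                 # 0.639 -> displayed as 64%

delta_vs_tc = 0.089                             # +8.9% delta shown above
tc_avg_estimate = allow_rate - delta_vs_tc      # implied TC baseline, ~55.0%

interview_lift = 0.207                          # +20.7% lift shown above
with_interview = allow_rate + interview_lift    # ~0.846 -> displayed as 85%

print(f"Career allow rate:        {allow_rate:.1%}")
print(f"Implied TC average:       {tc_avg_estimate:.1%}")
print(f"Estimated with interview: {with_interview:.1%}")
```

Rounding accounts for the small gaps between these computed values and the displayed 64% and 85%.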

Statute-Specific Performance

§101: 25.6% (-14.4% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 2.6% (-37.4% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 36 resolved cases
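The implied Tech Center baseline behind each "vs TC avg" delta can be recovered by subtraction. A short sketch is below; the page does not state exactly what the per-statute percentages measure, so this only reproduces the arithmetic shown.

```python
# Recovering the implied Tech Center baseline for each statute from the examiner's
# rate and the displayed "vs TC avg" delta (values copied from the list above).

examiner_rate = {"§101": 0.256, "§103": 0.561, "§102": 0.026, "§112": 0.153}
delta_vs_tc   = {"§101": -0.144, "§103": 0.161, "§102": -0.374, "§112": -0.247}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]    # e.g. 56.1% - 16.1% = 40.0% for §103
    print(f"{statute}: examiner {rate:.1%}, implied TC average {tc_avg:.1%}")
```

Notably, every statute backs out to the same 40.0% baseline, which suggests the deltas are computed against a single Tech Center average estimate rather than per-statute averages.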

Office Action

Rejection basis: §103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 14 November 2025 has been entered.

Response to Amendment

The amendment filed on 11 August 2025 has been entered. Claims 1-6 were pending; Claims 1 and 5 are amended; Claims 3 and 6 are cancelled; Claims 1-2 and 4-5 remain pending. Applicant's amendments to the Claims have overcome each and every rejection under 35 USC 112(a) and 35 USC 112(b) previously set forth in the Final Office Action mailed 14 May 2025.

Response to Arguments

Applicant's arguments filed on 11 August 2025 have been fully considered, but they are not persuasive. Applicant contends that none of the references cited by the Examiner teach or suggest the limitation of having the VAD module trigger only one of the AED or AKR modules to be active at any one time. Applicant points to paragraph [0028] for support of this limitation. Applicant points out that there is a substantial difference between the purpose of the presently claimed invention and the disclosure of the cited prior art in that the present invention is intended to primarily analyze non-speech audio inputs and to have speech analyzed only for the limited purpose of controlling the analysis of the non-speech audio data. In contrast, both Rand and Czyryba are primarily directed to analyzing the speech data and using the non-speech audio data for the purpose of enhancing the analysis of the speech data, such as to allow the processor to wake from a low-power state.

Examiner respectfully disagrees that the prior art made of record fails to disclose the newly amended limitations of Claim 1, for the reasons presented in the updated grounds of rejection below. Examiner submits that Rand teaches a voice activity detector, coupled with audio detection technologies, which are directly triggered to switch between a variety of modes, including a hands-free mode detecting a wake word and speech and a music mode detecting music or audio. Both modes are triggered to activate separately or in combination with other given modes, influencing parameters, weights, thresholds, and decision logic of the system. Therefore, Rand effectively teaches the newly amended limitations of Claim 1, as claimed.

In response to Applicant's argument that there is a substantial difference between the purpose of the presently claimed invention and the disclosure of the cited prior art in that the present invention is intended to primarily analyze non-speech audio inputs and to have speech analyzed only for the limited purpose of controlling the analysis of the non-speech audio data, a recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim.

The rejection of Claim 1 under 35 USC 103 has been maintained. Rejections of Claims 2 and 4-5, which depend directly from Claim 1, under 35 USC 103 have been maintained.
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-2, 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Rand et al. (U.S. Pre-Grant Publication No. 2020/0150919, hereinafter ‘Rand'), in view of Czyryba et al. (U.S. Pre-Grant Publication No. 2019/0221205, hereinafter 'Czyryba'). Regarding claim 1, Rand teaches An Artificial Intelligence Network (AIN) architecture including a cognitive audio smart microphone, comprising ([0004] Due to the proliferation of voice interfaces in new and portable form factors such as smart headphones, wireless earbuds, smart speakers, smart watches, etc. (“audio devices”), there is a growing need for improved architecture audio detection and audio processing techniques. By analyzing signals received through the audio smart microphone microphones embedded in such devices it is possible to determine valuable contextual information such as what type of environment the user is in (e.g. indoors, outdoors), whether there are other people nearby (e.g. detecting speech), whether specific people are nearby (e.g. detecting a specific speaker's voice), whether someone is trying to get the user's attention (e.g. detecting the utterance of a user's name or a specific phrase), whether specific objects are nearby (e.g. recognizing a siren suggests an emergency vehicle is nearby), and more.; [0074] Existing methods also exist to process these features in ways that are useful to the present invention, especially with respect to voice detection. Each of these ADTs and similar ADTs could be included (or not, as desired) in the ADT component of the overall system (see FIG. 2). 
Artificial Intelligence Network Machine Learning based VADs: Machine Learning based VADs (ML VAD) operate by computing features based on an audio segment and predicting whether the segment is speech or not using a trained ML model.): (a) an audio feature extractor (AFE) module having an input and an output, the input connecting to a plurality of microphones ([0013] According to a further embodiment of the invention, the method may further comprise providing a set of ADT Parameters that determine how each of the ADTs will be run (e.g. for how many buffers); providing a set of Output-Based Gates which determine which of the ADTs will be run (e.g. energy intensive or time consuming ADTs could be ignored when they are not required); and receiving an audio signal through a microphone or microphone array; and executing one or more of the following steps: preprocessing the input audio data and optimizing it for analysis; extracting features from the audio signal, copying them, and routing them to multiple ADTs to be processed in parallel; [0019] The present invention comprises an audio input device (such as a microphone or a microphone array) that audio feature extractor (AFE) module receives an having an input audio signal stream from the environment of the audio device and and an output provides the audio signal stream to an audio detection system.; [0103] The steps performed in FIG. 2 may comprise: [0104] Providing a set of ADT Parameters that determine how each of the ADTs will be run (e.g. for how many buffers); [0105] Providing a set of Output-Based Gates, which determine which of the ADTs will be run (e.g. energy intensive or time consuming ADTs could be ignored when they are not required); [0106] An audio signal the input connecting to a plurality of microphones received through a microphone or microphone array; [0107] Preprocessing the input audio data and optimizing it for analysis; [0108] The relevant features for the selected ADTs (as determined from the Output-Based Gates) are extracted from the audio signal (“feature extraction”), copied, and routed to multiple ADTs to be processed in parallel; [0109] The ADTs process the features and output ADT data); and (b) an artificial intelligence (Al) platform coupled to the AFE, the Al platform generating event descriptors (EDs) ([0086] Al platform generating event descriptors (EDs) Automatic Environment Classifier: technologies and methods exist to recognize ambient sounds and noises and correlate them with known environments. For example, it is possible to determine that the user (or microphone) is likely present in a quiet room, outdoors, or in another space for which sufficient audio data has been collected to be recognized as similar to the input audio signal.), the AI platform including (i) an automatic keyword recognition (AKR) module having an input coupled to the output of the AFE module and configured to detect keywords and to activate features in response to such keyword detection ([0155] The rationale for this is privacy based. If there has been a long period of silence, it would potentially come as a surprise to a remote user that his microphone begins transmitting as a result of voice detection on another device. 
It may be desirable to force voice detection on each device independently before it begins transmitting audio.; [0156] Whenever the second user's voice is detected (a possible trigger), the mode switches automatically into automatic keyword recognition (AKR) module Full-Duplex Conversation Mode.; [0157] An alternative configured to detect keywords and to activate features in response to such keyword detection trigger on the second device could be a wake word or voice command. For example, in Response Mode, the wake word or voice command could be “Respond”. In this mode, the system would only having an input coupled to the output of the AFE module activate the necessary ADTs to listen for the wake word/voice commands contextually after a Half-Duplex communication has begun. That is, if User B starts speaking to User A, then User A must utter the wake word before User A's audio will begin transmitting to User B (thus initiating a Full-Duplex Conversation Mode).), the AKR module comprising; (1) a set of input features ([0078] RNN-based VADs: Recurrent Neural Network based voice detectors are a focus of recent research. They take a set of input features spectral features of an audio segment, such as MFCCs and magnitude spectra, as input and output the likelihood of speech. RNNs take spectral features from a sequence of audio samples into account and analyze temporal relationships. The neural units in each layer outputs to both itself and the next layer, allowing information from past time steps to persist in the same layer. RNNs can also incorporate long short-term memory units, enabling the learning of dependencies that span multiple time steps. This property of RNNs make them ideal for machine learning in audio applications.); and (ii) an acoustic event detection (AED) module having an input coupled to the output of the AFE module and configured to detect audio events in response to the output of the AFE module ([0019] The audio detection system comprises a processor and non-transitory memory with computer instructions thereon, the acoustic event detection (AED) module having an input coupled to the output of the AFE module audio detection system configured to detect audio events in response to the output of the AFE module configured to accept the audio signal stream, process the audio signal stream, and to determine an an audio signal action. The instructions contain a set of Adjustable Parameters that can be easily configured to optimize system performance.), (1) a voice activity detection (VAD) module having an input coupled to the AFE module and configured to trigger the either the AKR module or the AED module to allow only one of either the AKR module or the AED module to operate; and (c) an amplifier that amplifies the EDs to provide audio outputs; wherein the artificial intelligence network is provided directly at the microphone by the AFE module, the AKR module and the AED module ([0021] Audio detection can serve as the basis for making decisions about all inputs and outputs of a system (hereinafter referred to as “Decision Logic”). Decision Logic may refer to a set of instructions or logical operations that determine how input audio is processed, where to route output audio, how audio channels are mixed together on audio devices, when to apply audio effects, as well as a basis for when to run processes unrelated to audio. For example, the detection of EDs specific audio events in the vicinity of a device can be used in logical operations and computations (i.e. 
the Decision Logic) to open and close gates, apply filters, an amplifier that amplifies modulate volumes, adjust gains, and more. In cases where the variables controlled by the Decision Logic are system outputs, we refer to them as provide audio outputs Output Controls.; [0119] One of the potential benefits associated with the above configuration is that certain loops may require less power to compute and/or include only low-latency processes. For example, many machine learning based ADTs (Audio Detection Technologies) and use more power and require more time to compute than a simple energy-based VAD (voice activity detector). Hence, a simple energy-based VAD could be continually run with all other ADTs remaining inactive until it is first until activity is detected wherein the artificial intelligence network is provided directly at the microphone by the AFE module, the AKR module and the AED module determined (from the simple VAD) that there is a reasonable likelihood that voice activity is present, or until it is first determined that an accurate computation is important given the current environment and Conversation Mode. At that point, more robust methods could be used and the paths used in FIG. 2 would change accordingly (i.e. the system would “turn on” higher energy ADTs after the energy-based VAD triggers a change in the Output-Based Gates via the threshold and Decision Logic conditions being met).; [0129] Note that combinations of Environment Settings are possible. In addition, groupings of Adjustable Parameters, weights, thresholds, and Decision Logic are also dependent on specific use cases and modes of conversation. In FIG. 2, Conversation Modes add an optional layer of customization that depends on specific modes of communication or use cases. That is, the a voice activity detection (VAD) module having an input coupled to the AFE module parameters, weights, thresholds, and Decision Logic inherent in the Environment Settings can be further influenced by Conversation Modes.; [0135] Hands-Free Mode: A mode where the hands-free features are prioritized. For example, configured to trigger the either the AKR module wake word detection, speech detection, and natural language processing might be activated or prioritized in Hands-Free mode, but not all the time. Hands-Free mode might only be activated in certain environments and use cases such as while the user is riding a bicycle or driving, for example.; [0137] Music Mode: A or the AED module to allow only one of either the AKR module or the AED module to operate mode in which it has been determined that the user is listening to music, and this must be taken into account when determining how to interact with other modes. For example, it could be the case that there is an ongoing conversation in Full-Duplex Conversation Mode, and it could further be the case the users are deliberately trying to listen to music together while connected through VoIP. In this case, music ducking and music quality would be prioritized, which could limit other functionality including hands-free controls, for example.). 
Rand fails to teach (2) a set of receive nodes coupled to the set of input features in accordance with a one to three mapping; the set of receive nodes being a set of receive nodes of an artificial neural network; each input feature of the set of input features being mapped to three consecutive receive nodes; the AED module comprising a plurality of input receive nodes, each having an output and fully connected to the artificial neural network and Czyryba teaches (2) a set of receive nodes coupled to the set of input features in accordance with a one to three mapping ([0060] For example, input layer 401 may have 143 a set of receive nodes coupled to the set of input features nodes corresponding to each of the 143 dimensions of feature vectors 220. In other examples, feature vectors may have fewer or more elements or dimensions and input layer 401 may have a corresponding number of nodes.; As shown in FIG. 4, neural network 400 may include an input layer 401, hidden layers 402-406, and an output layer 407. Neural network 400 is illustrated as having three input nodes, hidden layers with four nodes each, and six output nodes for the sake of clarity of presentation, however, neural network 400 in accordance with a one to three mapping may include any such input, hidden, and output nodes.); the set of receive nodes being a set of receive nodes of an artificial neural network ([0060] Referring to FIG. 4, an example acoustic model neural network 400 may be arranged in accordance with at least some implementations of the present disclosure. For example, neural network 400 may be the structure for acoustic model 216 and may be implemented as NN unit 222 by acoustic scoring module 206 in some implementations. the set of receive nodes being a set of receive nodes of an artificial neural network Neural network 400 may include any suitable neural network such as an artificial neural network, a deep neural network, a convolutional neural network, or the like.); each input feature of the set of input features being mapped to three consecutive receive nodes ([0060] As shown in FIG. 4, neural network 400 may include an input layer 401, hidden layers 402-406, and an output layer 407. Neural network 400 is illustrated as having three input nodes, hidden layers with four nodes each, and six output nodes for the sake of clarity of presentation, however, neural network 400 each input feature of the set of input features being mapped to three consecutive receive nodes may include any such input, hidden, and output nodes. Input layer 401 may include any suitable number of nodes such as a number of nodes equal to the number of elements in each of feature vectors 220.); the AED module comprising a plurality of input receive nodes, each having an output and fully connected to the artificial neural network ([0060] Referring to FIG. 4, an example acoustic model neural network 400 may be arranged in accordance with at least some implementations of the present disclosure. For example, neural network 400 may be the structure for acoustic model 216 and may be implemented as NN unit 222 by acoustic scoring module 206 in some implementations. Neural network 400 may include any suitable neural network such as an fully connected to the artificial neural network artificial neural network, a deep neural network, a convolutional neural network, or the like. As shown in FIG. 4, neural network 400 may include an input layer 401, hidden layers 402-406, and an output layer 407. 
Neural network 400 is illustrated as having three comprising a plurality of input receive nodes input nodes, hidden layers with four nodes each, and six each having an output output nodes for the sake of clarity of presentation, however, neural network 400 may include any such input, hidden, and output nodes. Input layer 401 may include any suitable number of nodes such as a number of nodes equal to the number of elements in each of feature vectors 220. For example, input layer 401 may have 143 nodes corresponding to each of the 143 dimensions of feature vectors 220. In other examples, feature vectors may have fewer or more elements or dimensions and input layer 401 may have a corresponding number of nodes.) and Rand and Czyryba are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Rand, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Czyryba to Rand before the effective filing date of the claimed invention in order to reduce computational loads, memory capacity requirements, power consumption, and error rates (cf. Czyryba, [0041] To resolve these issues, a number of techniques disclosed herein can be used either alone or together to reduce computational loads, memory capacity requirements, power consumption, and error rates. A centerphone selection technique disclosed herein uses a classification data structure where a phoneme inventory, or lexicon of monophones that represent a language, are iterated through, and the output triphones where each phoneme appears as center-phone (a center HMM-state) are selected. The selected center-phone triphones are sorted according to the number of occurrences, and the triphone with the most occurrences, or N most occurrences, for each phoneme at the centerphone of triphone, are selected to be rejection output from the acoustic model, input to the rejection model, and for that specific phoneme. This process is performed for each phoneme so that the final rejection model may have as many rejections as there are phonemes in the inventory, by one example. When the acoustic model is pruned so only these outputs are provided on the acoustic model for rejected speech, this may substantially reduce the number of speech rejection outputs to the number of monophones, thereby significantly reducing the computational load, memory requirements, and power consumption. Additionally, this technique has found to provide a substantial increase in accuracy by reducing the error rate by 36% especially in noisy and reverberant conditions over conventional ASR systems that do not reduce the number of acoustic model outputs in this way. This appears to be due to the data-driven nature of the approach such that the center-phone tracked triphone has a relatively high probability of rejection with regard to a single monophone or phoneme, and therefore having a surprisingly very good representation over a wide range of input audio data such as different triphones with the same centerphone. This permits a reduction of speech (or non-keyphrase) rejection outputs to have only one neural network output per monophone while still providing excellent coverage with a competitive false rejection rate. 
Also in the disclosed method, since one rejection output may be selected for each phonetic unit/phoneme based on the center-triphone selection method, the most important output can be selected from the center-triphone statistics, which significantly increases the accuracy as well.). Regarding claim 2, Rand, as modified by Czyryba, teaches The AIN architecture of claim 1. Rand teaches wherein the AFE module has high dynamic range to enable detection of a wide range of acoustic events ([0068] As discussed above, embodiments of the present invention may incorporate existing ADTs AFE. Such ADTs often use techniques whereby range to enable detection of a wide range of acoustic events certain features are extracted from an audio signals including:; [0069] Energy: Energy is the sum of magnitude spectrum values over a preset range of bins; it has high dynamic range represents the loudness of an audio segment. A signal-to-noise ratio, calculated by dividing the energy of the current sample to an average calibrated energy of background noise, can be used as part of a simple, effective VAD.). Rand and Czyryba are combinable for the same rationale as set forth above with respect to claim 1. Regarding claim 4, Rand, as modified by Czyryba, teaches The AIN architecture of claim 1. Rand teaches further comprising the AI platform classifying audio data ([0013] automatically selecting the Environment Settings using an Automatic Environment AI platform classifying audio data Classifier that computes the environment from features extracted from the audio signal). Rand and Czyryba are combinable for the same rationale as set forth above with respect to claim 1. Regarding claim 5, Rand, as modified by Czyryba, teaches The AIN architecture of claim 1. Rand teaches the Al platform further comprising a front end configured to perform additional feature extraction ([0012] According to still another embodiment of the present invention, there is provided a method of Contextual Audio Detection, comprising: providing an input audio signal; passing the audio input to one or more Audio Detection Technologies (ADTs); computing the Audio Detection Technologies; and using the results obtained from the ADTs to update instructions comprising one or more of: providing a set of Adjustable Parameters & additional Decision Logic that modulates one or more Output Controls; how the Audio Detection Technologies are processed; and a combination thereof; and providing Modes and Triggers that are used to modulate the Adjustable Parameters and Decision Logic.; [0074] Each of these ADTs and similar ADTs could be included (or not, as desired) in the ADT component of the overall system (see FIG. 2).; [0076] Machine Learning based VADs: the Al platform further comprising a front end Machine Learning based VADs (ML VAD) operate by configured to perform additional feature extraction computing features based on an audio segment and predicting whether the segment is speech or not using a trained ML model. The probability output from the ML algorithm is compared against a threshold. A ConvNet is a specific example of a neural network that can be used in the design of an ML VAD.). Rand and Czyryba are combinable for the same rationale as set forth above with respect to claim 1. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Babaee et al. 
(NPL: “An Overview of Audio Event Detection Methods from Feature Extraction to Classification”) teaches reviewing and categorizing existing AED schemes into preprocessing, feature extraction, and classification methods. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAGGIE MAIDO whose telephone number is (703) 756-1953. The examiner can normally be reached M-Th: 6am - 4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley, can be reached on (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MM/ Examiner, Art Unit 2129 /MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129
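To make the dispute concrete for non-specialist readers, the sketch below illustrates, in plain Python, the two claim 1 limitations argued over in this action: the VAD gating that lets only one of the AKR or AED modules operate at a time, and the one-to-three mapping of each AFE input feature onto consecutive receive nodes. Everything here (function names, the energy threshold, the toy features) is hypothetical and is not drawn from the application's specification or from Rand or Czyryba.

```python
# Illustrative sketch only: not code from the application or the cited references.
# It mirrors two disputed limitations of claim 1 as paraphrased in this Office
# action: (1) a VAD that allows only one of the AKR or AED paths to run per frame,
# and (2) each AFE input feature fanned out to three consecutive receive nodes.

def afe_extract(frame):
    """Toy audio-feature extractor: a couple of scalar features per frame."""
    energy = sum(x * x for x in frame) / max(len(frame), 1)
    zero_crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return [energy, float(zero_crossings)]

def map_features_to_receive_nodes(features, fan_out=3):
    """One-to-three mapping: feature i feeds receive nodes 3i, 3i+1, 3i+2."""
    nodes = [0.0] * (len(features) * fan_out)
    for i, value in enumerate(features):
        for k in range(fan_out):
            nodes[i * fan_out + k] = value
    return nodes

def vad_is_speech(features, energy_threshold=0.01):
    """Toy energy-based VAD deciding which downstream module may run."""
    return features[0] > energy_threshold

def akr_module(nodes):
    return f"AKR active: keyword spotting on {len(nodes)} receive nodes"

def aed_module(nodes):
    return f"AED active: acoustic event detection on {len(nodes)} receive nodes"

def process_frame(frame):
    features = afe_extract(frame)
    nodes = map_features_to_receive_nodes(features)
    # Only one of the two modules is allowed to operate for any given frame.
    return akr_module(nodes) if vad_is_speech(features) else aed_module(nodes)

if __name__ == "__main__":
    speechy_frame = [0.4, -0.5, 0.6, -0.4, 0.5, -0.6]   # high energy, many sign flips
    quiet_frame = [0.01, 0.02, 0.01, 0.00, 0.01, 0.02]  # low-energy background
    print(process_frame(speechy_frame))
    print(process_frame(quiet_frame))
```

The point of contention is whether Rand's mode switching (hands-free mode vs. music mode) reads on this kind of exclusive gating; the sketch only depicts what the claim language describes, not what either reference discloses.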

Prosecution Timeline

Nov 14, 2020: Application Filed
Nov 14, 2023: Non-Final Rejection — §103
Apr 22, 2024: Response Filed
May 28, 2024: Final Rejection — §103
Sep 23, 2024: Response after Non-Final Action
Oct 10, 2024: Request for Continued Examination
Oct 24, 2024: Response after Non-Final Action
Dec 30, 2024: Non-Final Rejection — §103
Mar 06, 2025: Response Filed
May 01, 2025: Final Rejection — §103
Aug 11, 2025: Response after Non-Final Action
Nov 14, 2025: Request for Continued Examination
Nov 20, 2025: Response after Non-Final Action
Dec 09, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12602603: MULTI-AGENT INFERENCE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596933: CONTEXT-AWARE ENTITY LINKING FOR KNOWLEDGE GRAPHS TO SUPPORT DECISION MAKING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579463: GENERATIVE REASONING FOR SYMBOLIC DISCOVERY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579452: EVALUATION SCORE DETERMINATION MACHINE LEARNING MODELS WITH DIFFERENTIAL PERIODIC TIERS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566941: EXTENSION OF EXISTING NEURAL NETWORKS WITHOUT AFFECTING EXISTING OUTPUTS (granted Mar 03, 2026; 2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64%
With Interview: 85% (+20.7%)
Median Time to Grant: 4y 3m
PTA Risk: High
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
