Prosecution Insights
Last updated: April 19, 2026
Application No. 16/708,751

EDGE INFERENCE FOR ARTIFICIAL INTELLIGENCE (AI) MODELS

Status: Final Rejection (§103)
Filed: Dec 10, 2019
Examiner: BREENE, PAUL J
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 6 (Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 7-8
Expected Time to Grant: 4y 6m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 56% (29 granted / 52 resolved; +0.8% vs TC avg)
Interview Lift: +34.6% for resolved cases with interview (strong)
Avg Prosecution: 4y 6m (typical timeline)
Currently Pending: 29 applications
Total Applications: 81 (career history, across all art units)

Statute-Specific Performance

§101: 31.2% (-8.8% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 52 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed October 31, 2025 have been fully considered but they are not persuasive. Applicant's arguments with respect to claims 1, 3, 5, 7, 11, 14-16, 21-22, 24-27 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5, 7, 11, 16, 21-22, 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over US Pre-Grant Publication 2019/0325308 (Chung et al.; Chung) in view of US Pre-Grant Publication 2018/0150770 (Shaoib et al.; Shaoib), further in view of US Pre-Grant Publication 2021/0012194 (Laskaridis et al.; Laskaridis).

Regarding claim 1: Chung teaches:

1. A computer-implemented method of locally predictive forwarding for artificial intelligence (AI) models, the computer-implemented method comprising: receiving, by a program and from a client, a request specifying an input; (Chung, ¶0084) "In some embodiments, a server transmits data, e.g. an HTML page, to a user device, e.g. for purposes of displaying data to and receiving user input from a user interacting with the user device, which acts as a client."

2. the program being associated with a first AI model stored locally to the program and further associated with a second AI model stored remotely to the program, the second AI model having a greater computational overhead than the first; (Chung, ¶0022) "In addition, in some cases the reduction in architecture complexity may allow the student machine learning model to run locally on a private user device, e.g., a smartphone or laptop computer [i.e. the program being associated with a first AI model stored locally to the program]." (Chung, ¶0084) "The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network." (Chung, ¶0083) "Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components [i.e. associated with a second AI model stored remotely to the program]."
3. determining whether to process the first input using the first AI model or the second AI model, comprising: (Chung, ¶0045) "The generated student machine learning model output may be compared to the generated teacher machine learning output, and used to determine an updated set of student machine learning model parameters that minimizes the difference between the generated student machine learning model output and the teacher machine learning output [i.e. determining whether to process the first input using the first AI model or the second AI model, comprising:]."

4. [determining, based on the input and via a third AI model,] if a first output generated by the first AI model based on the input is predicted to match a second output generated by the second AI model based on the input, (Chung, ¶0032) "A small, student machine translation model [i.e. if a first output generated by the first AI model] may be trained to translate between multiple languages using several larger, teacher machine translation models [i.e. a second output generated by the second AI model based on the input,] that have each been trained to translate a respective language pair. As another example, the methods and systems described in this specification may be used to perform multi-sentiment prediction of given text segments—namely, predicting multiple different sentiments of a given text segment using a single machine learning model [i.e. is predicted to be]. A small, student machine learning model may be trained to predict multiple different sentiments using several larger, teacher machine learning models that have each been trained to predict a single respective sentiment [i.e. the same as a second output]." Examiner interprets the match between outputs as the same sentiment of a given text segment. See previously attached NPL: Semantic similarity.

5. by the program and in response to the first and second outputs being predicted to be the same, causing the first AI model, in lieu of the second AI model, to generate the first output based on the input; (Chung, ¶0068) "The system may process an augmented subset using the student machine learning model to generate a respective student machine learning model output. The system may then adjust the values of student machine learning model parameters to match the generated student machine learning model to a corresponding generated teacher machine learning model output [i.e. by the program and in response to the first and second outputs being predicted to be the same, causing the first AI model, in lieu of the second AI model, to generate the first output based on the input;]."

6. receiving a second input; (Chung, ¶0008) "In some implementations training the single student machine learning model to perform the machine learning task using (i) the selected one or more subsets, and (ii) respective generated teacher machine learning model outputs… [i.e. receiving a second input;]."
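For orientation, the student-teacher training in the Chung passages above is conventional knowledge distillation: student parameters are adjusted so the student's output matches the teacher's. A minimal sketch of that loop, assuming PyTorch and illustrative model objects (not drawn from Chung's implementation):

```python
# Illustrative teacher-student (knowledge distillation) step, per the mechanism
# Chung's ¶0068 describes: adjust student parameters to match teacher outputs.
# Model objects and the temperature value are assumptions for illustration.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, optimizer, T=2.0):
    """One update minimizing the student/teacher output difference."""
    with torch.no_grad():
        teacher_logits = teacher(batch)   # larger model, server-class
    student_logits = student(batch)       # smaller model, locally deployable
    # KL divergence between temperature-softened distributions (soft targets).
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```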
Shaoib teaches:

1. determining, based on the input and via a third AI model, [if a first output generated by the first AI model based on the input is predicted to be the same as a second output generated by the second AI model based on the input,] (Shaoib, ¶0046, Fig. 5) "FIG. 5 is a schematic representation of an SE classifier 500 of a machine learning model, according to various example embodiments. SE classifier 500 includes a complexity assessment (CA) module 502 that determines complexity of input data 504 received by the SE classifier [i.e. determining, based on the input and via a third AI model,]."

2. wherein the third AI model is trained using, as input data, (i) outputs generated by the first AI model based on training data, (ii) outputs generated by the second AI model based on the training data, and (iii) the training data; (Shaoib, ¶0022) "A traditional approach that may be used by a fixed-effort machine learning is now described to illustrate benefits provided by SE machine learning. A binary support-vector machine (SVM) classifier, for example, may incorporate a specific learning algorithm to build a decision boundary (model) based, at least in part, on input training data, hereinafter called training instances [i.e. the given AI model comprising a third AI model, wherein the third AI model is trained using, as input data, (i) outputs generated by the first AI model based on training data]. The decision boundary may be used to separate data into two categories or classes in a features space." (Shaoib, Fig. 7, ¶0052) "FIG. 7 shows a number of features plotted in a feature space 700, according to various example embodiments. Feature space 700 may have dimensions for a feature 1 and a feature 2, for example. Each "+" and "−" may represent a feature resulting from a feature extraction operation of a test instance. + and − may be two classes for a binary classification algorithm [i.e. and (ii) outputs generated by the second AI model based on the training data;]." (Shaoib, ¶0021) "In contrast, during the training phase SE machine learning uses subsets of data to build a number of relatively simple decision models. During test time, depending on the difficulty of input data, SE machine learning may apply one or more decision models to the input data [i.e. and (iii) the training data;]."

Neither Chung nor Shaoib teaches:

1. determining not to transmit the first input to the second AI model;
2. and causing the first AI model, in lieu of the second AI model, to generate the first output based on the first input; and providing the first output to the client;
3. determining, based on the second input and via the third AI model, that a third output generated by the first AI model based on the second input is not predicted to match a fourth output generated by the second AI model based on the second input;
4. and in response to determining that the third and fourth outputs are not predicted to match, causing the second AI model to generate the fourth output based on the second input.

Laskaridis teaches:

1. determining not to transmit the first input to the second AI model; (Laskaridis, ¶0053) "In various example embodiments, the apparatus may, prior to outputting a processing result, compare a confidence associated with the processing result generated using the first portion of the neural network with a required confidence. If the confidence associated with the processing result is greater than or equal to the required confidence, the apparatus may output the processing result. However, if the confidence associated with the processing result is lower than the required result, the apparatus may not output a processing result from the selected exit point in the first portion of the neural network [i.e. determining not to transmit the first input to the second AI model;]."

2. and causing the first AI model, in lieu of the second AI model, to generate the first output based on the first input; and providing the first output to the client; (Laskaridis, ¶0053) "In various example embodiments, the apparatus may, prior to outputting a processing result, compare a confidence associated with the processing result generated using the first portion of the neural network with a required confidence. If the confidence associated with the processing result is greater than or equal to the required confidence, the apparatus may output the processing result [i.e. and causing the first AI model, in lieu of the second AI model, to generate the first output based on the first input; and providing the first output to the client]."

3. determining, based on the second input and via the third AI model, that a third output generated by the first AI model based on the second input is not predicted to match a fourth output generated by the second AI model based on the second input; (Laskaridis, ¶¶0093-0094, Fig. 5) "The processor 120 may obtain the processing result as output data if the confidence of the processing result is greater than or equal to a predetermined confidence level. The processing result may, for example, include data output from the intermediate exit point of the neural network, and the processor 120 may not perform a computation using the remaining layers of the neural network… The processor 120 may further process the input data via the neural network after the identified exit point if the confidence of the processing result is below a predetermined confidence level [i.e. determining, based on the second input and via the third AI model, that a third output generated by the first AI model based on the second input is not predicted to match a fourth output generated by the second AI model based on the second input;]." Examiner interprets the fourth output as generated as a threshold to determine if the third output is in accordance with the system's objectives.

4. and in response to determining that the third and fourth outputs are not predicted to match, causing the second AI model to generate the fourth output based on the second input. (Laskaridis, ¶¶0093-0094, Fig. 5) "The processor 120 may obtain the processing result as output data if the confidence of the processing result is greater than or equal to a predetermined confidence level. The processing result may, for example, include data output from the intermediate exit point of the neural network, and the processor 120 may not perform a computation using the remaining layers of the neural network… The processor 120 may further process the input data via the neural network after the identified exit point if the confidence of the processing result is below a predetermined confidence level [i.e. and in response to determining that the third and fourth outputs are not predicted to match, causing the second AI model to generate the fourth output based on the second input]."
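The Laskaridis passages describe confidence-gated early exit: a cheap first portion of the network answers when its confidence clears a required level, and only otherwise does computation continue. A minimal sketch of that mechanism, assuming a PyTorch-style split into a first portion, an exit branch, and the remaining layers (the split and the 0.9 threshold are illustrative assumptions):

```python
# Sketch of the confidence-gated early exit in Laskaridis ¶¶0093-0094: emit the
# intermediate result when its confidence meets the required level; otherwise
# continue through the remaining layers of the network.
import torch

def early_exit_infer(head, exit_branch, tail, x, required_confidence=0.9):
    feats = head(x)                      # first portion of the neural network
    early_logits = exit_branch(feats)    # intermediate exit point
    confidence = torch.softmax(early_logits, dim=-1).max().item()
    if confidence >= required_confidence:
        return early_logits              # output early result; skip remaining layers
    return tail(feats)                   # further process via the remaining layers
```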
One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Chung with Shaoib and Laskaridis. The motivation is to improve the student-teacher deployment on various devices with the addition of a scalable-effort system that "dynamically adjusts the amount of computational effort applied to input data based on the complexity of the input data. As used herein, effort refers to the amount of time or energy expended by a computing device, the amount of area required for implementing a computing function in hardware, and so on" (Shaoib, ¶0019). Further, the use of Laskaridis' early-exit outputs achieves "lower latency for a large percentile than previously possible with conventional methods," in turn "enhancing the overall system's throughput" (Laskaridis, ¶0041).
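Read as a whole, the combination the rejection assembles is a predictive-forwarding router: a third model predicts, per input, whether the local (first) model's output would match the remote (second) model's, and only mismatch-predicted inputs reach the remote model. A minimal sketch of the flow recited in claim 1, with hypothetical names (the claim itself does not prescribe an implementation):

```python
# Sketch of the predictive-forwarding flow recited in claim 1: a third "match
# predictor" model decides, per input, whether the local first model will agree
# with the remote second model. All class and function names are hypothetical.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class PredictiveForwarder:
    local_model: Callable[[Any], Any]        # first AI model, stored locally
    remote_model: Callable[[Any], Any]       # second AI model, higher overhead
    match_predictor: Callable[[Any], bool]   # third AI model (binary classifier)

    def handle_request(self, request_input: Any) -> Any:
        if self.match_predictor(request_input):
            # Outputs predicted to match: do not transmit to the remote model;
            # the first model generates the output in lieu of the second.
            return self.local_model(request_input)
        # Outputs not predicted to match: the second model generates the output.
        return self.remote_model(request_input)
```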
Regarding claim 3 and analogous claim 21: The combination of Chung, Shaoib, and Laskaridis teaches the method of claim 1. Chung teaches:

1. wherein the first AI model is a simplified version of the second AI model. (Chung, ¶0041) "In some implementations the student machine learning model 104 is smaller than the teacher machine learning models 102a-102d. That is, the student machine learning model 104 may include less trainable parameters than the teacher machine learning models 102a-102d."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Chung with Shaoib and Laskaridis. The motivation is the same as claim 1.

Regarding claim 5 and analogous claim 22: The combination of Chung, Shaoib, and Laskaridis teaches the method of claim 1. Chung teaches:

1. wherein the second AI model is remotely stored and accessible over a computer communications network. (Chung, ¶0083) "The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), e.g., the Internet."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Chung with Shaoib and Laskaridis. The motivation is the same as claim 1.

Regarding claim 7 and analogous claim 24: The combination of Chung, Shaoib, and Laskaridis teaches the method of claim 1. Chung teaches:

1. wherein at least one of: the first AI model has a faster response time than the second AI model; (Chung, ¶0031) "The student machine learning model is smaller in size compared to the teacher machine learning models, and is therefore faster to serve than the teacher machine learning models."

2. or the second AI model has a greater accuracy than the first AI model. (Chung, ¶0042) "For example, a large student machine learning model 104 may be more accurate in settings where the student machine learning model 104 is to be deployed on a server hosting one or many GPUs, or hardware accelerators such as tensor processing units."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Chung with Shaoib and Laskaridis. The motivation is the same as claim 1.

Regarding claim 25: The combination of Chung, Shaoib, and Laskaridis teaches the method of claim 1. Chung teaches:

1. in response to a determination that the first and second outputs are predicted to be different, cause the second AI model, in lieu of the first AI model, to generate the second output based on the input, thereby preventing an output, different from the second output, from being provided to the client. (Chung, ¶0054) "The system trains a single student machine learning model to perform the plurality of machine learning tasks (step 206). The system trains the student machine learning model using (i) the configured teacher machine learning models, and (ii) the obtained training data, e.g., the union of training examples included in the sets of training data [i.e. in response to a determination that the first and second outputs are predicted to be different, cause the second AI model, in lieu of the first AI model, to generate the second output based on the input, thereby preventing an output, different from the second output, from being provided to the client]." Examiner notes that teacher-student training would involve the outputs of two AI models being different, and would not involve a third-party client receiving any input. See attached NPL: Knowledge distillation.

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Chung with Shaoib and Laskaridis. The motivation is the same as claim 1.

Regarding claim 26: The combination of Chung, Shaoib, and Laskaridis teaches the method of claim 1.

1. wherein the third AI model is trained using a training process separate from one or more training processes used in training the first and second AI models, wherein the third AI model is trained to determine conditions under which the outputs of the first AI model and the results of the second AI model based on the same input will match, wherein the third AI model is a binary classifier that is trained by: (Shaoib, ¶0055) "SE classifier stage 800 may, for example, be used for a binary classification algorithm with two possible class outcomes + and −. + biased classifier 802 and − biased classifier 804 may be trained to detect one particular class with high accuracy."

2. obtaining a training data set comprising client requests; (Chung, ¶0084) "In some embodiments, a server transmits data, e.g. an HTML page, to a user device, e.g. for purposes of displaying data to and receiving user input from a user interacting with the user device, which acts as a client [i.e. obtaining a training data set comprising client requests;]."

3. inputting the training data set into each of the first and second AI models; (Chung, ¶0044) "The student machine learning model 104 may be configured to perform each of the multiple machine learning tasks 114 using the configured multiple teacher machine learning models 102a-102d and the training data 108."

4. identifying, for each input of the training data, whether the outputs from each model match or do not match; (Chung, ¶0032) "A small, student machine translation model may be trained to translate between multiple languages using several larger, teacher machine translation models that have each been trained to translate a respective language pair. As another example, the methods and systems described in this specification may be used to perform multi-sentiment prediction of given text segments—namely, predicting multiple different sentiments of a given text segment using a single machine learning model. A small, student machine learning model may be trained to predict multiple different sentiments using several larger, teacher machine learning models that have each been trained to predict a single respective sentiment [i.e. identifying, for each input of the training data, whether the outputs from each model match or do not match;]."

5. and using the client requests and their respective matching results, training the third AI model to recognize the types of client requests where the first AI model is suitable for response, and those types of client requests for which it is not. (Shaoib, ¶0040) "Support vector machine block 304 can function as a supervised learning model with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. For example, given a set of training data, each marked as belonging to one of two categories, a support vector machine training algorithm builds a machine learning model that assigns new training data into one category or the other." Examiner notes that the categorization of data based on a threshold is known in the discipline.
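Claim 26's recited training loop (run the same inputs through both models, label each input by whether the outputs match, then fit a binary classifier on those labels) can be sketched directly. A hedged illustration: the featurization step and the scikit-learn SVM, echoing Shaoib's SVM framing, are assumptions, not from the claim:

```python
# Sketch of the claim 26 training process: label each training input by whether
# the first and second models' outputs match, then fit a binary classifier
# (the third model) on (features, matched) pairs. Featurization and the use of
# scikit-learn are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def train_match_predictor(first_model, second_model, featurize, client_requests):
    features, labels = [], []
    for request in client_requests:
        out_local = first_model(request)      # output of the first AI model
        out_remote = second_model(request)    # output of the second AI model
        features.append(featurize(request))
        labels.append(int(out_local == out_remote))  # 1 = outputs match
    # Binary classifier per Shaoib's SVM framing; any classifier would do.
    third_model = SVC(kernel="rbf")
    third_model.fit(np.array(features), np.array(labels))
    return third_model
```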
Claims 4, 6, 15, 19-20, 23, 27 are rejected under 35 U.S.C. 103 as being unpatentable over US Pre-Grant Publication 2019/0325308 (Chung et al.; Chung) in view of US Pre-Grant Publication 2018/0150770 (Shaoib et al.; Shaoib), further in view of US Pre-Grant Publication 2021/0012194 (Laskaridis et al.; Laskaridis), still further in view of US Patent 11,948,563 (Liu et al.; Liu).

Regarding claim 4 and analogous claims 15 and 19: The combination of Chung, Shaoib, and Laskaridis teaches the method of claim 1. Liu teaches:

1. wherein the first AI model is generated from the second AI model using at least one of transfer learning or model compression. (Liu, col. 18:32-37) "As an example and not by way of limitation, federated learning may be used by the reasoning module 222. Federated learning is a specific category of distributed machine learning approaches which trains machine learning models using decentralized data residing on end devices such as mobile phones [i.e. wherein the first AI model is generated from the second AI model using at least one of transfer learning]." Examiner notes that federated learning is distributed across all AI models and would be considered generated from the second AI model, as well as from the first AI model to the second. See attached NPL: Federated learning, Wikipedia.

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Chung with Shaoib and Laskaridis in view of Liu. The motivation is to improve the system by leveraging federated learning, which "can personalize models in federated learning by learning task-specific user representations (i.e., embeddings) or by personalizing model weights. Federated user representation learning is a simple, scalable, privacy-preserving, and resource-efficient" approach (Liu, col. 18:42-46).
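Federated learning, as Liu invokes it, trains a shared model from decentralized device data by aggregating locally computed weight updates; raw data never leaves the device. A generic federated-averaging sketch, not drawn from Liu's implementation (all names illustrative):

```python
# Generic federated averaging (FedAvg) sketch: each device trains locally and
# only model weights are aggregated centrally, weighted by dataset size.
import numpy as np

def federated_average(global_weights, local_update, device_datasets, rounds=10):
    """Aggregate locally trained weights; device data stays on-device."""
    w = global_weights
    for _ in range(rounds):
        local_ws = [local_update(w, data) for data in device_datasets]
        sizes = np.array([len(d) for d in device_datasets], dtype=float)
        coef = sizes / sizes.sum()
        # Weighted average of per-device weights, layer by layer.
        w = [sum(c * lw[i] for c, lw in zip(coef, local_ws))
             for i in range(len(w))]
    return w
```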
Regarding claim 6 and analogous claims 20 and 23: The combination of Chung, Shaoib, and Laskaridis teaches the method of claim 1.

1. wherein the second AI model is also locally stored, (Chung, ¶0077) "A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub programs, or portions of code."

2. and wherein the method is performed by an AI-enabled load balancer. (Liu, col. 11:26-31) "The social-networking system 160 may also include suitable components such as network interfaces, security mechanisms, load balancers, failover servers, management-and-network-operations consoles, other suitable components, or any suitable combination thereof."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Chung with Shaoib and Laskaridis in view of Liu. The motivation is to leverage the benefits of load balancers, which "improve the overall performance of applications by decreasing the burden on individual services or clouds, and distribute the demand across different computer surfaces to help maintain application and network sessions" (F5, pg. 1, ¶1).

Regarding claim 27: The combination of Chung, Shaoib, and Laskaridis teaches the method of claim 1.

1. wherein the first and second AI models are trained using separate training processes, wherein the method is performed by an edge device, the edge device comprising an AI-enabled load balancer, (Liu, col. 11:26-31) "The social-networking system 160 may also include suitable components such as network interfaces, security mechanisms, load balancers, failover servers, management-and-network-operations consoles, other suitable components, or any suitable combination thereof."

2. wherein the training data set is generated by: comparing the output of at least the first AI model with the output of the second AI model; (Chung, ¶0072) "In some implementations the system may adjust the values of the parameters of the single student machine learning model by comparing each student machine learning model output to a respective teacher machine learning model output."

3. determining the conditions under which their outputs match; (Chung, ¶0068) "The system may process an augmented subset using the student machine learning model to generate a respective student machine learning model output. The system may then adjust the values of student machine learning model parameters to match the generated student machine learning model to a corresponding generated teacher machine learning model output."

4. and outputting training data to include in the training data set. (Chung, ¶0066) "The system trains the single student machine learning model to perform each of the multiple machine learning tasks using (i) the selected one or more subsets, and (ii) respective generated teacher machine learning model outputs (step 306). Since the generated teacher machine learning model outputs may include soft target probability distributions, as described above, training the student machine learning model using the teacher machine learning model outputs may enable the student machine learning model to learn more information from the teacher models, e.g., indicative of similarities between possible teacher machine learning model outputs."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Chung with Shaoib and Laskaridis in view of Liu. The motivation is the same as claim 4.

Conclusion

Applicant's amendment necessitated the new grounds of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL JUSTIN BREENE whose telephone number is (571) 272-6320. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael J Huntley, can be reached at 303-297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/P.J.B./
Examiner, Art Unit 2129

/MICHAEL J HUNTLEY/
Supervisory Patent Examiner, Art Unit 2129

Prosecution Timeline

Dec 10, 2019: Application Filed
Jan 24, 2023: Non-Final Rejection — §103
Apr 25, 2023: Interview Requested
May 01, 2023: Response Filed
May 01, 2023: Examiner Interview Summary
Jul 31, 2023: Final Rejection — §103
Oct 09, 2023: Response after Non-Final Action
Nov 03, 2023: Applicant Interview (Telephonic)
Nov 10, 2023: Response after Non-Final Action
Nov 22, 2023: Request for Continued Examination
Dec 01, 2023: Response after Non-Final Action
Mar 08, 2024: Non-Final Rejection — §103
Jun 17, 2024: Response Filed
Aug 07, 2024: Applicant Interview (Telephonic)
Aug 09, 2024: Examiner Interview Summary
Sep 11, 2024: Final Rejection — §103
Jan 16, 2025: Response after Non-Final Action
Feb 14, 2025: Request for Continued Examination
Feb 18, 2025: Response after Non-Final Action
Jul 22, 2025: Non-Final Rejection — §103
Oct 31, 2025: Response Filed
Jan 24, 2026: Final Rejection — §103
Mar 26, 2026: Applicant Interview (Telephonic)
Mar 27, 2026: Examiner Interview Summary
Mar 27, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585959: Framework for Learning to Transfer Learn (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579427: Embedding Optimization for Machine Learning Models (granted Mar 17, 2026; 2y 5m to grant)
Patent 12578718: Model Construction Support System and Model Construction Support Method (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572792: Goal-Seek Analysis with Spatial-Temporal Data (granted Mar 10, 2026; 2y 5m to grant)
Patent 12505356: Data Enrichment on Insulated Appliances (granted Dec 23, 2025; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 56%
With Interview: 90% (+34.6% lift)
Median Time to Grant: 4y 6m
PTA Risk: High
Based on 52 resolved cases by this examiner. Grant probability derived from career allow rate.
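The "with interview" figure appears to be additive: the career allow rate plus the measured interview lift (the exact formula is not stated, so this is an assumed reading):

$$56\% + 34.6\% = 90.6\% \approx 90\%$$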

Free tier: 3 strategy analyses per month