Prosecution Insights
Last updated: April 18, 2026
Application No. 17/205,763

Automatic Identification of Improved Machine Learning Models

Status: Final Rejection (§103)
Filed: Mar 18, 2021
Examiner: BREENE, PAUL J
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 4 (Final)

Grant Probability: 56% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 6m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 56% (29 granted / 52 resolved; +0.8% vs TC avg)
Interview Lift: +34.6% on resolved cases with an interview (strong)
Typical Timeline: 4y 6m average prosecution; 29 applications currently pending
Career History: 81 total applications across all art units

Statute-Specific Performance

Statute   Allow Rate   vs TC Avg
§101      31.2%        -8.8%
§103      44.9%        +4.9%
§102      8.1%         -31.9%
§112      14.4%        -25.6%

Tech Center averages are estimates. Based on career data from 52 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent 10,209,974 (Patton et al.; Patton) in view of US Pre-Grant Publication 2020/0125956 (Ravi et al.; Ravi), further in view of US Pre-Grant Publication 2022/0121718 (Barron et al.; Barron).

Regarding claim 1 and analogous claims 10 and 15: Patton teaches:

1. A computer-implemented method for identifying new machine learning models with improved metrics, the computer-implemented method comprising: searching, by a computer, for a new machine learning model that has improved metrics over current metrics of the current machine learning model, wherein improved metrics are deemed improved if at least one metric is improved as compared to an equivalent metric in the current metrics;

(Patton, col. 2: 38-44) "As shown in FIG. 1, the method for model management includes, within a testing platform: building candidate model(s) S200, validating the candidate model(s) S300, and selectively deploying the candidate model(s) into a production environment S400. The method functions to automatically generate (e.g., train), test, and deploy new models into the production environment [i.e. a computer-implemented method for identifying new machine learning models with improved metrics, the computer-implemented method comprising: searching, by a computer, for a new machine learning model that has improved metrics over current metrics of the current machine learning model]."

(Patton, col. 10: 4-8) "In one variation, the evaluation system determines evaluation metric values for the new and old models, and replaces the old model with the new model when the new model has better evaluation metric values [i.e. wherein improved metrics are deemed improved if at least one metric is improved as compared to an equivalent metric in the current metrics]."

2. determining, by the computer, whether a new machine learning model having improved metrics over the current metrics of the current machine learning model was found in the searching;

(Patton, col. 10: 4-8) "In one variation, the evaluation system determines evaluation metric values for the new and old models, and replaces the old model with the new model when the new model has better evaluation metric values [i.e. determining, by the computer, whether a new machine learning model having improved metrics over the current metrics of the current machine learning model was found in the searching]."
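To make the claimed "search" step concrete: the limitation deems a candidate improved if at least one metric beats its equivalent current metric. The following minimal Python sketch is ours, not Patton's or the applicant's; it assumes higher is better for every metric (a latency-style metric would invert the comparison):

```python
from typing import Dict, Optional

Metrics = Dict[str, float]  # e.g. {"precision": 0.91, "recall": 0.84}

def is_improved(candidate: Metrics, current: Metrics) -> bool:
    """At least one shared metric strictly beats the current model's value."""
    shared = candidate.keys() & current.keys()
    return any(candidate[m] > current[m] for m in shared)

def search_for_improved_model(candidates: Dict[str, Metrics],
                              current: Metrics) -> Optional[str]:
    """Return the id of the first candidate with improved metrics, else None."""
    for model_id, metrics in candidates.items():
        if is_improved(metrics, current):
            return model_id
    return None
```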
3. responsive to the computer determining that a new machine learning model having improved metrics over the current metrics of the current machine learning model was found in the searching, determining, by the computer, whether the new machine learning model is compatible with the current machine learning model and compatible with the client device;

(Patton, col. 10: 61-67; col. 11: 1-5) "The orchestration system can schedule different processes of the method (e.g., generate a resource allocation schedule), automatically select processing resources or nodes (e.g., computing hardware) for different processes of the method, automatically instruct the selected resources to execute the assigned processes, or otherwise orchestrate method execution [i.e. responsive to the computer determining that a new machine learning model having improved metrics over the current metrics of the current machine learning model was found in the searching]. For example, the orchestration system can send images to GPUs for feature extraction and labeling, and send text to CPUs for feature extraction and labeling. The orchestration system can manage method performance using a directed acyclic graph, cost optimization, or using any other suitable method [i.e. whether the new machine learning model is compatible with the current machine learning model and compatible with the client device]."

Examiner notes that although the specification does not expressly disclose a discrete step of determining whether a new model is "compatible with the current model and compatible with the client device," under the broadest reasonable interpretation the orchestration and deployment disclosures reasonably encompass the functionality of verifying a new model against existing infrastructure and client environments.

4. responsive to the computer determining that the new machine learning model is compatible with the current machine learning model and compatible with the client device, implementing, by the computer, the new machine learning model having the improved metrics automatically in the client device of the user to increase performance of the client device;

(Patton, col. 8: 60-66) "The deployment system preferably deploys candidate models to the production environment when the respective evaluation metric values satisfy a set of deployment conditions (and does not deploy the candidate model when the evaluation metric values fail the deployment conditions) [i.e. responsive to the computer determining that the new machine learning model is compatible with the current machine learning model and compatible with the client device], but can otherwise control candidate model deployment to the production environment [i.e. implementing, by the computer, the new machine learning model having the improved metrics automatically]."

(Patton, col. 3: 37-42) "The event can optionally be associated with one or more endpoints, wherein notifications can be sent to the endpoints when new instances of the event (e.g., new instances of the event class) are detected. The endpoints are preferably users, but can alternatively be computing systems, display devices, or any suitable endpoint [i.e. in the client device of the user]."

Examiner notes that under BRI the "endpoint" reasonably encompasses a client device of the user.

(Patton, col. 12: 64-67) "In a specific example, the run condition is met when the benefit to running the method (e.g., based on increased detection accuracy, revenue per accurate detection, etc.) [i.e. to increase performance of the client device]."
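Element 3 is the compatibility gate the examiner reads into Patton's orchestration disclosure under BRI. A hedged sketch of what such a gate could check, interface, runtime, and device resources (all field names hypothetical):

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class ModelInfo:
    input_schema: str   # e.g. "image/224x224x3"
    framework: str      # e.g. "tflite"
    size_mb: float

@dataclass
class ClientDevice:
    supported_frameworks: Set[str]
    free_storage_mb: float

def is_compatible(new: ModelInfo, current: ModelInfo, device: ClientDevice) -> bool:
    """Check the new model against the current model's interface and the device."""
    same_interface = new.input_schema == current.input_schema
    runnable = new.framework in device.supported_frameworks
    fits = new.size_mb <= device.free_storage_mb
    return same_interface and runnable and fits
```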
Patton does not teach:

1. generating, by the computer, an extensible machine learning model database that comprises functional factors, nonfunction factors, and metrics for stored machine learning models;

2. and adding, by the computer, the new machine learning model and the improved metrics to the extensible machine learning model database.

Ravi teaches:

1. generating, by the computer, an extensible machine learning model database that comprises functional factors, nonfunction factors, and metrics for stored machine learning models;

(Ravi, ¶0076) "As another example, the application development platform can provide monitoring of and dashboards that display evaluations of models and system health (i.e., a model evaluation service) [i.e. and metrics for stored machine learning models]. For example, the platform can provide analytics on model usage, performance, download status, and/or other measures [i.e. generating, by the computer, an extensible machine learning model database that comprises]. Statistics of performance can include descriptions of accuracy [i.e. functional factors], accuracy under curve, precision vs recall, confusion matrix, speed (e.g., # FLOPs, milliseconds per inference), and model size (e.g., before and after compression) [i.e. nonfunction factors]. The analytics can be used by the developer to make decisions regarding model retraining and compression."

2. and adding, by the computer, the new machine learning model and the improved metrics to the extensible machine learning model database.

(Ravi, ¶0070) "According to another aspect of the present disclosure, the machine intelligence SDK can further include a dedicated machine learning library that can be implemented by the application to run and/or train the models included in the machine intelligence SDK on-device [i.e. and adding, by the computer, the new machine learning model]."

(Ravi, ¶0076) "As another example, the application development platform can provide monitoring of and dashboards that display evaluations of models and system health (i.e., a model evaluation service). For example, the platform can provide analytics on model usage, performance, download status, and/or other measures [i.e. and the improved metrics to the extensible machine learning model database]."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Patton with Ravi. The motivation is to improve the core of Patton (comparing a candidate model's metrics against the currently deployed model) with Ravi's structured, extensible storage mechanism for persisting models with associated metadata. The outcome is predictable: a unified system capable of selecting improved models and maintaining a database of the models and their metrics. As Ravi states, "In particular, the application development platform and SDKs can provide or otherwise leverage a unified, cross-platform application programming interface ("API") that enables access to all of the different machine learning services needed for full machine learning functionality within the application" (Ravi, Abstract).
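For orientation, an "extensible machine learning model database" holding functional factors (e.g., accuracy), non-functional factors (e.g., size, latency), and metrics could look like the sketch below. The schema is our assumption, not drawn from Ravi or the application; the JSON columns stand in for extensibility, since new factors can be recorded without schema changes:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE models (
        model_id      TEXT PRIMARY KEY,
        functional    TEXT,  -- JSON, e.g. {"accuracy": 0.93}
        nonfunctional TEXT,  -- JSON, e.g. {"size_mb": 12.4, "ms_per_inference": 8}
        metrics       TEXT   -- JSON, full evaluation metrics
    )
""")

def add_model(model_id: str, functional: dict, nonfunctional: dict, metrics: dict) -> None:
    """Add (or update) a model and its metrics, per the second Ravi-mapped limitation."""
    conn.execute(
        "INSERT OR REPLACE INTO models VALUES (?, ?, ?, ?)",
        (model_id, json.dumps(functional), json.dumps(nonfunctional), json.dumps(metrics)),
    )
```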
Neither Patton nor Ravi teaches:

1. and wherein the extensible machine learning model database further comprises a user profile for the user, and wherein the user profile comprises current machine learning models with corresponding current metrics of the user, use case of each respective machine learning model, data sets of the user, and user-specified preferences regarding certain machine learning model metrics the user wants improved;

Barron teaches:

1. and wherein the extensible machine learning model database further comprises a user profile for the user, and wherein the user profile comprises current machine learning models with corresponding current metrics of the user, use case of each respective machine learning model, data sets of the user, and user-specified preferences regarding certain machine learning model metrics the user wants improved;

(Barron, ¶0058) "The memory 112 and processor 110 are configured to store and classify, in a database, the profile information of the user as a user profile table. The memory 112 and processor 110 are configured to transmit, by the database, the user profile table to a machine learning database [i.e. and wherein the extensible machine learning model database further comprises a user profile for the user]… The memory 112 and processor 110 are configured to create and identify, by the machine learning database, a plurality of user classifications related to the user profile table [i.e. and wherein the user profile comprises current machine learning models with corresponding current metrics of the user]… The memory 112 and processor 110 are configured to gather, by the browser extension, the browsing data of the user while the user is browsing the internet. The memory 112 and processor 110 are configured to store, in a cloud database server, the browsing data of the user [i.e. data sets of the user]."

(Barron, ¶0059) "In an embodiment, the profile information of the user is classified as the user profile table by using one or more of a plurality of machine learning algorithms and a plurality of artificial intelligence algorithms in a storage mechanism [i.e. use case of each respective machine learning model]."

(Barron, ¶0081) "Dagda will use machine learning and custom algorithms to weigh the above elements and signals to determine the following: 1. Which business vertical the user should belongs to; and 2. The user's current phase within that vertical. Phase defines which stage a user is currently in during a user journey/buying cycle. Phases: I. Announce—User is unaware of the product/service and/or offering in a particular vertical; II. Research/Consideration—User is aware of product/service and/or offering and educating themselves on the product/service and/or offering and other options; III. Intention—User is actively searching for product/service and/or offering and shows behaviors indicating about to purchase product/service and/or offering; IV. Action—User purchases product/service and/or offering. Dagda further determines the specific segment or subsegments that the user should belong to [i.e. and user-specified preferences regarding certain machine learning model metrics the user wants improved]."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Patton and Ravi with Barron. The motivation is to improve the system with the organization of a user profile containing user preferences, and a machine learning database in which to store that profile. As stated in Barron, "one advantage of the present disclosure is that it provides a computer-implemented method and system to gather user information via a browser extension, browser module, or browser application (browser extension)" (Barron, ¶0038).
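Reading the four recited profile contents together, the claimed user profile reduces to a record along these lines (field names are illustrative, drawn from the claim language rather than from Barron):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class UserProfile:
    user_id: str
    current_models: Dict[str, Dict[str, float]]  # model_id -> current metrics
    use_cases: Dict[str, str]                    # model_id -> use case
    data_sets: List[str]                         # the user's data sets
    wants_improved: List[str]                    # metrics the user wants improved
```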
Regarding claim 2 and analogous claims 11 and 16: Patton, Ravi, and Barron teach the method of claim 1. Patton further teaches:

1. responsive to the computer determining that the new machine learning model is not compatible with the current machine learning model, sending, by the computer, a recommendation to the user regarding the new machine learning model having the improved metrics.

(Patton, col. 10: 1-7) "The evaluation system can optionally: benchmark new vs. old models for the same class in live performance and/or determine when an old model (e.g., prior model) should be replaced with a candidate model. In one variation, the evaluation system determines evaluation metric values for the new and old models, and replaces the old model with the new model when the new model has better evaluation metric values [i.e. responsive to the computer determining that the new machine learning model is not compatible with the current machine learning model, sending, by the computer]."

(Patton, col. 10: 12-15) "In a second variation, the evaluation system tracks the class detections that are raised by the deployed models, and tracks the detections (e.g., event notifications) that are converted (e.g., used) by an endpoint or end user [i.e. a recommendation to the user regarding the new machine learning model having the improved metrics]."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Patton with Ravi. The motivation is the same as for claim 1.

Regarding claim 3 and analogous claims 12 and 17: Patton, Ravi, and Barron teach the method of claim 1. Patton further teaches:

1. identifying, by the computer, the current machine learning model running on the data set within the client device of the user; and tracking, by the computer, the current metrics corresponding to the current machine learning model running on the data set within the client device of the user.

(Patton, col. 10: 12-15) "In a second variation, the evaluation system tracks the class detections that are raised by the deployed models [i.e. identifying, by the computer, the current machine learning model running on the data set within the client device of the user], and tracks the detections (e.g., event notifications) that are converted (e.g., used) by an endpoint or end user [i.e. and tracking, by the computer, the current metrics corresponding to the current machine learning model running on the data set within the client device of the user]."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Patton with Ravi. The motivation is the same as for claim 1.

Regarding claim 4 and analogous claim 18: Patton, Ravi, and Barron teach the method of claim 1. Patton further teaches:

1. wherein the computer compares the improved metrics of the new machine learning model with the current metrics of the current machine learning model

(Patton, col. 12: 58-61) "In a sixth variation, the run condition is met when the anticipated candidate model improvement over the prior model exceeds a predetermined threshold (e.g., 10%, 50%, 70%, 90% improvement, etc.) [i.e. wherein the computer compares the improved metrics of the new machine learning model with the current metrics of the current machine learning model]."

2. and provides the user with a predicted performance increase of the new machine learning model over the current machine learning model based on comparison of the improved metrics with the current metrics.

(Patton, col. 12: 64-67; col. 13: 1-3) "In a specific example, the run condition is met when the benefit to running the method (e.g., based on increased detection accuracy, revenue per accurate detection, etc.) exceeds the cost of executing the method (e.g., beyond a predetermined return threshold, such as 0% return, 10% return, 50% return, 70% return, 90% return, etc.) [i.e. and provides the user with a predicted performance increase of the new machine learning model over the current machine learning model based on comparison of the improved metrics with the current metrics]."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Patton with Ravi. The motivation is the same as for claim 1.
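Claim 4's "predicted performance increase" is naturally read as a per-metric delta. One plausible formulation, an assumption on our part that matches the percentage thresholds Patton quotes, is the relative improvement:

```python
def predicted_increase(new: dict, current: dict) -> dict:
    """Per-metric relative improvement of the new model over the current one."""
    return {m: (new[m] - current[m]) / current[m]
            for m in new.keys() & current.keys()
            if current[m] != 0}

# predicted_increase({"recall": 0.90}, {"recall": 0.80})
# -> {"recall": 0.125...}, i.e. a predicted ~12.5% recall improvement
```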
Regarding claim 5 and analogous claims 14 and 19: Patton, Ravi, and Barron teach the method of claim 1. Patton further teaches:

1. wherein the current metrics include at least one of precision, recall, F1 score, F2 score, transparency, and explainability.

(Patton, col. 8: 48-54) "Examples of evaluation metrics that can be determined include: recall or sensitivity, precision, F1-score, support, confusion matrix, accuracy, specificity, conversion metrics (e.g., conversion rate), validation metrics (e.g., whether predicted subsequent events are detected), speed, latency, cost, or any other suitable evaluation metric [i.e. wherein the current metrics include at least one of precision, recall, F1 score, F2 score, transparency, and explainability]."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Patton with Ravi. The motivation is the same as for claim 1.

Regarding claim 6 and analogous claim 20: Patton, Ravi, and Barron teach the method of claim 1. Ravi teaches:

1. wherein the improved metrics are user-specified metrics.

(Ravi, ¶0071) "For example, the SDK can detect and exclude anomalies from being logged into the training data. The on-device training can enable personalization of models based on user-specific data."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Patton with Ravi. The motivation is the same as for claim 1.

Regarding claim 7: Patton, Ravi, and Barron teach the method of claim 1. Ravi teaches:

1. wherein the computer maintains a mapping of type of machine learning model needed for each particular data set of the user and a list of different types of metrics corresponding to each respective machine learning model.

(Ravi, ¶0052-0055) "The training pipeline can create a schema to specify how one or more trainings for the compact machine-learned model will proceed. The training pipeline can further provide an experiment (e.g., tf.Experiment) based API to construct a network. The training pipeline can invoke the training (e.g., starting the training in a wrapper code and/or training infra). The training pipeline can train the compact model until a desired number of steps is achieved. The training pipeline can export the trained compact machine-learned model in a specific format (e.g., TF-Lite format) [i.e. wherein the computer maintains a mapping of type of machine learning model needed for each particular data set of the user]. In some implementations, the trained compact machine-learned model can be then used to run on a computing device (e.g., on-device). In some implementations, the created schema can include several fields, such as experiment name, features (e.g., name of a field, type of a feature, one or more dimensions of a feature, etc.), hyperparameters (e.g., learning rate, number of steps, optimizer, activation layer, loss weight for a pre-trained model, loss weight for the compact model, cross loss weight, etc.) [i.e. and a list of different types of metrics corresponding to each respective machine learning model], a model specification of the compact model that contains multiple fields to construct the compact model, a model specification of the pre-trained model that contains multiple fields to construct the pre-trained model."

(Ravi, ¶0117-0119) "In some implementations, the model specification can include some or all of the following example information: id: A unique identifier for this model instance, model_type: (e.g., "feed_forward", "projection")."

Examiner notes that by storing, for each data set, a schema that includes the corresponding model_type, the system maintains a computer-held mapping between each particular data set and the type of machine learning model used for that data set.

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Patton with Ravi. The motivation is the same as for claim 1.
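Claim 7's two mappings, data set to required model type and model type to tracked metric types, are essentially lookup tables. A sketch with hypothetical entries:

```python
# Hypothetical mapping: which model type each of the user's data sets needs.
dataset_to_model_type = {
    "support_tickets.csv": "text_classifier",
    "product_photos/":     "image_classifier",
}

# Hypothetical mapping: which metric types are tracked per model type.
model_type_to_metrics = {
    "text_classifier":  ["precision", "recall", "f1"],
    "image_classifier": ["accuracy", "ms_per_inference", "size_mb"],
}
```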
Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent 10,209,974 (Patton et al.; Patton) in view of US Pre-Grant Publication 2020/0125956 (Ravi et al.; Ravi), further in view of US Pre-Grant Publication 2022/0121718 (Barron et al.; Barron), further in view of US Pre-Grant Publication 2021/0081836 (Polleri et al.; Polleri).

Regarding claim 8: Patton, Ravi, and Barron teach the method of claim 1. None of Patton, Ravi, and Barron teaches:

1. wherein the computer maintains a user profile that contains current machine learning models with corresponding current metrics of the user, use case of each respective machine learning model, data sets of the user, and user-specified preferences regarding certain machine learning model metrics the user wants improved, and wherein the computer recommends new machine learning models with improved metrics to the user based on the user profile.

Polleri teaches:

1. wherein the computer maintains a user profile that contains current machine learning models with corresponding current metrics of the user, use case of each respective machine learning model, data sets of the user, and user-specified preferences regarding certain machine learning model metrics the user wants improved, and wherein the computer recommends new machine learning models with improved metrics to the user based on the user profile.

(Polleri, ¶0009) "The machine learning platform can generate and store one or more library components that can be used for other machine learning applications. The machine learning platform can allow users to generate a profile which allows the platform to make recommendations based on a user's historical preferences [i.e. wherein the computer maintains a user profile that contains current machine learning models with corresponding current metrics of the user]."

(Polleri, ¶0049) "The interface 104 can include various graphical user interfaces with various menus and user selectable elements. The interface 104 can include a chatbot (e.g., a text based or voice based interface). The user 116 can interact with the interface 104 to identify one or more of: a location of data [i.e. data sets of the user], a desired prediction of machine learning application [i.e. and user-specified preferences regarding certain machine learning model metrics the user wants improved], and various performance metrics for the machine learning model. The model composition engine 132 can interface with library components 168 to identify various pipelines 136, micro service routines 140, software modules 144, and infrastructure models 148 that can be used in the creation of the machine learning model 112 [i.e. use case of each respective machine learning model]."

(Polleri, ¶0053) "The monitoring engine 156 can provide feedback to the model composition engine 132. The feedback can include adjustments to one or more variables or selected machine learning model used in the machine learning model 112 [i.e. and wherein the computer recommends new machine learning models with improved metrics to the user based on the user profile]."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Patton, Ravi, and Barron with Polleri. The motivation is to improve the system by incorporating a user interface that "can monitor and evaluate the outputs of the machine learning model to allow for feedback and adjustments to the model" (Polleri, ¶0008).
Regarding claim 9: Patton, Ravi, and Barron teach the method of claim 1. Neither Patton nor Ravi teaches:

1. wherein the computer defines the current machine learning model based on a set of parameters that includes artificial intelligence domain for the current machine learning model, technology of the current machine learning model, type of the current machine learning model, library needed for the current machine learning model, and current version of the library being used for the current machine learning model.

Polleri teaches:

1. wherein the computer defines the current machine learning model based on a set of parameters that includes artificial intelligence domain for the current machine learning model, technology of the current machine learning model, type of the current machine learning model, library needed for the current machine learning model, and current version of the library being used for the current machine learning model.

(Polleri, ¶0049) "The model composition engine 132 can interface with library components 168 [i.e. library needed for the current machine learning model, and current version of the library being used for the current machine learning model] to identify various pipelines 136, micro service routines 140 [i.e. wherein the computer defines the current machine learning model based on a set of parameters that includes artificial intelligence domain for the current machine learning model], software modules 144 [i.e. technology of the current machine learning model], and infrastructure models 148 [i.e. type of the current machine learning model] that can be used in the creation of the machine learning model 112."

One of ordinary skill in the art, at the time the invention was filed, would have been motivated to modify Patton, Ravi, and Barron with Polleri. The motivation is the same as for claim 8.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL JUSTIN BREENE, whose telephone number is (571) 272-6320. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael J Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/P.J.B./
Examiner, Art Unit 2129

/MICHAEL J HUNTLEY/
Supervisory Patent Examiner, Art Unit 2129

Prosecution Timeline

Mar 18, 2021: Application Filed
Aug 22, 2024: Non-Final Rejection (§103)
Nov 20, 2024: Interview Requested
Dec 03, 2024: Response Filed
Feb 21, 2025: Final Rejection (§103)
Mar 14, 2025: Response after Non-Final Action
May 02, 2025: Request for Continued Examination
May 11, 2025: Response after Non-Final Action
Nov 24, 2025: Non-Final Rejection (§103)
Jan 09, 2026: Interview Requested
Jan 23, 2026: Response Filed
Mar 27, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12585959: Framework for Learning to Transfer Learn (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579427: Embedding Optimization for Machine Learning Models (granted Mar 17, 2026; 2y 5m to grant)
Patent 12578718: Model Construction Support System and Model Construction Support Method (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572792: Goal-Seek Analysis with Spatial-Temporal Data (granted Mar 10, 2026; 2y 5m to grant)
Patent 12505356: Data Enrichment on Insulated Appliances (granted Dec 23, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 56%
With Interview: 90% (+34.6%)
Median Time to Grant: 4y 6m
PTA Risk: High

Based on 52 resolved cases by this examiner. Grant probability is derived from the career allow rate.
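The page does not state how the 90% with-interview figure is derived; a plausible reading (our assumption) is that the interview lift is added in percentage points to the career allow rate:

```python
base = 29 / 52               # 29 granted of 52 resolved -> ~55.8%, shown as 56%
lift = 0.346                 # +34.6% interview lift
print(f"{base + lift:.1%}")  # 90.4%, displayed as 90%
```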
