Prosecution Insights
Last updated: April 19, 2026
Application No. 18/013,237

LEARNING METHOD, LEARNING APPARATUS AND PROGRAM

Status: Final Rejection — §101
Filed: Dec 27, 2022
Examiner: MARU, MATIYAS T
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: NTT, Inc.
OA Round: 2 (Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 4y 6m
Grant Probability With Interview: 70%

Examiner Intelligence

Career Allow Rate: 58% (23 granted / 40 resolved; +2.5% vs TC avg)
Interview Lift: +12.5% across resolved cases with an interview (moderate lift)
Typical Timeline: 4y 6m average prosecution
Career History: 79 total applications across all art units; 39 currently pending

Statute-Specific Performance

§101: 35.9% allow rate (-4.1% vs TC avg)
§103: 50.9% allow rate (+10.9% vs TC avg)
§102: 1.9% allow rate (-38.1% vs TC avg)
§112: 11.3% allow rate (-28.7% vs TC avg)

Tech Center averages are estimates • Based on career data from 40 resolved cases
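A quick consistency check on the statute table above: each statute's allow rate minus its delta should recover the Tech Center average estimate. A minimal sketch (values copied from the table; that every statute recovers the same figure suggests the dashboard compares each statute against a single ≈40% TC baseline, which is an inference, not something the page states):

```python
# Allow rate and delta vs. Tech Center average, in percent (from the table above).
stats = {
    "101": (35.9, -4.1),
    "103": (50.9, +10.9),
    "102": (1.9, -38.1),
    "112": (11.3, -28.7),
}

# Implied Tech Center average for each statute: rate minus delta.
implied_tc_avg = {k: round(rate - delta, 1) for k, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline estimate
```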

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner's Note

The 35 U.S.C. § 103 rejection has been withdrawn because the pending claim was amended to roll up the limitations of the previously allowed claim.

Claim Objections – Unlabeled Equation

Claim 1 is objected to as containing an equation that is not labeled or identified. Specifically, claim 1 recites "wherein the anomalous performance index value is calculated by solving for L(Q|S; Θ) and a related gradient is calculated using an anomaly scores during learning," which includes an equation without a label or identifier. The equation should be clearly identified or labeled to ensure clarity and consistency within the claim. Appropriate correction is required.

Response to Arguments

In Remarks/Arguments (pp. 6–7), Applicant contends:

"Step 2A, Prong One – Claims do not Recite a Judicial Exception. The Office Action asserted that claim 1 falls within the 'Mental Process' grouping of abstract ideas (Office Action, at page 2). … Amended claim 1 recites the limitations of 'the anomalous performance index value is calculated by solving for L(Q|S; Θ) and a related gradient is calculated using an anomaly scores during learning.' (Emphasis added.) At least the above limitations cannot be practically performed in the human mind. For example, a human mind cannot practically perform 'updates parameters during machine learning process.' Applying the rule in MPEP § 2106.04(a)(2)(III)(A), claim 1 does not fall into the grouping of mental process."

Regarding the above argument, the Examiner notes that, per MPEP 2106.04(a)(2) (Abstract Idea Groupings), the amended limitation "the anomalous performance index value is calculated by solving for L(Q|S; Θ) and a related gradient is calculated using an anomaly scores during learning"
still falls under the abstract idea grouping of mathematical concepts. The mathematical concepts grouping is defined as mathematical relationships, mathematical formulas or equations, and mathematical calculations. The limitation recites mathematically evaluating a likelihood function L(Q|S; Θ) and computing a related gradient based on anomaly scores during learning, which are mathematical calculations. A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. Even though the amended limitation does not expressly recite updating parameters during a machine learning process, that limitation would be classified as "apply it," i.e., mere instructions to implement an abstract idea or other exception on a computer.

In Remarks/Arguments (pp. 6–7), Applicant contends:

"Step 2A, Prong Two – Integrated Into a Practical Application. Even assuming, arguendo, amended claim 1 falls into the grouping of a mental process, amended claim 1 is still not directed to an abstract idea because amended claim 1 as a whole integrates the alleged judicial exceptions into a practical application (e.g., 'the anomalous performance index value is calculated by solving for L(Q|S; Θ) and a related gradient is calculated using an anomaly scores during learning'). According to MPEP § 2106.04(d)(1), '[a] claim reciting a judicial exception is not directed to the judicial exception if it also recites additional elements demonstrating that the claim as a whole integrates the exception into a practical application. One way to demonstrate such integration is when the claimed invention improves the functioning of a computer or improves another technology or technical field.'
(Emphasis added.) …"

Regarding the above argument, the Examiner respectfully disagrees with Applicant's assertion that "the anomalous performance index value is calculated by solving for L(Q|S; Θ) and a related gradient is calculated using an anomaly scores during learning" amounts to an improvement to the functioning of a computer or a technical field. The claims, as amended, lack sufficient technical detail to support a conclusion that they recite a technological improvement. In particular, the amended claims do not specify how these calculations are applied in a concrete manner to achieve a technical improvement beyond the abstract analysis itself.

In Remarks/Arguments (p. 8), Applicant contends:

"… Applying the rule set forth in MPEP § 2106.05(a), amended claim 1 recites a particular solution to address the computer-centric challenge of providing methods for machine learning of a high-performance anomaly detection model. For example, claim 1 recites 'the anomalous performance index value is calculated by solving for L(Q|S; Θ) and a related gradient is calculated using an anomaly scores during learning.' (Emphasis added.) These features are neither well-understood, routine, nor conventional in the field."

Regarding the above argument, the Examiner notes that, under Step 2B, the amended limitation merely applies the abstract idea (the mathematical concept of solving a likelihood function and computing a related gradient using anomaly scores) within a generic learning context, without reciting any additional elements that impose a meaningful limit, improve computer functionality, or effect a particular technological application. As such, the limitation does not amount to significantly more than the judicial exception.

In Remarks/Arguments (p. 8), Applicant contends:

"Further, according to MPEP § 2106.05, '[l]imitations that the courts have found to qualify as "significantly more" when recited in a claim with a judicial exception include: Applying the judicial exception with, or by use of, a particular machine, e.g., a Fourdrinier machine (which is understood in the art to have a specific structure comprising a headbox, a paper-making wire, and a series of rolls) that is arranged in a particular way to optimize the speed of the machine while maintaining quality of the formed paper web, as discussed in Eibel Process Co. v. Minn. & Ont. Paper Co., 261 U.S. 45, 64-65 (1923) (see MPEP § 2106.05(b)).' (MPEP § 2106.05.) Similarly, here, claim 1 recites and uses a particular machine, such as a machine learning process. Accordingly, the limitations recited in claim 1 qualify as 'significantly more.'"

Regarding the above argument, the Examiner respectfully disagrees with Applicant's assertion. The cited example in MPEP § 2106.05(b) requires the judicial exception to be applied using a particular machine with a specific structure and arrangement that results in a technical improvement, as in Eibel Process Co., where the Fourdrinier machine's physical configuration optimized paper-making performance. In contrast, claim 1 merely recites performing abstract mathematical calculations within a generic "machine learning process" without specifying any particular machine, specialized architecture, or non-conventional arrangement of components. As such, the claim does not meaningfully limit the judicial exception to a particular machine, nor does it reflect an improvement to machine functionality, and therefore does not qualify as "significantly more" under MPEP § 2106.05(b).

Claim Rejections – 35 U.S.C. § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1–2 and 4–7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

In Step 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the claims recite a process that, under the broadest reasonable interpretation, falls within one or more statutory categories (processes).

In Step 2A, Prong 1, of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a mental process but for the recitation of generic computer components:

Regarding claim 1:

sampling a task t from the task set {1, ..., T} (under the broadest reasonable interpretation, this recites the abstract idea of a mental process: it involves choosing a task from a set of tasks {1, ..., T}, i.e., selecting an item from a list, and such selection can be carried out mentally or manually, e.g., by randomly picking from options; see MPEP 2106.04);

sampling a first subset from a data set Dt of the task t and a second subset from a set obtained by excluding the first subset from the data set Dt (under the broadest reasonable interpretation, this recites the abstract idea of a mental process: it involves taking a dataset for a selected task, dividing it into a first subset, and then forming a second subset from the remaining data, i.e., splitting or grouping items; see MPEP 2106.04);
generating a task vector representing a property of a task t corresponding to the first subset (under the broadest reasonable interpretation, this recites the abstract idea of a mental process: it involves forming a representation (task vector) that captures a property of a selected task based on the first subset of data, i.e., observing characteristics of the subset and grouping them; see MPEP 2106.04);

calculating scores representing respective degrees of anomaly of the feature amount vectors using the nonlinearly transformed feature amount vectors and a preset center vector (under the broadest reasonable interpretation, this recites the abstract idea of a mental process: it involves performing mathematical calculations on feature amount vectors by applying a nonlinear transformation, comparing them to a preset center vector, and deriving numerical scores that represent anomaly degrees);

learning a parameter of the first neural network and a parameter of the second neural network so as to make an anomalous performance index value representing generalized performance of anomaly detection higher using the scores (under the broadest reasonable interpretation, this recites the abstract idea of a mental process: it involves updating parameters of two models based on anomaly scores to improve an index value representing generalized performance of anomaly detection; see MPEP 2106.04);

wherein the anomalous performance index value is calculated by solving for L(Q|S; Θ) and a related gradient is calculated using an anomaly scores during learning (under the broadest reasonable interpretation, this recites the abstract idea of a mathematical concept: it recites mathematically evaluating a likelihood function L(Q|S; Θ) and computing a related gradient based on anomaly scores, which are mathematical calculations, and a claim that recites a mathematical calculation will be considered as falling within the "mathematical concepts" grouping; see MPEP 2106.04(a)(2));
wherein the calculating of the score includes calculating a distance between values obtained by linearly projecting the nonlinearly transformed feature amount vectors using a linear projection vector ^w and a value obtained by linearly projecting the center vector using the linear projection vector ^w as the scores (under the broadest reasonable interpretation, this recites the abstract idea of a mental process: it involves determining a distance between projected values of transformed feature vectors and a projected center vector, i.e., performing a linear projection, comparing values, and computing a distance; see MPEP 2106.04).

If the claim limitations, under their broadest reasonable interpretation, cover performance as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping. Accordingly, the claim recites an abstract idea.

In Step 2A, Prong 2, of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application, as evaluated below:

• The preamble is deemed insufficient to transform the judicial exception into a patentable invention because the preamble generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h).

• receiving as input a set of data sets D = {D1, ..., DT} (deemed insufficient to transform the judicial exception into a patentable invention because the limitation is directed to mere data gathering, which is considered insignificant extra-solution activity; see MPEP 2106.05(g));

• wherein a task set is {1, ..., T} and a data set including data at least including feature amount vectors representing features of cases of a task t ∈ {1, ..., T} is denoted as Dt (deemed insufficient to transform the judicial exception into a patentable invention because the limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h));

• nonlinearly transforming feature amount vectors included in data included in the second subset by a second neural network using the task vector (deemed insufficient to transform the judicial exception into a patentable invention because the limitation does not amount to more than a recitation of the words "apply it" (or an equivalent), i.e., mere instructions to implement an abstract idea on a computer; see MPEP 2106.05(f)).

In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception:

Regarding limitation (III): mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, is deemed insufficient to transform the judicial exception into a patentable invention because such limitations generally apply a generic computer and/or process to the judicial exception; see MPEP 2106.05(f).

Regarding limitation (I): the additional elements considered extra/post-solution activity, as analyzed above, are well-understood, routine, and conventional. Specifically, the courts have recognized the following computer functions as well-understood, routine, and conventional: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP 2106.05(d)(II).

Regarding limitation (II): the additional elements are deemed insufficient to transform the judicial exception into a patentable invention because they generally link the judicial exception to a technological environment; see MPEP 2106.05(h).

As analyzed above, the additional elements do not integrate the noted judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Regarding claim 6: the recitation "A learning apparatus comprising: a memory; and a processor configured to execute:" is deemed insufficient to transform the judicial exception into a patentable invention because it does not amount to more than a recitation of the words "apply it" (or an equivalent), i.e., mere instructions to implement an abstract idea on a computer; see MPEP 2106.05(f). The rest of the limitations are analogous to those of claim 1 and are rejected under a similar rationale.

Regarding claim 7: the recitation "A non-transitory computer-readable recording medium having computer-readable instructions stored thereon, which, when executed, cause a computer to perform the learning method according to claim 1" is deemed insufficient to transform the judicial exception into a patentable invention because it does not amount to more than a recitation of the words "apply it" (or an equivalent), i.e., mere instructions to implement an abstract idea on a computer; see MPEP 2106.05(f).
Regarding claim 2: claim 2 depends from claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. The claim recites: "wherein the first neural network includes a first feedforward neural network and a second feedforward neural network." This additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. The claim further recites: "wherein the generation includes generating the task vector by generating a vector in which each item of data included in the first subset is aggregated" (under the broadest reasonable interpretation, this recites the abstract idea of a mental process: it involves generating a task vector by combining items of data in the first subset, i.e., collecting and summarizing information; see MPEP 2106.04) "by the first feedforward neural network, and then converting the generated vector by the second feedforward neural network." The latter limitation is deemed insufficient to transform the judicial exception into a patentable invention because it is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, and is considered to add the words "apply it" (or an equivalent) to the judicial exception; see MPEP 2106.05(f). Limitations directed to using the computer as a tool for implementing an abstract idea cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Regarding claim 4: claim 4 depends from claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. The claim recites: "wherein the linear projection vector ^w is a vector calculated such that a distance between anomalous data among data included in the first subset and the center vector is as long as possible, and a distance between normal data among data included in the first subset and the center vector is as short as possible." This additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Regarding claim 5: claim 5 depends from claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. The claim recites: "wherein the learning step learns the parameter of the first neural network and the parameter of the second neural network so as to make the index value higher, by using as the index value any one of an AUC, an approximate AUC, a negative cross entropy error, or a log likelihood." This additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Allowable Subject Matter

Claims 1–2 and 4–7 would be allowable if rewritten or amended to overcome the rejection under 35 U.S.C. 101 and the claim objection set forth in this Office action, because, when reading the claims in light of the specification per MPEP 2111.01, none of the references of record, alone or in combination, disclose or suggest the limitations found within independent claims 1 and 6 as a whole with regard to the technical features recited by the claim limitations, including:

Claim 1: receiving as input a set of data sets D = {D1, ..., DT}, wherein a task set is {1, ..., T} and a data set including data at least including feature amount vectors representing features of cases of a task t ∈ {1, ..., T} is denoted as Dt; sampling a task t from the task set {1, ..., T}, and sampling a first subset from a data set Dt of the task t and a second subset from a set obtained by excluding the first subset from the data set Dt; generating a task vector representing a property of a task t corresponding to the first subset by a first neural network; nonlinearly transforming feature amount vectors included in data included in the second subset by a second neural network using the task vector; calculating scores representing respective degrees of anomaly of the feature amount vectors using the nonlinearly transformed feature amount vectors and a preset center vector; and learning a parameter of the first neural network and a parameter of the second neural network so as to make an anomalous performance index value representing generalized performance of anomaly detection higher using the scores, wherein the anomalous performance index value is calculated by solving for L(Q|S; Θ) and a related gradient is calculated using an anomaly scores during learning, wherein the calculating of the score includes calculating a distance between values obtained by linearly projecting the nonlinearly transformed feature amount vectors using a linear projection vector ^w and a value obtained by linearly projecting the center vector using the linear projection vector ^w as the scores.
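The claimed data flow above (sample a task, split its data into a support subset and a query subset, derive a task vector, nonlinearly transform the query features, and score each as the distance between its ^w-projection and the ^w-projection of a center vector) can be sketched in a few lines of NumPy. Everything concrete here is an assumption for illustration only: the random tanh layers stand in for the claimed first and second neural networks, the concatenation-based conditioning, the dimensions, and the sigmoid-based AUC surrogate are not taken from the application, and the sketch does not implement the claimed likelihood L(Q|S; Θ):

```python
import numpy as np

rng = np.random.default_rng(0)

def task_vector(first_subset, W1):
    # Stand-in for the "first neural network": aggregate the support items,
    # then transform the aggregate into a task vector.
    return np.tanh(W1 @ first_subset.mean(axis=0))

def transform(x, z, W2):
    # Stand-in for the "second neural network": nonlinearly transform a
    # feature amount vector x, conditioned on the task vector z.
    return np.tanh(W2 @ np.concatenate([x, z]))

def anomaly_scores(X, z, W2, w, c):
    # Claimed scoring step: distance between the linear projection w·φ(x) of
    # each transformed feature vector and the projection w·c of the center.
    phi = np.stack([transform(x, z, W2) for x in X])
    return np.abs(phi @ w - w @ c)

def approx_auc(s_anom, s_norm):
    # An assumed smooth surrogate for the "approximate AUC" index value of
    # claim 5: mean sigmoid of pairwise score differences.
    diffs = s_anom[:, None] - s_norm[None, :]
    return float(np.mean(1.0 / (1.0 + np.exp(-diffs))))

# Toy setup: T tasks with d-dimensional features, hidden width h.
d, h, T = 4, 3, 5
datasets = [rng.normal(size=(8, d)) for _ in range(T)]
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(h, d + h))
w = rng.normal(size=h)   # linear projection vector ^w
c = np.zeros(h)          # preset center vector

t = int(rng.integers(T))                 # sample a task t
idx = rng.permutation(len(datasets[t]))
S, Q = datasets[t][idx[:4]], datasets[t][idx[4:]]  # first / second subsets

z = task_vector(S, W1)
s = anomaly_scores(Q, z, W2, w, c)       # one nonnegative score per query item
index_value = approx_auc(s[:2], s[2:])   # pretend the first two are anomalous
print(s.shape)  # → (4,)
```

In the claimed method, learning would then raise the index value by following its gradient with respect to both networks' parameters; that optimization loop is omitted here.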
Closest prior art:

Vinyals et al., "Matching networks for one shot learning." Vinyals introduces a method using deep neural features and external memories to create a network that maps a small labeled support set and an unlabeled example to its label, eliminating the need for fine-tuning. However, Vinyals does not teach that scores representing respective degrees of anomaly are calculated by first nonlinearly transforming the feature vectors and then linearly projecting them using a projection vector ^w, similarly projecting a center vector using the same ^w, and determining the score as the distance between these two projected values, capturing how far the transformed feature is from the center in the projected space.

Eskin et al., Pub. No. US 2016/0191561 A1. Eskin describes a method for finding unusual data patterns without needing labeled data. Data points are mapped into a specific space to analyze their features, and anomalies are identified as points that are spread out in this space. Two different types of feature maps are used: one for normalizing network connections and the other for analyzing system call traces. However, Eskin does not teach calculating anomaly scores as the distance between the ^w-projection of the nonlinearly transformed feature vectors and the ^w-projection of the center vector.

Wang et al., "Learning to model the tail" (2017). Wang discusses a method for learning from imbalanced datasets where some classes have very little data. The goal is to create effective "few-shot" models for these underrepresented classes by transferring knowledge from classes with more data.
This is done through transfer learning, where a special network, called a meta-network, learns how to adjust model parameters based on existing data, transferring knowledge step-by-step from data-rich classes to the less represented ones. The approach shows better results in image classification tasks compared to traditional methods like data resampling or reweighting. However, Wang does not teach calculating anomaly scores as the distance between the ^w-projection of the nonlinearly transformed feature vectors and the ^w-projection of the center vector.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Finn et al., "Model-agnostic meta-learning for fast adaptation of deep networks" (2017). Finn proposes that the parameters of the model be explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task; in effect, the method trains the model to be easy to fine-tune.

Bertinetto et al., "Meta-learning with differentiable closed-form solvers" (2018). Bertinetto proposes using fast-converging methods as the main adaptation mechanism for few-shot learning. The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATIYAS T MARU, whose telephone number is (571) 270-0902, or via email: matiyas.maru@uspto.gov. The examiner can normally be reached Monday–Friday, 8:00am–4:00pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle Bechtold, can be reached at (571) 431-0762. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/M.T.M./ Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Dec 27, 2022 — Application Filed
Sep 02, 2025 — Non-Final Rejection (§101)
Nov 04, 2025 — Interview Requested
Nov 13, 2025 — Examiner Interview Summary
Nov 13, 2025 — Applicant Interview (Telephonic)
Dec 05, 2025 — Response Filed
Feb 06, 2026 — Final Rejection (§101, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586114 — GENERATING DIGITAL RECOMMENDATIONS UTILIZING COLLABORATIVE FILTERING, REINFORCEMENT LEARNING, AND INCLUSIVE SETS OF NEGATIVE FEEDBACK (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572796 — METHODS AND SYSTEMS FOR GENERATING RECOMMENDATIONS FOR COUNTERFACTUAL EXPLANATIONS OF COMPUTER ALERTS THAT ARE AUTOMATICALLY DETECTED BY A MACHINE LEARNING ALGORITHM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567004 — METHOD OF MACHINE LEARNING TRAINING FOR DATA AUGMENTATION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561588 — Methods and Systems for Generating Example-Based Explanations of Link Prediction Models in Knowledge Graphs (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561584 — TEACHING DATA PREPARATION DEVICE, TEACHING DATA PREPARATION METHOD, AND PROGRAM (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 70% (+12.5%)
Median Time to Grant: 4y 6m
PTA Risk: Moderate
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
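The note above says the grant probability is derived from the career allow rate, and the headline figures are consistent with simple arithmetic on the counts shown earlier (23 granted of 40 resolved, +12.5 points of interview lift). A sketch of that derivation; the exact rounding the dashboard applies is an assumption:

```python
granted, resolved = 23, 40
allow_rate_pct = 100 * granted / resolved          # 57.5, displayed as 58%

interview_lift = 12.5                              # percentage points
with_interview = allow_rate_pct + interview_lift   # 70.0, displayed as 70%

print(round(allow_rate_pct), round(with_interview))  # → 58 70
```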
