Prosecution Insights
Last updated: April 19, 2026
Application No. 18/048,658

FUTUREPROOFING A MACHINE LEARNING MODEL

Final Rejection — §101, §103
Filed: Oct 21, 2022
Examiner: TSAI, JAMES T
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 2 (Final)

Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (184 granted / 297 resolved; +7.0% vs TC avg)
Interview Lift: +56.0% (strong), measured on resolved cases with vs. without interview
Typical Timeline: 3y 1m avg prosecution; 19 currently pending
Career History: 316 total applications across all art units
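The headline figures above are internally consistent; a quick sanity check of the dashboard's own numbers (the rounding convention, and reading "+7.0% vs TC avg" as a percentage-point difference, are assumptions):

```python
# Sanity-check the dashboard figures: career allow rate and implied TC average.
granted, resolved = 184, 297

allow_rate = granted / resolved                      # 184/297 ≈ 0.6195
print(f"Career allow rate: {allow_rate:.0%}")        # rounds to 62%

# Reading "+7.0% vs TC avg" as a percentage-point delta implies:
implied_tc_avg = allow_rate - 0.070
print(f"Implied TC average: {implied_tc_avg:.1%}")   # ≈ 55.0%
```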

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)

Based on career data from 297 resolved cases; TC averages are estimates.
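Read as percentage-point deltas, the four statute rows are mutually consistent: each one implies the same Tech Center baseline. A quick check (this interpretation of the deltas is an assumption):

```python
# Each row: (examiner rate in %, stated delta vs TC average in percentage points).
statute_rows = {
    "§101": (10.0, -30.0),
    "§103": (57.5, +17.5),
    "§102": (12.9, -27.1),
    "§112": (12.0, -28.0),
}

# Examiner rate minus delta recovers the implied TC average for that statute.
for statute, (rate, delta) in statute_rows.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")  # 40.0% for every row
```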

Office Action

§101, §103
FINAL REJECTION, SECOND DETAILED ACTION

Status of Prosecution

The present application 18/048,658, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The application was filed in the Office on October 21, 2022. The Office mailed a non-final rejection, first detailed action, on Sept. 24, 2025. Applicant initiated an interview on Dec. 15, 2025 and subsequently filed amendments with accompanying remarks and arguments on Dec. 22, 2025. Claims 1-20 are pending and all are rejected in this action. Claims 1, 13 and 17 are independent claims.

Status of Claims

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1-3, 10-15 and 17-19 are rejected under 35 U.S.C. § 103 as being unpatentable over Moharrer et al. (“Moharrer”), United States Patent Application Publication 2022/0043681, published on February 10, 2022, in view of Johnson, United States Patent Application Publication 2022/0237893, published on July 28, 2022.

Claims 4-5, 16 and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Moharrer in view of Johnson and in further view of Jin et al. (“Jin”), United States Patent Application Publication 2020/0311557, published on October 1, 2020.

Claims 6-8 are rejected under 35 U.S.C. § 103 as being unpatentable over Moharrer in view of Johnson and in further view of non-patent literature Zliobaite, “Learning under Concept Drift: an Overview”, published on October 22, 2010.

Claim 9 is rejected under 35 U.S.C. § 103 as being unpatentable over Moharrer in view of Johnson and in further view of Hospedales et al. (“Hospedales”), United Kingdom Patent Application Publication GB2597352A, published on January 26, 2022.

Response to Remarks and Arguments

Examiner thanks Applicant’s representative for the courtesies extended during the Dec. 15, 2025 interview.
First, regarding the § 112(b) based rejections, the amendments have been considered and the rejections are withdrawn. Next, regarding the § 101 based rejections, Examiner has considered the arguments and the amendments and has adjusted the rejections as noted below. Finally, regarding the prior art rejections, Examiner has newly rejected the claims with the application of Johnson, United States Patent Application Publication 2022/0237893, published on July 28, 2022. The rejections below have also been adjusted accordingly.

Claim Rejections – § 101 Subject Matter Eligibility

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding representative claim 1, at step 1, the claim recites a computer-implemented method, and therefore is a process, which is a statutory category of invention. See MPEP § 2106.03.

At step 2A, prong one, the claim recites a computer-implemented method that builds an ensemble model from feature models. The following limitations are the abstract idea of a mathematical calculation (see MPEP § 2106.04(a)(2)(I)(C)): generating a futureproofing metric; generating an enhanced machine learning model comprising a futureproofed version of the baseline machine learning model with the historical data and the baseline machine learning model as inputs; in response to determining that objective functions are not met to a predetermined threshold degree, providing feedback and changing an evolutionary process for a model futureproofing application; and in response to determining that objective functions are met to a predetermined threshold degree, determining that the enhanced machine learning model is to be continued to be used without changes. Therefore, the claim recites at least one abstract idea per this part of the analysis.
At step 2A, prong two, the claim language is analyzed to determine whether it recites additional elements that integrate the judicial exception into a practical application. See MPEP § 2106.04(d). The limitations of receiving historical data for updates and changes to a baseline machine learning model, and deploying and monitoring the enhanced machine learning model, are steps that, under their broadest reasonable interpretation, are additional elements that generally link the use of the judicial exception to a particular technological environment or field of use, specifically model training. See MPEP §§ 2106.04(d), 2106.05(h). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is therefore directed to an abstract idea.

Next, at step 2B of the analysis, the claim is considered to determine whether it recites additional elements that amount to significantly more than the judicial exception. See MPEP § 2106.05. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of building the models into an ensemble model and utilizing the predictive utility of the ensemble model amount to nothing more than linking the use of the judicial exception to a particular technological environment or field of use. See MPEP § 2106.05(h). Therefore, claim 1 is ineligible.

As to dependent claim 2, the analysis of the parent claim is incorporated.
In the step 2A, prong two analysis, the additional limitation of “wherein a future use and performance of the futureproofed version of the baseline machine learning model are captured and analyzed after deployment for a predetermined period of time, to iteratively improve at least one of: the futureproofed version of the baseline machine learning model; and operations performed for generating the futureproofed version of the baseline machine learning model” is an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP § 2106.05(g). Furthermore, the additional elements are directed to receiving or transmitting data over a network and storing and retrieving information in memory, which the courts have recognized as well-understood, routine, and conventional when claimed in a generic manner. See MPEP § 2106.05(d)(II). The claim is also ineligible.

As to dependent claim 3, the analysis of the parent claim is incorporated. In the step 2A, prong two analysis, the additional limitation of “prior to receiving the historical data, in response to determining that the baseline machine learning model is not yet available for use, training the baseline machine learning model with suitable data” is an additional element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer. See MPEP § 2106.05(f)(1). The claim is also ineligible.

As to dependent claim 4, the analysis of the parent claim is incorporated.
In the step 2A, prong two analysis, the additional limitation of “wherein the historical data includes additional updates and additional changes for related models to the machine learning model that have occurred prior to the baseline machine learning model being used” is an additional element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer. See MPEP § 2106.05(f)(1). The claim is also ineligible.

As to dependent claim 5, the analysis of the parent claim is incorporated. In the step 2A, prong two analysis, the additional limitation of “wherein the historical data includes training data that had been used to previously build related models to the baseline machine learning model” is an additional element that amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer. See MPEP § 2106.05(f)(1). The claim is also ineligible.

As to dependent claims 6-8, the analysis of the parent claim is incorporated. In the step 2A, prong two analysis, the additional limitations of “wherein the generating of the futureproofed version of the machine learning model takes place via operations that implement an evolutionary algorithm”; “wherein the generating of the futureproofed version of the machine learning model takes place via operations that implement a time-series algorithm”; and “wherein the generating of the futureproofed version of the machine learning model is based on factors that include future concept drifts, covariate shifts, and prior probability shifts” are additional elements that, under their broadest reasonable interpretation, generally link the use of the judicial exception to a particular technological application. See MPEP § 2106.05(h). The claims are also ineligible.

As to dependent claim 9, the analysis of the parent claim is incorporated.
In the step 2A, prong two analysis, the additional limitation of “wherein the futureproofed version of the machine learning model is neurosymbolic and differs from the baseline machine learning model by inclusion of rules, and wherein enforcement of different rules is initiated at different points in time” is an additional element that, under its broadest reasonable interpretation, generally links the use of the judicial exception to a particular technological application. See MPEP § 2106.05(h). The claim is also ineligible.

As to dependent claims 10-12, the analysis of the parent claim is incorporated. In the step 2A, prong one analysis, the additional limitations of “wherein the machine learning model is a supervised model”; “wherein the futureproofed version of the baseline machine learning model is generated while avoiding retraining of the baseline machine learning model”; and “wherein the machine learning model is extrapolated for generating the futureproofed version of the machine learning model” are additional elements that, under their broadest reasonable interpretation, generally link the use of the judicial exception to a particular technological application. See MPEP § 2106.05(h). The claims are also ineligible.

As to independent claim 13, the analysis of claim 1 is incorporated. Where it differs is in the step 1 analysis, in which the system, which includes a processor and memory, is a machine and thus statutory. As to dependent claims 14-16, they are similarly rejected as to claims 2-4.

As to independent claim 17, the analysis of claim 1 is incorporated. Where it differs is in the step 1 analysis, in which the computer program product is a manufacture and thus statutory. As to dependent claims 18-20, they are similarly rejected as to claims 2-4.

Objection

Claim 1 is objected to for what appears to be a typographical error.
It appears that the last claim element, “in response to determining …” should read “the predetermined threshold degree,” giving antecedent basis to the immediately preceding claim element’s predetermined threshold degree, as discussed in the last interview. Claims 13 and 17 are similarly objected to. Correction is required.

Claim Rejections – 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

A. Claims 1-3, 10-15 and 17-19 are rejected under 35 U.S.C. § 103 as being unpatentable over Moharrer et al. (“Moharrer”), United States Patent Application Publication 2022/0043681, published on February 10, 2022, in view of Johnson, United States Patent Application Publication 2022/0237893, published on July 28, 2022.

As to Claim 1, Moharrer teaches: A computer-implemented method for futureproofing a machine learning model, the computer-implemented method comprising: receiving historical data for updates and changes to a baseline machine learning model (Moharrer: par. 0028, untrained machine learning metamodel [170] (i.e. a baseline machine learning model) is trained in a second phase by using data such as personal historic data for inferencing in a production environment (i.e. historical data for updates and changes)); generating a futureproofing metric (Moharrer: par. 0072, metrics such as the actual amount of memory needed or training duration and accuracy may be recorded); and generating an enhanced machine learning model comprising a futureproofed version of the baseline machine learning model with the historical data and the baseline machine learning model as inputs (Moharrer: Fig. 2, at step [202] values of hyperparameters for configuration of a ML Model are selected (i.e. the historical data) and the ML metamodel (i.e. baseline model) is then trained to produce a trained and configured ML model from the training dataset (i.e. enhanced ML model)); deploying and monitoring the enhanced machine learning model (Moharrer: Fig. 2, at step [205] the model is trained with the training dataset, and thus deployable).

[Figure: Moharrer, Fig. 2]

Moharrer may not explicitly teach: in response to determining that objective functions are not met to a predetermined threshold degree, providing feedback and changing an evolutionary process for a model futureproofing application; and in response to determining that objective functions are met to a predetermined threshold degree, determining that the enhanced machine learning model is to be continued to be used without changes.

Johnson teaches in general concepts related to determining a surface pattern for a target object using an evolutionary algorithm (Johnson: Abstract). Specifically, Johnson teaches that a fitness score is calculated, which utilizes parameters scored for the purpose of obtaining future further generations of sets of parameters (i.e. objective function parameters) (Johnson: Abstract, par. 0011). The evolutionary algorithm may continue until a termination condition is met, such as the fitness score’s parameters meeting a threshold (i.e. a predetermined threshold degree) (Johnson: par. 0019).
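The termination mechanism the action attributes to Johnson, evolving parameter sets until a fitness score meets a threshold, can be sketched generically. The fitness function, parameter shapes, and hyperparameters below are illustrative only and come from neither cited reference:

```python
import random

def evolve(fitness, pop_size=20, n_params=3, threshold=0.9, max_gen=200):
    """Evolve parameter sets until the best fitness meets `threshold`
    (the termination condition) or `max_gen` generations elapse."""
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for gen in range(max_gen):
        if fitness(best) >= threshold:       # fitness meets the threshold: stop
            return best, gen
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        # Elitism (carry the best forward) plus Gaussian mutation of parents.
        pop = [best] + [
            [p + random.gauss(0, 0.05) for p in random.choice(parents)]
            for _ in range(pop_size - 1)
        ]
        best = max(pop, key=fitness)
    return best, max_gen

# Toy objective: fitness approaches 1 as every parameter nears 0.5.
random.seed(0)
best, gens = evolve(lambda xs: 1 - sum(abs(x - 0.5) for x in xs) / len(xs))
```

Keeping the best individual unmutated (elitism) makes the best fitness monotone, so the threshold check is guaranteed to see any improvement the mutations find.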
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to have modified the Moharrer disclosures and teachings by performing the optimization and futureproofing analysis with an evolutionary algorithm that terminates once an objective function is met, as taught and suggested by Johnson. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for the successful termination of the algorithm, saving resources and time.

As to Claim 2, Moharrer and Johnson teach the elements of claim 1. Moharrer further teaches: wherein a future use and performance of the futureproofed version of the baseline machine learning model are captured and analyzed after deployment to iteratively improve at least one of: the futureproofed version of the baseline machine learning model (Moharrer: par. 0054, the amount of memory needed for the future is determined based on the analysis of previously used training data set); and operations performed for generating the futureproofed version of the baseline machine learning model.

Moharrer may not explicitly teach: wherein a future use and performance of the futureproofed version of the baseline machine learning model are captured and analyzed after deployment for a predetermined period of time. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to have modified the Moharrer-Johnson disclosures and teachings by performing the optimization and futureproofing analysis for a predetermined time after deployment. Such a person would have been motivated to do so, with a reasonable expectation of success, to ensure the optimization takes place after a rest period.

As to Claim 3, Moharrer and Johnson teach the elements of claim 1.
Moharrer further teaches: prior to receiving the historical data, in response to determining that the baseline machine learning model is not yet available for use, training the baseline machine learning model with data (Moharrer: par. 0067, steps [204-205] are performed in scenario B to confirm that the model is ready for use by either having to train or retrain the model).

As to Claim 10, Moharrer and Johnson teach the elements of claim 1. Moharrer further teaches: wherein the machine learning model is a supervised model (Moharrer: Abstract, the training of the ML metamodel may be supervised).

As to Claim 11, Moharrer and Johnson teach the elements of claim 1. Moharrer further teaches: wherein the futureproofed version of the baseline machine learning model is generated while avoiding retraining of the baseline machine learning model (Moharrer: par. 0019, the retraining of the target ML model is done offline without retraining the base model).

As to Claim 12, Moharrer and Johnson teach the elements of claim 1. Moharrer further teaches: wherein the machine learning model is extrapolated for generating the futureproofed version of the machine learning model (Moharrer: Fig. 2, at step [202] values of hyperparameters for configuration of a ML Model are selected (i.e. the historical data) and the ML metamodel (i.e. baseline model) is then trained to produce a trained and configured ML model from the training dataset (i.e. extrapolated model)).

As to Claim 13, it is rejected for similar reasons as claim 1. Moharrer further teaches a memory and processor coupled to the memory for performing operations (Moharrer: par. 0085-87). As to Claim 14, it is rejected for similar reasons as claim 2. As to Claim 15, it is rejected for similar reasons as claim 3. As to Claim 17, it is rejected for similar reasons as claims 1 and 13. As to Claim 18, it is rejected for similar reasons as claim 2. As to Claim 19, it is rejected for similar reasons as claim 3.

B.
Claims 4-5, 16 and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Moharrer et al. (“Moharrer”), United States Patent Application Publication 2022/0043681, published on February 10, 2022, in view of Johnson, United States Patent Application Publication 2022/0237893, published on July 28, 2022, and in further view of Jin et al. (“Jin”), United States Patent Application Publication 2020/0311557, published on October 1, 2020.

As to Claim 4, Moharrer and Johnson teach the elements of claim 1. Moharrer may not explicitly teach: wherein the historical data includes additional updates and additional changes for related models to the machine learning model that have occurred prior to the baseline machine learning model being used.

Jin teaches in general concepts related to evaluating and defining the scope of data-driven deep learning models (Jin: Abstract). Specifically, Jin teaches a system for facilitating the evaluation and definition of the scope of the models (Jin: par. 0070, Fig. 5). The system includes an assessment component (Jin: par. 0070, [506]). The assessment component may use certain data sets and training data features to evaluate whether new data sets can be reused for training the current model (Jin: par. 0075, some of these data sets may be new ones or historical previously extracted data features from previously tracked training).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to have modified the Moharrer-Johnson disclosures and teachings by using additional updates and changes as appropriate in the datasets used for training, as taught and disclosed by Jin. Such a person would have been motivated to do so, with a reasonable expectation of success, to enhance the processing time required to evaluate the new target data set (Jin: par. 0075).

As to Claim 5, Moharrer and Johnson teach the elements of claim 1.
Moharrer may not explicitly teach: wherein the historical data includes training data that had been used to previously build related models to the baseline machine learning model.

Jin teaches in general concepts related to evaluating and defining the scope of data-driven deep learning models (Jin: Abstract). Specifically, Jin teaches a system for facilitating the evaluation and definition of the scope of the models (Jin: par. 0070, Fig. 5). The system includes an assessment component (Jin: par. 0070, [506]). The assessment component may use certain data sets and training data features to evaluate whether new data sets can be reused for training the current model (Jin: par. 0075, some of these data sets may be new ones or historical previously extracted data features from previously tracked training).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to have modified the Moharrer-Johnson disclosures and teachings by using additional updates and changes as appropriate in the datasets used for training, as taught and disclosed by Jin. Such a person would have been motivated to do so, with a reasonable expectation of success, to enhance the processing time required to evaluate the new target data set (Jin: par. 0075).

As to Claim 16, it is rejected for similar reasons as claim 4. As to Claim 20, it is rejected for similar reasons as claim 4.

C.

Claims 6-8 are rejected under 35 U.S.C. § 103 as being unpatentable over Moharrer et al. (“Moharrer”), United States Patent Application Publication 2022/0043681, published on February 10, 2022, in view of Johnson, United States Patent Application Publication 2022/0237893, published on July 28, 2022, and in further view of non-patent literature Zliobaite, “Learning under Concept Drift: an Overview”, published on October 22, 2010.

As to Claim 6, Moharrer and Johnson teach the elements of claim 1.
Moharrer and Johnson may not explicitly teach: wherein the generating of the futureproofed version of the machine learning model takes place via operations that implement an evolutionary algorithm.

Zliobaite teaches in general concepts related to how concept drift is a problem for non-stationary learning (Zliobaite: Abstract). Specifically, Zliobaite teaches that different considerations are given to model generation for a concept drift problem, one of them being model adaptivity using evolutionary algorithms (Zliobaite: Sec. 4.3, a relation between the change type and magnitude and the evolutionary algorithm).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to have modified the Moharrer-Johnson disclosures and teachings by training the futureproofed version of the machine learning model using evolutionary models, as taught and suggested by Zliobaite. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for reducing the concept drift problem.

As to Claim 7, Moharrer and Johnson teach the elements of claim 1. Moharrer and Johnson may not explicitly teach: wherein the generating of the futureproofed version of the machine learning model takes place via operations that implement a time-series algorithm. Zliobaite teaches that different considerations are given to model generation for a concept drift problem, one of them being model adaptivity using time-series algorithms (Zliobaite: Sec. 4.1, ARIMA models for instance).
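The three shift types recited in the claims map onto the standard decomposition P(x, y) = P(y | x) · P(x): concept drift changes P(y | x), covariate shift changes P(x), and prior probability shift changes P(y). A toy illustration with synthetic data (the distributions and names below are illustrative, not drawn from Zliobaite):

```python
import random

random.seed(1)

def sample(n, p_pos, x_shift):
    """Draw (x, y) pairs: y ~ Bernoulli(p_pos), x ~ Normal(x_shift + y, 1)."""
    out = []
    for _ in range(n):
        y = 1 if random.random() < p_pos else 0
        out.append((random.gauss(x_shift + y, 1.0), y))
    return out

baseline        = sample(5000, p_pos=0.5, x_shift=0.0)
covariate_shift = sample(5000, p_pos=0.5, x_shift=2.0)  # P(x) moved, P(y) unchanged
prior_shift     = sample(5000, p_pos=0.8, x_shift=0.0)  # P(y) moved, P(y|x) form unchanged
# Concept drift would instead alter the x -> y relationship P(y | x) itself.

mean_x = lambda data: sum(x for x, _ in data) / len(data)
pos_rate = lambda data: sum(y for _, y in data) / len(data)
print("feature mean shift:", mean_x(covariate_shift) - mean_x(baseline))
print("class prior shift:", pos_rate(prior_shift) - pos_rate(baseline))
```

Monitoring simple statistics like these per deployment window is one common way such shifts are detected in practice.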
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to have modified the Moharrer-Johnson disclosures and teachings by training the futureproofed version of the machine learning model using time-series models, as taught and suggested by Zliobaite. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for reducing the concept drift problem.

As to Claim 8, Moharrer and Johnson teach the elements of claim 1. Moharrer and Johnson may not explicitly teach: wherein the generating of the futureproofed version of the machine learning model is based on factors that include future concept drifts, covariate shifts, and prior probability shifts. Zliobaite teaches that different considerations are given to model generation for a concept drift problem, including model adaptivity related to factors such as future concept drifts (Zliobaite: passim), covariate shifts (Zliobaite: Sec. 4.2), and prior probability shifts (Zliobaite: Sec. 1.2). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to have modified the Moharrer-Johnson disclosures and teachings by training the futureproofed version of the machine learning model using the factors recited in the claim, as taught and suggested by Zliobaite. Such a person would have been motivated to do so, with a reasonable expectation of success, to allow for reducing the concept drift problem.

D. Claim 9 is rejected under 35 U.S.C. § 103 as being unpatentable over Moharrer et al. (“Moharrer”), United States Patent Application Publication 2022/0043681, published on February 10, 2022, in view of Johnson, United States Patent Application Publication 2022/0237893, published on July 28, 2022, and in further view of Hospedales et al. (“Hospedales”), United Kingdom Patent Application Publication GB2597352A, published on January 26, 2022.

As to Claim 9, Moharrer and Johnson teach the elements of claim 1. Moharrer may not explicitly teach: wherein the futureproofed version of the machine learning model is neurosymbolic and differs from the baseline machine learning model by inclusion of rules, and wherein enforcement of different rules is initiated at different points in time.

Hospedales teaches in general concepts related to a neural-symbolic (neurosymbolic) framework for training a machine learning model (Hospedales: Abstract). The machine learning model may be trained with a set of logical rules that are applied at different points in time by a user (Hospedales: par. 0023, the logical rule is used by the logical module for the abduction step and may be applied at different times, for instance the next time the ML model is used for image recognition).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to have modified the Moharrer-Johnson disclosures and teachings by implementing the machine learning model as neurosymbolic, with the inclusion of rules defined by a user for training, as taught and suggested by Hospedales. Such a person would have been motivated to do so, with a reasonable expectation of success, to improve training data for a neural network by abducing possible correct intermediate labels (Hospedales: Abstract).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES T TSAI whose telephone number is (571) 270-3916. The examiner can normally be reached M-F 8-5 Eastern. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAMES T TSAI/
Primary Examiner, Art Unit 2174

Prosecution Timeline

Oct 21, 2022
Application Filed
Sep 20, 2025
Non-Final Rejection — §101, §103
Dec 15, 2025
Examiner Interview Summary
Dec 15, 2025
Applicant Interview (Telephonic)
Dec 22, 2025
Response Filed
Jan 16, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585958: METHOD AND SYSTEM FOR TWO-STEP HIERARCHICAL MODEL OPTIMIZATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12577416: METHOD FOR GENERATING A COMPOSITION FOR DYES, PAINTS, PRINTING INKS, GRIND RESINS, PIGMENT CONCENTRATES OR OTHER COATING SUBSTANCES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579413: Method and Apparatus for Performing Convolution Neural Network Operations (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566985: METHOD AND SYSTEM FOR PERFORMING DATA PREDICTION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561569: INFORMATION PROCESSING METHOD FOR REDUCING STORAGE REQUIREMENTS FOR WEIGHT PARAMETER VALUES OF LEARNED DATA SETS (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 99% (+56.0%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 297 resolved cases by this examiner. Grant probability derived from career allow rate.
