Prosecution Insights
Last updated: April 19, 2026
Application No. 18/120,895

TECHNOLOGIES FOR USING MACHINE LEARNING MODELS TO ASSESS TIME SERIES DATA

Status: Final Rejection (§101, §112)
Filed: Mar 13, 2023
Examiner: MAHARAJ, DEVIKA S
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: McKinsey & Company Inc.
OA Round: 6 (Final)

Grant Probability: 55% (Moderate)
Projected OA Rounds: 7-8
Projected Time to Grant: 5y 0m
Grant Probability With Interview: 63%

Examiner Intelligence

Career Allow Rate: 55% (grants 55% of resolved cases; 43 granted / 78 resolved; at TC average)
Interview Lift: +7.7% among resolved cases with interview (moderate, roughly +8% lift)
Avg Prosecution: 5y 0m (typical timeline)
Career History: 106 total applications across all art units; 28 currently pending

Statute-Specific Performance

§101: 27.4% (-12.6% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 78 resolved cases.
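As a consistency check on the table above, every per-statute delta implies the same Tech Center average estimate of roughly 40%. A short sketch (figures transcribed from the table; the 40.0% average is an inferred value, not a reported one) reproduces the displayed deltas:

```python
# Sketch: reproduce the "vs TC avg" deltas shown above.
# Assumption: a single Tech Center average estimate of 40.0% per statute,
# inferred from the table's internal consistency.
rates = {"§101": 27.4, "§103": 42.8, "§102": 10.1, "§112": 16.6}
tc_avg = 40.0  # inferred, not reported in the source

deltas = {statute: round(rate - tc_avg, 1) for statute, rate in rates.items()}
print(deltas)  # {'§101': -12.6, '§103': 2.8, '§102': -29.9, '§112': -23.4}
```

That all four rows reduce to one average suggests the dashboard plots each statute's allowance rate against a single TC-wide baseline.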

Office Action

Rejections: §101, §112
DETAILED ACTION

1. This communication is in response to the amendments filed on November 12, 2025 for Application No. 18/120,895, in which Claims 1-3, 5-6, 9-11, 13-14, 17, and 19 are presented for examination.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

3. The amendments filed on November 12, 2025 have been considered. Claims 1, 9, and 17 have been amended. Thus, Claims 1-3, 5-6, 9-11, 13-14, 17, and 19 are pending and presented for examination.

4. Applicant's arguments filed November 12, 2025 with respect to the 35 U.S.C. 112(b) rejection regarding the phrase "how well equipped" have been fully considered and are persuasive. Thus, the 35 U.S.C. 112(b) rejection with respect to this particular limitation has been withdrawn.

Examiner's Note: Although the previous 35 U.S.C. 112(b) rejection was withdrawn, a new 35 U.S.C. 112(b) rejection is issued below, as necessitated by amendment.

5. Applicant's arguments filed November 12, 2025 with respect to the 35 U.S.C. 101 rejection have been fully considered but they are not persuasive. Applicant's Arguments on Pg. 12 of Arguments/Remarks state: "The claims are integrated into a practical application that improves the functioning of computer systems for time series forecasting. In particular, the specific technical implementations recited in the claims go beyond abstract mental processes and provide concrete technological improvements.
The 'assessing a time series forecasting accuracy metric' as recited in representative claim 1 is not a mere mental process but rather a specific technical implementation comprising: '(i) generating a weighted mean absolute percentage error (WMAPE) score for each of the plurality of available machine learning models for each of the multiple time intervals, (ii) normalizing each WMAPE score according to a highest WMAPE score for a corresponding time series input to create a normalized score between zero and one, and (iii) selecting a lowest normalized score as a class label for training a classifier model.' This technical process cannot be performed mentally and requires specific computer processing to generate, normalize, and select scores across multiple machine learning models and time intervals.

The 'training, by the one or more processors, the classifier model configured to predict a time series forecasting accuracy' as recited by representative claim 1 creates an automated machine learning selection system that eliminates the need for manual model testing and selection. As disclosed in the specification at paragraph [0023], the systems and methods 'circumvent time constraints and use intrinsic characteristics of time series datasets to assess the best possible machine learning model to use for assessing the time series datasets' and 'result in greater accuracy as well as greatly reduce the amount of time needed, from training to deployment, in a real-world scenario.'

The claimed technology provides specific improvements to computer system performance. As disclosed in paragraph [0023], 'the training and use of the machine learning model(s) enables the systems and methods to process large datasets that conventional systems are unable to analyze as a whole. This results in improved processing time by the systems and methods. Moreover, by virtue of employing the trained machine learning model(s) in its analyses, the systems and methods reduce the overall amount of data retrieval and communication necessary for the analyses of time series datasets, reducing traffic bandwidth and resulting in cost savings.'"

Examiner respectfully disagrees. The newly added limitations, including "(i) generating a weighted mean absolute percentage error (WMAPE) score for each of the plurality of available machine learning models for each of the multiple time intervals, (ii) normalizing each WMAPE score according to a highest WMAPE score for a corresponding time series input to create a normalized score between zero and one, and (iii) selecting a lowest normalized score as a class label for training a classifier model", may still be practically performed by a combination of mental and mathematical processes. For example, generating a WMAPE score for each of the plurality of available machine learning models may be performed by mathematical process; normalizing each WMAPE score to create a normalized score between zero and one may also be performed by mathematical process using techniques for normalization; and selecting a lowest normalized score may be performed manually by a user observing/analyzing the plurality of normalized scores and accordingly using judgement/evaluation to determine and select the lowest normalized score as a class label to be used for training a classifier model.

Further, regarding the limitation "training, by the one or more processors, the classifier model configured to predict a time series forecasting accuracy", this limitation is still recited at a high level of generality and amounts to merely adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).
The training of the classifier, as well as the recitation of training throughout the independent claims, is merely "applied" without significantly more - the claims simply state that each classifier/machine learning model is "trained" using previously determined data, without detailing the techniques used within the training that would reflect the supposed improvements suggested by Applicant. The claims still recite an abstract idea without significantly more - see the updated 35 U.S.C. 101 rejection in the subsequent section below.

Applicant's Arguments on Pgs. 12-13 of Arguments/Remarks state: "The claims address the technical problem that 'for predictions to work properly, a great amount of data is needed' and 'a current approach to ensure an adequate-performing forecast is to try to run data in a range of different models, with the hopes that one of them might perform well enough to be used in an application. However, this solution might not be viable for time-constrained projects' (see paragraph [0019]). The claimed automated model selection system provides a technological solution by using extracted features to predict model performance without requiring exhaustive testing of all models.

Similar to the claims in Enfish, LLC v. Microsoft Corp. and Ex parte Desjardins, the amended claims improve computer functionality by providing a more efficient method for time series forecasting model selection. The Enfish court found patent eligibility where claims were 'directed to a specific improvement to the way computers operate' rather than 'simply adding conventional computer components to well-known business practices.' Here, the specific WMAPE scoring, normalization, and automated classifier training process represents a technological improvement to computer-based forecasting systems.

Additionally, like the claims in McRO, Inc. v. Bandai Namco Games Am. Inc., the amended claims use a specific technological process to achieve improved results. The McRO court emphasized that claims were patent-eligible where they used 'a combined order of specific rules' that improved upon previous technology. Here, the specific combination of WMAPE generation, normalization, and classifier training creates an automated model selection process that improves upon conventional manual model testing approaches.

For at least these reasons, the claims are integrated into a practical application and satisfy Step 2A, Prong Two of the USPTO's subject matter eligibility test. Accordingly, Applicant respectfully submits that claims 1-3, 5-6, 9-11, 13-14, 17, and 19 are patent eligible. Therefore, Applicant respectfully submits that the rejection under 35 U.S.C. § 101 is overcome and requests that the rejection be withdrawn."

Examiner respectfully disagrees for substantially the same reasons as stated above. Further, regarding Applicant's statement that the instant claims are similar to those presented in Ex parte Desjardins and therefore should be eligible under the same rationale, Examiner respectfully disagrees as well. The claims of Ex parte Desjardins specifically recite limitations which relate directly to the training of a machine learning model - moreover, those claims describe specific, practical improvements to the functioning of a machine learning model and integrate mathematical concepts into a technical solution, rather than mere recitation of generic "training" combined with the recitation of an abstract idea. Although Applicant states the technical improvement of the instant claims above, the currently drafted claim language does not reflect such improvement, as highlighted by Examiner's assertions above. Thus, the 35 U.S.C. 101 rejection is maintained.

6. Applicant's amendments and corresponding arguments with respect to the 35 U.S.C. 103 rejection have been fully considered and are persuasive. Thus, the 35 U.S.C. 103 rejection has been withdrawn.
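For readers following the eligibility dispute, the contested steps (i)-(iii) can be made concrete. The sketch below is an editor's illustration of one plausible reading of the claim language, not the applicant's actual implementation; the WMAPE formula (sum of absolute errors divided by sum of absolute actuals) is an assumption, since the claims do not recite one.

```python
# Illustrative sketch of claimed steps (i)-(iii): WMAPE scoring,
# normalization by the highest score per series, and selection of the
# lowest normalized score's model as the class label.
# Assumption: WMAPE = sum|actual - forecast| / sum|actual|; the claims
# and the quoted arguments do not fix a formula.

def wmape(actual, forecast):
    # Weighted mean absolute percentage error for one model on one series.
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(abs(a) for a in actual)

def label_for_series(actual, forecasts_by_model):
    # (i) one WMAPE score per candidate model
    scores = {m: wmape(actual, f) for m, f in forecasts_by_model.items()}
    # (ii) normalize by the highest WMAPE for this series -> scores in (0, 1]
    worst = max(scores.values())
    normalized = {m: s / worst for m, s in scores.items()}
    # (iii) the model with the lowest normalized score becomes the class label
    return min(normalized, key=normalized.get), normalized

actual = [100, 120, 90]
forecasts = {"arima": [98, 118, 95], "prophet": [110, 100, 80]}
label, norm = label_for_series(actual, forecasts)
print(label)  # -> arima (lower WMAPE on this toy data)
```

The model names here are placeholders; nothing in the record identifies the applicant's candidate models.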
Examiner's Note: While no prior art rejection is made for Claims 1-3, 5-6, 9-11, 13-14, 17, and 19, these claims are still rejected under 35 U.S.C. 112(b) and 35 U.S.C. 101 (abstract idea).

Claim Rejections - 35 USC § 112

7. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

8. Claims 1, 9, 17, and their respective dependents are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The limitation "[…] wherein the numerical performance score indicates a likelihood that the available machine learning model will accurately predict future time series data […]" in Claims 1, 9, and 17 is a relative term which renders the claim indefinite. The term "accurately predict" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The independent claims recite that the classifier model outputs a "numerical performance score" which is "a likelihood that the machine learning model will accurately predict future time series data" - however, there is no requisite threshold or degree to which a model may be evaluated as being able to "accurately predict" future time series data.

Applicant should consider amending this limitation to relate the "numerical performance score" to the preceding "time series forecasting accuracy metric" and/or the "lowest normalized score" used as the class label for training the classifier model - these limitations are mentioned in the preceding part of the claim but not referenced in the remaining limitations, which additionally makes the claim appear disjoint alongside being indefinite.

Claim Rejections - 35 USC § 101

9. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

10. Claims 1-3, 5-6, 9-11, 13-14, 17, and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Step 1: Claim 1 is a method claim. Therefore, Claims 1-3 and 5-6 are directed to one of the statutory categories: a process, machine, manufacture, or composition of matter.

Step 2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by mathematical calculation but for the recitation of generic computer components, then it falls within the "Mathematical Concepts" grouping of abstract ideas. The following limitations recite such abstract ideas:
based on testing each of the plurality of available machine learning models, assessing a time series forecasting accuracy metric of each of the plurality of available machine learning models, for each of the multiple time intervals, wherein the time series forecasting accuracy metric of each of the plurality of available machine learning models, for each of the multiple time intervals, is embodied as a vector of results (mental process - assessing a time series forecasting accuracy metric of each of the plurality of models may be performed manually by a user observing/analyzing the results of testing each of the plurality of models and accordingly using judgement/evaluation to assess a time series forecasting accuracy metric for each of the models)

(i) generating a weighted mean absolute percentage error (WMAPE) score for each of the plurality of available machine learning models for each of the multiple time intervals (mathematical process - generating a weighted mean absolute percentage error (WMAPE) score for each of a plurality of machine learning models for each of multiple time intervals may be performed by mathematical process, utilizing a formula/equation for calculating a WMAPE score and calculating the WMAPE score for each of the plurality of models for each of the time intervals)

(ii) normalizing each WMAPE score according to a highest WMAPE score for a corresponding time series input to create a normalized score between zero and one (mathematical process - normalizing each WMAPE score according to a highest WMAPE score for a corresponding time series input may be performed by mathematical process, utilizing a formula/equation for normalization and normalizing each score according to an identified highest WMAPE score for a corresponding time series input, in order to create a normalized score between zero and one)

(iii) selecting a lowest normalized score as a class label for training a classifier model (mental process - selecting a lowest normalized score as a class label may be performed manually by a user observing/analyzing the plurality of normalized scores and accordingly using judgement/evaluation to determine and select the lowest normalized score to be used as a class label for training a classifier model)

preparing, by the one or more processors, a set of time series data (mental process - other than reciting "by the one or more processors", preparing a set of time series data may be performed manually by a user observing/analyzing the set of time series data and accordingly using judgement/evaluation to prepare said time series data)

extracting, by the one or more processors, a plurality of features from the set of time series data that was prepared (mental process - other than reciting "by the one or more processors", extracting a plurality of features may be performed manually by a user observing/analyzing the prepared time series data and accordingly using judgement/evaluation to extract features from the prepared data)

generating, by the one or more processors, a feature vector based on the plurality of features that were extracted (mental/mathematical process - other than reciting "by the one or more processors", generating a feature vector may be performed manually by a user observing/analyzing the plurality of features and accordingly using judgement/evaluation to generate a feature vector based on the extracted features. Alternatively, generating a feature vector may also be performed by mathematical process, utilizing an algorithm/formula for feature vector generation)

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:

accessing, by one or more processors, a set of time series training data and a set of time series testing data, wherein each of the set of time series training data and the set of time series testing data is segmented into multiple time intervals (adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g))

training, by the one or more processors for each of the multiple time intervals, each of a plurality of available machine learning models using the set of time series training data (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: high-level recitation of training a machine learning model with previously determined data)

testing, by the one or more processors for each of the multiple time intervals, each of the plurality of available machine learning models that was trained using the set of time series testing data (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: high-level recitation of testing a plurality of machine learning models without significantly more - there are no further details provided regarding the testing, thus it is merely "applied")

wherein each of the plurality of available machine learning models that was trained and tested is configured to perform a time series data analysis on time series data (field of use - limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that the models are configured to perform a time series data analysis on time series data does not integrate the exception into a practical application nor amount to significantly more - see MPEP 2106.05(h))

training, by the one or more processors, the classifier model configured to predict a time series forecasting accuracy of each of the plurality of available machine learning models using (i) the vector of results in combination with (ii) a training feature vector associated with the set of time series training data (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: high-level recitation of training a machine learning model with previously determined data)

inputting, by the one or more processors into the classifier model, the feature vector (adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g))

wherein the classifier model outputs a numerical performance score for each of the plurality of available machine learning models (adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g))

wherein the numerical performance score indicates a likelihood that the available machine learning model will accurately predict future time series data associated with the set of time series data (field of use - limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that the numerical performance score indicates a likelihood a machine learning model will accurately predict future time series data does not integrate the exception into a practical application nor amount to significantly more -
See MPEP 2106.05(h)).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements:

accessing, by one or more processors, a set of time series training data and a set of time series testing data, wherein each of the set of time series training data and the set of time series testing data is segmented into multiple time intervals (MPEP 2106.05(d)(II) indicates that merely "receiving or transmitting data over a network" is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim. Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

training, by the one or more processors for each of the multiple time intervals, each of a plurality of available machine learning models using the set of time series training data (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: high-level recitation of training a machine learning model with previously determined data)

testing, by the one or more processors for each of the multiple time intervals, each of the plurality of available machine learning models that was trained using the set of time series testing data (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: high-level recitation of testing a plurality of machine learning models without significantly more - there are no further details provided regarding the testing, thus it is merely "applied")

wherein each of the plurality of available machine learning models that was trained and tested is configured to perform a time series data analysis on time series data (field of use - limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that the models are configured to perform a time series data analysis on time series data does not integrate the exception into a practical application nor amount to significantly more - see MPEP 2106.05(h))

training, by the one or more processors, a classifier model configured to predict a time series forecasting accuracy of each of the plurality of available machine learning models using (i) the vector of results in combination with (ii) a training feature vector associated with the set of time series training data (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: high-level recitation of training a machine learning model with previously determined data)

inputting, by the one or more processors into the classifier model, the feature vector (MPEP 2106.05(d)(II) indicates that merely "receiving or transmitting data over a network" is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim. Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

wherein the classifier model outputs a numerical performance score for each of the plurality of available machine learning models (MPEP 2106.05(d)(II) indicates that merely "presenting offers and gathering statistics" is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim. Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

wherein the numerical performance score indicates a likelihood that the available machine learning model will accurately predict future time series data associated with the set of time series data (field of use - limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that the numerical performance score indicates a likelihood a machine learning model will accurately predict future time series data does not integrate the exception into a practical application nor amount to significantly more - see MPEP 2106.05(h))

For the reasons above, Claim 1 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 2-3 and 5-6. The additional limitations of the dependent claims are addressed below.

Regarding Claim 2:

Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 2 depends on.
performing, on the set of time series data by the one or more processors, (i) an outlier removal technique, (ii) a signal smoothing technique, and (iii) a value imputation technique (mental/mathematical process - performing an outlier removal technique, a signal smoothing technique, and a value imputation technique may be performed manually by a user and/or by mathematical process utilizing a formula/equation for outlier removal, signal smoothing, and value imputation)

Step 2A Prong 2 & Step 2B: Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 3:

Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 3 depends on.

extracting, by the one or more processors from the set of time series data that was prepared, at least one of: entropy, linearity, trend strength, seasonality strength, instability, or lumpiness (mathematical process - other than reciting "by the one or more processors", extracting at least one of entropy, linearity, trend strength, seasonality strength, instability, or lumpiness may be performed by mathematical process utilizing a formula/equation for calculating said features based on the prepared time series data)

Step 2A Prong 2 & Step 2B: Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 5:

Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 5 depends on.

generating, by the one or more processors, a set of stacking input data using at least a portion of the sets of univariate forecast data and a set of additional covariate data (mental process - other than reciting "by the one or more processors", generating a set of stacking input data may be performed manually by a user observing/analyzing a portion of the sets of univariate forecast data and the set of additional covariate data and accordingly using judgement/evaluation to generate a set of stacking input data based on said analysis)

analyzing, by a stacking machine learning model, the set of stacking input data to output a set of final forecast data associated with the set of time series data (mental process - other than reciting "by a stacking machine learning model", analyzing the set of stacking input data may be performed manually by a user observing/analyzing the set of stacking input data and accordingly using judgement/evaluation to determine a set of final forecast data associated with the set of time series data)

Step 2A Prong 2 & Step 2B:

wherein each of the plurality of available machine learning models has associated a set of univariate forecast data associated with the set of time series data (field of use - limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that each of the models has an associated set of univariate data associated with the set of time series data does not integrate the exception into a practical application nor amount to significantly more - see MPEP 2106.05(h))

stacking machine learning model (mere instructions to apply the exception using generic computer components cannot provide an inventive concept)

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 6:

Step 2A Prong 1: See the rejection of Claim 5 above, which Claim 6 depends on.

generating, by the one or more processors, a set of stacking training data using at least a portion of the sets of training univariate forecast data and a set of additional training covariate data (mental process - other than reciting "by the one or more processors", generating a set of stacking training data may be performed manually by a user observing/analyzing a portion of the sets of univariate forecast data and the set of additional covariate data and accordingly using judgement/evaluation to generate a set of stacking training data based on said analysis)

Step 2A Prong 2 & Step 2B:

wherein each of the plurality of available machine learning models has associated a set of training univariate forecast data (field of use - limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that each of the models has an associated set of univariate data associated with the set of time series data does not integrate the exception into a practical application nor amount to significantly more - see MPEP 2106.05(h))

training, by the one or more processors, the stacking machine learning model using the set of stacking training data and a set of historical data indicating known time series results (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: high-level recitation of training a machine learning model with previously determined data)

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Independent Claim 9 recites substantially the same limitations as Claim 1, in the form of a system, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale. For the reasons above, Claim 9 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 10-11 and 13-14. The additional limitations of the dependent claims are addressed below.

Claim 10 recites substantially the same limitations as Claim 2, in the form of a system, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.

Claim 11 recites substantially the same limitations as Claim 3, in the form of a system, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.

Claim 13 recites substantially the same limitations as Claim 5, in the form of a system, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.

Claim 14 recites substantially the same limitations as Claim 6, in the form of a system, including generic computer components.
The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.

Independent Claim 17 recites substantially the same limitations as Claim 1, in the form of a non-transitory computer-readable storage medium, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale. For the reasons above, Claim 17 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claim 19. The additional limitations of the dependent claim are addressed below.

Claim 19 recites substantially the same limitations as Claim 5, in the form of a non-transitory computer-readable storage medium, including generic computer components. The claim is also directed to performing mental processes/mathematical calculations without significantly more; therefore, it is rejected under the same rationale.

Allowable Subject Matter

11. No prior art rejection is made for Claims 1-3, 5-6, 9-11, 13-14, 17, and 19. However, these claims remain rejected under 35 U.S.C. 112(b) and under 35 U.S.C. 101 as being directed to an abstract idea.

12. Examiner has identified Amiri et al. (US PG-PUB 20230022401) and Hyndman et al. ("Meta-learning how to forecast time series") as the closest prior art to the instant application. In particular, Amiri teaches time series forecasting comprising determining one or more forecasters (machine learning models) to be used based on a type of time series data and previous training, with the best forecaster for the time series being selected based on the output of a time series classifier used to determine a level of confidence in the forecaster.
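To picture the kind of classifier-driven forecaster selection Amiri is described as teaching, a minimal Python sketch follows. This is an illustrative reading only, not Amiri's actual implementation; the `predict_proba` interface and all names here are assumptions.

```python
# Hypothetical sketch: pick the forecaster a trained time series
# classifier is most confident in (illustrative only; not drawn
# from the Amiri reference's actual code).

def select_forecaster(series_features, classifier, forecasters):
    """Return the forecaster with the highest classifier
    confidence, together with that confidence score."""
    # Assumed interface: predict_proba returns one confidence
    # value per candidate forecaster, in the same order.
    confidences = classifier.predict_proba(series_features)
    best = max(range(len(forecasters)), key=lambda i: confidences[i])
    return forecasters[best], confidences[best]
```

The classifier would be trained beforehand on time series features labeled with the historically best-performing forecaster for each series.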
Hyndman teaches a general framework, labelled FFORMS (Feature-based FORecast Model Selection), which selects forecast models based on features calculated from each time series. More specifically, Hyndman teaches computing a standardized symmetric mean absolute percentage error (sMAPE) across forecast models and selecting the model with the lowest average value of a mean absolute scaled error (MASE) and scaled sMAPE as the output class label.

However, Amiri and Hyndman do not appear to explicitly disclose the newly added limitations "based on testing each of the plurality of available machine learning models, assessing a time series forecasting accuracy metric of each of the plurality of available machine learning models, for each of the multiple time intervals, including: (i) generating a weighted mean absolute percentage error (WMAPE) score for each of the plurality of available machine learning models for each of the multiple time intervals, (ii) normalizing each WMAPE score according to a highest WMAPE score for a corresponding time series input to create a normalized score between zero and one, and (iii) selecting a lowest normalized score as a class label for training a classifier model, wherein the time series forecasting accuracy metric of each of the plurality of available machine learning models, for each of the multiple time intervals, is embodied as a vector of results;" included in Independent Claim 1 (and Independent Claims 9 and 17, which recite substantially the same limitations), in combination with the remaining limitations of the Independent claims.

Conclusion

13. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
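The WMAPE-based labeling steps (i)-(iii) quoted from Claim 1 above can be paraphrased in code. This is a sketch of one plausible reading of the claim language, not the application's actual implementation; the function names are hypothetical.

```python
# Sketch of the claimed steps (i)-(iii): per-model WMAPE scores,
# normalization by the highest score for the series, and selection
# of the lowest normalized score as the classifier class label.
# (Illustrative reading of the claim text, not the applicant's code.)

def wmape(actual, forecast):
    """Weighted mean absolute percentage error: total absolute
    error divided by total absolute actual value."""
    num = sum(abs(a - f) for a, f in zip(actual, forecast))
    den = sum(abs(a) for a in actual)
    return num / den

def class_label(actual, forecasts_by_model):
    """(i) score each candidate model with WMAPE, (ii) normalize
    each score by the highest score for this series (yielding
    values between zero and one), and (iii) return the model with
    the lowest normalized score as the class label."""
    scores = {m: wmape(actual, f) for m, f in forecasts_by_model.items()}
    worst = max(scores.values()) or 1.0  # guard against all-zero scores
    normalized = {m: s / worst for m, s in scores.items()}
    return min(normalized, key=normalized.get)
```

Because normalization divides every score by the same positive constant, the argmin is unchanged; the normalization matters for making labels comparable across time series of different scales when assembling classifier training data.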
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Devika S Maharaj, whose telephone number is (571)272-0829. The examiner can normally be reached Monday - Thursday, 8:30am - 5:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexey Shmatov, can be reached on (571)270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.S.M./
Examiner, Art Unit 2123

/ALEXEY SHMATOV/
Supervisory Patent Examiner, Art Unit 2123
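For context on the stacking step discussed in the §101 analysis above (each base model produces a univariate forecast, and those forecasts are joined with covariate data to form the stacking model's input), the arrangement can be sketched generically in Python. The structure below is hypothetical and is not the application's implementation.

```python
# Generic sketch of assembling stacking input rows: one row per
# forecast step, containing each base model's forecast for that
# step plus the shared covariates for that step.
# (Hypothetical structure; not the application's actual code.)

def build_stacking_input(univariate_forecasts, covariates):
    """Combine per-model forecasts with covariate data.

    univariate_forecasts: dict mapping model name -> list of
        forecast values, one per step.
    covariates: list of covariate-value lists, one per step.
    """
    rows = []
    for step in range(len(covariates)):
        row = [f[step] for f in univariate_forecasts.values()]
        row.extend(covariates[step])
        rows.append(row)
    return rows
```

A second-level ("stacking") model would then be trained on such rows against known historical outcomes and, at inference time, would map them to the final forecast.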

Prosecution Timeline

Mar 13, 2023
Application Filed
May 12, 2023
Non-Final Rejection — §101, §112
Aug 08, 2023
Examiner Interview Summary
Aug 08, 2023
Applicant Interview (Telephonic)
Aug 25, 2023
Response Filed
Nov 22, 2023
Final Rejection — §101, §112
Feb 22, 2024
Applicant Interview (Telephonic)
Feb 22, 2024
Examiner Interview Summary
Feb 28, 2024
Request for Continued Examination
Feb 29, 2024
Response after Non-Final Action
Aug 09, 2024
Non-Final Rejection — §101, §112
Nov 05, 2024
Applicant Interview (Telephonic)
Nov 07, 2024
Examiner Interview Summary
Nov 14, 2024
Response Filed
Dec 09, 2024
Final Rejection — §101, §112
Mar 10, 2025
Examiner Interview Summary
Mar 10, 2025
Applicant Interview (Telephonic)
Mar 14, 2025
Request for Continued Examination
Mar 20, 2025
Response after Non-Final Action
Aug 07, 2025
Non-Final Rejection — §101, §112
Nov 05, 2025
Applicant Interview (Telephonic)
Nov 05, 2025
Examiner Interview Summary
Nov 12, 2025
Response Filed
Jan 09, 2026
Final Rejection — §101, §112
Mar 25, 2026
Applicant Interview (Telephonic)
Mar 25, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585948
NEURAL PROCESSING DEVICE AND METHOD FOR PRUNING THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12579426
Training a Neural Network having Sparsely-Activated Sub-Networks using Regularization
2y 5m to grant Granted Mar 17, 2026
Patent 12572795
ANSWER SPAN CORRECTION
2y 5m to grant Granted Mar 10, 2026
Patent 12561577
AUTOMATIC FILTER SELECTION IN DECISION TREE FOR MACHINE LEARNING CORE
2y 5m to grant Granted Feb 24, 2026
Patent 12554969
METHOD AND SYSTEM FOR THE AUTOMATIC SEGMENTATION OF WHITE MATTER HYPERINTENSITIES IN MAGNETIC RESONANCE BRAIN IMAGES
2y 5m to grant Granted Feb 17, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
55%
Grant Probability
63%
With Interview (+7.7%)
5y 0m
Median Time to Grant
High
PTA Risk
Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.
