Prosecution Insights
Last updated: April 19, 2026
Application No. 17/694,830

SMART TIME SERIES AND MACHINE LEARNING END-TO-END (E2E) MODEL DEVELOPMENT ENHANCEMENT AND ANALYTIC SOFTWARE

Non-Final OA (§101, §112)
Filed: Mar 15, 2022
Examiner: WECHSELBERGER, ALFRED H.
Art Unit: 2187
Tech Center: 2100 — Computer Architecture & Software
Assignee: Kuantsol Inc.
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 8m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 58% (122 granted / 212 resolved; +2.5% vs TC avg)
Interview Lift: +36.5% (resolved cases with an interview vs. without)
Avg Prosecution: 3y 8m (typical timeline; 42 currently pending)
Total Applications: 254 (across all art units)

Statute-Specific Performance

§101: 30.0% (-10.0% vs TC avg)
§103: 38.9% (-1.1% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§112: 24.0% (-16.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 212 resolved cases

Office Action

Rejections: §101, §112
DETAILED ACTION

Claims 1-10 have been presented for examination. This office action is in response to submission of the application on 03/15/2022.

Priority

Applicant claims benefit of provisional applications 63/161351 and 63/161365 on the Application Data Sheet. The benefit is acknowledged since the provisional applications provide written description support for the claimed invention.

Claim Objections

Claim 1 is objected to because of the following informalities: the auto data validation steps appear to all be required, since the final item in the list is “and capping values or input standardization to form outlier identification”. This is the interpretation for examination purposes. Further, the tabbing of the claims is ambiguous since “generating implementation code for the best model” is indented the same as “a best model review step”; however, it appears to actually be part of the same “a best model review step” since it does not explicitly recite “step”. The subsequently recited “processing a set of data” also appears to have inconsistent indentation since it is indented too far to the left. Further, there is no “and” on the second-to-last limitation, although there is one previously in “; and generating implementation code for the best model”. For examination purposes, the recited “generating implementation code” is interpreted as being part of “a best model review step”, and “processing a set of data” is interpreted as being related to the preamble “the process comprising the following steps”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

With regard to claim 1, it recites “an auto data validation step comprising”; however, the elimination of duplicate data can explicitly be performed manually. The limitation is interpreted for examination purposes as the data elimination being allowed to be manual, while the remaining steps are performed automatically. Further, it recites “the raw training data” in “an auto data validation step comprising using the user input data to apply the following to the raw training data”. There is insufficient antecedent basis for this limitation in the claim, since there is no previously recited “training data”. “Raw training data” is recited in the subsequent limitation, and looking to the disclosure, an automatic data validation process feeding into feature creation processes is explicitly contemplated (see the instant application, Figure 1). The limitation is interpreted for examination purposes as referring to the subsequently recited “raw training data”. Further, the claim recites that “the API performs” the various steps. Examiner notes that an API by itself is merely an interface and therefore would not be usable for performing steps. A review of the specification shows that the API is related to software and computer-implemented processes (see the instant application, page 12: “Steps 1-10 are automated features of the software with the necessary configuration settings made by the user”). The limitation is interpreted for examination purposes as the software performing steps through an API.

With regard to claim 7, it recites producing analysis “that is helpful”. The term “helpful” is a relative term which renders the claim indefinite. The term “helpful” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The term is interpreted for examination purposes as being in any way beneficial in decision making.

With regard to claim 10, it recites “and/or”, which is unclear since it is unknown which alternative is controlling. The limitation is interpreted for examination purposes as only reciting “or”.

With regard to claims 2-6 and 8-9, they are rejected by virtue of depending from a rejected parent claim, without reciting additional limitations to overcome the deficiency.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Independent claim 1 recites at Step 1 a statutory category (i.e., a process): a process for building, developing, and enhancing a model for use in forecasting, the process comprising the following steps: a data validation step comprising using the user input data to apply the following to the raw training data: elimination of duplicate data, either manually or standardized, selection of missing imputation functions, identification of low frequency values in categorical variables and proposing to eliminate or keep the categorical variables, and capping values or input standardization to form outlier identification; a feature creation step comprising using domain knowledge to extract features from raw training data; a best model review step comprising producing detailed information on the best model through statistical diagnostics, sensitivity, back-test and performance analysis; and generating implementation code for the best model; processing a set of data to be analyzed using the best model; forecasting an outcome based on processing the set of data to be analyzed with the best model.

At Step 2A, Prong I, the recited limitations, alone or in combination, amount to steps that, under their broadest reasonable interpretation, cover performance of the limitations in the mind in combination with using pen and paper (see MPEP 2106.04(a)(2)(III)). For example, the “elimination”, “selection”, “identification”, “capping”, “standardization”, “extract”, “producing”, “processing”, and “forecasting” require nothing more than judgments and evaluations. Accordingly, the claim recites an abstract idea.
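For readers outside the ML tooling space, the data validation operations recited in claim 1 (duplicate elimination, missing-value imputation, low-frequency categorical screening, and value capping for outlier handling) map onto routine preprocessing. A minimal sketch in Python with pandas; the thresholds, column handling, and imputation choices here are illustrative assumptions, not taken from the application:

```python
import pandas as pd

def auto_validate(df: pd.DataFrame, low_freq_threshold: float = 0.01,
                  cap_quantiles: tuple = (0.01, 0.99)) -> pd.DataFrame:
    """Illustrative sketch of claim-1-style data validation (not the applicant's code)."""
    # 1. Eliminate duplicate rows.
    df = df.drop_duplicates()

    # 2. Missing-value imputation: median for numeric columns, mode otherwise.
    for col in df.columns:
        if df[col].isna().any():
            if pd.api.types.is_numeric_dtype(df[col]):
                fill = df[col].median()
            else:
                fill = df[col].mode()[0]
            df[col] = df[col].fillna(fill)

    # 3. Identify low-frequency levels in categorical variables and pool them.
    for col in df.select_dtypes(include="object"):
        freq = df[col].value_counts(normalize=True)
        rare = freq[freq < low_freq_threshold].index
        df[col] = df[col].where(~df[col].isin(rare), "OTHER")

    # 4. Cap numeric values at quantiles to limit outlier influence.
    for col in df.select_dtypes(include="number"):
        lo, hi = df[col].quantile(list(cap_quantiles))
        df[col] = df[col].clip(lo, hi)
    return df
```

In the claimed process these choices (which imputation function, which thresholds) are made from user input rather than hard-coded defaults.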
At Step 2A, Prong II, this judicial exception is not integrated into a practical application, since the claimed invention further claims: that the data validation step is auto; a first user input step, wherein a user to input data using a user interface on a user device and providing the user input data to an application program interface (API); the API performs: a feature encoding step comprising using the created features and raw training data to train different candidate models; a model selection step wherein the user is prompted to select a best model from the number of trained candidate models based on user defined model rankings; providing the forecast to a user by a user interface on a user device.

The “input data” and “user is prompted to select” amount to insignificant data gathering, since they are recited at a high level of generality and since the various steps rely on the received elements in a generic manner (see MPEP 2106.05(g)). The “providing” amounts to insignificant data outputting since it is recited at a high level of generality. The “auto”, “user interface”, “user device”, and “API” are recited at a high level of generality such that they amount to no more than mere application of the judicial exception using generic computer components, which does not amount to an improvement in computer functionality (see MPEP 2106.04(a)(I)). The “auto” is interpreted as being performed automatically with the exception of the elimination of duplicate data (see Claim Rejections - 35 USC § 112). The “API” is interpreted as comprising generic computer elements (see Claim Rejections - 35 USC § 112). The “train different candidate models” amounts to reciting the words “apply it”, since it generically uses the created features. The claim is directed to an abstract idea.

At Step 2B, the claim does not recite additional elements that, alone or in an ordered combination, are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to the integration of the abstract idea into a practical application, the recited “auto”, “user interface”, “user device”, and “API” amount to no more than mere instructions to apply the judicial exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The recited “input data”, “user is prompted to select”, and “providing” cover well-understood, routine, and conventional activity, since they are generic and cover receiving and outputting data by any electronic means (see MPEP 2106.05(d)(II), “i. Receiving or transmitting data over a network”). The “train different candidate models” recites the idea of an outcome and amounts to reciting the words “apply it”. Considering the additional elements in combination does not add anything more than considering them individually, since the “input data”, “user is prompted to select”, “providing”, and “train different candidate models” require no more than generic computer functions. For at least these reasons, the claim is not patent eligible.
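The “train different candidate models” and user-ranked “model selection” limitations discussed above describe a conventional train-and-rank loop. A hedged sketch with scikit-learn; the candidate models and metrics here are generic stand-ins, not the claimed model set:

```python
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

def select_best_model(X_train, y_train, X_val, y_val, metric="mae"):
    """Train several candidate models and rank them by a user-chosen metric."""
    candidates = {
        "linear": LinearRegression(),
        "tree": DecisionTreeRegressor(random_state=0),
        "forest": RandomForestRegressor(n_estimators=50, random_state=0),
    }
    scorers = {"mae": mean_absolute_error, "mse": mean_squared_error}
    scores = {}
    for name, model in candidates.items():
        model.fit(X_train, y_train)
        scores[name] = scorers[metric](y_val, model.predict(X_val))
    # Lower validation error ranks higher; in the claimed system this ranking
    # is presented to the user, who is prompted to select the "best" model.
    best = min(scores, key=scores.get)
    return best, candidates[best], scores
```

The point of contention in the rejection is not this loop itself but whether wrapping it behind a user interface and API integrates it into a practical application.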
Dependent claims 2-10 recite at Step 1 the same statutory category as the parent claim, and further recite:

Claim 2: the feature creation step comprising using domain knowledge to extract features from raw training data comprises at least one of the following: log, polynomial, interaction functions such as division of two inputs, multiplication of two inputs, momentum, drift, and variance functions; a feature imputation step is performed after the feature creation step, the feature imputation step comprises modeling each feature as a function of each other feature, imputing each feature sequentially, and allowing each feature to be used to predict subsequent features; wherein the feature imputation process step is repeated at least once, and wherein imputing is performed using one of: KNN, performance-based, iterative imputation, mean, median, and mode; and the feature encoding step further comprising using a categorical data encoding technique when the categorical variables are ordinal, producing labels through label encoding, ordinal coding or one hot encoding, and converting the labels into numeric values via multiple statistical techniques.

Claim 3: wherein the different candidate models are selected from at least one of the following time series models: ARIMA, SARIMA, VAR, ECM, and VECM.

Claim 4: a best model validation step producing a comprehensive report of the statistical diagnostics tests, performance evaluations, sensitivity analysis, and model ranking based on the configuration selected by the user.

Claim 5: a model comparison step comprising comparing the best model to another model in the number of candidate models with an option to determine a new best model.

Claim 6: wherein the different candidate models are selected from at least one of the following machine learning models: Gradient Boosting, Stochastic Boosting, AdaBoost, XGBoost, LightBoost, KNN, K-Means, PCA, Logistic Regression, Decision Tree, Random Forest, Quadratic Linear Discrimination, Neural Networks, and Deep Learning.

Claim 7: a data partition and segmentation step comprising partitioning the data into training data, validation data, and out-of-sample data for use in hyperparameter tuning, model selection, and performance analysis; a feature filtering step comprising leveraging variance and information values to filter or create new features; a model design step comprising selecting, automatically or manually by user input, all applicable models of the set of models, a standalone model of the set of models based on customizable ranking criteria, or applying stacking wherein a final model is based on a collective prediction of at least one model of the set of models; a hyperparameter tuning step applied to each of the number of candidate models comprising applying at least one of the following techniques: Grid, Soft Grid, Randomized and Bayesian search; and a model ranking step comprising comparing the best model to another model in the set of models based on model stability, sensitivity, and/or customizable performance evaluation that includes error distributions, bias and uncertainty calculations, and statistical diagnostics.

Claim 8: wherein the feature creation step further comprising defining a selection of strongest variables in terms of explanatory power against the target selection input, and applying at least one selected from the following: Recursive Feature Elimination, Model Ranked, Variance Threshold, Missing/low frequency Threshold, F Test, Chi2 Test, Lasso, Ridge, Backward, Forward and Stepwise sequential selections, Information Value, and Variable Clustering.

Claim 9: wherein the feature creation process step further comprises: wherein the user selects at least one of the features to extract potential inputs, and/or wherein the user eliminates variables deemed to be unintuitive based on domain knowledge.

Claim 10: a model comparison step comprising comparing the best model to another model in the number of candidate models with an option to determine a new best model.

At Step 2A, Prong I, the recited limitations in part, alone or in combination, amount to steps that, under their broadest reasonable interpretation, cover performance of the limitations in the mind in combination with using pen and paper (see MPEP 2106.04(a)(2)(III)). The “extract features”, “modeling each feature”, “imputing”, “predict”, “are selected”, “selecting”, “producing a comprehensive report”, “comparing”, “defining”, “selects”, and “eliminates” amount to modeling and predicting actions recited at a high level of generality, and require no more than judgments and evaluations. The “partitioning the data” and “filter or create new features” amount to operations on the data which can be performed in the mind after the data is received. The recited limitations in part, alone or in combination, also amount to steps that, under their broadest reasonable interpretation, cover mathematical concepts (see MPEP 2106.04(a)(2)()). The “using a categorical data encoding technique”, “label encoding, ordinal encoding, or one hot encoding”, “multiple statistical techniques”, “applying at least one of the following techniques”, and “applying at least one selected from” cover specific mathematical techniques. Accordingly, the claims recite an abstract idea.

At Step 2A, Prong II, this judicial exception is not integrated into a practical application, since the claimed invention further claims: Claim 5: a documentation materials step comprising saving the comprehensive report as a file; Claim 7: a feature and target analysis step comprising providing summary statistics and visual inspection of the data that is helpful in decision making with respect to a data partition and a feature creation; and providing data size statistics and industry standards for minimum size requirements, customizable clustering analysis and variable importance analysis across partitions.
For example, the “saving”, “providing summary statistics and visual inspection of the data”, and “providing data size statistics” amount to insignificant data outputting (see MPEP 2106.05(g)). The claim is directed to an abstract idea.

At Step 2B, the claims do not recite additional elements that, alone or in an ordered combination, are sufficient to amount to significantly more than the judicial exception. The recited “saving” covers well-understood, routine, and conventional activity, since it is generic and covers outputting data by any electronic means (see MPEP 2106.05(d)(II), “iv. Storing and retrieving information in memory”). The recited “providing summary statistics and visual inspection of the data” and “providing data size statistics” cover well-understood, routine, and conventional activity, since they are generic and cover outputting data by any electronic means (see MPEP 2106.05(d)(II), “i. Receiving or transmitting data over a network”). Considering the additional elements in combination does not add anything more than considering them individually, since the “saving”, “providing summary statistics and visual inspection of the data”, and “providing data size statistics” require no more than generic computer functions. For at least these reasons, the claims are not patent eligible.

Allowable Subject Matter

The following is a statement of reasons for the indication of allowable subject matter, subject to overcoming the 101 and 112 rejections.
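Claim 2's feature imputation step (modeling each feature as a function of the others and imputing sequentially, optionally via KNN) corresponds to what scikit-learn exposes as iterative and KNN imputation. A minimal sketch, with an invented toy data matrix:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer, KNNImputer

# Toy feature matrix with missing entries (values invented for illustration).
X = np.array([[1.0, 2.0],
              [2.0, np.nan],
              [3.0, 6.0],
              [np.nan, 8.0]])

# Iterative imputation: model each feature as a function of the others and
# impute sequentially, repeating for up to max_iter rounds.
iterative = IterativeImputer(max_iter=10, random_state=0).fit_transform(X)

# KNN imputation: fill each gap from the nearest rows with that feature present.
knn = KNNImputer(n_neighbors=2).fit_transform(X)

assert not np.isnan(iterative).any() and not np.isnan(knn).any()
```

The claim's “repeated at least once” language matches the multi-round behavior controlled here by `max_iter`.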
None of the prior art of record, taken individually or in combination, discloses the claim 1 (and claims 2-10 by incorporation) process for building, developing, and enhancing a model for use in forecasting, the process comprising the following steps: “a first user input step, wherein a user to input data using a user interface on a user device and providing the user input data to an application program interface (API); the API performs: an auto data validation step comprising using the user input data to apply the following to the raw training data: elimination of duplicate data, either manually or standardized, selection of missing imputation functions, identification of low frequency values in categorical variables and proposing to eliminate or keep the categorical variables, and capping values or input standardization to form outlier identification” (see Claim Rejections - 35 USC § 112), and “a best model review step comprising producing detailed information on the best model through statistical diagnostics, sensitivity, back-test and performance analysis”, in combination with the remaining elements and features of the claim. It is for these reasons that the applicant's invention defines over the prior art of record.

Chen et al., “Neural Feature Search: A Neural Architecture for Automated Feature Engineering”, teaches using a neural architecture for automated feature engineering. However, it does not teach a user interface on a user device providing the user input data to an API that performs the auto data validation step recited above.

Runnels, W., “Incorporating Automated Feature Engineering Routines into Automated Machine Learning Pipelines”, teaches automatically generating promising features for a dataset, and removing features with high cardinality by default. However, it does not teach a user interface on a user device providing the user input data to an API that performs the auto data validation step recited above.

Horn et al., “The autofeat Python Library for Automated Feature Engineering and Selection”, teaches a library for automated feature engineering and selection in scikit-learn style. However, it does not teach a user interface on a user device providing the user input data to an API that performs the auto data validation step recited above.

Chauhan et al., “Automated Machine Learning: The New Wave of Machine Learning”, teaches that H2O.AI is an AutoML tool that uses target encoding to replace categorical values with the mean/median/mode of the target variable. However, it does not teach a user interface on a user device providing the user input data to an API that performs the auto data validation step recited above.

Patton et al. (US 2019/0171428) teaches feature extractors that function to extract one or more data features from raw data. However, it does not teach a user interface on a user device providing the user input data to an API that performs the auto data validation step recited above.

Pinto et al. (US 2005/0234760) teaches a model development platform including data assessment and model evaluation. However, it does not teach a user interface on a user device providing the user input data to an API that performs the auto data validation step recited above.
Conort et al. (US 2022/0076164) teaches using domain knowledge to extract features from raw data via data mining techniques, and a graphical user interface depicting different categorical values within a training dataset. However, it does not teach a user interface on a user device providing the user input data to an API that performs the auto data validation step recited above.

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALFRED H. WECHSELBERGER, whose telephone number is (571) 272-8988. The examiner can normally be reached M-F, 10am to 6pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emerson Puente, can be reached at 571-272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALFRED H. WECHSELBERGER/
Examiner, Art Unit 2187

/EMERSON C PUENTE/
Supervisory Patent Examiner, Art Unit 2187

Prosecution Timeline

Mar 15, 2022
Application Filed
Nov 29, 2025
Non-Final Rejection — §101, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561501
SYSTEM AND METHOD FOR EXCESS GAS UTILIZATION
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12517804
GENERATING TECHNOLOGY ENVIRONMENTS FOR A SOFTWARE APPLICATION
Granted Jan 06, 2026 (2y 5m to grant)
Patent 12468581
INTER-KERNEL DATAFLOW ANALYSIS AND DEADLOCK DETECTION
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12462075
RESOURCE PREDICTION SYSTEM FOR EXECUTING MACHINE LEARNING MODELS
Granted Nov 04, 2025 (2y 5m to grant)
Patent 12450145
ADVANCED SIMULATION MANAGEMENT TOOL FOR A MEDICAL RECORDS SYSTEM
Granted Oct 21, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 94% (+36.5%)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 212 resolved cases by this examiner. Grant probability derived from career allow rate.
