Prosecution Insights
Last updated: April 19, 2026
Application No. 19/045,394

TIME-SERIES FORECASTING AND IMPUTING USING AUTOMATED FEATURE EXTRACTION AND STATIC MACHINE LEARNING MODELS

Status: Non-Final OA (§101)
Filed: Feb 04, 2025
Examiner: MENGISTU, TEWODROS E
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Arti Analytics Inc., a Delaware corporation
OA Round: 3 (Non-Final)

Grant Probability: 49% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 5m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 49% (grants 49% of resolved cases; 62 granted / 127 resolved; -6.2% vs TC avg)
Interview Lift: strong, +28.2% for resolved cases with interview
Typical Timeline: 4y 5m avg prosecution; 34 currently pending
Career History: 161 total applications across all art units
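The headline figures above can be reproduced with a short calculation. This is a minimal sketch, assuming the 62/127 counts shown on this page and treating the +28.2-point interview lift as simply additive to the base allow rate (how the tool actually combines the two is not stated):

```python
# Illustrative recomputation of the dashboard's examiner metrics.
# The counts (62 granted of 127 resolved) come from the page itself;
# the additive treatment of the interview lift is an assumption.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate(62, 127)       # about 48.8, displayed as 49%
with_interview = base + 28.2     # additive lift assumption, about 77%

print(round(base), round(with_interview))
```

Rounding 48.8% to 49% and 77.0% to 77% matches the displayed values.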

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 44.5% (+4.5% vs TC avg)
§102: 9.6% (-30.4% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 127 resolved cases

Office Action

§101
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1, 4-5, and 7-12 are pending for examination. Claim 1 is independent.

Response to Amendment

This office action is responsive to the amendments filed on 10/22/2025. As directed by the amendments, claims 1, 4-5, 7-8, and 12 are amended. Claims 2-3 and 6 are canceled.

Response to Arguments

Applicant's arguments filed 10/22/2025 have been fully considered but they are not fully persuasive.

Applicant's arguments regarding 35 U.S.C. § 101: Regarding claim 1, 2A Prong 1: Applicant respectfully traverses on the basis that the examiner has analyzed the claims at an overly high level of abstraction. Specifically, the examiner has redacted those hardware claim limitations, such as the computer processor, static machine learning system, and artificial neural network (ANN), that make it clear that the claim is not a mental process. […] The examiner has not adequately explained his reasoning in removing all hardware limitations, and why, given the above examples, he has randomly chosen to assert that the claims read on a mental process.

Examiner response: Examiner respectfully disagrees. The hardware claim limitations are understood as adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Example 47 from the “July 2024 Subject Matter Eligibility Examples” also describes generally training and applying a neural network.
Applicant further argues: Regarding Claim 1, 2A Prong 2: The examiner argues that the at least one computer processor and at least one static machine learning system comprising an artificial neural network (ANN) should be automatically understood to be generic computer elements used as a tool to perform an abstract idea per MPEP 2106.05(f). […] In other words, the claim recites a technological solution to a technological problem. Id. In this context, applicant respectfully submits that, at least in view of the many additional limitations now present in amended claim 1, the present claim provides a particular solution to a problem or a particular way to achieve a desired outcome. […] As previously discussed, Contour IP Holding v. GoPro, No. 22-1654 (Fed. Cir. Sept. 9, 2024), held that when analyzed at the appropriate level of generality, claims that are directed to a means or method that improves the relevant technology are not abstract under 35 U.S.C. § 101. Applicant respectfully submits that when analyzed at the appropriate level of generality, the present claims are also not abstract under 35 U.S.C. § 101.

Examiner response: Examiner respectfully disagrees. The hardware claim limitations describe generic computer elements used to perform an abstract idea, as detailed in the 35 U.S.C. § 101 rejection below. Applicant does not provide details as to how the claims recite a technological solution to a technological problem. The claims describe using the hardware elements to perform abstract ideas, such as extracting features and forecasting time-series values. The claims are directed to an abstract idea and not to an improvement to a computer or technical field; MPEP 2106.05(a) states that an improvement in the abstract idea itself is not an improvement in technology. It is unclear how applicant's claims relate to those at issue in Contour IP Holding v. GoPro, which does not appear to relate to applicant's claims.
Applicant states that “the claims are directed to a specific means that improves the relevant technology of analyzing time-series data using artificial neural networks (ANN)” without an explanation as to how exactly this improves analyzing time-series data using neural networks. Analyzing time-series data is practically performable in the human mind and is understood to be a recitation of a mental process, and using a neural network is merely applying a machine learning model as a tool to perform the abstract idea - see MPEP 2106.05(f). The claim limitations are a combination of mental steps under Step 2A Prong 1 and additional elements under Steps 2A Prong 2 and 2B, as detailed in the § 101 rejection below.

Applicant's arguments regarding 35 U.S.C. § 103: Examiner's response: Applicant's arguments, filed 10/22/2025, with respect to the rejection(s) of claim(s) 1, 4-5, and 7-12 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 4-5, and 7-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: According to the first part of the analysis, in the instant case, claims 1, 4-5, and 7-12 are directed to a method. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
Regarding Claim 1, 2A Prong 1:

An automated method for time-series data analysis of new data based on at least one time series-training dataset of previous datapoints, said method comprising: using at least one time series-training dataset, feature analysis algorithms to transform said time series-training data set to a static domain of features by automatically extracting features from said time series-training dataset; (This step for extracting features from a time series is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).)

for at least some later time points after said origin time point, (This step for creating a data subset is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).)

for each said data subset, using feature analysis algorithms and (This step for producing a plurality of data subset individual feature vectors is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).)

for each said data subset, fusing said plurality of data subset individual feature vectors by concatenation to produce a single data subset fused feature vector, thus preserving the individual information of each said individual feature vector while aligning them into a higher-dimensional feature space; (This step for fusing and producing a fused vector is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).)

using a plurality of single data subset fused feature vectors, obtained over a plurality of different sliding time windows, as a machine-learning dataset; (This step for using feature vectors is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment/evaluation).)
analyzing said new data by (This step for analyzing new data to create fused feature vectors is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).)

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

at least one computer processor, and at least one static machine learning system comprising an artificial neural network (ANN), (The processor, static machine learning system, and “automatically” are understood to be generic computer elements, and the limitation is merely using generic computer elements as a tool to perform an abstract idea. See MPEP 2106.05(f).)

wherein said at least one static machine learning system predicts a target variable based on said automatically extracted features irrespective of their temporal sequence in said dataset; (This step is adding the words “apply it” (or an equivalent) with the judicial exception, or merely applying machine learning as a tool to perform the abstract idea (i.e., predicting) - see MPEP 2106.05(f).)

using said automatically extracted features to automatically train at least one machine learning model, thus producing at least one trained machine learning model; (Training a machine learning model is understood as mere instructions to implement an abstract idea (e.g., generate inferences) on a computer - see MPEP 2106.05(f).)

wherein said time-series training dataset comprises a linear array of time points starting from an origin time point, each time point having a single associated data point with a data point value; (The specification of data to be stored is understood to be a field of use limitation. The limitation further specifies the time-series training dataset - see MPEP 2106.05(h).)

using said at least one static machine learning system comprising an artificial neural network (ANN), (The static machine learning system and processor are understood to be generic computer elements, and the limitation is merely using generic computer elements as a tool to perform an abstract idea. See MPEP 2106.05(f).)

using said machine-learning dataset and said at least one static machine learning system, to automatically train at least one said machine learning model, producing at least one trained machine learning model for forecasting future time series values; (Training a machine learning model is understood as mere instructions to implement an abstract idea (e.g., generate inferences) on a computer - see MPEP 2106.05(f).)

and using at least one said trained machine learning model for forecasting future time series values to implement a time-series forecasting system for new data by the steps of: (This step is adding the words “apply it” (or an equivalent) with the judicial exception, or merely applying a machine learning model as a tool to perform the abstract idea (i.e., time-series forecasting) - see MPEP 2106.05(f).)

wherein said time-series forecasting system uses said plurality of new single data subset fused feature vectors representing said new data, and said trained machine learning model for forecasting future time series values, to forecast future time series values. (This step is adding the words “apply it” (or an equivalent) with the judicial exception, or merely applying a machine learning model as a tool to perform the abstract idea (i.e., time-series forecasting) - see MPEP 2106.05(f).)
The additional elements as disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are field-of-use limitations in combination with generic computer functions that are implemented to perform the abstract idea disclosed above.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements:

at least one computer processor, and at least one static machine learning system comprising an artificial neural network (ANN), (The processor, static machine learning system, and “automatically” are understood to be generic computer elements, and the limitation is merely using generic computer elements as a tool to perform an abstract idea. See MPEP 2106.05(f).)

wherein said at least one static machine learning system predicts a target variable based on said automatically extracted features irrespective of their temporal sequence in said dataset; (This step is adding the words “apply it” (or an equivalent) with the judicial exception, or merely applying machine learning as a tool to perform the abstract idea (i.e., predicting) - see MPEP 2106.05(f).)

using said automatically extracted features to automatically train at least one machine learning model, thus producing at least one trained machine learning model; (Training a machine learning model is understood as mere instructions to implement an abstract idea (e.g., generate predictions) on a computer - see MPEP 2106.05(f).)

wherein said time-series training dataset comprises a linear array of time points starting from an origin time point, each time point having a single associated data point with a data point value; (The specification of data to be stored is understood to be a field of use limitation. The limitation further specifies the time-series training dataset - see MPEP 2106.05(h).)

using said at least one static machine learning system comprising an artificial neural network (ANN), (The static machine learning system, processor, and “automatically” are understood to be generic computer elements, and the limitation is merely using generic computer elements as a tool to perform an abstract idea. See MPEP 2106.05(f).)

using said machine-learning dataset and said at least one static machine learning system, to automatically train at least one said machine learning model, producing at least one trained machine learning model for forecasting future time series values; (Training a machine learning model is understood as mere instructions to implement an abstract idea (e.g., generate a forecast) on a computer - see MPEP 2106.05(f).)

and using at least one said trained machine learning model for forecasting future time series values to implement a time-series forecasting system for new data by the steps of: (This step is adding the words “apply it” (or an equivalent) with the judicial exception, or merely applying a machine learning model as a tool to perform the abstract idea (i.e., time-series forecasting) - see MPEP 2106.05(f).)

wherein said time-series forecasting system uses said plurality of new single data subset fused feature vectors representing said new data, and said trained machine learning model for forecasting future time series values, to forecast future time series values. (This step is adding the words “apply it” (or an equivalent) with the judicial exception, or merely applying a machine learning model as a tool to perform the abstract idea (i.e., time-series forecasting) - see MPEP 2106.05(f).)
The additional elements as disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are field-of-use limitations in combination with generic computer functions that are implemented to perform the abstract idea disclosed above.

Regarding Claim 4, 2A Prong 1: The claim does not recite any abstract idea. 2A Prong 2 & 2B: The method of claim 1, wherein said features extracted by said feature analysis algorithms comprise any of temporal, pattern, statistical, context, harmonic, and external features. (The specification of data to be stored is understood to be a field of use limitation. The limitation further specifies the features - See MPEP 2106.05(h).)

Regarding Claim 5, 2A Prong 1: The claim does not recite any abstract idea. 2A Prong 2 & 2B: The method of claim 1, wherein said feature analysis algorithms comprise any of lagged values, moving averages, exponential moving averages, temporal differences, cumulative sums, time delta features, moving window replicated features, seasonality indicators, autocorrelation, local maxima, local minima, mean, median, standard deviation, variance, autocovariance, skewness, kurtosis, minimum values, maximum values, percentiles, interquartile ranges, energy, entropy, cross-entropy, time values, season values, binary indicators for events, time-frequency coefficients from Fourier and wavelet transforms, dominant frequencies, spectral energy distribution, and harmonic ratios. (This limitation further specifies the algorithms and is merely indicating a field of use or technological environment - See MPEP 2106.05(h).)

Regarding Claim 7, 2A Prong 1: The claim does not recite any abstract idea.
2A Prong 2 & 2B: The method of claim 1, wherein said at least one sliding time window used to create at least one data subset, each at least one said data subset comprising a portion of said linear array of time points, is a plurality of incrementally sliding time windows, where each successive sliding time window advances by at least one time point over a preceding sliding time window. (This limitation further specifies the sliding time window and is merely indicating a field of use or technological environment - See MPEP 2106.05(h).)

Regarding Claim 8, 2A Prong 1: The claim does not recite any abstract idea. 2A Prong 2 & 2B: The method of claim 1, wherein said sliding time window used to create at least one data subset, each at least one said data subset comprising a portion of said linear array of time points, has constant length per analyzed time-series dataset. (This limitation further specifies the sliding time window and is merely indicating a field of use or technological environment - See MPEP 2106.05(h).)

Regarding Claim 9, 2A Prong 1: The claim does not recite any abstract idea. 2A Prong 2 & 2B: The method of claim 1, wherein said features further comprise feature types comprising any of temporal, pattern, statistical, context, harmonic, and external feature types, further varying a maximum length of said sliding time windows according to said feature types per analyzed time-series dataset. (The specification of data to be stored is understood to be a field of use limitation. The limitation further specifies the features - See MPEP 2106.05(h).)

Regarding Claim 10, 2A Prong 1: The claim does not recite any abstract idea. 2A Prong 2 & 2B: The method of claim 1, wherein said at least one static machine learning system used to automatically extract features from said time series-training dataset is selected from any of Sklearn, ML.NET, TensorFlow, Keras, PyTorch, XGBoost, CatBoost, or another deep learning system.
(This limitation further specifies the static machine learning system and is merely indicating a field of use or technological environment - See MPEP 2106.05(h).)

Regarding Claim 11, 2A Prong 1: wherein said static machine learning system further optimizes either said machine learning model or said time-series forecasting system using any of a mean squared error (MSE) or other error metrics through any of iterative hyperparameter tuning and ensemble methods (This step is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., evaluation).). 2A Prong 2 & 2B: The method of claim 1, wherein using said static machine learning system to automatically train either said machine learning model or said time-series forecasting system by using any of genetic algorithms, grid search, ensemble models, stacking, linear regression, support vector regression, Bayesian regression, k-nearest neighbors, decision trees, gradient boosting algorithms, and neural networks to automatically extract features from said time series-training dataset, thus creating a plurality of data subset individual feature vectors used to build said machine learning model and said time-series forecasting system (This step is adding the words “apply it” (or an equivalent) with the judicial exception, or merely applying machine learning as a tool to perform the abstract idea - see MPEP 2106.05(f).).

Regarding Claim 12, 2A Prong 1: The method of claim 11, further using said at least one computer processor and said static machine learning system to automatically optimize said algorithms by automatically iterating over a plurality of different sets of feature analysis algorithms and automatically determining which sets of feature analysis algorithms produce a better-optimized machine learning model or time-series forecasting system.
(This step for determining is practically performable in the human mind and is understood to be a recitation of a mental process (i.e., judgment).) 2A Prong 2 & 2B: The claim does not recite any additional elements.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Langford (US 20250156730 A1) describes a feature extraction method for time series.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TEWODROS E MENGISTU, whose telephone number is (571) 270-7714. The examiner can normally be reached Mon-Fri, 9:30-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ABDULLAH KAWSAR, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/TEWODROS E MENGISTU/
Examiner, Art Unit 2127
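Stripped of the legal framing, the claim 1 limitations quoted in the office action describe a concrete pipeline: slide a fixed-width window over the series, extract order-free features per window, fuse them by concatenation, and train a static model to predict the next value. The following is a minimal sketch under assumed specifics; the three toy features and the linear least-squares fit standing in for the claimed ANN are illustrative choices, not the applicant's implementation.

```python
import numpy as np

def window_features(w: np.ndarray) -> list:
    """Order-free (static) features for one sliding window."""
    return [w.mean(), w.std(), w[-1]]

def build_dataset(series: np.ndarray, width: int):
    """One fused feature vector per sliding window, paired with the
    next time point's value as the forecasting target."""
    X, y = [], []
    for start in range(len(series) - width):
        w = series[start:start + width]
        # "Fusion by concatenation": with several extractors, their
        # outputs would be concatenated here; a bias term is appended.
        X.append(window_features(w) + [1.0])
        y.append(series[start + width])
    return np.asarray(X), np.asarray(y)

# Toy linear series; least squares stands in for the claimed ANN.
series = 2.0 * np.arange(40) + 3.0
X, y = build_dataset(series, width=5)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast the value following the most recent window.
latest = np.asarray(window_features(series[-5:]) + [1.0])
forecast = float(latest @ coef)
```

Because the fitted model consumes each fused vector as an unordered set of features, the forecast is made "irrespective of temporal sequence", which is the sense in which the claims call the system "static".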

Prosecution Timeline

Feb 04, 2025
Application Filed
Apr 25, 2025
Non-Final Rejection — §101
Jun 24, 2025
Response Filed
Jul 22, 2025
Final Rejection — §101
Oct 03, 2025
Interview Requested
Oct 14, 2025
Examiner Interview Summary
Oct 14, 2025
Applicant Interview (Telephonic)
Oct 22, 2025
Request for Continued Examination
Oct 26, 2025
Response after Non-Final Action
Feb 24, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566817: AUTOMATIC MACHINE LEARNING MODEL EVALUATION (Granted Mar 03, 2026; 2y 5m to grant)
Patent 12482032: Selective Data Rejection for Computationally Efficient Distributed Analytics Platform (Granted Nov 25, 2025; 2y 5m to grant)
Patent 12450465: NEURAL NETWORK SYSTEM, NEURAL NETWORK METHOD, AND PROGRAM (Granted Oct 21, 2025; 2y 5m to grant)
Patent 12400252: ARTIFICIAL INTELLIGENCE BASED TRANSACTIONS CONTEXTUALIZATION PLATFORM (Granted Aug 26, 2025; 2y 5m to grant)
Patent 12380369: HYPERPARAMETER TUNING IN AUTOREGRESSIVE INTEGRATED MOVING AVERAGE (ARIMA) MODELS (Granted Aug 05, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 49%
With Interview: 77% (+28.2%)
Median Time to Grant: 4y 5m
PTA Risk: High

Based on 127 resolved cases by this examiner. Grant probability derived from career allow rate.
