Prosecution Insights
Last updated: April 19, 2026
Application No. 18/227,060

METHOD AND SYSTEM FOR LEARNABLE AUGMENTATION FOR TIME SERIES PREDICTION UNDER DISTRIBUTION SHIFTS

Status: Non-Final OA (§101)
Filed: Jul 27, 2023
Examiner: MACKES, KRIS E
Art Unit: 2153
Tech Center: 2100 — Computer Architecture & Software
Assignee: JPMorgan Chase Bank, N.A.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 76% (400 granted / 527 resolved), +20.9% vs TC avg (above average)
Interview Lift: +10.5% (moderate), measured over resolved cases with an interview
Typical Timeline: 2y 10m average prosecution; 12 applications currently pending
Career History: 539 total applications across all art units
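The headline figures can be recomputed from the counts the dashboard reports. A minimal sketch follows; note that applying the interview lift additively to the base allow rate is an assumption about how the 86% with-interview figure is derived, not something the page states.

```python
# Recompute the examiner's career allow rate and the interview-adjusted
# grant probability from the reported counts (400 granted of 527 resolved).
granted, resolved = 400, 527
allow_rate = granted / resolved        # 0.759..., shown as 76%
interview_lift = 0.105                 # reported +10.5% lift

print(f"Career allow rate: {allow_rate:.0%}")                   # 76%
print(f"With interview:    {allow_rate + interview_lift:.0%}")  # 86%
```

Both rounded values match the dashboard's 76% and 86% cards, which suggests the with-interview number is simply the base rate plus the lift.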

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 20.9% (-19.1% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)
Tech Center averages are estimates; based on career data from 527 resolved cases.
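The per-statute deltas can be cross-checked against the examiner's rates to recover the implied Tech Center baseline. The exact definition of the metric (e.g., rate of overcoming a rejection under each statute) is not stated on the page, so the sketch below treats these purely as reported figures:

```python
# Statute-specific performance: (examiner rate %, delta vs TC average %).
stats = {
    "101": (14.1, -25.9),
    "103": (47.5, +7.5),
    "102": (20.9, -19.1),
    "112": (6.0, -34.0),
}

# Implied Tech Center average per statute: rate minus delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # each statute implies the same 40.0% baseline
```

Every statute recovers the same 40.0% figure, which suggests the dashboard compares against a single TC-wide baseline rather than per-statute averages.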

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 7, 9-14, 16, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent claims 1, 10, and 19: Claim 1 recites "A method…", which is a series of steps and is therefore a process. Claim 10 recites "A computing apparatus…" and is a machine. Claim 19 recites "A non-transitory computer readable storage medium…" and is therefore a manufacture.

Independent claims 1, 10, and 19 recite limitations of receiving…, extracting…, perturbing…, training…, and adjusting…. The "perturbing…", "training…", and "adjusting…" limitations, under their broadest reasonable interpretation, cover performance in the human mind but for the recitation of generic computer components. That is, other than reciting a processor, a memory, a communication interface, or a non-transitory computer readable storage medium, nothing in the claim elements precludes the steps from practically being performed in the human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind, including observation, evaluation, judgment, and opinion).

The claims recite the additional elements of "receiving…" and "extracting…".
Both elements amount to data gathering, which is considered insignificant extra-solution activity (MPEP 2106.05(g)). The processor, memory, communication interface, and non-transitory computer readable storage medium are recited at a high level of generality (i.e., as a generic processor performing a generic computer function), such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). The claims are directed to an abstract idea.

Claims 2-5, 7, 9, 11-14, 16, 18, and 20 recite the additional limitation of "selecting…", which, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting a processor, a memory, a communication interface, or a non-transitory computer readable storage medium, nothing in the claim elements precludes the step from practically being performed in the human mind. Such a limitation falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind, including observation, evaluation, judgment, and opinion). The judicial exception is not integrated into a practical application: these claims recite no additional elements, and the processor, memory, communication interface, and non-transitory computer readable storage medium are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). The claims are directed to an abstract idea and do not include additional elements sufficient to amount to significantly more than the judicial exception.

Claims 6, 8, 15, and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Valpola, U.S. Patent No. 11,568,208: Disclosed is a computer-implemented method for estimating an uncertainty of a prediction generated by a machine learning system, the method including: receiving first data; training a first machine learning model component of the machine learning system with the received first data, where the first machine learning model component is trained to generate a prediction; generating an uncertainty estimate of the prediction; and training a second machine learning model component of the machine learning system with second data, where the second machine learning model component is trained to generate a calibrated uncertainty estimate of the prediction. Also disclosed is a corresponding system.

Cohen et al., U.S. Publication No. 2019/0236447: A method of generating a controller for a continuous process. The method includes receiving, from a storage memory, off-line stored values of one or more controlled variables and one or more manipulated variables of the continuous process over a plurality of time points. The off-line stored values are used to train a first neural network to operate as a predictor of the controlled variables.
The method then includes training a second neural network to operate as a controller of the continuous process, using the first neural network after it was trained as the predictor, and employing the second neural network as a controller of the continuous process.

Choi et al., U.S. Publication No. 2021/0117774: A method of machine learning model development includes building an autoencoder including an encoder trained to map an input into a latent representation, and a decoder trained to map the latent representation to a reconstruction of the input. The method includes building an artificial neural network classifier including the encoder and a classification layer partially trained to perform a classification in which the class to which the input belongs is predicted based on the latent representation. Neural network inversion is applied to the classification layer to find inverted latent representations within a decision boundary between classes in which the result of the classification is ambiguous, and inverted inputs are obtained from the inverted latent representations. Each inverted input is labeled with the class that is its ground truth, thereby producing added training data for the classification, and the classification layer is further trained using the added training data.

Deng et al., U.S. Publication No. 2022/0383109: A system for a machine learning architecture for time series data prediction.
The system may be configured to: maintain a data set representing a neural network having a plurality of weights; obtain time series data associated with a data query; generate, using the neural network and based on the time series data, a predicted value based on a sampled realization of the time series data and a normalizing flow model, the normalizing flow model based on a latent continuous-time stochastic process having a stationary marginal distribution and bounded variance; and generate a signal providing an indication of the predicted value associated with the data query.

Flores et al., "Data Augmentation for Short-Term Time Series Prediction with Deep Learning": This paper proposes a hybrid data augmentation technique for short-term time series prediction to overcome the underfitting problem in deep learning models based on recurrent neural networks such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). The proposed hybrid technique combines two basic data augmentation techniques generally used for time series classification: time-warping and jittering. Time-warping generates synthetic data between each pair of values in the time series, extending its length, while jittering makes the generated synthetic data non-linear. The technique is evaluated on three non-seasonal short-term time series of Perú (CO2 emissions per capita, renewable energy consumption, and Covid-19 positive cases), on the view that predicting non-seasonal time series is more difficult than predicting seasonal ones. The results show that regression models based on recurrent neural networks improve results by between 16.318% and 42.1426% on the selected time series with data augmentation.

Wen et al., "Time Series Data Augmentation for Deep Learning: A Survey": Deep learning has recently performed remarkably well on many time series analysis tasks.
The superior performance of deep neural networks relies heavily on a large amount of training data to avoid overfitting. However, the labeled data for many real-world time series applications may be limited, such as classification in medical time series and anomaly detection in AIOps. As an effective way to enhance the size and quality of the training data, data augmentation is crucial to the successful application of deep learning models on time series data. The paper systematically reviews different data augmentation methods for time series, proposes a taxonomy for the reviewed methods, provides a structured review highlighting their strengths and limitations, empirically compares the methods on tasks including time series classification, anomaly detection, and forecasting, and discusses five future directions to provide useful research guidance.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KRIS E. MACKES, whose telephone number is (571) 270-3554. The examiner can normally be reached Monday-Friday, 9:00-4:00 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kavita Stanley, can be reached at 571-272-8352. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center; unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/KRIS E MACKES/
Primary Examiner, Art Unit 2153
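The Flores et al. abstract cited in the office action describes two augmentation primitives relevant to the claimed invention: time-warping (inserting synthetic points between each pair of values) and jittering (perturbing values with noise). A minimal pure-Python sketch of those two operations follows; the linear-interpolation warp and Gaussian jitter below are illustrative choices, not the paper's exact formulation.

```python
import random

def time_warp(series, factor=2):
    """Insert linearly interpolated points between each pair of values,
    extending the series length (as the cited abstract describes)."""
    out = []
    for a, b in zip(series, series[1:]):
        out.append(a)
        for k in range(1, factor):
            out.append(a + (b - a) * k / factor)
    out.append(series[-1])
    return out

def jitter(series, sigma=0.03, seed=0):
    """Add small Gaussian noise to each point so the synthetic
    values are not purely linear interpolations."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in series]

series = [0.0, 1.0, 0.5, 1.5]
warped = time_warp(series)   # length grows from 4 to 7
augmented = jitter(warped)   # noisy synthetic variant of the warped series
```

Chaining the two, as the paper's hybrid technique does, yields a longer, slightly perturbed copy of the original series for use as additional training data.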

Prosecution Timeline

Jul 27, 2023
Application Filed
Mar 03, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602438: COOKIELESS DELIVERY OF PERSONALIZED CONTENT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12572501: INTEGRATED DIGITAL-ANALOG ARCHIVING SYSTEMS AND METHODS FOR DOCUMENT PRESERVATION (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572547: Systems and Methods for Deployment of Continuous Access Evaluation Protocol (CAEP) Hub Engine (granted Mar 10, 2026; 2y 5m to grant)
Patent 12563365: SYSTEMS AND METHODS FOR LOCALIZED INFORMATION PROVISION USING WIRELESS COMMUNICATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12541433: DATA BACKUP METHOD, ELECTRONIC DEVICE, DATA BACKUP SYSTEM, AND CHIP SYSTEM (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview (+10.5%): 86%
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 527 resolved cases by this examiner. Grant probability derived from career allow rate.
