Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 9-13, and 15-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li (Nonlinear Time Series Prediction Using Chaotic Neural Networks, Li Ke-Ping and Chen Tian-Lun 2001 Commun. Theor. Phys. 35 759).
Regarding claim 1, Li teaches a method for training a deep learning-based time series forecasting model (§2), the method comprising:
storing, in a memory, (i) a training dataset including a first plurality of time series data and (ii) a testing dataset including a second plurality of time series data (§3, “We select 500 training samples and 500 test samples from time series which is based on Mackey-Glass equation…”);
training, with a processor, the time series forecasting model using the training dataset, the time series forecasting model having a plurality of parameters that are learned during the training (§2, “The past values of time series are taken as inputs of neural network, and its future values are taken as outputs of neural network, we give a nonlinear mapping between them”, §3, “First, training the network by 500 training samples…”); and
periodically during the training, with the processor, evaluating a performance of the time series forecasting model against the testing dataset, wherein the training proceeds for a sufficient number of training epochs that, after initially improving and reaching a local maximum performance, the performance of the time series forecasting model is allowed to deteriorate and then improve again to reach a performance that exceeds the local maximum performance (Figure 3(a): the descent of the energy function of §2, Equation 2, serves as a periodic check of performance; the run spans enough epochs to show that the first energy trough is only a local minimum, after which the energy climbs again, i.e., performance deteriorates, before finally settling near 0).
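For illustration of the claim 1 mapping above only (none of the following is Li's code), a minimal sketch of the stored datasets and the long-horizon training loop. The Mackey-Glass generation follows the delay equation named in Li's §3, but the integration step, the window length of 4 past values, and all other parameter choices here are assumptions:

```python
import numpy as np

def mackey_glass(n_points, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Forward-Euler integration of the Mackey-Glass delay equation
    dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t)."""
    delay = int(tau / dt)
    x = np.full(n_points + delay, x0)
    for t in range(delay, n_points + delay - 1):
        xd = x[t - delay]
        x[t + 1] = x[t] + dt * (beta * xd / (1.0 + xd ** n) - gamma * x[t])
    return x[delay:]

series = mackey_glass(1100)
window = 4  # number of past values per sample (an assumption, not Li's value)
samples = np.array([series[i:i + window] for i in range(len(series) - window)])
targets = series[window:]
train_x, train_y = samples[:500], targets[:500]        # 500 training samples, per Li §3
test_x, test_y = samples[500:1000], targets[500:1000]  # 500 test samples, per Li §3
```

And a loop in which evaluation is periodic and training deliberately continues past the first performance peak (`train_one_epoch` and `evaluate` are hypothetical callables supplied by the caller):

```python
def train_past_local_max(model, train_data, test_data, train_one_epoch,
                         evaluate, max_epochs=2000, eval_every=10):
    """Run enough epochs that performance can reach a local maximum,
    deteriorate, and later exceed that local maximum."""
    history, local_max = [], float("-inf")
    for epoch in range(max_epochs):
        train_one_epoch(model, train_data)     # parameters are learned here
        if epoch % eval_every == 0:            # periodic evaluation on the testing dataset
            perf = evaluate(model, test_data)  # e.g., negative testing loss
            history.append(perf)
            local_max = max(local_max, perf)
            # No exit condition here: training continues even while
            # perf < local_max, allowing deterioration and later recovery.
    return history
```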
Regarding claim 2, Li teaches all of the limitations of claim 1, the training including, for each iteration of each of the training epochs:
determining at least one output of the time series forecasting model by providing at least one respective training sample from the training dataset as input to the time series forecasting model (§2, “The past values of time series are taken as inputs of neural network, and its future values are taken as outputs of neural network, we give a nonlinear mapping between them”);
determining a training loss based on the at least one output; and refining the plurality of parameters of the time series forecasting model based on the training loss (§3, “First, training the network by 500 training samples…”).
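As a generic sketch of one such iteration (a linear forecaster over the windowed samples above stands in for Li's neural network; the model choice and learning rate are assumptions):

```python
import numpy as np

def training_iteration(w, x_batch, y_batch, lr=0.01):
    """One iteration: compute model output, derive a training loss, and
    refine the parameters w based on that loss."""
    y_hat = x_batch @ w                        # output for the training samples
    err = y_hat - y_batch
    loss = float(np.mean(err ** 2))            # training loss based on the output
    grad = 2.0 * x_batch.T @ err / len(y_batch)
    return w - lr * grad, loss                 # refined parameters
```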
Regarding claim 9, Li teaches all of the limitations of claim 1, further comprising:
setting a number of parameters in the plurality of parameters sufficiently high so as to enable the performance of the time series forecasting model to deteriorate and then improve again after reaching the local maximum performance (Figure 3(a): the descent of the energy function of §2, Equation 2, serves as a periodic check of performance; the run spans enough epochs to show that the first energy trough is only a local minimum, after which the energy climbs again, i.e., performance deteriorates, before finally settling near 0 – because the model performs the claimed function with its plurality of parameters, its number of parameters is sufficiently high).
Regarding claim 10, Li teaches all of the limitations of claim 1, further comprising:
setting a maximum number of training epochs of the training sufficiently high so as to enable the performance of the time series forecasting model to deteriorate and then improve again after reaching the local maximum performance (Figure 3(a): the descent of the energy function of §2, Equation 2, serves as a periodic check of performance; the run spans enough epochs to show that the first energy trough is only a local minimum, after which the energy climbs again, i.e., performance deteriorates, before finally settling near 0 – because the claimed function is achieved, a sufficiently high maximum number of training epochs is present).
Regarding claim 11, Li teaches all of the limitations of claim 1, wherein
the training proceeds for the sufficient number of training epochs that, after initially improving and reaching the local maximum performance, the performance of the time series forecasting model is allowed to deteriorate by at least a predetermined amount and then improve again to reach a performance that exceeds the local maximum performance (Figure 3(a): the descent of the energy function of §2, Equation 2, serves as a periodic check of performance; the run spans enough epochs to show that the first energy trough is only a local minimum, after which the energy climbs again, i.e., performance deteriorates, before finally settling near 0 – a deterioration of at least zero is allowed, ensuring that the energy deteriorates less with each successive cycle).
Regarding claim 12, Li teaches all of the limitations of claim 1, wherein
the time series forecasting model is configured to receive past values of a respective time series as input and determine predicted future values of the respective time series as output (§2, “The past values of time series are taken as inputs of neural network, and its future values are taken as outputs of neural network, we give a nonlinear mapping between them”).
Regarding claim 13, Li teaches all of the limitations of claim 1, wherein
the time series forecasting model is a neural network model (§2).
Regarding claim 15, Li teaches all of the limitations of claim 1, the evaluating the performance of the time series forecasting model further comprising: determining at least one further output of the time series forecasting model by providing at least one respective testing sample from the testing dataset as input to the time series forecasting model; and determining a testing loss based on the at least one further output (Table 1, Figure 6).
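A matching evaluation sketch (same hypothetical linear stand-in as in the claim 2 sketch above): feed samples from the testing dataset through the model and compute a testing loss:

```python
import numpy as np

def testing_loss(w, test_x, test_y):
    """Determine further outputs from testing samples and a testing loss."""
    y_hat = test_x @ w                         # further outputs of the model
    return float(np.mean((y_hat - test_y) ** 2))
```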
Regarding claims 16-17, Li, in performing the method of claim 1, necessarily provides the structures recited in claims 16-17 that carry out the method of claim 1.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Li (Nonlinear Time Series Prediction Using Chaotic Neural Networks, Li Ke-Ping and Chen Tian-Lun 2001 Commun. Theor. Phys. 35 759) in view of Perez (US20180232152A1).
Regarding claim 3, Li teaches all of the limitations of claim 2, the refining in each iteration of each of the training epochs further comprising:
refining the plurality of parameters of the time series forecasting model based on the training loss and a learning rate (§2, Equation 4, η).
Li does not teach wherein the learning rate decays from an initial learning rate as a function of a current training epoch until a predefined minimum learning rate is reached and stays constant at the predefined minimum learning rate after the predefined minimum learning rate is reached.
Perez teaches wherein the learning rate decays from an initial learning rate as a function of a current training epoch until a predefined minimum learning rate is reached and stays constant at the predefined minimum learning rate after the predefined minimum learning rate is reached (¶35, “A learning rate η was initially assigned a value of 0.0005 with exponential decay applied every 25 epochs by η/2 until 100 epochs were reached.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Li such that the learning rate decays from an initial learning rate as a function of a current training epoch until a predefined minimum learning rate is reached and stays constant at the predefined minimum learning rate after the predefined minimum learning rate is reached in order to take large parameter updates early in training while avoiding oscillation around a minimum late in training, thereby providing accurate training.
Regarding claim 5, Li as modified teaches all of the limitations of claim 3, wherein the learning rate decays exponentially from the initial learning rate as a function of the current training epoch until the predefined minimum learning rate is reached (see rejection of claim 3, “A learning rate η was initially assigned a value of 0.0005 with exponential decay applied every 25 epochs by η/2 until 100 epochs were reached.”).
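A sketch of the schedule as the rejections of claims 3 and 5 read the quoted Perez passage; treating the value reached at epoch 100 as the predefined minimum that is then held constant is this sketch's interpretation:

```python
def learning_rate(epoch, initial=0.0005, halve_every=25, floor_epoch=100):
    """Step-exponential decay per the quoted Perez passage: eta is halved
    every 25 epochs until 100 epochs are reached, then held constant at the
    resulting predefined minimum."""
    halvings = min(epoch, floor_epoch) // halve_every
    return initial / (2 ** halvings)
```

For example, learning_rate(0) returns 0.0005, and learning_rate(100) and every later epoch return 3.125e-05, the constant minimum.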
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Li (Nonlinear Time Series Prediction Using Chaotic Neural Networks, Li Ke-Ping and Chen Tian-Lun 2001 Commun. Theor. Phys. 35 759) in view of Behera (Behera, Laxmidhar, Swagat Kumar, and Awhan Patnaik. "On adaptive learning rate that guarantees convergence in feedforward networks." IEEE Transactions on Neural Networks 17.5 (2006): 1116-1125).
Regarding claim 4, Li teaches all of the limitations of claim 3, but does not teach setting the predefined minimum learning rate sufficiently high so as to enable the performance of the time series forecasting model to deteriorate and then improve again after reaching the local maximum performance.
Behera teaches setting the predefined minimum learning rate sufficiently high so as to enable the performance of the time series forecasting model to deteriorate and then improve again after reaching the local maximum performance (“In BP algorithm, the value of learning rate is taken to be 0.95”, “Since BP algorithm is very popular among users of feedforward networks, readers will benefit from knowing that a proper adaptive learning rate can be found that may transform a locally convergent BP algorithm into a globally convergent one.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to set the learning rate in Li sufficiently high so as to enable the performance of the time series forecasting model to deteriorate and then improve again after reaching the local maximum performance in order to ensure global convergence.
Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Li (Nonlinear Time Series Prediction Using Chaotic Neural Networks, Li Ke-Ping and Chen Tian-Lun 2001 Commun. Theor. Phys. 35 759) in view of Wang (US20170228645A1).
Regarding claim 6, Li teaches all of the limitations of claim 2, but does not teach the method further comprising: ending the training in response to the performance of the time series forecasting model not improving for a predetermined number of training epochs.
Wang teaches ending the training in response to the performance of the time series forecasting model not improving for a predetermined number of training epochs (¶100 “In practice, we approximate the solution to the new subproblem, Eq. 17, with the early stopping. This avoids the huge searching time wasted on hovering around the optimal solution. A few iterations, 5 for example, are good enough to achieve the acceleration effects. Therefore, we recommend approximating the solution by the early stopping.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Li to include ending the training in response to the performance of the time series forecasting model not improving for a predetermined number of training epochs in order to avoid wasting computational resources.
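For illustration of this reading only (not Wang's code; the patience mechanism is the standard one, and the patience value is an assumption), an early-stopping test over the performance history recorded by the periodic evaluation:

```python
def should_stop(perf_history, patience=50):
    """End training when performance has not improved for `patience`
    consecutive evaluations; a large patience (see claim 7) leaves room for
    performance to deteriorate and recover before stopping triggers."""
    if len(perf_history) <= patience:
        return False
    best_before = max(perf_history[:-patience])
    best_recent = max(perf_history[-patience:])
    return best_recent <= best_before
```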
Regarding claim 7, Li as modified teaches all of the limitations of claim 6, further comprising: setting the predetermined number of training epochs sufficiently high so as to enable the performance of the time series forecasting model to deteriorate and then improve again after reaching the local maximum performance (Figure 3(a): the descent of the energy function of §2, Equation 2, serves as a periodic check of performance; the run spans enough epochs to show that the first energy trough is only a local minimum, after which the energy climbs again, i.e., performance deteriorates, before finally settling near 0 – because the claimed function is achieved, a sufficiently high predetermined number of training epochs is present).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Li (Nonlinear Time Series Prediction Using Chaotic Neural Networks, Li Ke-Ping and Chen Tian-Lun 2001 Commun. Theor. Phys. 35 759) in view of Zhou (Zhou, Tian, et al. "FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting." International Conference on Machine Learning. PMLR, 2022).
Regarding claim 14, Li teaches all of the limitations of claim 13, but does not teach the time series forecasting model has a Transformer-based architecture.
Zhou teaches wherein the time series forecasting model has a Transformer-based architecture (Abstract, “Although Transformer-based methods have significantly improved state-of-the-art results for long-term series forecasting, they are not only computationally expensive but more importantly, are unable to capture the global view of time series (e.g. overall trend). To address these problems, we propose to combine Transformer with the seasonal-trend decomposition method, in which the decomposition method captures the global profile of time series while Transformers capture more detailed structures”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize a Transformer-based architecture in Li in order to improve results in long-term series forecasting.
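For illustration, a minimal PyTorch Transformer-based forecaster; this is a generic encoder sketch, not Zhou's FEDformer, which adds frequency-enhanced blocks and seasonal-trend decomposition on top of a Transformer backbone (positional encoding is omitted for brevity, and all dimensions are assumptions):

```python
import torch
import torch.nn as nn

class TransformerForecaster(nn.Module):
    """Past values in, predicted future values out, via a Transformer encoder."""
    def __init__(self, d_model=64, nhead=4, num_layers=2, horizon=1):
        super().__init__()
        self.embed = nn.Linear(1, d_model)       # scalar series values -> model width
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, horizon)  # forecast of future values

    def forward(self, past):                     # past: (batch, lookback, 1)
        h = self.encoder(self.embed(past))
        return self.head(h[:, -1])               # predict from the last position
```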
Allowable Subject Matter
Claim 8 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 8, the claim ties the deterioration and subsequent improvement of the performance to early stopping. The prior art recognizes escaping from local minima, but does not recognize ending training in response to non-improvement only after the performance is allowed to deteriorate and then improve.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US20150127594A1 describes exponential learning rate decay.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCHYLER S SANKS whose telephone number is (571)272-6125. The examiner can normally be reached 06:30 - 15:30 Central Time, M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SCHYLER S SANKS/Primary Examiner, Art Unit 2129