DETAILED ACTION
Claims 1-20 are presented for examination.
This office action is in response to the applicant's submission filed 02-OCTOBER-2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 13-APRIL-2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
The amendment filed 02-OCTOBER-2025 in response to the non-final office action mailed DATE has been entered. Claims 1-20 remain pending in the application.
With regard to the non-final office action's rejections under 35 U.S.C. 103, the amendments to the claims necessitated a new consideration of the art. After this consideration, the examiner respectfully disagrees with the applicant's arguments that the art referenced in the previous office action does not teach the amended claim limitations. A new 103 rejection over the prior art has been provided.
Regarding the amended limitation wherein when the selection value at timestamp t is lower than a threshold value, the probabilistic forecast distribution prediction at timestamp t is excluded from the loss function computation, Murugesan teaches that, in order to determine time-steps associated with inaccurate data, the forecast selection system will determine if any of a particular set of values associated with a time-step is below an associated threshold (Paragraph 151). This value would be a selection value as it is used to select valid entries. Invalid entries are not considered in the prediction model (Paragraph 170), and therefore that timestamp's probabilistic forecast distribution prediction would be excluded from the loss function computation.
Claim Interpretation
The examiner interprets the equation in claim 10 to be equivalent to the following for the purposes of examination:
The system of claim 1 wherein the neural network comprises a recurrent neural network, wherein the outputs of the neural network comprise:
A mean value of the probabilistic forecast distribution prediction at timestamp t+1 for the ith sample
A variance of the probabilistic forecast distribution prediction at timestamp t+1 for the ith sample
A selection value associated with the probabilistic forecast distribution prediction at timestamp t+1 for the ith sample
A hidden state vector at timestamp t+1 for the ith sample
And furthermore, wherein the inputs of the neural network comprise:
A mean value of the probabilistic forecast distribution prediction at timestamp t for the ith sample
A hidden state vector at timestamp t for the ith sample
One or more learnable parameters for the recurrent neural network.
The examiner believes that no specific operation is denoted in the equation of claim 10, and that the equation is intended to describe a functional relationship between inputs and outputs of the neural network, as indicated by the RNN being used as a function.
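For purposes of illustration only, the interpreted functional relationship may be sketched as a single recurrent step. Every identifier and internal computation below is hypothetical; the claim, as interpreted, specifies only the relationship between the listed inputs and outputs, not any particular internal operation:

```python
import math

def rnn_step(mean_t, hidden_t, params):
    # Hypothetical sketch of the interpreted relationship:
    #   (mean_{t+1}, var_{t+1}, sel_{t+1}, h_{t+1}) = RNN(mean_t, h_t; params)
    # `params` stands in for the one or more learnable parameters.
    w_h, w_x, w_out = params
    # Hidden state vector at timestamp t+1 for the ith sample.
    hidden_next = [math.tanh(w_h * h + w_x * mean_t) for h in hidden_t]
    s = sum(hidden_next)
    # Mean and variance of the probabilistic forecast distribution
    # prediction at timestamp t+1 (variance kept positive).
    mean_next = w_out * s
    var_next = math.exp(w_out * s) * 0.1
    # Selection value in (0, 1) associated with the prediction at t+1.
    sel_next = 1.0 / (1.0 + math.exp(-s))
    return mean_next, var_next, sel_next, hidden_next
```

Because the hidden state produced at timestamp t+1 is fed back in as an input at the next step, the sketch also reflects the iterative use of the RNN as a function.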
Furthermore, this same interpretation would apply to claim 19 as well.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-9, 11, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (Pub. No. US 20210056386 A1, filed August 23, 2019, hereinafter Murugesan) in view of Federspiel et al. (Pub. No. US 20210073636 A1, filed September 8, 2020, hereinafter Federspiel).
Regarding claim 1:
Claim 1 recites:
A computer-implemented system for training a neural network for probabilistic forecasting, the system comprising: at least one processor; memory in communication with the at least one processor; instructions stored in the memory, which when executed at the at least one processor causes the system to: maintain a data set representing a neural network having a plurality of weights; receive input data comprising a plurality of time series data sets; generate, using the neural network and based on the input data, a probabilistic forecast distribution prediction at timestamp t and a selection value associated with the probabilistic forecast distribution prediction at timestamp t; compute a loss function based on the selection value, wherein when the selection value at timestamp t is lower than a threshold value, the probabilistic forecast distribution prediction at timestamp t is excluded from the loss function computation; and update at least one of the plurality of weights of the neural network based on the loss function.
Regarding the limitation receive input data comprising a plurality of time series data sets
Murugesan teaches a forecast selection system that receives a prediction data set, wherein the prediction data set consists of values that are associated with a time-step (Paragraph 174). Data associated with a time-step is time series data.
Regarding the limitation generate, using the neural network and based on the input data, a probabilistic forecast distribution prediction at timestamp t and a selection value associated with the probabilistic forecast distribution prediction at timestamp t:
Murugesan teaches that the forecast prediction system predicts an expected energy output for one or more time-steps into the future (Paragraph 176). The first time-step predicted would be the probabilistic forecast distribution prediction at timestamp t. Furthermore, the forecast selection system may also identify a value of an indicator associated with the accuracy of data at each time-step (Paragraph 156). This would be a selection value as it describes the accuracy of the prediction.
Regarding the limitation compute a loss function based on the selection value, wherein when the selection value at timestamp t is lower than a threshold value, the probabilistic forecast distribution prediction at timestamp t is excluded from the loss function computation; and update at least one of the plurality of weights of the neural network based on the loss function:
Murugesan teaches an inference model that compares the generated probabilities with the actual probability values and adjusts its weights based on differences between the two sets of data (Paragraph 138). This describes the process of computing a loss function, which is an identification of differences between predicted and actual results, wherein the weights are updated based on that loss.
Furthermore, before the computation of the loss function, Murugesan teaches that, in order to determine time-steps associated with inaccurate data, the forecast selection system will determine if any of a particular set of values associated with a time-step is below an associated threshold (Paragraph 151). This value would be a selection value as it is used to select valid entries. Invalid entries are not considered in the prediction model (Paragraph 170), and therefore that timestamp's probabilistic forecast distribution prediction would be excluded from the loss function computation.
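For purposes of illustration only, the claimed exclusion behavior may be sketched as follows. The Gaussian negative log-likelihood and all identifiers are hypothetical; neither the claim as mapped above nor the cited art specifies a particular loss function, only that below-threshold predictions do not contribute to it:

```python
import math

def gaussian_nll(mean, var, actual):
    # Hypothetical per-timestamp loss: negative log-likelihood of
    # `actual` under a normal distribution N(mean, var).
    return 0.5 * (math.log(2 * math.pi * var) + (actual - mean) ** 2 / var)

def masked_loss(predictions, actuals, threshold):
    # `predictions` holds one (mean, variance, selection_value) tuple per
    # timestamp t. When the selection value at timestamp t is lower than
    # `threshold`, that timestamp's prediction is excluded from the loss
    # computation entirely, mirroring the claimed limitation.
    total, kept = 0.0, 0
    for (mean, var, sel), actual in zip(predictions, actuals):
        if sel < threshold:
            continue  # excluded: contributes nothing to the loss
        total += gaussian_nll(mean, var, actual)
        kept += 1
    return total / kept if kept else 0.0
```

A weight update based on this loss (e.g., by gradient descent) would then be unaffected by the excluded timestamps, which is the examiner's reading of exclusion "before the processing" still excluding those values from the loss function computation.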
However, Murugesan does not teach the limitation maintain a data set representing a neural network having a plurality of weights:
Federspiel, in the same field of endeavor of reinforcement learning, teaches that a computer system may train a machine learning model such as a neural network (Paragraph 31), and furthermore that these functions may be implemented as software code that may be stored as instructions (Paragraph 62). Therefore, the neural network with its plurality of weights would be maintained as data.
Murugesan, Federspiel, and the present application are all analogous art because they are in the same field of endeavor of reinforcement learning.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Murugesan and the teachings of Federspiel. This would have provided the advantage of a way to disseminate the system for widespread use (Federspiel, Paragraph 63).
Regarding claim 3, which depends upon claim 1:
Claim 3 recites:
The system of claim 1, wherein the instructions when executed at the at least one processor causes the system to: when the selection value is higher than or equal to a threshold value, store the probabilistic forecast distribution prediction at timestamp t as a valid prediction.
Murugesan in view of Federspiel teaches the system of claim 1 upon which claim 3 depends. Furthermore, regarding the limitation of claim 3:
Murugesan teaches the use of indicators of accuracy of data at each time-step, wherein if the indicator indicates that the data is accurate, the prediction value for the time-step is kept (Paragraph 156). The indicator would be the selection value, wherein an indication of accuracy corresponds to meeting the threshold value, as it prompts the system to keep the prediction as valid.
Regarding claim 4, which depends upon claim 3:
Claim 4 recites:
The system of claim 3, wherein the instructions when executed at the at least one processor causes the system to: process the stored probabilistic forecast distribution prediction at timestamp t to generate a predicted electricity consumption report.
Murugesan in view of Federspiel teaches the system of claim 3 upon which claim 4 depends. Furthermore, regarding the limitation of claim 4:
Murugesan teaches the forecast selection system provides expected energy outputs to users in a graphical interface (Paragraph 147). An expected energy output being displayed would be a predicted electricity consumption report.
Regarding claim 5, which depends upon claim 3:
Claim 5 recites:
The system of claim 3, wherein the instructions when executed at the at least one processor causes the system to: process the stored probabilistic forecast distribution prediction at timestamp t to generate a future financial forecasting statement.
Murugesan in view of Federspiel teaches the system of claim 3 upon which claim 5 depends. However, regarding the limitation of claim 5:
Federspiel teaches displaying changes to the cost, for example modifications to the cost over time (Paragraph 41). This would be an example of a future financial forecasting statement as it forecasts the financial impact over time in a user-readable manner.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Murugesan and the teachings of Federspiel. This would have provided the advantage of communicating information to the user (Federspiel, Paragraph 39).
Regarding claim 6, which depends upon claim 1:
Claim 6 recites:
The system of claim 1, wherein the instructions when executed at the at least one processor causes the system to: when the selection value is lower than a threshold value, reject the probabilistic forecast distribution prediction at timestamp t.
Murugesan in view of Federspiel teaches the system of claim 1 upon which claim 6 depends. Furthermore, regarding the limitation of claim 6:
Murugesan teaches replacing data that has an indicator of inaccuracy with a predetermined value (Paragraph 156). The indicator of inaccuracy would be a selection value lower than a threshold value, and the replacement of the data would be a rejection.
Regarding claim 7, which depends upon claim 6:
Claim 7 recites:
The system of claim 6, wherein the instructions when executed at the at least one processor causes the system to: generate a signal for causing, at a display device, a display of a graphical user interface showing that the probabilistic forecast distribution prediction at timestamp t has been rejected.
Murugesan in view of Federspiel teaches the system of claim 6 upon which claim 7 depends. However, regarding the limitation of claim 7:
Federspiel teaches that a critical control error may be an indication that one or more sensors, i.e., data sources that may be faulty, have exceeded a threshold (Paragraph 16), and furthermore that critical control errors may be displayed to the user (Paragraph 40). This would be a display that shows data has been rejected. Furthermore, Murugesan has previously taught a probabilistic forecast distribution prediction that may be combined with Federspiel.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Murugesan and the teachings of Federspiel. This would have provided the advantage of communicating information to the user (Federspiel, Paragraph 39).
Regarding claim 8, which depends upon claim 7:
Claim 8 recites:
The system of claim 7, wherein the instructions when executed at the at least one processor causes the system to: generate a second signal for causing, at the display device, a display of a graphical user interface showing the threshold value.
Murugesan in view of Federspiel teaches the system of claim 7 upon which claim 8 depends. However, regarding the limitation of claim 8:
Federspiel teaches that a predetermined tolerance threshold may be displayed to the user via a visual representation (Paragraph 55). This would be analogous to a display of a graphical user interface showing the threshold value.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Murugesan and the teachings of Federspiel. This would have provided the advantage of communicating information to the user (Federspiel, Paragraph 39).
Regarding claim 9, which depends upon claim 8:
Claim 9 recites:
The system of claim 8, wherein the instructions when executed at the at least one processor causes the system to: generate a third signal for causing, at the display device, a display of a graphical user interface showing a graphical user element for modifying the threshold value.
Murugesan in view of Federspiel teaches the system of claim 8 upon which claim 9 depends. However, regarding the limitation of claim 9:
Federspiel teaches a recommendation given to a user to change a tolerance threshold range via a graphical user interface (Paragraph 56). This change to the threshold range would be modifying the threshold value.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Murugesan and the teachings of Federspiel. This would have provided the advantage of communicating information to the user (Federspiel, Paragraph 39).
Claims 11-17 recite a method that parallels the system of claims 1-7 respectively. Therefore, the analysis discussed above with respect to claims 1-7 also applies to claims 11-17 respectively. Accordingly, claims 11-17 are rejected based on substantially the same rationale as set forth above with respect to claims 1-7 respectively.
Regarding claim 18, which depends upon claim 17:
Claim 18 recites:
The method of claim 17, further comprising: generating a second signal for causing, at the display device, a display of a graphical user interface showing the threshold value and a graphical user element for modifying the threshold value.
Murugesan in view of Federspiel teaches the method of claim 17 upon which claim 18 depends. However, regarding the limitation a display of a graphical user interface showing the threshold value:
Federspiel teaches that a predetermined tolerance threshold may be displayed to the user via a visual representation (Paragraph 55). This would be analogous to a display of a graphical user interface showing the threshold value.
Furthermore, regarding the limitation a graphical user element for modifying the threshold value:
Federspiel teaches a recommendation given to a user to change a tolerance threshold range via a graphical user interface (Paragraph 56). This change to the threshold range would be modifying the threshold value.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Murugesan and the teachings of Federspiel. This would have provided the advantage of communicating information to the user (Federspiel, Paragraph 39).
Claim 20 recites a non-transitory computer readable storage memory that parallels the system of claim 1. Therefore, the analysis discussed above with respect to claim 1 also applies to claim 20. Accordingly, claim 20 is rejected based on substantially the same rationale as set forth above with respect to claim 1.
Claims 2, 10, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Murugesan in view of Federspiel, further in view of Anthony et al. (Pub. No. US 20200241545 A1, filed January 30, 2020, hereinafter Anthony).
Regarding claim 2, which depends upon claim 1:
Claim 2 recites:
The system of claim 1, wherein the probabilistic forecast distribution prediction at timestamp t comprises a mean and a variance of the probabilistic forecast distribution prediction.
Murugesan in view of Federspiel teaches the system of claim 1 upon which claim 2 depends. However, neither Murugesan nor Federspiel teaches the limitation of claim 2:
Anthony, in the same field of endeavor of reinforcement learning, teaches a model that may include as outputs a mean likelihood that an individual may act in a particular way, as well as the variance of that result (Paragraph 84). This would comprise a mean and a variance of the probabilistic forecast distribution prediction.
Anthony and the present application are analogous art because they are in the same field of endeavor of reinforcement learning.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Murugesan in view of Federspiel and the teachings of Anthony. This would have provided the advantage of capturing uncertainty in the prediction (Anthony, Paragraph 84).
Regarding claim 10, which depends upon claim 1:
Claim 10 recites:
The system of claim 1 wherein the neural network comprises a recurrent neural network, wherein the outputs of the neural network comprise:
A mean value of the probabilistic forecast distribution prediction at timestamp t+1 for the ith sample
A variance of the probabilistic forecast distribution prediction at timestamp t+1 for the ith sample
A selection value associated with the probabilistic forecast distribution prediction at timestamp t+1 for the ith sample
A hidden state vector at timestamp t+1 for the ith sample
And furthermore, wherein the inputs of the neural network comprise:
A mean value of the probabilistic forecast distribution prediction at timestamp t for the ith sample
A hidden state vector at timestamp t for the ith sample
One or more learnable parameters for the recurrent neural network.
Murugesan in view of Federspiel teaches the system of claim 1 upon which claim 10 depends. Regarding the limitation wherein the inputs of the neural network comprise (a) a mean value of the probabilistic forecast distribution prediction at timestamp t for the ith sample, (b) a hidden state vector at timestamp t for the ith sample, and (c) one or more learnable parameters for the recurrent neural network:
This limitation describes an iterative process wherein further future actions can be predicted from the currently predicted future action. Murugesan teaches that its prediction steps may be configured to repeat for one or more time-steps in the future (Paragraph 176), wherein the additional time-steps would be the further future iterative predictions.
However, neither Murugesan nor Federspiel teaches a recurrent neural network wherein the outputs of the neural network comprise (a) a mean value of the probabilistic forecast distribution prediction at timestamp t+1 for the ith sample, (b) a variance of the probabilistic forecast distribution prediction at timestamp t+1 for the ith sample, and (c) a selection value associated with the probabilistic forecast distribution prediction at timestamp t+1 for the ith sample:
Anthony teaches that a model’s output may include a mean likelihood of an individual taking a particular action, as well as the variance of that result, wherein the selection is the specific likelihood of each action (Paragraph 84).
Furthermore, regarding the limitation wherein the outputs comprise a hidden state vector at timestamp t+1 for the ith sample:
Anthony teaches that a model’s outputs may also include a hidden context represented as a vector, wherein the context comprises additional factors that might influence an individual’s likely next action (Paragraph 85). This could be a hidden state vector as it provides information that predicts the next event or time-step.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Murugesan in view of Federspiel and the teachings of Anthony. This would have provided the advantage of better predictions about future actions of individuals by using better context for possible actions that might be intuitive to a human observer (Anthony, Paragraph 6).
Response to Arguments
Applicant’s arguments filed 02-OCTOBER-2025 have been fully considered, but the examiner believes that not all are fully persuasive.
Regarding the applicant’s remarks on the non-final office action’s 103 rejection of the claims, the applicant argues that Murugesan in view of Federspiel does not teach the amended limitations of the independent claims. As such, the applicant argues that all claims dependent on the above would additionally not be obvious under 103. However, the examiner believes that Murugesan in view of Federspiel does teach the amended limitations and respectfully requests applicant’s consideration of the following:
Regarding the amended limitation wherein when the selection value at timestamp t is lower than a threshold value, the probabilistic forecast distribution prediction at timestamp t is excluded from the loss function computation, Murugesan teaches that, in order to determine time-steps associated with inaccurate data, the forecast selection system will determine if any of a particular set of values associated with a time-step is below an associated threshold (Paragraph 151). This value would be a selection value as it is used to select valid entries. Invalid entries are not considered in the prediction model (Paragraph 170), and therefore that timestamp's probabilistic forecast distribution prediction would be excluded from the loss function computation. While the applicant argues that this exclusion occurs before the processing rather than at the point of the loss function computation, being excluded from the processing would still exclude these values from the loss function computation.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRIA JOSEPHINE MILLER whose telephone number is (703)756-5684. The examiner can normally be reached Monday-Thursday, 7:30 am - 5:00 pm, and every other Friday, 7:30 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.J.M./Examiner, Art Unit 2142 /HAIMEI JIANG/Primary Examiner, Art Unit 2142