Prosecution Insights
Last updated: April 19, 2026
Application No. 17/862,765

Extending Forecasting Models for Forecast/Evaluation Granularity Mismatch

Final Rejection: §101, §103, §112
Filed: Jul 12, 2022
Examiner: AHMED, SYED RAYHAN
Art Unit: 2126
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 71% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 4y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (5 granted / 7 resolved; +16.4% vs TC avg), above average
Interview Lift: +50.0% higher allowance among resolved cases with an interview vs without
Typical Timeline: 4y 4m average prosecution; 32 applications currently pending
Career History: 39 total applications across all art units

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)

TC averages are Tech Center average estimates; figures are based on career data from 7 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

This Office Action is sent in response to the Applicant's Communication received on 09/18/2025 for application number 17/862,765. The Office hereby acknowledges receipt of the following items, which have been placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, IDS, and Claims. Claims 1 and 3-20 are pending. Claim 2 is canceled. Claims 1, 3, 6, 10-12, and 14-20 are amended.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

35 USC § 112

Applicant notes that claim 3 was rejected under 35 U.S.C. § 112(b) as being indefinite due to an antecedent basis issue, states that claim 3 has been amended to address that issue, and respectfully requests withdrawal of the § 112 rejection. Examiner finds the Applicant's argument persuasive. The 35 USC 112 rejection has been withdrawn.

35 USC § 101

At the bottom of page 7 of the remarks, Applicant argues that independent claim 1 has been amended to include that data values for performing the forecast at the target level of granularity are sparse, that data values for performing the forecast at the aggregated level of granularity are not sparse, and that determining a distribution scheme for distributing the aggregated forecast result to the target level of granularity is based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics at the target level of granularity. Applicant contends that at least these features provide a practical application of extending forecasting models to levels of granularity where data is sparse while maintaining accuracy and reducing processing costs. See paragraphs [0002], [0039], [0052] of the specification. Examiner respectfully disagrees.
The limitation of "receiving, with one or more processors, a target evaluation metric for performing a forecast, the target evaluation metric comprising a target level of granularity, wherein data values for performing the forecast at the target level of granularity are sparse" is mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity (MPEP 2106.05(g)).

The limitation of "aggregating… the target level of granularity to an aggregated level of granularity compared to the target level of granularity, wherein the data values at the aggregated level of granularity are not sparse" is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process. The limitation of "determining… a distribution scheme for distributing the aggregated forecast result to the target level of granularity based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics at the target level of granularity" is likewise an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.

In regard to the Applicant's cited paragraphs of the instant application, paragraphs [0002], [0039], [0052] disclose explicit steps of "a distribution engine 114 for determining a distribution scheme," "the hyperparameter tuning tool can decide a most promising next hyperparameter trial to run based on results of previous hyperparameter trials," "the hyperparameter tuning tool can stop after a predetermined amount of time and select the best result, such as a result with the highest score, from within that predetermined amount of time," and "The hyperparameter tuning would not require retraining the forecast model," which are not reflected in the claim.
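The specification passages quoted above describe a trial-selection loop: choose a promising next hyperparameter trial based on results of previous trials, stop after a fixed budget, and keep the highest-scoring result, all without retraining the forecast model. A minimal sketch of such a loop (the function names, the perturbation strategy, and the budget are illustrative assumptions, not the Applicant's implementation):

```python
import random

def mutate(props):
    """Perturb one proportion and renormalize so the vector sums to 1.
    (Illustrative trial-proposal strategy, not from the specification.)"""
    i = random.randrange(len(props))
    out = list(props)
    out[i] = max(out[i] + random.uniform(-0.1, 0.1), 1e-6)
    total = sum(out)
    return [p / total for p in out]

def tune_distribution_scheme(initial, score_fn, max_trials=20, seed=0):
    """Sketch of the quoted tuning loop: derive the next trial from the
    best previous result, stop after a fixed budget, and return the result
    with the highest score. The forecast model is never retrained here."""
    random.seed(seed)
    best, best_score = initial, score_fn(initial)
    for _ in range(max_trials):
        trial = mutate(best)          # next trial chosen from previous results
        score = score_fn(trial)
        if score > best_score:
            best, best_score = trial, score
    return best
```

Here `score_fn` would, for example, return the negative error of the disaggregated forecast at the target level of granularity on a validation set.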
Furthermore, with respect to Applicant's argument that "utilizing hyperparameter tuning or a grid search can be accomplished with minimal overhead, since it would not require retraining forecasting models at various levels of granularity but would still retain accuracy at target levels of granularity, particularly where data for forecasting is sparse": if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement; that is, the claim must include the components or steps of the invention that provide the improvement described in the specification. See MPEP 2106.04(d)(1). As discussed above, such disclosed inventive steps are not explicitly reflected in the claim. Therefore, the 35 USC 101 rejection is maintained.

35 USC § 103

In paragraph 1 of page 9 of the remarks, Applicant argues that the cited references fail to teach the amended limitations of claim 1. Examiner respectfully disagrees.

In regard to the newly amended limitations, Spiliotis teaches "the target level of granularity" in sect 3.1: "list item 3, bottom series of the hierarchy." Zhang teaches "one or more processors" in para 0005: "the program instruction executable by a processor to cause the processor to perform a method." The amended limitation "wherein data values for performing forecast at level of granularity are sparse" is taught by Lohia in para 0003: "Systems and methods are disclosed herein for an ensemble time series prediction system for making predictions (performing forecast) based on observed data (at level of granularity)," and para 0023: "The feature reduction module 220 may generate embedding features for the ensemble time series prediction model. In one embodiment, the feature reduction module 220 may process large and sparse datasets and performs dimensionality reduction.
As referred herein, dimensionality reduction may refer to techniques that reduce the number of input variables (data values) in a dataset and generates embedding features that are high-level abstract representations extracted from the sparse dataset (are sparse)."

The amended limitation "aggregating level of granularity to an aggregated level of granularity compared to the level of granularity, wherein the data values at the aggregated level of granularity are not sparse" is taught by Athanasopoulos in sect 3: "coherent forecasts of lower level series aggregate to their corresponding upper level series (an aggregated level of granularity) and vice versa. Let us consider the smallest possible hierarchy with two bottom-level series, depicted in Figure 3, where $y_{Tot} = y_A + y_B$. While base forecasts could lie anywhere in $\mathbb{R}^3$, the realisations and coherent forecasts lie in a two dimensional subspace $\mathfrak{s} \subset \mathbb{R}^3$," and sect 3.1.2: "In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by $\tilde{y}^{TD}_{T+h|T} = S p \hat{y}_{Tot,T+h|T}$, where $p = (p_1, \ldots, p_m)'$ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast $\hat{y}_{Tot,T+h|T}$ to forecasts for the bottom-level series; hence $p \hat{y}_{Tot,T+h|T} = \hat{b}_{T+h|T}$. These are then aggregated up by the summing (aggregating level of granularity) matrix $S$."

The amended limitation "wherein data values are not sparse" is taught by Lohia in the abstract: "The disclosed ensemble time series prediction system may extract time dependent features from autoregressive time dependent data, embedding features from sparse datasets, continuous features from continuous dataset (data values are not sparse)." The amended limitation "outputting the forecast result at the target level of granularity" is taught by Athanasopoulos in sect 3.1.2: "In contrast, top-down approaches involve first generating forecasts (outputting the forecast result) for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by $\tilde{y}^{TD}_{T+h|T} = S p \hat{y}_{Tot,T+h|T}$, where $p = (p_1, \ldots, p_m)'$ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast $\hat{y}_{Tot,T+h|T}$ to forecasts for the bottom-level series (the target level of granularity); hence $p \hat{y}_{Tot,T+h|T} = \hat{b}_{T+h|T}$. These are then aggregated up by the summing matrix $S$." The amended limitation "distribution scheme" is taught by Athanasopoulos in sect 3.1.2: $\tilde{y}^{TD}_{T+h|T} = S p \hat{y}_{Tot,T+h|T}$.

The arguments directed to the newly amended claim limitation "scheme based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics" have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
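The top-down formula quoted from Athanasopoulos can be checked with a small numerical example. This sketch uses the smallest hierarchy from the quoted passage, yTot = yA + yB; the specific proportions and forecast value are illustrative assumptions, not from the record:

```python
# Summing matrix S for the smallest hierarchy yTot = yA + yB (m = 2 bottom series).
S = [[1, 1],   # row for yTot: sums yA and yB
     [1, 0],   # row for yA
     [0, 1]]   # row for yB

y_hat_tot = 100.0        # base forecast for the most aggregate level
p = [0.7, 0.3]           # proportions disaggregating the top-level forecast

# Bottom-level forecasts: b_hat = p * y_hat_Tot
b_hat = [pi * y_hat_tot for pi in p]

# Coherent forecasts: y_tilde = S @ b_hat, i.e. y_tilde = S p y_hat_Tot
y_tilde = [sum(s * b for s, b in zip(row, b_hat)) for row in S]
# The top level is recovered (70 + 30 = 100), so the forecasts are coherent.
```

The coherence property the passage describes is exactly that the disaggregated bottom-level forecasts sum back to the top-level forecast through S.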
In paragraph 2 of page 9 of the remarks, Applicant further argues that Lohia does not teach or suggest that "data values for performing the forecast at the target level of granularity are sparse" and "the data values at the aggregated level of granularity are not sparse," as recited in amended claim 1. Examiner respectfully disagrees. In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

The amended limitations of "data values for performing the forecast at the target level of granularity are sparse" and "the data values at the aggregated level of granularity are not sparse" are taught by the combination Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte. Spiliotis teaches "the target level of granularity" in sect 3.1: "list item 3, bottom series of the hierarchy." Athanasopoulos teaches "the aggregated level of granularity" in sect 3: "coherent forecasts of lower level series aggregate to their corresponding upper level series (aggregated level of granularity)." Lohia teaches "wherein data values for performing forecast at level of granularity are sparse" in para 0003: "Systems and methods are disclosed herein for an ensemble time series prediction system for making predictions (performing forecast) based on observed data (at level of granularity)," and para 0023: "The feature reduction module 220 may generate embedding features for the ensemble time series prediction model. In one embodiment, the feature reduction module 220 may process large and sparse datasets and performs dimensionality reduction.
As referred herein, dimensionality reduction may refer to techniques that reduce the number of input variables (data values) in a dataset and generates embedding features that are high-level abstract representations extracted from the sparse dataset (are sparse)." Additionally, Lohia teaches "wherein data values are not sparse" in the abstract: "The disclosed ensemble time series prediction system may extract time dependent features from autoregressive time dependent data, embedding features from sparse datasets, continuous features from continuous dataset (data values are not sparse)."

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 3-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1 and 3-11 are directed towards a method. Claims 12-16 are directed towards a system. Claims 17-20 are directed towards a non-transitory computer readable medium. Therefore, all claims are directed to one of the four statutory categories of patent eligible subject matter.

Claim 1

Step 2A Prong 1: Claim 1 recites: "aggregating, [with the one or more processors,] the target level of granularity to an aggregated level of granularity compared to the target level of granularity, wherein the data values at the aggregated level of granularity are not sparse;" This is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.
"performing, [with the one or more processors,] the forecast at the aggregated level of granularity to generate an aggregated forecast result;" This is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.

"determining, [with the one or more processors,] a distribution scheme for distributing the aggregated forecast result to the target level of granularity based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics at the target level of granularity;" This is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows:

"A method for forecasting independent of level of granularity;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)).

"receiving, with one or more processors, a target evaluation metric for performing a forecast, the target evaluation metric comprising a target level of granularity, wherein data values for performing the forecast at the target level of granularity are sparse;" This is mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (MPEP 2106.05(g)).
"distributing, with the one or more processors, the aggregated forecast result to the target level of granularity based on the determined distribution scheme to generate a forecast result at the target level of granularity;" Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).

"outputting, with the one or more processors, the forecast result at the target level of granularity;" Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows:

"A method for forecasting independent of level of granularity;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.

"receiving, with one or more processors, a target evaluation metric for performing a forecast, the target evaluation metric comprising a target level of granularity wherein data values for performing the forecast at the target level of granularity are sparse;" This is mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (see MPEP 2106.05(g)). The additional element of "receiving" does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea; as discussed above, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i).
This cannot provide an inventive concept.

"distributing, with the one or more processors, the aggregated forecast result to the target level of granularity based on the determined distribution scheme to generate a forecast result at the target level of granularity;" Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)); this falls under well-understood, routine, conventional activity (see MPEP 2106.05(d)(II)(vi)).

"outputting, with the one or more processors, the forecast result at the target level of granularity;" Insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)); this falls under well-understood, routine, conventional activity (see MPEP 2106.05(d)(II)(vi)).

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.

Claim 3

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows: "wherein the target evaluation metric comprises a target quality for a forecasting model;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows: "wherein the target evaluation metric comprises a target quality for a forecasting model;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.
Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.

Claim 4

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows: "wherein the target level of granularity comprises a level of a category, location, or time;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows: "wherein the target level of granularity comprises a level of a category, location, or time;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.

Claim 5

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows: "wherein the target evaluation metric further comprises a weight, the target level of granularity being based on the weight;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows: "wherein the target evaluation metric further comprises a weight, the target level of granularity being based on the weight;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.

Claim 6

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows: "wherein the target level of granularity is aggregated via an aggregation scheme;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows: "wherein the target level of granularity is aggregated via an aggregation scheme;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 7

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows: "wherein the aggregation scheme comprises one of a sum or average for numerical features of data for the forecasting;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows: "wherein the aggregation scheme comprises one of a sum or average for numerical features of data for the forecasting;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.

Claim 8

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows: "wherein the aggregation scheme comprises one of a most frequent value or a concatenate of unique values for categorical features of data for the forecast;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows: "wherein the aggregation scheme comprises one of a most frequent value or a concatenate of unique values for categorical features of data for the forecast;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.

Claim 9

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the additional elements are as follows: "performing, with the one or more processors, training for the forecast at the aggregated level of granularity;" The mere recitation of training, broadly recited at a high level of generality, amounts in this case to insignificant extra-solution activity per MPEP 2106.05(g): training is necessary to effectively use a machine learning model, and without any particular details on the training, this limitation does not impose meaningful limits on the claim (MPEP 2106.05(g)(2): "Whether the limitation is significant (i.e., it imposes meaningful limits on the claim such that it is not nominally or tangentially related to the invention)").
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows: "performing, with the one or more processors, training for the forecast at the aggregated level of granularity;" As above, the mere recitation of training, broadly recited at a high level of generality, amounts to insignificant extra-solution activity per MPEP 2106.05(g), since training is necessary to effectively use a machine learning model and, without any particular details on the training, does not impose meaningful limits on the claim (MPEP 2106.05(g)(2)).

Furthermore, generic training with new data is well-understood, routine, and conventional activity, per the following references cited as Berkheimer evidence: Fujii et al. (US 2022/0156641 A1, [0005]: "It is known that retraining such models with new training data can change the accuracy of the scores calculated by the models. For example, by training the model with increased training data, it is possible to replace the model with a more accurate model."); Barry et al. (US 10,832,150 B2, Col 1, Lines 37-39: "Dynamic systems often require regular re-training and commonly are retrained every day"); and Raj et al. (US 2021/0319174 A1, [0006]: "Therefore, periodic retraining, also known as refreshing, of any model is desirable."). These references indicate that training and retraining are common techniques in the art of machine learning.

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 10

Step 2A Prong Two: "an accuracy of the combinations of evaluation metrics are compared at the target level of granularity using a validation dataset;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)).

Step 2B: "an accuracy of the combinations of evaluation metrics are compared at the target level of granularity using a validation dataset;" This limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception, which does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept.

Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.

Claim 11

Step 2A Prong 1: Claim 11 recites: "generating heuristics to narrow the combination of evaluation metrics to compare;" This is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process.

Step 2A Prong Two and Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B. Thus, the judicial exception is not integrated into a practical application (see MPEP 2106.04(d)(I)), failing Step 2A Prong Two. The claim is ineligible.
Claim 12 Step 2A Prong 1: Claim 12 recites: “aggregating, [with the one or more processors,] the target level of granularity to an aggregated level of granularity compared to the target level of granularity, wherein the data values at the aggregated level of granularity are not sparse;” Aggregating the target level of granularity to an aggregated level of granularity compared to the target level of granularity, wherein the data values at the aggregated level of granularity are not sparse is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process. “performing the forecast at the aggregated level of granularity to generate an aggregated forecast result;” Performing the forecast at an aggregated level of granularity compared to the target level of granularity to generate an aggregated forecast result is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process. “determining a distribution scheme for distributing the aggregated forecast result to the target level of granularity based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics at the target level of granularity;” Determining a distribution scheme for distributing the aggregated forecast result to the target level of granularity is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process. Step 2A Prong Two This judicial exception is not integrated into a practical application because the additional elements are as follows: “A system comprising: one or more processors; and one or more storage devices coupled to the one or more processors and storing instructions” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). 
“to perform operations for forecasting independent of level of granularity;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)). “receiving, with one or more processors, a target evaluation metric for performing a forecast, the target evaluation metric comprising a target level of granularity wherein data values for performing the forecast at the target level of granularity are sparse;” Mere data gathering recited at a high level of generality, and thus are insignificant extra-solution activity (MPEP 2106.05(g)). “distributing the aggregated forecast result to the target level of granularity based on the determined distribution scheme to generate a forecast result at the target level of granularity;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). “outputting, with the one or more processors, the forecast result at the target level of granularity;” Insignificant extra-solution as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows: “A system comprising: one or more processors; and one or more storage devices coupled to the one or more processors and storing instructions” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)) which cannot provide an inventive concept. “to perform operations for forecasting independent of level of granularity;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. 
This does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept. “receiving, with one or more processors, a target evaluation metric for performing a forecast, the target evaluation metric comprising a target level of granularity wherein data values for performing the forecast at the target level of granularity are sparse;” This is mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity. See MPEP 2106.05(g). The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept. “distributing the aggregated forecast result to the target level of granularity based on the determined distribution scheme to generate a forecast result at the target level of granularity;” This is insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)), and falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi). “outputting, with the one or more processors, the forecast result at the target level of granularity;” This is insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)), and falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi). Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible.
Claim 13 is a system claim that recites identical limitations to claim 4. Therefore, claim 13 is rejected using the same rationale as claim 4. Claim 14 Step 2A Prong Two This judicial exception is not integrated into a practical application because the additional elements are as follows: “the target level of granularity is aggregated via an aggregation scheme, the aggregation scheme comprising one of a sum or average for numerical features of data for the forecasting or a most frequent value or a concatenate of unique values for categorical features of data for the forecast;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)). Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows: “the target level of granularity is aggregated via an aggregation scheme, the aggregation scheme comprising one of a sum or average for numerical features of data for the forecasting or a most frequent value or a concatenate of unique values for categorical features of data for the forecast;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept. Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible. Claims 15 and 19 are system and non-transitory computer readable medium claims, respectively, that recite identical limitations to claim 10. Therefore, claims 15 and 19 are rejected using the same rationale as claim 10. 
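For illustration only, the aggregation scheme recited in claim 14 — a sum or average for numerical features, and a most frequent value or a concatenation of unique values for categorical features — could be sketched as follows. The rows, field names, and grouping key are hypothetical and not from the record:

```python
# Hypothetical bottom-level rows: (week, units, channel), where "units" is a
# numerical feature and "channel" is a categorical feature of the data.
from collections import Counter

rows = [
    ("W1", 3, "web"), ("W1", 0, "web"), ("W1", 5, "store"),
    ("W2", 2, "web"), ("W2", 4, "web"),
]

def aggregate(rows):
    groups = {}
    for week, units, channel in rows:
        groups.setdefault(week, {"units": [], "channel": []})
        groups[week]["units"].append(units)
        groups[week]["channel"].append(channel)
    out = {}
    for week, g in groups.items():
        out[week] = {
            "units_sum": sum(g["units"]),                    # sum (numerical)
            "units_avg": sum(g["units"]) / len(g["units"]),  # average (numerical)
            # most frequent value (categorical)
            "channel_mode": Counter(g["channel"]).most_common(1)[0][0],
            # concatenation of unique values (categorical)
            "channel_all": "|".join(sorted(set(g["channel"]))),
        }
    return out

agg = aggregate(rows)
print(agg)
```

Each aggregated row carries both candidate numerical aggregations and both candidate categorical aggregations, so either alternative recited in the claim can be read off the same grouped data.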
Claims 16 and 20 are system and non-transitory computer readable medium claims, respectively, that recite identical limitations to claim 11. Therefore, claims 16 and 20 are rejected using the same rationale as claim 11. Claim 17 Step 2A Prong 1: Claim 17 recites: “aggregating, [with the one or more processors,] the target level of granularity to an aggregated level of granularity compared to the target level of granularity, wherein the data values at the aggregated level of granularity are not sparse;” Aggregating the target level of granularity to an aggregated level of granularity compared to the target level of granularity, wherein the data values at the aggregated level of granularity are not sparse is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process. “performing the forecast at the aggregated level of granularity to generate an aggregated forecast result;” Performing the forecast at an aggregated level of granularity compared to the target level of granularity to generate an aggregated forecast result is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process. “determining a distribution scheme for distributing the aggregated forecast result to the target level of granularity based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics at the target level of granularity;” Determining a distribution scheme for distributing the aggregated forecast result to the target level of granularity is an action that can be performed mentally with the aid of pen and paper, and is therefore a mental process. 
Step 2A Prong Two This judicial exception is not integrated into a practical application because the additional elements are as follows: “A non-transitory computer readable medium for storing instructions that, when executed by one or more processors;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). “to perform operations for forecasting independent of level of granularity;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)). “receiving, with one or more processors, a target evaluation metric for performing a forecast, the target evaluation metric comprising a target level of granularity wherein data values for performing the forecast at the target level of granularity are sparse;” This is mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity (MPEP 2106.05(g)). “distributing the aggregated forecast result to the target level of granularity based on the determined distribution scheme to generate a forecast result at the target level of granularity;” This is insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)). “outputting, with the one or more processors, the forecast result at the target level of granularity;” This is insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements are as follows: “A non-transitory computer readable medium for storing instructions that, when executed by one or more processors;” Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)) which cannot provide an inventive concept. “to perform operations for forecasting independent of level of granularity;” The limitation amounts to merely indicating a field of use or technological environment in which to apply a judicial exception. This does not amount to significantly more than the exception itself (MPEP 2106.05(h)) and cannot provide an inventive concept. “receiving, with one or more processors, a target evaluation metric for performing a forecast, the target evaluation metric comprising a target level of granularity wherein data values for performing the forecast at the target level of granularity are sparse;” This is mere data gathering recited at a high level of generality, and thus insignificant extra-solution activity. See MPEP 2106.05(g). The additional element of “receiving” does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the receiving step amounts to no more than mere data gathering. This element amounts to receiving data over a network and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II(i). This cannot provide an inventive concept.
“distributing the aggregated forecast result to the target level of granularity based on the determined distribution scheme to generate a forecast result at the target level of granularity;” This is insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)), and falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi). “outputting, with the one or more processors, the forecast result at the target level of granularity;” This is insignificant extra-solution activity, as the limitation amounts to necessary data outputting (MPEP 2106.05(g)(3)), and falls under well-understood, routine, conventional activity; see MPEP 2106.05(d)(II)(vi). Even when considered in combination, these additional elements represent mere instructions to apply an exception and therefore do not provide an inventive concept. The claim is ineligible. Claim 18 is a non-transitory computer readable medium claim that recites identical limitations to claim 14. Therefore, claim 18 is rejected using the same rationale as claim 14. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2.
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1, 3-7, 12, 13, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Spiliotis et al. (Hierarchical forecast reconciliation with machine learning, published 2021), hereinafter Spiliotis, in view of Athanasopoulos et al. (Hierarchical Forecasting, published 2019), hereinafter Athanasopoulos, Zhang et al. (US 20220122744 A1), hereinafter Zhang, Lohia et al. (US 20230351155 A1), hereinafter Lohia, and Duarte et al. (Empirical comparison of cross-validation and internal metrics for tuning SVM hyperparameters, published 2017), hereinafter Duarte. Regarding claim 1, Spiliotis teaches, A method for forecasting independent of level of granularity [Sect 1, pg. 2, col 2, para 2, This approach is more general compared to its linear counterparts and is expected to enhance the forecasting performance across all hierarchical levels, especially when the relationships of the individual series are complex or change significantly through time], the method comprising: receiving [Sect 3, information is extracted from large time series] a target evaluation metric (Sect 3.1, list item 2, training set) for performing a forecast (Sect 3.1, list item 2, forecasts are produced), the target evaluation metric (Sect 3.1, list item 3, training set) comprising a target level of granularity (Sect 3.1, list item 3, bottom series of the hierarchy) [Sect 3.1, list item 2, A forecasting model is fitted to each series in each training set and one-step-ahead forecasts are produced for each test set; Sect 3.1, list item 3, A separate ML model (either a RF or XGB) is built for predicting each of the mk bottom series of the hierarchy. 
The training set of each model consists of n−p observations and m + 1 variables]; Performing (Sect 3.1, list item 6, produced) the forecast (Sect 3.1, list item 6, reconciled forecasts) at an aggregated level of granularity (Sect 3.1, list item 6, the rest of the hierarchical levels) to generate an aggregated forecast result [Sect 3.1, list item 5, The mk models that were built in Step 3 are used to provide forecasts for the series of the bottom level of the hierarchy, using the base forecasts produced in Step 4 as input. This process is repeated h times, each time for a different forecasting horizon; Sect 3.1, list item 6, The forecasts produced by the ML models in step 5 are aggregated (summed) so that reconciled forecasts are produced for the rest of the hierarchical levels]; Spiliotis teaches the above limitations of claim 1 including the target level of granularity (Spiliotis, Sect 3.1). Spiliotis does not teach Aggregating level of granularity to an aggregated level of granularity compared to the level of granularity, wherein the data values at the aggregated level of granularity are not sparse; wherein data values for performing the forecast at the target level of granularity are sparse; determining a distribution scheme for distributing the aggregated forecast result to the target level of granularity based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics at the target level of granularity, distributing the aggregated forecast result to the target level of granularity based on the determined distribution scheme to generate a forecast result at the target level of granularity, and one or more processors.
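The sparse/non-sparse mismatch at issue in these limitations can be illustrated with a toy example (the numbers are mine, not from the record): values that are mostly zero at the daily target granularity become non-sparse once summed to a coarser weekly level.

```python
# Two weeks of hypothetical daily sales; most daily values are zero, so the
# target (daily) level of granularity is sparse.
daily_sales = [0, 0, 4, 0, 0, 0, 3,   # week 1
               0, 5, 0, 0, 0, 0, 0]   # week 2

sparsity = daily_sales.count(0) / len(daily_sales)

# Aggregating to the weekly level: sum each block of 7 daily values.
weekly_sales = [sum(daily_sales[i:i + 7]) for i in range(0, len(daily_sales), 7)]

print(sparsity)       # large fraction of zeros at the target level
print(weekly_sales)   # every aggregated value is nonzero
```

Here a forecast fitted at the weekly level has a dense series to work with, and the claimed method then distributes that aggregated result back down to the sparse daily level.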
Athanasopoulos teaches, Aggregating level of granularity (Sect 3.1.2, aggregated up by the summing matrix S) to an aggregated level of granularity (Sect 3, upper level series) compared to the level of granularity [Sect 3, coherent forecasts of lower level series aggregate to their corresponding upper level series and vice versa. Let us consider the smallest possible hierarchy with two bottom-level series, depicted in Figure 3, where yTot = yA + yB. While base forecasts could lie anywhere in R^3, the realisations and coherent forecasts lie in a two-dimensional subspace s ⊂ R^3; Sect 3.1.2, In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by ỹ^TD_(T+h|T) = S p ŷ_(Tot,T+h|T), where p = (p_1, …, p_m)′ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast ŷ_(Tot,T+h|T) to forecasts for the bottom-level series; hence p ŷ_(Tot,T+h|T) = b̂_(T+h|T).
These are then aggregated up by the summing matrix S]; Determining a distribution scheme (Sect 3.1.2, ỹ^TD_(T+h|T) = S p ŷ_(Tot,T+h|T)) for distributing (Sect 3.1.2, summing matrix S) the aggregated forecast result (Sect 3.1.2, the top-level forecast ŷ_(Tot,T+h|T)) to the target level of granularity (Sect 3.1.2, to forecasts for the bottom-level series); and distributing (Sect 3.1.2, summing matrix S) the aggregated forecast result (Sect 3.1.2, the top-level forecast ŷ_(Tot,T+h|T)) to the target level of granularity (Sect 3.1.2, to forecasts for the bottom-level series) based on (Sect 3.1.2, given by) the determined distribution scheme (Sect 3.1.2, ỹ^TD_(T+h|T) = S p ŷ_(Tot,T+h|T)) to generate a forecast result (Sect 3.1.2, forecasts) at the target level of granularity (Sect 3.1.2, for the bottom-level series); outputting the forecast result at the target level of granularity [Sect 3.1.2, In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by ỹ^TD_(T+h|T) = S p ŷ_(Tot,T+h|T), where p = (p_1, …, p_m)′ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast ŷ_(Tot,T+h|T) to forecasts for the bottom-level series; hence p ŷ_(Tot,T+h|T) = b̂_(T+h|T). These are then aggregated up by the summing matrix S]. Athanasopoulos is analogous to the claimed invention as they both relate to forecast models.
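The quoted top-down scheme can be checked with a small worked instance using the two-series hierarchy yTot = yA + yB from Athanasopoulos Sect 3; the forecast value and proportions below are my own illustrative numbers:

```python
# Top-down reconciliation: split the top-level forecast by proportions p,
# then rebuild every level of the hierarchy with the summing matrix S.
yhat_tot = 100.0        # aggregated (top-level) forecast result
p = [0.6, 0.4]          # proportions disaggregating yhat_tot to bottom series A, B

# Bottom-level forecasts: b_hat = p * yhat_tot
b = [pi * yhat_tot for pi in p]

# Summing matrix S for the hierarchy yTot = yA + yB (rows: Tot, A, B)
S = [[1, 1],
     [1, 0],
     [0, 1]]

# Coherent forecasts ytilde = S @ b for all levels
ytilde = [sum(S[i][j] * b[j] for j in range(len(b))) for i in range(len(S))]
print(ytilde)
```

The first entry equals the sum of the other two by construction, which is the coherence property the reference exploits.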
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis’s teachings to incorporate the teachings of Athanasopoulos and provide an aggregated forecast result applied to a target level of granularity [Athanasopoulos, Abstract] as applying forecast reconciliation methods results in generating forecasts that are coherent with the aggregation constraints through exploiting inherent aggregation structures. Spiliotis-Athanasopoulos do not teach Aggregating level of granularity to an aggregated level of granularity compared to the level of granularity, wherein the data values at the aggregated level of granularity are not sparse; scheme based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics; wherein data values for performing the forecast at the target level of granularity are sparse; one or more processors. Zhang teaches, one or more processors [Para 0005, Additional embodiments of the present disclosure include a computer program product for predicting low-frequency sensor signal predictions using a prediction model which can include computer-readable storage medium having program instructions embodied therewith, the program instruction executable by a processor to cause the processor to perform a method.]. Zhang is analogous to the claimed invention as they both relate to predictive models. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis and Athanasopoulos’s teachings to incorporate the teachings of Zhang and provide one or more processors in order to perform the methodologies using hardware. Spiliotis-Athanasopoulos-Zhang teach the above limitations of claim 1 including the target level of granularity (Spiliotis, Sect 3.1) and the aggregated level of granularity (Athanasopoulos, Sect 3).
Spiliotis-Athanasopoulos-Zhang do not teach wherein data values are not sparse; scheme based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics; wherein data values for performing forecast at level of granularity are sparse. Lohia teaches, wherein data values are not sparse (Abstract, continuous dataset) [Abstract, The disclosed ensemble time series prediction system may extract time dependent features from autoregressive time dependent data, embedding features from sparse datasets, continuous features from continuous dataset]; wherein data values (Para 0023, input variables) for performing forecast (Para 0003, making predictions) at level of granularity (Para 0023, dataset) are sparse (Para 0023, sparse dataset) [Para 0023, The feature reduction module 220 may generate embedding features for the ensemble time series prediction model. In one embodiment, the feature reduction module 220 may process large and sparse datasets and performs dimensionality reduction. As referred herein, dimensionality reduction may refer to techniques that reduce the number of input variables in a dataset and generates embedding features that are high-level abstract representations extracted from the sparse dataset. The generated embedding features are more compact, which reduces time and storage space required and improves the performance of machines learning models; Para 0003, Systems and methods are disclosed herein for an ensemble time series prediction system for making predictions based on observed data]. Lohia is analogous to the claimed invention as they both relate to prediction models. 
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, and Zhang’s teachings to incorporate the teachings of Lohia and provide sparse data [Lohia, Para 0023] in order to extract embedding features, which reduces time and storage space required and improves the performance of machine learning models. Spiliotis-Athanasopoulos-Zhang-Lohia teach the above limitations of claim 1 including Determining a distribution scheme for distributing the aggregated forecast result (Athanasopoulos, Sect 3.1.2) and the target level of granularity (Spiliotis, Sect 3.1). Duarte teaches, scheme based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics [Abstract, Hyperparameter tuning is a mandatory step for building a support vector machine classifier… We compare cross-validation (5-fold) with Xi-alpha, radius-margin bound, generalized approximate cross validation, maximum discrepancy and distance between two classes on 110 public binary data sets. Cross validation is the method that resulted in the best selection of the hyper-parameters, but it is also the method with one of the highest execution time]. Duarte is analogous to the claimed invention as they both relate to hyperparameter tuning. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, Zhang, and Lohia’s teachings to incorporate the teachings of Duarte and provide hyperparameter tuning to compare combinations of evaluation metrics in order to [Duarte, Abstract and Sect 1, para 10] improve execution time by utilizing a fast selection procedure. Regarding claim 3, Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte teach the limitations of claim 1.
Spiliotis further teaches, wherein the target evaluation metric (Sect 3.1, Para 1, time series) comprises a target quality (Sect 3.1, Para 1, forecast accuracy) for a forecasting model (Sect 3.1, Para 1, the ML model) [Sect 3.1, Para 1, The proposed ML reconciliation method uses time series cross-validation [30] to measure the out-of-sample forecast accuracy, which is then used in an optimization procedure to tune the ML model]. Regarding claim 4, Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte teach the limitations of claim 1. Spiliotis further teaches, wherein the target level of granularity (Sect 3.1, list item 3, bottom-level) comprises a level of a category, location, or time (Sect 3.1, list item 3, corresponding times) [Sect 3.1, list item 3, The first m variables (used as predictors or inputs) are the one-step-ahead forecasts produced during the rolling origin process for the m series of the hierarchy, and the last variable (the response or target) is the actual value of the bottom-level series at the corresponding times.]. Regarding claim 5, Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte teach the limitations of claim 1. Spiliotis further teaches, wherein the target evaluation metric (Sect 1, pg. 2, col 2, para 1, time series) further comprises a weight (Sect 1, pg. 2, col 2, para 1, combination weights), the target level of granularity (Sect 4.4, All levels) being based on the weight (Sect 4.4, weighted equally) [Sect 1, pg. 2, col 2, para 1, we propose the use of ML techniques to derive the combination weights for the forecasts across the various aggregation levels of a hierarchy. We focus on two ML models that have been shown to perform well in time series forecasting and cross-learning contexts: Random forests (RF) and XGBoost (XGB); Sect 4.4, Para 1, All levels are weighted equally since we do not focus on a particular decision-making problem, aimed at a specific hierarchical level.]. 
Regarding claim 6, Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte teach the limitations of claim 1 including the one or more processors (see claim 1). Athanasopoulos further teaches, Wherein the target level of granularity is aggregated (Sect 3.1.2, aggregated up by the summing matrix S) via an aggregation scheme (Sect 3, s ⊂ R^3) [Sect 3, coherent forecasts of lower level series aggregate to their corresponding upper level series and vice versa. Let us consider the smallest possible hierarchy with two bottom-level series, depicted in Figure 3, where yTot = yA + yB. While base forecasts could lie anywhere in R^3, the realisations and coherent forecasts lie in a two-dimensional subspace s ⊂ R^3; Sect 3.1.2, In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by ỹ^TD_(T+h|T) = S p ŷ_(Tot,T+h|T), where p = (p_1, …, p_m)′ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast ŷ_(Tot,T+h|T) to forecasts for the bottom-level series; hence p ŷ_(Tot,T+h|T) = b̂_(T+h|T). These are then aggregated up by the summing matrix S.]. Athanasopoulos is analogous to the claimed invention as they both relate to forecast models. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis’s teachings to incorporate the teachings of Athanasopoulos and provide an aggregation scheme applied to the target level of granularity [Athanasopoulos, Abstract] as applying forecast reconciliation methods results in generating forecasts that are coherent with the aggregation constraints through exploiting inherent aggregation structures. Regarding claim 7, Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte teach the limitations of claim 1 and claim 6.
Athanasopoulos further teaches, the aggregation scheme comprises one of a sum (Sect 3.1.2, summing matrix S) or average for numerical features of data (Sect 3.1.2, m-dimensional vector consisting of a set of proportions) for the forecasting [Sect 3, coherent forecasts of lower level series aggregate to their corresponding upper level series and vice versa. Let us consider the smallest possible hierarchy with two bottom-level series, depicted in Figure 3, where yTot = yA + yB. While base forecasts could lie anywhere in R^3, the realisations and coherent forecasts lie in a two-dimensional subspace s ⊂ R^3; Sect 3.1.2, In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by ỹ^TD_(T+h|T) = S p ŷ_(Tot,T+h|T), where p = (p_1, …, p_m)′ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast ŷ_(Tot,T+h|T) to forecasts for the bottom-level series; hence p ŷ_(Tot,T+h|T) = b̂_(T+h|T). These are then aggregated up by the summing matrix S.]. Athanasopoulos is analogous to the claimed invention as they both relate to forecast models. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis’s teachings to incorporate the teachings of Athanasopoulos and provide an aggregation scheme summing or averaging numerical features [Athanasopoulos, Abstract] as applying forecast reconciliation methods (summing) results in generating forecasts that are coherent with the aggregation constraints through exploiting inherent aggregation structures.
Regarding claim 12, Spiliotis teaches, For forecasting independent of level of granularity [Sect 3.1, list item 2, A forecasting model is fitted to each series in each training set and one-step-ahead forecasts are produced for each test set]; Receiving [Sect 3, information is extracted from large time series] a target evaluation metric (Sect 3.1, list item 2, training set) for performing a forecast (Sect 3.1, list item 2, forecasts are produced), the target evaluation metric (Sect 3.1, list item 3, training set) comprising a target level of granularity (Sect 3.1, list item 3, bottom series of the hierarchy) [Sect 3.1, list item 2, A forecasting model is fitted to each series in each training set and one-step-ahead forecasts are produced for each test set; Sect 3.1, list item 3, A separate ML model (either a RF or XGB) is built for predicting each of the mk bottom series of the hierarchy. The training set of each model consists of n−p observations and m +1 variables.]; Performing (Sect 3.1, list item 6, produced) the forecast (Sect 3.1, list item 6, reconciled forecasts) at an aggregated level of granularity (Sect 3.1, list item 6, the rest of the hierarchical levels) compared to the target level of granularity (Sect 3.1, list item 5, bottom level of the hierarchy) to generate an aggregated forecast result [Sect 3.1, list item 5, The mk models that were built in Step 3 are used to provide forecasts for the series of the bottom level of the hierarchy, using the base forecasts produced in Step 4 as input. 
This process is repeated h times, each time for a different forecasting horizon; Sect 3.1, list item 6, The forecasts produced by the ML models in step 5 are aggregated (summed) so that reconciled forecasts are produced for the rest of the hierarchical levels.]; Spiliotis does not teach A system comprising: one or more processors; and one or more storage devices coupled to the one or more processors and storing instructions that, when executed by the one or more processors, causes the one or more processors to perform operations for forecasting, the operations comprising: wherein data values for performing forecast at level of granularity are sparse; determining a distribution scheme for distributing the aggregated forecast result to the target level of granularity and distributing the aggregated forecast result to the target level of granularity based on the determined distribution scheme to generate a forecast result at the target level of granularity. Athanasopoulos teaches, Aggregating level of granularity (Sect 3.1.2, aggregated up by the summing matrix S) to an aggregated level of granularity (Sect 3, upper level series) compared to the level of granularity [Sect 3, coherent forecasts of lower level series aggregate to their corresponding upper level series and vice versa. Let us consider the smallest possible hierarchy with two bottom-level series, depicted in Figure 3, where yTot = yA + yB. While base forecasts could lie anywhere in R^3, the realisations and coherent forecasts lie in a two-dimensional subspace s ⊂ R^3; Sect 3.1.2, In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by ỹ^TD_(T+h|T) = S p ŷ_(Tot,T+h|T), where p = (p_1, …, p_m)′ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast ŷ_(Tot,T+h|T) to forecasts for the bottom-level series; hence p ŷ_(Tot,T+h|T) = b̂_(T+h|T). These are then aggregated up by the summing matrix S]; Determining a distribution scheme (Sect 3.1.2, ỹ^TD_(T+h|T) = S p ŷ_(Tot,T+h|T)) for distributing (Sect 3.1.2, summing matrix S) the aggregated forecast result (Sect 3.1.2, the top-level forecast ŷ_(Tot,T+h|T)) to the target level of granularity (Sect 3.1.2, to forecasts for the bottom-level series); and distributing (Sect 3.1.2, summing matrix S) the aggregated forecast result (Sect 3.1.2, the top-level forecast ŷ_(Tot,T+h|T)) to the target level of granularity (Sect 3.1.2, to forecasts for the bottom-level series) based on (Sect 3.1.2, given by) the determined distribution scheme (Sect 3.1.2, ỹ^TD_(T+h|T) = S p ŷ_(Tot,T+h|T)) to generate a forecast result (Sect 3.1.2, forecasts) at the target level of granularity (Sect 3.1.2, for the bottom-level series); outputting the forecast result at the target level of granularity [Sect 3.1.2, In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by ỹ^TD_(T+h|T) = S p ŷ_(Tot,T+h|T), where p = (p_1, …, p_m)′ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast ŷ_(Tot,T+h|T) to forecasts for the bottom-level series; hence p ŷ_(Tot,T+h|T) = b̂_(T+h|T). These are then aggregated up by the summing matrix S.]. Athanasopoulos is analogous to the claimed invention as they both relate to forecast models.
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis’s teachings to incorporate the teachings of Athanasopoulos and provide an aggregated forecast result applied to a target level of granularity [Athanasopoulos, Abstract] as applying forecast reconciliation methods results in generating forecasts that are coherent with the aggregation constraints through exploiting inherent aggregation structures. Spiliotis-Athanasopoulos do not teach A system comprising: one or more processors; and one or more storage devices coupled to the one or more processors and storing instructions that, when executed by the one or more processors, causes the one or more processors to perform operations for forecasting; wherein data values for performing forecast at level of granularity are sparse. Zhang teaches, A system (Para 0005, computer program product) comprising: one or more processors (Para 0005, a processor); and one or more storage devices (Para 0005, computer-readable storage medium) coupled to (Para 0005, having) the one or more processors and storing instructions (Para 0005, program instructions) that, when executed by the one or more processors, causes the one or more processors to perform operations (Para 0005, cause the processor to perform a method) for forecasting [Para 0005, Additional embodiments of the present disclosure include a computer program product for predicting low-frequency sensor signal predictions using a prediction model which can include computer-readable storage medium having program instructions embodied therewith, the program instruction executable by a processor to cause the processor to perform a method.]. Zhang is analogous to the claimed invention as they both relate to predictive models.
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis and Athanasopoulos’s teachings to incorporate the teachings of Zhang and provide a system in order to perform the methodologies using hardware. Spiliotis-Athanasopoulos-Zhang teach the above limitations of claim 1 including the target level of granularity (Spiliotis, Sect 3.1). Spiliotis-Athanasopoulos-Zhang do not teach wherein data values for performing forecast at level of granularity are sparse. Lohia teaches, wherein data values (Para 0023, input variables) for performing forecast (Para 0003, making predictions) at level of granularity (Para 0023, dataset) are sparse (Para 0023, sparse dataset) [Para 0023, The feature reduction module 220 may generate embedding features for the ensemble time series prediction model. In one embodiment, the feature reduction module 220 may process large and sparse datasets and performs dimensionality reduction. As referred herein, dimensionality reduction may refer to techniques that reduce the number of input variables in a dataset and generates embedding features that are high-level abstract representations extracted from the sparse dataset. The generated embedding features are more compact, which reduces time and storage space required and improves the performance of machine learning models; Para 0003, Systems and methods are disclosed herein for an ensemble time series prediction system for making predictions based on observed data]. Lohia is analogous to the claimed invention as they both relate to prediction models.
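Lohia's point about sparse inputs can be made concrete with a toy sketch. This is not Lohia's feature reduction module 220; it is a hypothetical illustration of why sparse data motivates reducing the number of input variables, here by dropping columns whose values are almost entirely zero, a crude stand-in for the embedding-based dimensionality reduction the reference describes.

```python
def drop_sparse_columns(rows, min_density=0.5):
    """Keep only the column indices whose fraction of non-zero values
    meets min_density; return (kept_indices, reduced_rows)."""
    n = len(rows)
    n_cols = len(rows[0])
    kept = [j for j in range(n_cols)
            if sum(1 for r in rows if r[j] != 0) / n >= min_density]
    return kept, [[r[j] for j in kept] for r in rows]

# Hypothetical sparse dataset: the middle column is almost all zeros,
# so it is removed and the remaining representation is more compact.
data = [[1, 0, 3], [2, 0, 0], [0, 0, 5], [4, 1, 6]]
kept, reduced = drop_sparse_columns(data)
```

Real feature-reduction pipelines would learn abstract embeddings rather than simply deleting columns, but the effect claimed in the quoted passage, less time and storage for the downstream model, is the same in kind.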
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, and Zhang’s teachings to incorporate the teachings of Lohia and provide sparse data [Lohia, Para 0023] in order to extract embedding features, which reduces time and storage space required and improves the performance of machine learning models. Spiliotis-Athanasopoulos-Zhang-Lohia teach the above limitations of claim 1 including Determining a distribution scheme for distributing the aggregated forecast result (Athanasopoulos, Sect 3.1.2) and the target level of granularity (Spiliotis, Sect 3.1). Duarte teaches, scheme based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics [Abstract, Hyperparameter tuning is a mandatory step for building a support vector machine classifier… We compare cross-validation (5-fold) with Xi-alpha, radius-margin bound, generalized approximate cross validation, maximum discrepancy and distance between two classes on 110 public binary data sets. Cross validation is the method that resulted in the best selection of the hyper-parameters, but it is also the method with one of the highest execution time]. Duarte is analogous to the claimed invention as they both relate to hyperparameter tuning. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, Zhang, and Lohia’s teachings to incorporate the teachings of Duarte and provide hyperparameter tuning to compare combinations of evaluation metrics in order to [Duarte, Abstract and Sect 1, para 10] improve execution time by utilizing a fast selection procedure. Claim 13 is a system claim that recites identical limitations to claim 4. Therefore, claim 13 is rejected using the same rationale as claim 4.
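The grid-search comparison the rejection attributes to Duarte can be illustrated generically. The candidate parameter values, the toy forecast function, and the error metric below are all hypothetical; the sketch only shows the pattern of scoring every combination of hyperparameter values against an evaluation metric and keeping the best one.

```python
import itertools

# Hypothetical grid: a smoothing factor and a disaggregation proportion.
grid = {"alpha": [0.2, 0.5, 0.8], "p_a": [0.4, 0.5, 0.6]}

def evaluation_error(alpha, p_a):
    """Toy evaluation metric: squared error of a one-step forecast
    against a made-up actual value (stand-in for MAE/RMSE etc.)."""
    forecast = alpha * 100.0 * p_a + (1 - alpha) * 50.0 * p_a
    actual = 55.0
    return (forecast - actual) ** 2

def grid_search(grid):
    """Score every combination of hyperparameter values and return
    the combination with the lowest evaluation error."""
    keys = list(grid)
    best = min(itertools.product(*grid.values()),
               key=lambda combo: evaluation_error(**dict(zip(keys, combo))))
    return dict(zip(keys, best))

best = grid_search(grid)
```

Exhaustive enumeration like this is the simplest scheme; Duarte's observation that cross-validation picks good hyperparameters but is slow is exactly the trade-off a faster selection procedure is meant to address.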
Regarding claim 17, Spiliotis teaches, For forecasting independent of level of granularity [Sect 3.1, list item 2, A forecasting model is fitted to each series in each training set and one-step-ahead forecasts are produced for each test set]; Receiving [Sect 3, information is extracted from large time series] a target evaluation metric (Sect 3.1, list item 2, training set) for performing a forecast (Sect 3.1, list item 2, forecasts are produced), the target evaluation metric (Sect 3.1, list item 3, training set) comprising a target level of granularity (Sect 3.1, list item 3, bottom series of the hierarchy) [Sect 3.1, list item 2, A forecasting model is fitted to each series in each training set and one-step-ahead forecasts are produced for each test set; Sect 3.1, list item 3, A separate ML model (either a RF or XGB) is built for predicting each of the mk bottom series of the hierarchy. The training set of each model consists of n−p observations and m +1 variables.]; Performing (Sect 3.1, list item 6, produced) the forecast (Sect 3.1, list item 6, reconciled forecasts) at an aggregated level of granularity (Sect 3.1, list item 6, the rest of the hierarchical levels) compared to the target level of granularity (Sect 3.1, list item 5, bottom level of the hierarchy) to generate an aggregated forecast result [Sect 3.1, list item 5, The mk models that were built in Step 3 are used to provide forecasts for the series of the bottom level of the hierarchy, using the base forecasts produced in Step 4 as input. 
This process is repeated h times, each time for a different forecasting horizon; Sect 3.1, list item 6, The forecasts produced by the ML models in step 5 are aggregated (summed) so that reconciled forecasts are produced for the rest of the hierarchical levels.]; Spiliotis does not teach A non-transitory computer readable medium for storing instructions that, when executed by one or more processors, causes the one or more processors to perform operations for forecasting, the operations comprising: wherein data values for performing forecast at level of granularity are sparse; determining a distribution scheme for distributing the aggregated forecast result to the target level of granularity and distributing the aggregated forecast result to the target level of granularity based on the determined distribution scheme to generate a forecast result at the target level of granularity. Athanasopoulos teaches, Aggregating level of granularity (Sect 3.1.2, aggregated up by the summing matrix S) to an aggregated level of granularity (Sect 3, upper level series) compared to the level of granularity [Sect 3, coherent forecasts of lower level series aggregate to their corresponding upper level series and vice versa. Let us consider the smallest possible hierarchy with two bottom-level series, depicted in Figure 3, where y_Tot = y_A + y_B. While base forecasts could lie anywhere in ℝ³, the realisations and coherent forecasts lie in a two-dimensional subspace s ⊂ ℝ³; Sect 3.1.2, In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by ỹ^TD_{T+h|T} = S p ŷ_{Tot,T+h|T}, where p = (p₁, …, p_m)′ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast ŷ_{Tot,T+h|T} to forecasts for the bottom-level series; hence p ŷ_{Tot,T+h|T} = b̂_{T+h|T}. These are then aggregated up by the summing matrix S]; Determining a distribution scheme (Sect 3.1.2, ỹ^TD_{T+h|T} = S p ŷ_{Tot,T+h|T}) for distributing (Sect 3.1.2, summing matrix S) the aggregated forecast result (Sect 3.1.2, the top-level forecast ŷ_{Tot,T+h|T}) to the target level of granularity (Sect 3.1.2, to forecasts for the bottom-level series); and distributing (Sect 3.1.2, summing matrix S) the aggregated forecast result (Sect 3.1.2, the top-level forecast ŷ_{Tot,T+h|T}) to the target level of granularity (Sect 3.1.2, to forecasts for the bottom-level series) based on (Sect 3.1.2, given by) the determined distribution scheme (Sect 3.1.2, ỹ^TD_{T+h|T} = S p ŷ_{Tot,T+h|T}) to generate a forecast result (Sect 3.1.2, forecasts) at the target level of granularity (Sect 3.1.2, for the bottom-level series); outputting the forecast result at the target level of granularity [Sect 3.1.2, In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by ỹ^TD_{T+h|T} = S p ŷ_{Tot,T+h|T}, where p = (p₁, …, p_m)′ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast ŷ_{Tot,T+h|T} to forecasts for the bottom-level series; hence p ŷ_{Tot,T+h|T} = b̂_{T+h|T}. These are then aggregated up by the summing matrix S.]. Athanasopoulos is analogous to the claimed invention as they both relate to forecast models.
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis’s teachings to incorporate the teachings of Athanasopoulos and provide an aggregated forecast result applied to a target level of granularity [Athanasopoulos, Abstract] as applying forecast reconciliation methods results in generating forecasts that are coherent with the aggregation constraints through exploiting inherent aggregation structures. Spiliotis-Athanasopoulos do not teach A non-transitory computer readable medium for storing instructions that, when executed by one or more processors, causes the one or more processors to perform operations for forecasting; wherein data values for performing forecast at level of granularity are sparse. Zhang teaches, A non-transitory computer readable medium (Para 0005, computer-readable storage medium) for storing instructions (Para 0005, having program instructions) that, when executed by one or more processors, causes the one or more processors to perform operations (Para 0005, cause the processor to perform a method) for forecasting [Para 0005, Additional embodiments of the present disclosure include a computer program product for predicting low-frequency sensor signal predictions using a prediction model which can include computer-readable storage medium having program instructions embodied therewith, the program instruction executable by a processor to cause the processor to perform a method.]. Zhang is analogous to the claimed invention as they both relate to predictive models. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis and Athanasopoulos’s teachings to incorporate the teachings of Zhang and provide a non-transitory computer readable medium in order to perform the methodologies using hardware.
Spiliotis-Athanasopoulos-Zhang teach the above limitations of claim 1 including the target level of granularity (Spiliotis, Sect 3.1). Spiliotis-Athanasopoulos-Zhang do not teach wherein data values for performing forecast at level of granularity are sparse. Lohia teaches, wherein data values (Para 0023, input variables) for performing forecast (Para 0003, making predictions) at level of granularity (Para 0023, dataset) are sparse (Para 0023, sparse dataset) [Para 0023, The feature reduction module 220 may generate embedding features for the ensemble time series prediction model. In one embodiment, the feature reduction module 220 may process large and sparse datasets and performs dimensionality reduction. As referred herein, dimensionality reduction may refer to techniques that reduce the number of input variables in a dataset and generates embedding features that are high-level abstract representations extracted from the sparse dataset. The generated embedding features are more compact, which reduces time and storage space required and improves the performance of machine learning models; Para 0003, Systems and methods are disclosed herein for an ensemble time series prediction system for making predictions based on observed data]. Lohia is analogous to the claimed invention as they both relate to prediction models. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, and Zhang’s teachings to incorporate the teachings of Lohia and provide sparse data [Lohia, Para 0023] in order to extract embedding features, which reduces time and storage space required and improves the performance of machine learning models.
Spiliotis-Athanasopoulos-Zhang-Lohia teach the above limitations of claim 1 including Determining a distribution scheme for distributing the aggregated forecast result (Athanasopoulos, Sect 3.1.2) and the target level of granularity (Spiliotis, Sect 3.1). Duarte teaches, scheme based on hyperparameter tuning or a grid search to compare combinations of evaluation metrics [Abstract, Hyperparameter tuning is a mandatory step for building a support vector machine classifier… We compare cross-validation (5-fold) with Xi-alpha, radius-margin bound, generalized approximate cross validation, maximum discrepancy and distance between two classes on 110 public binary data sets. Cross validation is the method that resulted in the best selection of the hyper-parameters, but it is also the method with one of the highest execution time]. Duarte is analogous to the claimed invention as they both relate to hyperparameter tuning. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, Zhang, and Lohia’s teachings to incorporate the teachings of Duarte and provide hyperparameter tuning to compare combinations of evaluation metrics in order to [Duarte, Abstract and Sect 1, para 10] improve execution time by utilizing a fast selection procedure. Claim(s) 8 is rejected under 35 U.S.C. 103 as being unpatentable over Spiliotis in view of Athanasopoulos, Zhang, Lohia, and Duarte, and in further view of Andalman (US 20220398433 A1), hereinafter Andalman. Regarding claim 8, Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte teach the limitations of claim 1 and claim 6. Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte do not teach wherein the aggregation scheme comprises one of a most frequent value or a concatenate of unique values for categorical features of data for the forecast.
Andalman teaches, wherein the aggregation scheme (Para 0063, Aggregation) comprises (Para 0063, involve) one of a most frequent value or a concatenate (Para 0063, concatenating) of unique values (Para 0063, the features, age, information about a user/viewer, behavior) for categorical features of data (Para 0061, the features may include demographic information about a user/viewer, information about their device, and additional information that may be used to evaluate their likely behavior) for the forecast [Para 0063, Aggregation would typically involve reordering and concatenating the features into the order expected by the model (if the first input to a model is age, then the process makes sure the features are sorted so that this is the form of the input); Para 0061, the features may include demographic information about a user/viewer, information about their device, and additional information that may be used to evaluate their likely behavior (browsing history, purchasing history, or submitted queries regarding a topic, as examples)]. Andalman is analogous to the claimed invention as they both relate to prediction methods. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, Zhang, Lohia, and Duarte’s teachings to incorporate the teachings of Andalman and provide the aggregation scheme comprising one of a most frequent value or a concatenate of unique values for categorical features of data for the forecast [Andalman, Para 0067] as the retrieved features serve as inputs to the neural network or other form of model to generate an output representing a prediction or inference as to the user's/viewer's behavior. Claim(s) 9, 10, 15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Spiliotis in view of Athanasopoulos, Zhang, Lohia, and Duarte, and in further view of Amzal (US 12293302 B2), hereinafter Amzal.
Regarding claim 9, Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte teach the limitations of claim 1 including the one or more processors (see claim 1). Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte do not teach performing training for the forecast at the aggregated level of granularity. Amzal teaches, Performing training for the forecast at the aggregated level of granularity [Col 1, lines 14-16, Before the data can be used to train the machine learning algorithm, the data may be aggregated to create features.]. Amzal is analogous to the claimed invention as they both relate to predictive models. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, Zhang, Lohia, and Duarte’s teachings to incorporate the teachings of Amzal and provide performing training for the forecast at the aggregated level of granularity [Amzal, Col 1, lines 14-16] in order to create features for data aggregation and train the machine learning algorithm. Regarding claim 10, Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte teach the limitations of claim 1 including the combinations (Duarte, Abstract). Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte do not teach an accuracy of combinations of evaluation metrics are compared at the target level of granularity using a validation dataset. Amzal teaches, an accuracy (Col 3, lines 15-35, predictive performances (i.e., predictive accuracy)) of combinations (Col 3, lines 36-50, the system may create a plurality of predictive data sets for training a time-series forecasting model by changing the hierarchy levels (i.e., the level of granularity) of aggregation of different dimensions for the multidimensional data set.
The data being aggregated (e.g., the measure) may be the same or partially the same (overlapping) in more than one of the predictive data sets) of evaluation metrics (Col 3, lines 15-35, data sets) are compared (Col 3, lines 15-35, compared) at the target level of granularity (Col 3, lines 15-35, the respective instances) using a validation dataset (Col 3, lines 15-35, best possible option (or options)) [Col 3, lines 15-35, The example embodiments are directed to a system which can identify different possible aggregation hierarchies (e.g., granularity) for a multidimensional data set/model, aggregate the underlying data in these different possible hierarchies, and generate a plurality of different training data sets that correspond to the different possible hierarchies. The system can then train a number of instances of a machine learning model (e.g., time-series forecasting model, etc.) using the different training data sets. The system can then compare the predictive results of the respective instances to actual results to determine a predictive performance of the different instances of the machine learning model. The system can then rank these instances based on the predictive performances (i.e., predictive accuracy) and output this information for a user along with a description of the hierarchy levels that are used to create the training data set for an instance. Accordingly, a user can see the best possible option (or options) for aggregating the multidimensional data without having to perform a guess/check operation; Col 3, lines 36-50, the system may create a plurality of predictive data sets for training a time-series forecasting model by changing the hierarchy levels (i.e., the level of granularity) of aggregation of different dimensions for the multidimensional data set. The data being aggregated (e.g., the measure) may be the same or partially the same (overlapping) in more than one of the predictive data sets. 
However, how the data is realized is different.]. Amzal is analogous to the claimed invention as they both relate to predictive models. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, Zhang, and Lohia’s teachings to incorporate the teachings of Amzal and provide comparing an accuracy of combinations of evaluation metrics at the target level of granularity using a validation dataset [Amzal, Col 3, lines 60-65] in order to create the most accurate predictive model by outputting the best combinations. Claims 15 and 19 are system and non-transitory computer readable medium claims, respectively, that recite identical limitations to claim 10. Therefore, claims 15 and 19 are rejected using the same rationale as claim 10. Claim(s) 11, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Spiliotis in view of Athanasopoulos, Zhang, Lohia, Duarte, and Amzal, and in further view of Yang et al. (CN 105205297 A, see attached translation), hereinafter Yang. Regarding claim 11, Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte-Amzal teach the limitations of claim 1 and 10 including evaluation metrics to compare (see claim 10). Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte-Amzal do not teach generating heuristics to narrow the combination of evaluation metrics.
Yang teaches, generating heuristics (Para 0056, aᵢ can be equal to 1 or not equal to 1) to narrow (Para 0056, minimized) the combination of evaluation metrics (Para 0056, combination of prediction results) [Para 0056, Among them,… the prediction result of each dimension, x(t) is the time series value, aᵢ is the prediction weight parameter corresponding to each…, aᵢ is a rational number, aᵢ can be equal to 1 or not equal to 1, preferably aᵢ can be 1, which makes the prediction algorithm simpler and the algorithm converges faster; through the combination of prediction results of different dimensions by F(*), the error between them and the actual results is minimized.]. Yang is analogous to the claimed invention as they both relate to predictive modeling. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, Zhang, Lohia, Duarte, and Amzal’s teachings to incorporate the teachings of Yang and provide heuristics to narrow the combination of evaluation metrics [Yang, Para 0056] in order to make the algorithm simpler and converge faster. Claims 16 and 20 are system and non-transitory computer readable medium claims, respectively, that recite identical limitations to claim 11. Therefore, claims 16 and 20 are rejected using the same rationale as claim 11. Claim(s) 14 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Spiliotis in view of Athanasopoulos, Zhang, Lohia, and Duarte, and in further view of Andalman. Regarding claim 14, Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte teach the limitations of claim 12. Athanasopoulos further teaches, the target level of granularity is aggregated (Sect 3.1.2, aggregated up by the summing matrix S) via an aggregation scheme (Sect 3, s ⊂ ℝ³).
[Sect 3, coherent forecasts of lower level series aggregate to their corresponding upper level series and vice versa. Let us consider the smallest possible hierarchy with two bottom-level series, depicted in Figure 3, where y_Tot = y_A + y_B. While base forecasts could lie anywhere in ℝ³, the realisations and coherent forecasts lie in a two-dimensional subspace s ⊂ ℝ³; Sect 3.1.2, In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by ỹ^TD_{T+h|T} = S p ŷ_{Tot,T+h|T}, where p = (p₁, …, p_m)′ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast ŷ_{Tot,T+h|T} to forecasts for the bottom-level series; hence p ŷ_{Tot,T+h|T} = b̂_{T+h|T}. These are then aggregated up by the summing matrix S.]. the aggregation scheme comprises one of a sum (Sect 3.1.2, summing matrix S) or average for numerical features of data (Sect 3.1.2, m-dimensional vector consisting of a set of proportions) for the forecasting [Sect 3.1.2, In contrast, top-down approaches involve first generating forecasts for the most aggregate level and then disaggregating these down the hierarchy. In general, coherent forecasts generated from top-down approaches are given by ỹ^TD_{T+h|T} = S p ŷ_{Tot,T+h|T}, where p = (p₁, …, p_m)′ is an m-dimensional vector consisting of a set of proportions that disaggregate the top-level forecast ŷ_{Tot,T+h|T} to forecasts for the bottom-level series; hence p ŷ_{Tot,T+h|T} = b̂_{T+h|T}. These are then aggregated up by the summing matrix S.]. Athanasopoulos is analogous to the claimed invention as they both relate to forecast models.
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis’s teachings to incorporate the teachings of Athanasopoulos and provide an aggregation scheme summing or averages for numerical features [Athanasopoulos, Abstract] as applying forecast reconciliation methods results in generating forecasts that are coherent with the aggregation constraints through exploiting inherent aggregation structures. Spiliotis-Athanasopoulos-Zhang-Lohia-Duarte do not teach the aggregation scheme comprising one of a most frequent value or a concatenate of unique values for categorical features of data for the forecast. Andalman teaches, the aggregation scheme (Para 0063, Aggregation) comprising (Para 0063, involve) one of a most frequent value or a concatenate (Para 0063, concatenating) of unique values (Para 0063, the features, age, information about a user/viewer, behavior) for categorical features of data (Para 0061, the features may include demographic information about a user/viewer, information about their device, and additional information that may be used to evaluate their likely behavior) for the forecast [Para 0063, Aggregation would typically involve reordering and concatenating the features into the order expected by the model (if the first input to a model is age, then the process makes sure the features are sorted so that this is the form of the input); Para 0061, the features may include demographic information about a user/viewer, information about their device, and additional information that may be used to evaluate their likely behavior (browsing history, purchasing history, or submitted queries regarding a topic, as examples)]. Andalman is analogous to the claimed invention as they both relate to prediction methods.
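The two families of aggregation schemes discussed for claim 14, sum or average for numerical features and most-frequent value or concatenated unique values for categorical features, can be sketched generically. The feature names, rows, and the "|" separator below are hypothetical; this is an illustration of the claimed schemes, not code from Athanasopoulos or Andalman.

```python
from collections import Counter

def aggregate_numerical(values, scheme="sum"):
    """Aggregate a numerical feature by sum or by average."""
    return sum(values) if scheme == "sum" else sum(values) / len(values)

def aggregate_categorical(values, scheme="most_frequent"):
    """Aggregate a categorical feature by its most frequent value or by
    concatenating its unique values (first-seen order)."""
    if scheme == "most_frequent":
        return Counter(values).most_common(1)[0][0]
    return "|".join(dict.fromkeys(values))  # concatenate unique values

# Hypothetical rows being rolled up to a coarser level of granularity.
units = [10, 20, 30]
regions = ["west", "east", "west"]
total = aggregate_numerical(units)                        # sum scheme
top_region = aggregate_categorical(regions)               # most frequent
all_regions = aggregate_categorical(regions, "concat_unique")
```

Numerical columns compose under summation (which is what makes the summing-matrix reconciliation work), while categorical columns need a convention such as these because they cannot simply be added.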
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Spiliotis, Athanasopoulos, Zhang, and Lohia’s teachings to incorporate the teachings of Andalman and provide the aggregation scheme comprising one of a most frequent value or a concatenate of unique values for categorical features of data for the forecast [Andalman, Para 0067] as the retrieved features serve as inputs to the neural network or other form of model to generate an output representing a prediction or inference as to the user's/viewer's behavior. Claim 18 is a non-transitory computer readable medium claim that recites identical limitations to claim 14. Therefore, claim 18 is rejected using the same rationale as claim 14. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED RAYHAN AHMED whose telephone number is (571)270-0286. The examiner can normally be reached Mon-Fri ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SYED RAYHAN AHMED/Examiner, Art Unit 2126 /VAN C MANG/Primary Examiner, Art Unit 2126

Prosecution Timeline

Jul 12, 2022
Application Filed
Jul 01, 2025
Non-Final Rejection — §101, §103, §112
Sep 02, 2025
Interview Requested
Sep 09, 2025
Examiner Interview Summary
Sep 09, 2025
Applicant Interview (Telephonic)
Sep 18, 2025
Response Filed
Dec 29, 2025
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12450891
IMAGE CLASSIFIER COMPRISING A NON-INJECTIVE TRANSFORMATION
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+50.0%)
4y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
