DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The following NON-FINAL Office Action is in response to Applicant’s communication filed 01/31/2025 regarding Application 19/042,209. The following is the first action on the merits.
Priority Acknowledgment
Examiner acknowledges Applicant’s priority claim to Foreign Application JP2024-048727 with priority filing date of 03/25/2024.
Status of Claim(s)
Claim(s) 1-8 are currently pending and are rejected as follows.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim(s) 1-8 are directed towards an invention for displaying a plurality of pieces of information indicating accuracies of demand predictions of a plurality of products belonging to a user-designated class; accepting a display instruction that a prediction accuracy should be displayed product by product; displaying, based on an index for each of the products belonging to the class, information indicating a product which has a high probability of requiring a prediction reassessment; and, when accepting the display instruction, displaying an analysis result for the corresponding product. These actions fall under subject matter groupings which the courts have considered ineligible (Certain Methods of Organizing Human Activity, Mental Processes, and Mathematical Concepts). These claims do not integrate the abstract idea into a practical application, and do not include additional elements that provide an inventive concept (i.e., that are sufficient to amount to significantly more than the abstract idea).
Under Step 1 of the Alice/Mayo framework, it must be considered whether the claims are directed to one of the four statutory categories of invention. Claim(s) 1-6 are directed towards an apparatus, Claim 7 is directed towards a method comprising at least one step, and Claim 8 is directed towards a product. Accordingly, the claims fall within the four statutory categories of invention (apparatus, method, and product) and will be further analyzed under Step 2 of the Alice/Mayo framework.
Under Step 2A, Prong One, of the Alice/Mayo framework, it must be considered whether the claims recite any abstract ideas.
Independent claims 1 and 7-8 recite an invention for displaying a plurality of pieces of information indicating accuracies of demand predictions of a plurality of products belonging to a user-designated class; accepting a display instruction that a prediction accuracy should be displayed product by product; displaying, based on an index for each of the products belonging to the class, information indicating a product which has a high probability of requiring a prediction reassessment; and, when accepting the display instruction, displaying an analysis result for the corresponding product. The claims recite the abstract ideas of Certain Methods of Organizing Human Activity, Mental Processes, and Mathematical Concepts in the following limitations:
displaying a plurality of pieces of information which indicate accuracies of demand predictions of a plurality of products belonging to a class designated by a user;
accepting a[n]…instruction that a prediction accuracy should be displayed product by product;
displaying, based on an index obtained for each of the plurality of products belonging to the class by multiplying an absolute value of an error ratio of a demand prediction of that product by a weight value assigned to that product, information indicating a product which has a high possibility of requiring a prediction reassessment, in a case of accepting the display instruction in the accepting of the…instruction;
accepting a selection made by the user with respect to the product displayed in the displaying of the information indicating the product which has the high possibility; and
displaying an analysis result regarding demand for a product corresponding to the selection accepted in the accepting of the selection.
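For illustration only, the index recited in the second displaying process (the absolute value of a product's demand-prediction error ratio multiplied by a weight value assigned to that product) can be sketched as follows; the function name, product labels, and sample figures are hypothetical and are offered solely to clarify the recited computation, not to characterize Applicant's actual implementation.

```python
def reassessment_index(error_ratio: float, weight: float) -> float:
    # Index = |error ratio of the demand prediction| x weight assigned to the product.
    return abs(error_ratio) * weight

# Hypothetical products in the user-designated class: (error ratio, weight).
products = {"A": (-0.25, 4.0), "B": (0.5, 1.0), "C": (0.125, 2.0)}

# Products with the largest index values are those with a high possibility
# of requiring a prediction reassessment.
ranked = sorted(products, key=lambda p: reassessment_index(*products[p]), reverse=True)
```

Under this sketch, product A (index 1.0) would be indicated ahead of B (0.5) and C (0.25).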
Dependent claim(s) 2-6 merely further limit the abstract idea and are thus subject to the same rationale expressed above.
Under Step 2A, Prong Two, it must be considered whether the claims recite any additional elements that integrate the abstract idea into a practical application.
Independent claim(s) 1 and 7-8 recite:
at least one processor
a display
a computer-readable non-transitory recording medium
These additional elements, considered both individually and as an ordered combination, do no more than represent mere instructions to implement the abstract idea ("apply it") on a computer (See MPEP 2106.05(f)). Additionally, the claims represent insignificant extra-solution activity (See MPEP 2106.05(g)). These elements are recited with a high degree of generality, and the specification sets forth the general-purpose nature of the technologies required to implement the invention (emphasis added).
Support for this determination can be found in Paragraph(s) [0122]-[0127] of Applicant’s specification.
Under Step 2B, the eligibility analysis evaluates whether the claims as a whole amount to significantly more than the recited exception, i.e., whether any additional element, or combination of elements, adds an inventive concept to the claims (MPEP 2106.05). As explained with respect to Step 2A, Prong Two, there are several additional elements. The at least one processor, computer-readable non-transitory recording medium, and display are all, at best, merely examples of applying an exception and cannot provide an inventive concept (See MPEP 2106.05(f)). Further, the at least one processor represents insignificant extra-solution activity (See MPEP 2106.05(g)), specifically that of mere data gathering, which is known to be well-understood, routine, or conventional within the art (See MPEP 2106.05(d)(II)). Insignificant extra-solution activity, especially that which is well-understood, routine, or conventional in the art, does not provide an inventive concept. Even when considered in combination, these additional elements are not deemed sufficient to provide an inventive concept onto the abstract idea; therefore, the claims are not eligible. (Alice Corp., 134 S. Ct. at 2358, 110 USPQ2d at 1983; see also 134 S. Ct. at 2389, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on "the draftsman's art")).
Dependent claim(s) 2-6 do not recite any further additional elements and are therefore rejected for the same reasons enumerated above.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-8 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yanchenko (US 2024/0211801 A1).
Claim(s) 1 and 7-8 –
Yanchenko discloses the following:
A computer-readable non-transitory recording medium (Yanchenko: Paragraph 82, “Persistent storage 813 is any form of non-volatile storage for computers that is now known or to be developed in the future. The non-volatility of this storage means that the stored data is maintained regardless of whether power is being supplied to computer 801 and/or directly to persistent storage 813. Persistent storage 813 may be a read only memory (ROM), but typically at least a portion of the persistent storage allows writing of data, deletion of data and re-writing of data. Some familiar forms of persistent storage include magnetic disks and solid state storage devices. Operating system 822 may take several forms, such as various known proprietary operating systems or open source Portable Operating System Interface type operating systems that employ a kernel. The code included in block 200 typically includes at least some of the computer code involved in performing the inventive methods.”)
at least one processor, the at least one processor carrying out: (Yanchenko: Paragraph 78, “Processor set 810 includes one, or more, computer processors of any type now known or to be developed in the future. Processing circuitry 820 may be distributed over multiple packages, for example, multiple, coordinated integrated circuit chips. Processing circuitry 820 may implement multiple processor threads and/or multiple processor cores. Cache 821 is memory that is located in the processor chip package(s) and is typically used for data or code that should be available for rapid access by the threads or cores running on processor set 810. Cache memories are typically organized into multiple levels depending upon relative proximity to the processing circuitry. Alternatively, some, or all, of the cache for the processor set may be located “off chip.” In some computing environments, processor set 810 may be designed for working with qubits and performing quantum computing.”)
a first displaying process of displaying a plurality of pieces of information which indicate accuracies of demand predictions of a plurality of products belonging to a class designated by a user; (Yanchenko: Paragraph 33, “Contrary to this brute force method, the illustrative embodiments provide an artificial intelligence (AI) based solution that involves a machine learning training of a machine learning computer model, e.g., neural network, deep neural network, random forest, support vector machine, a light gradient boosting machine (LightGBM), or the like, to thereby cause the computer model to learn from historical data and prior reconciliation performance, to predict the performance of particular reconciliation computer tool/model performance for given reconciliation tools/models. That is, given a set of input dataset features, forecast computer model features, and reconciliation computing tool/model features, the trained machine learning computer model predicts the performance of the set of reconciliation computing tools/models. From this prediction of performance, the reconciliation computing tools/models may be ranked relative to one another, e.g., highest to lowest performance, and one or more of the highest ranked reconciliation computing tools/models may be selected for use in performing reconciliation of forecasts for a given combination of time series dataset and forecast computer model features. The ranking and selection may be used to generate an output, such as a dashboard or the like for presentation to an authorized user, may be used to automatically select a reconciliation computing tool/model that is applied to the forecast data generated by the forecast computer model based on the input time series dataset, or the like. 
In the case of a dashboard, various views of the data used to generate the ranking of reconciliation computing tools/models as well as the basis for the rankings may be provided in the dashboard.”; Paragraph 38, “Moreover, a catalog of reconciliation computing tools/models may be maintained and utilized to retrieve reconciliation computing tool/model features, such as the encoding method used, additional parameter encoding features, and the like. These features will be input along with the features of the forecast computing model and time series dataset features to predict performance of the reconciliation computing tools/models in the catalog and rank them relative to one another. The illustrative embodiments may automatically select a reconciliation computing tool/model from the catalog for use with the time series dataset and forecast computing model to thereby automatically generate a reconciled forecast dataset. In some cases, the illustrative embodiments may generate a recommendation output, such as via a dashboard or the like, that outputs the results of the prediction of performance and ranking of reconciliation computing tools/models.”; Paragraph 39, “In some illustrative embodiments, from the operation of the illustrative embodiments, a set of rules may be generated that correlates time series dataset features with reconciliation computing tools/models that provide the highest ranking performance. This set of rules may be provided as an output for execution to select a reconciliation computer tool/model based on an analysis of time series dataset features. 
For example, a set of rules may include a first rule that states “If time series demonstrates consistent seasonality, and forecast quality is good, select a top-down reconciliation computing tool/model” and a rule that specifies “If higher levels in the hierarchy are noisy, select a bottom-up reconciliation computing tool/model”, and further a rule that specifies “if the time series dataset demonstrates non-stationarity, then select a dynamic reconciliation computing tool/model”. These rules may be automatically generated based on a correlation of reconciliation computing tool/model performance predictions to time series dataset features. There may be a separate set of rules for each forecast computer model. Thus, when these rules are deployed, all one need do is run an analysis of the time series dataset to extract the time series dataset's features and then execute the set of rules on these features to identify a reconciliation computing tool/model to use with the forecast data generated by the forecast computer model to thereby generate reconciled forecast data.”; Paragraph 63, “The results generated by the ranking engine 430 and the selection engine 450 may be used by the output engine 440 to generate a dashboard 470 output or other output interface that informs the end user of the recommended reconciliation computer tool(s)/model(s) for the time series dataset and forecast computer model. 
This dashboard 470 may specify the recommendation, the relative ranking, the reasoning for the relative ranking, e.g., ranking scores and weighted factors for the various components of the ranking scores, predicted performance, etc.”; Paragraph 75, “The present invention may be a specifically configured computing system, configured with hardware and/or software that is itself specifically configured to implement the particular mechanisms and functionality described herein, a method implemented by the specifically configured computing system, and/or a computer program product comprising software logic that is loaded into a computing system to specifically configure the computing system to implement the mechanisms and functionality described herein. Whether recited as a system, method, of computer program product, it should be appreciated that the illustrative embodiments described herein are specifically directed to an improved computing tool and the methodology implemented by this improved computing tool. In particular, the improved computing tool of the illustrative embodiments specifically provides automated reconciliation computer tool/model performance predictions, ranking, and selection taking into consideration the features of a hierarchical dataset, features of the reconciliation computer tool/model, and features of the forecast computer model. The improved computing tool implements mechanism and functionality, such as the reconciliation computing tool/model recommendation engine 400 in FIG. 4, which cannot be practically performed by human beings either outside of, or with the assistance of, a technical environment, such as a mental process or the like. 
The improved computing tool provides a practical application of the methodology at least in that the improved computing tool is able to provide intelligent artificial intelligence based performance prediction of reconciliation computer tools/models and ultimately ranking and selection of a reconciliation computer tool/model from a large set of candidates while avoiding accuracy issues of static selection of a reconciliation computer tool/model and avoiding brute force methods of executing a plurality of reconciliation computer tools/models on the hierarchical data and determining after the fact which provides a highest accuracy result”)
a first accepting process of accepting a display instruction that a prediction accuracy should be displayed product by product; (Yanchenko: Paragraph 33, “Contrary to this brute force method, the illustrative embodiments provide an artificial intelligence (AI) based solution that involves a machine learning training of a machine learning computer model, e.g., neural network, deep neural network, random forest, support vector machine, a light gradient boosting machine (LightGBM), or the like, to thereby cause the computer model to learn from historical data and prior reconciliation performance, to predict the performance of particular reconciliation computer tool/model performance for given reconciliation tools/models. That is, given a set of input dataset features, forecast computer model features, and reconciliation computing tool/model features, the trained machine learning computer model predicts the performance of the set of reconciliation computing tools/models. From this prediction of performance, the reconciliation computing tools/models may be ranked relative to one another, e.g., highest to lowest performance, and one or more of the highest ranked reconciliation computing tools/models may be selected for use in performing reconciliation of forecasts for a given combination of time series dataset and forecast computer model features. The ranking and selection may be used to generate an output, such as a dashboard or the like for presentation to an authorized user, may be used to automatically select a reconciliation computing tool/model that is applied to the forecast data generated by the forecast computer model based on the input time series dataset, or the like. 
In the case of a dashboard, various views of the data used to generate the ranking of reconciliation computing tools/models as well as the basis for the rankings may be provided in the dashboard.”; Paragraph 38, “Moreover, a catalog of reconciliation computing tools/models may be maintained and utilized to retrieve reconciliation computing tool/model features, such as the encoding method used, additional parameter encoding features, and the like. These features will be input along with the features of the forecast computing model and time series dataset features to predict performance of the reconciliation computing tools/models in the catalog and rank them relative to one another. The illustrative embodiments may automatically select a reconciliation computing tool/model from the catalog for use with the time series dataset and forecast computing model to thereby automatically generate a reconciled forecast dataset. In some cases, the illustrative embodiments may generate a recommendation output, such as via a dashboard or the like, that outputs the results of the prediction of performance and ranking of reconciliation computing tools/models.”; Paragraph 39, “In some illustrative embodiments, from the operation of the illustrative embodiments, a set of rules may be generated that correlates time series dataset features with reconciliation computing tools/models that provide the highest ranking performance. This set of rules may be provided as an output for execution to select a reconciliation computer tool/model based on an analysis of time series dataset features. 
For example, a set of rules may include a first rule that states “If time series demonstrates consistent seasonality, and forecast quality is good, select a top-down reconciliation computing tool/model” and a rule that specifies “If higher levels in the hierarchy are noisy, select a bottom-up reconciliation computing tool/model”, and further a rule that specifies “if the time series dataset demonstrates non-stationarity, then select a dynamic reconciliation computing tool/model”. These rules may be automatically generated based on a correlation of reconciliation computing tool/model performance predictions to time series dataset features. There may be a separate set of rules for each forecast computer model. Thus, when these rules are deployed, all one need do is run an analysis of the time series dataset to extract the time series dataset's features and then execute the set of rules on these features to identify a reconciliation computing tool/model to use with the forecast data generated by the forecast computer model to thereby generate reconciled forecast data.”; Paragraph 63, “The results generated by the ranking engine 430 and the selection engine 450 may be used by the output engine 440 to generate a dashboard 470 output or other output interface that informs the end user of the recommended reconciliation computer tool(s)/model(s) for the time series dataset and forecast computer model. 
This dashboard 470 may specify the recommendation, the relative ranking, the reasoning for the relative ranking, e.g., ranking scores and weighted factors for the various components of the ranking scores, predicted performance, etc.”; Paragraph 75, “The present invention may be a specifically configured computing system, configured with hardware and/or software that is itself specifically configured to implement the particular mechanisms and functionality described herein, a method implemented by the specifically configured computing system, and/or a computer program product comprising software logic that is loaded into a computing system to specifically configure the computing system to implement the mechanisms and functionality described herein. Whether recited as a system, method, of computer program product, it should be appreciated that the illustrative embodiments described herein are specifically directed to an improved computing tool and the methodology implemented by this improved computing tool. In particular, the improved computing tool of the illustrative embodiments specifically provides automated reconciliation computer tool/model performance predictions, ranking, and selection taking into consideration the features of a hierarchical dataset, features of the reconciliation computer tool/model, and features of the forecast computer model. The improved computing tool implements mechanism and functionality, such as the reconciliation computing tool/model recommendation engine 400 in FIG. 4, which cannot be practically performed by human beings either outside of, or with the assistance of, a technical environment, such as a mental process or the like. 
The improved computing tool provides a practical application of the methodology at least in that the improved computing tool is able to provide intelligent artificial intelligence based performance prediction of reconciliation computer tools/models and ultimately ranking and selection of a reconciliation computer tool/model from a large set of candidates while avoiding accuracy issues of static selection of a reconciliation computer tool/model and avoiding brute force methods of executing a plurality of reconciliation computer tools/models on the hierarchical data and determining after the fact which provides a highest accuracy result”)
a second displaying process of displaying, based on an index obtained for each of the plurality of products belonging to the class by multiplying an absolute value of an error ratio of a demand prediction of that product by a weight value assigned to that product, information indicating a product which has a high possibility of requiring a prediction reassessment, in a case of accepting the display instruction in the first accepting process; (Yanchenko: Paragraph 33, “Contrary to this brute force method, the illustrative embodiments provide an artificial intelligence (AI) based solution that involves a machine learning training of a machine learning computer model, e.g., neural network, deep neural network, random forest, support vector machine, a light gradient boosting machine (LightGBM), or the like, to thereby cause the computer model to learn from historical data and prior reconciliation performance, to predict the performance of particular reconciliation computer tool/model performance for given reconciliation tools/models. That is, given a set of input dataset features, forecast computer model features, and reconciliation computing tool/model features, the trained machine learning computer model predicts the performance of the set of reconciliation computing tools/models. From this prediction of performance, the reconciliation computing tools/models may be ranked relative to one another, e.g., highest to lowest performance, and one or more of the highest ranked reconciliation computing tools/models may be selected for use in performing reconciliation of forecasts for a given combination of time series dataset and forecast computer model features. 
The ranking and selection may be used to generate an output, such as a dashboard or the like for presentation to an authorized user, may be used to automatically select a reconciliation computing tool/model that is applied to the forecast data generated by the forecast computer model based on the input time series dataset, or the like. In the case of a dashboard, various views of the data used to generate the ranking of reconciliation computing tools/models as well as the basis for the rankings may be provided in the dashboard.”; Paragraph 38, “Moreover, a catalog of reconciliation computing tools/models may be maintained and utilized to retrieve reconciliation computing tool/model features, such as the encoding method used, additional parameter encoding features, and the like. These features will be input along with the features of the forecast computing model and time series dataset features to predict performance of the reconciliation computing tools/models in the catalog and rank them relative to one another. The illustrative embodiments may automatically select a reconciliation computing tool/model from the catalog for use with the time series dataset and forecast computing model to thereby automatically generate a reconciled forecast dataset. In some cases, the illustrative embodiments may generate a recommendation output, such as via a dashboard or the like, that outputs the results of the prediction of performance and ranking of reconciliation computing tools/models.”; Paragraph 39, “In some illustrative embodiments, from the operation of the illustrative embodiments, a set of rules may be generated that correlates time series dataset features with reconciliation computing tools/models that provide the highest ranking performance. This set of rules may be provided as an output for execution to select a reconciliation computer tool/model based on an analysis of time series dataset features. 
For example, a set of rules may include a first rule that states “If time series demonstrates consistent seasonality, and forecast quality is good, select a top-down reconciliation computing tool/model” and a rule that specifies “If higher levels in the hierarchy are noisy, select a bottom-up reconciliation computing tool/model”, and further a rule that specifies “if the time series dataset demonstrates non-stationarity, then select a dynamic reconciliation computing tool/model”. These rules may be automatically generated based on a correlation of reconciliation computing tool/model performance predictions to time series dataset features. There may be a separate set of rules for each forecast computer model. Thus, when these rules are deployed, all one need do is run an analysis of the time series dataset to extract the time series dataset's features and then execute the set of rules on these features to identify a reconciliation computing tool/model to use with the forecast data generated by the forecast computer model to thereby generate reconciled forecast data.”; Paragraph 40, “Thus, the mechanisms of the illustrative embodiments are able to predict the performance of reconciliation computing tools/models so that informed decisions can be made regarding which reconciliation computing tools/models to utilize to reconcile forecasts generated for hierarchical datasets, such as time series datasets or the like. The “performance” of a reconciliation computing tool/model may be, for example, a measure of the accuracy of the reconciled forecast data relative to the actual data at each level of the hierarchy. The performance may be measured using various accuracy metrics, such as mean squared error, mean absolute error, symmetric mean absolute percentage error, or the like. 
In order to determine the performance, forecasts are produced using the underlying forecast models, the reconciliation approach is applied, and the reconciled forecasts and ground truth, e.g., actual data, are used to compute the performance metric.”; Paragraph 42, “The illustrative embodiments avoid having to perform an exhaustive search of reconciliation computing tools/models by implementing all of the reconciliation computing tools/models on the forecasts and then evaluating the results after the fact. Moreover, the illustrative embodiments avoid merely selecting a reconciliation computing tool/model and using the same selection for all input time series datasets and forecast computing models. To the contrary, the illustrative embodiments adapt the predictions of performance to the particular features of the time series dataset, the forecast computing models, and the reconciliation computing tools/models being evaluated, such that the most likely highest performing reconciliation computing tools/models may be identified ahead of time, selected, and applied, thereby reducing resource expenditures, reliance on inaccurate reconciled forecast data, and maximizing accuracy of the reconciled forecast data.”; Paragraph 63, “The results generated by the ranking engine 430 and the selection engine 450 may be used by the output engine 440 to generate a dashboard 470 output or other output interface that informs the end user of the recommended reconciliation computer tool(s)/model(s) for the time series dataset and forecast computer model. 
This dashboard 470 may specify the recommendation, the relative ranking, the reasoning for the relative ranking, e.g., ranking scores and weighted factors for the various components of the ranking scores, predicted performance, etc.”; Paragraph 75, “The present invention may be a specifically configured computing system, configured with hardware and/or software that is itself specifically configured to implement the particular mechanisms and functionality described herein, a method implemented by the specifically configured computing system, and/or a computer program product comprising software logic that is loaded into a computing system to specifically configure the computing system to implement the mechanisms and functionality described herein. Whether recited as a system, method, of computer program product, it should be appreciated that the illustrative embodiments described herein are specifically directed to an improved computing tool and the methodology implemented by this improved computing tool. In particular, the improved computing tool of the illustrative embodiments specifically provides automated reconciliation computer tool/model performance predictions, ranking, and selection taking into consideration the features of a hierarchical dataset, features of the reconciliation computer tool/model, and features of the forecast computer model. The improved computing tool implements mechanism and functionality, such as the reconciliation computing tool/model recommendation engine 400 in FIG. 4, which cannot be practically performed by human beings either outside of, or with the assistance of, a technical environment, such as a mental process or the like. 
The improved computing tool provides a practical application of the methodology at least in that the improved computing tool is able to provide intelligent artificial intelligence based performance prediction of reconciliation computer tools/models and ultimately ranking and selection of a reconciliation computer tool/model from a large set of candidates while avoiding accuracy issues of static selection of a reconciliation computer tool/model and avoiding brute force methods of executing a plurality of reconciliation computer tools/models on the hierarchical data and determining after the fact which provides a highest accuracy result”)
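Purely as an illustrative sketch (not part of the record or the Yanchenko reference), the performance evaluation described in quoted Paragraph 40 above — applying reconciliation to forecasts and comparing the reconciled forecasts against ground-truth actuals to compute a metric such as mean squared error or mean absolute error — could be expressed as follows. All function and variable names are the editor's assumptions.

```python
# Illustrative only: scoring reconciled forecasts against ground-truth
# actuals, per the quoted description ("the reconciled forecasts and
# ground truth, e.g., actual data, are used to compute the performance
# metric"). Names are hypothetical, not from the reference.

def mean_squared_error(reconciled, actual):
    """Average squared error across all levels of the hierarchy."""
    return sum((r - a) ** 2 for r, a in zip(reconciled, actual)) / len(actual)

def mean_absolute_error(reconciled, actual):
    """Average absolute error across all levels of the hierarchy."""
    return sum(abs(r - a) for r, a in zip(reconciled, actual)) / len(actual)

reconciled = [100.0, 60.0, 40.0]   # forecasts after reconciliation (top, A, B)
actual     = [ 98.0, 62.0, 36.0]   # ground-truth observations

mse = mean_squared_error(reconciled, actual)
mae = mean_absolute_error(reconciled, actual)
```

A lower score under either metric indicates a better-performing reconciliation computing tool/model for the given dataset and forecast model.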
a second accepting process of accepting a selection made by the user with respect to the product displayed in the second displaying process; and (Yanchenko: Paragraph 50, “As described above, the illustrative embodiments of the present invention are specifically directed to an improved computing tool that automatically identify hierarchical reconciliation processes for producing coherent forecasts. In particular, the illustrative embodiments provide an artificial intelligence (AI) based computing tool that automatically predicts the performance of reconciliation computing tools/models on particular time series datasets and forecast computing models so as to relatively rank them to one another and, in some cases, generate recommendation outputs, automatically execute a selected reconciliation computing tool/model, and/or generate a set of reconciliation computing tool/model selection rules that may be executed on features of time series datasets to select a reconciliation computing tool/model to utilize with forecast data generated by a forecast computing model. All of the functions of the illustrative embodiments as described herein are intended to be performed using automated processes without human intervention. While a human being may initiate operation of the illustrative embodiments and/or provide some of the input data used by the mechanisms of the illustrative embodiments, the illustrative embodiments of the present invention are not directed to actions performed by the human being but rather logic and functions performed specifically by the improved computing tool based on the machine learning training of the machine learning computer model and/or automated rule generation engine. 
Moreover, even though the present invention may provide an output that ultimately assists human beings with regard to making decisions based on forecasts, the illustrative embodiments of the present invention are not directed to actions performed by the human being viewing the results of the processing performed by the illustrative embodiments, but rather to the specific operations performed by the specific improved computing tool of the present invention. Thus, the illustrative embodiments are not organizing any human activity, but are in fact directed to the automated logic and functionality of an improved computing tool.”; Paragraph 54, “In general, the recon recommendation engine 400 learns a meta model that captures the relationship between properties, or features, of the underlying time series (TS) dataset, forecast computer model, and reconciliation computing tools/models, to the performance of the reconciliation computing tools/models. During a machine learning training operation, the machine learning training is performed based on historical data 402-404 and a ground truth data set 406. During runtime operation, the operations are based on runtime data 408 which have similar components to historical data 402-404, but are new datasets and not labeled training data corresponding to ground truth data 408. The following will first describe a machine learning training of the recon prediction model 420 followed by a discussion of the runtime operation.”; Paragraph 62, “The recon selection engine 450 may operate based on the relative ranking generated by the ranking engine 430 to select one or more reconciliation computer tools/models for recommending to the end user. For example, in some cases, a top-k selection process may be specified for selecting the top-k ranked reconciliation computer tools/models for use with the time series dataset and forecast computer model specified in the runtime data 408. 
In other implementations, different selection criteria may be used to select from the ranking of reconciliation computer tools/models, e.g., using a ranking which balances anticipated performance, e.g., represented by MSE, MAE, or other metrics, as well as computational complexity, e.g., bottom-up reconciliation being simpler than MinT.”; Paragraph 68, “Thus, the mechanisms provide an automated computing tool for predicting the performance of reconciliation computer tools/models with regard to features of a hierarchical dataset, e.g., a time series dataset, and a forecast computer model which operates on the hierarchical dataset. The mechanisms provide one or more specifically trained machine learning trained computer models to make such predictions which take into account the features of the time series dataset, the features of the forecast computer model, and the features of the reconciliation computer tools/models to determine the predicted performance. Based on the predicted performance, the candidate reconciliation computer tools/models may be ranked relative to one another and a selection of a candidate reconciliation computer tool/model to use to perform reconciliation of forecast data is made. This recommendation and corresponding supporting data may be presented through a dashboard interface, may be used for automatic execution of the selected reconciliation computer tool/model, and the like. In some cases, the performance predictions, rankings, and selections may be used along with the input features to automatically generate rule sets for forecast computer models to correlate features of a time series dataset with a selection of a reconciliation computer tool/model to use with the forecast data of the forecast computer model to generate reconciled forecast data.”; Paragraph 74, “A determination is made as to whether a matching rule is found (step 750). 
If a matching rule is found, the reconciliation computer tool/model specified in the matched rule is selected (step 760). If a matching rule is not found, then a default reconciliation computer tool/model may be selected (step 770). The selected reconciliation computer tool/model is then used as a basis for generating an output (step 780). The output may be a dashboard output specifying a recommendation of a reconciliation computer tool/model for use with the forecast computer model and time series dataset. The output may be an automated execution of the selected reconciliation computer tool/model on forecast data generated by the forecast computer model executed on the time series dataset. The operation then terminates.”)
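As an illustration only (not part of the record or the cited reference), the rule-matching flow quoted from Paragraph 74 above — select the tool/model of a matching rule (step 760), else fall back to a default (step 770) — could be sketched as follows. The rule representation and feature names are the editor's assumptions.

```python
# Illustrative only: rule-based selection with a default fallback,
# mirroring quoted steps 750-770. Rule format and feature names are
# hypothetical, not from the reference.

def select_reconciliation_tool(features, rules, default="bottom_up"):
    """Return the tool named by the first matching rule, else the default."""
    for condition, tool in rules:
        if all(features.get(k) == v for k, v in condition.items()):
            return tool  # step 760: matched rule's tool/model is selected
    return default       # step 770: no match, default tool/model is selected

rules = [
    ({"seasonality": "weekly", "trend": True}, "MinT"),
    ({"seasonality": "none"}, "bottom_up"),
]

choice = select_reconciliation_tool({"seasonality": "weekly", "trend": True}, rules)
```

The selected tool/model would then drive the output generation of step 780 (a dashboard recommendation or an automated execution).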
a third displaying process of displaying an analysis result regarding demand for a product corresponding to the selection accepted in the second accepting process. (Yanchenko: Paragraph 50, “As described above, the illustrative embodiments of the present invention are specifically directed to an improved computing tool that automatically identify hierarchical reconciliation processes for producing coherent forecasts. In particular, the illustrative embodiments provide an artificial intelligence (AI) based computing tool that automatically predicts the performance of reconciliation computing tools/models on particular time series datasets and forecast computing models so as to relatively rank them to one another and, in some cases, generate recommendation outputs, automatically execute a selected reconciliation computing tool/model, and/or generate a set of reconciliation computing tool/model selection rules that may be executed on features of time series datasets to select a reconciliation computing tool/model to utilize with forecast data generated by a forecast computing model. All of the functions of the illustrative embodiments as described herein are intended to be performed using automated processes without human intervention. While a human being may initiate operation of the illustrative embodiments and/or provide some of the input data used by the mechanisms of the illustrative embodiments, the illustrative embodiments of the present invention are not directed to actions performed by the human being but rather logic and functions performed specifically by the improved computing tool based on the machine learning training of the machine learning computer model and/or automated rule generation engine. 
Moreover, even though the present invention may provide an output that ultimately assists human beings with regard to making decisions based on forecasts, the illustrative embodiments of the present invention are not directed to actions performed by the human being viewing the results of the processing performed by the illustrative embodiments, but rather to the specific operations performed by the specific improved computing tool of the present invention. Thus, the illustrative embodiments are not organizing any human activity, but are in fact directed to the automated logic and functionality of an improved computing tool.”; Paragraph 54, “In general, the recon recommendation engine 400 learns a meta model that captures the relationship between properties, or features, of the underlying time series (TS) dataset, forecast computer model, and reconciliation computing tools/models, to the performance of the reconciliation computing tools/models. During a machine learning training operation, the machine learning training is performed based on historical data 402-404 and a ground truth data set 406. During runtime operation, the operations are based on runtime data 408 which have similar components to historical data 402-404, but are new datasets and not labeled training data corresponding to ground truth data 408. The following will first describe a machine learning training of the recon prediction model 420 followed by a discussion of the runtime operation.”; Paragraph 62, “The recon selection engine 450 may operate based on the relative ranking generated by the ranking engine 430 to select one or more reconciliation computer tools/models for recommending to the end user. For example, in some cases, a top-k selection process may be specified for selecting the top-k ranked reconciliation computer tools/models for use with the time series dataset and forecast computer model specified in the runtime data 408. 
In other implementations, different selection criteria may be used to select from the ranking of reconciliation computer tools/models, e.g., using a ranking which balances anticipated performance, e.g., represented by MSE, MAE, or other metrics, as well as computational complexity, e.g., bottom-up reconciliation being simpler than MinT.”; Paragraph 68, “Thus, the mechanisms provide an automated computing tool for predicting the performance of reconciliation computer tools/models with regard to features of a hierarchical dataset, e.g., a time series dataset, and a forecast computer model which operates on the hierarchical dataset. The mechanisms provide one or more specifically trained machine learning trained computer models to make such predictions which take into account the features of the time series dataset, the features of the forecast computer model, and the features of the reconciliation computer tools/models to determine the predicted performance. Based on the predicted performance, the candidate reconciliation computer tools/models may be ranked relative to one another and a selection of a candidate reconciliation computer tool/model to use to perform reconciliation of forecast data is made. This recommendation and corresponding supporting data may be presented through a dashboard interface, may be used for automatic execution of the selected reconciliation computer tool/model, and the like. In some cases, the performance predictions, rankings, and selections may be used along with the input features to automatically generate rule sets for forecast computer models to correlate features of a time series dataset with a selection of a reconciliation computer tool/model to use with the forecast data of the forecast computer model to generate reconciled forecast data.”; Paragraph 72, “Feature extraction is performed on the runtime data and reconciliation computer tool/model characteristics, which may be retrieved from the catalog data structure (step 620). 
The features are encoded and input to the trained reconciliation prediction computer model(s), which may be trained through a process such as described above with regard to FIG. 5. The trained reconciliation prediction computer model(s) evaluate the input features of the time series dataset and forecast computer model and generate, for each candidate reconciliation computer tool/model being considered, a predicted performance (step 630). The predictions are used to rank the reconciliation computer tools/models relative to one another (step 640) and a reconciliation computer tool/model is selected for application to the forecast data generated by the forecast computer model on the time series dataset (step 650). An output is generated based on the selection, which may include a dashboard recommendation output or an automatic execution of the selected reconciliation computer tool/model (step 660). The operation then terminates.”; Paragraph 74, “A determination is made as to whether a matching rule is found (step 750). If a matching rule is found, the reconciliation computer tool/model specified in the matched rule is selected (step 760). If a matching rule is not found, then a default reconciliation computer tool/model may be selected (step 770). The selected reconciliation computer tool/model is then used as a basis for generating an output (step 780). The output may be a dashboard output specifying a recommendation of a reconciliation computer tool/model for use with the forecast computer model and time series dataset. The output may be an automated execution of the selected reconciliation computer tool/model on forecast data generated by the forecast computer model executed on the time series dataset. The operation then terminates.”)
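Purely as an illustrative sketch (not part of the record or the reference), the runtime flow quoted from Paragraph 72 above — generate a predicted performance for each candidate (step 630), rank the candidates (step 640), and select one or more, e.g., via the top-k process of Paragraph 62, for application (step 650) — could be expressed as follows. The stand-in scoring function substitutes for the trained machine learning prediction model and is an assumption.

```python
# Illustrative only: predict-rank-select flow per quoted steps 630-650.
# predict_performance stands in for the trained ML prediction model's
# per-candidate scores; names and values are hypothetical.

def rank_and_select(candidates, predict_performance, k=1):
    """Rank candidates by predicted performance, highest first, and
    return the full ranking plus the top-k selection."""
    ranking = sorted(candidates, key=predict_performance, reverse=True)
    return ranking, ranking[:k]

# Toy stand-in scores for the trained prediction model's output
# (higher = better predicted performance).
scores = {"bottom_up": 0.71, "top_down": 0.64, "MinT": 0.83}

ranking, top_k = rank_and_select(list(scores), scores.get, k=2)
```

The ranking and selection would then feed the output generation of step 660 (dashboard recommendation or automatic execution of the selected tool/model).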
Claim(s) 2 –
Yanchenko discloses the limitations of claim 1
Yanchenko further discloses the following:
wherein in the first displaying process, the at least one processor displays mean absolute percentage error (MAPE), an f-Bias ratio, and forecast value added (FVA) of each of the plurality of products, the f-Bias ratio representing a trend in error between a prediction and a plan. (Yanchenko: Paragraph 23, “Generating forecasts at different levels of the hierarchy can provide different insights into potential future states. For example, forecasting at higher levels of the hierarchy aggregates the lower levels of the hierarchy and can be more accurate at capturing aggregate patterns and generating forecasts based on aggregate patterns. For example, the large scale trend data 160 can identify seasonality and larger aggregate patterns to better forecast patterns when the time series data is aggregated bottom-up to higher hierarchy levels. Forecasting at lower levels of the hierarchy can more accurately incorporate local effects and drive decisions, such as inventory management and the like. However, these forecasts need to be coherent in order to have coherent decision making across the various levels of the hierarchical time series datasets. For example, the sales data 170 at the lowest level 150 of the hierarchy can better represent intermittent sales data and capture localized effects to identify local trends.”; Paragraph 35, “Once the machine learning computer model of the illustrative embodiments is trained on historical data, the trained machine learning computer model may be applied to new data to predict the performance of reconciliation computing tools/models and thereby identify which reconciliation computing tools/models should be applied to perform reconciliation of forecast data. 
That is, a user may specify a particular hierarchy of an underlying time series dataset, such as by providing a hierarchical tree data structure, applying an analysis tool that generates such a hierarchical tree data structure, or otherwise specifying the hierarchy, such as by specifying a summing matrix data structure or the like. The time series dataset is processed by a forecast computer model to generate forecast data for the hierarchical time series dataset. Moreover, a time series dataset feature extractor analyzes and extracts specific features of the time series dataset, including, for example, a seasonality of the time series dataset, domain and metadata features, noise across the hierarchy of the time series dataset, noise across time, trend/stationarity of the data in the time series dataset, and other hierarchy characteristics. For example, seasonality refers to the length of a seasonal period, e.g., as determined by looking at a frequency spectrum for the data, where the seasonality may be represented as a seasonal pattern vector, such as a vector of length 4 for quarterly average values, or vector of length 7 for day or week patterns. The trend/stationarity information indicates whether a trend exists or not (i.e., a trend indicator) and may report the result of a test statistic for a stationarity test, e.g., augmented Dickey Fuller test.”; Paragraph 37, “In addition, characteristics of the forecast computer models may also be provided, where these characteristics may include the forecast computer model, base forecast error properties, and base forecast properties, where these characteristics may be encoded as on-hot vectors. 
In some illustrative embodiments, a catalog of forecast computer models may be maintained and stored for retrieval when a user specifies the particular forecast computer model being utilized with the time series dataset.”; Paragraph 40, “Thus, the mechanisms of the illustrative embodiments are able to predict the performance of reconciliation computing tools/models so that informed decisions can be made regarding which reconciliation computing tools/models to utilize to reconcile forecasts generated for hierarchical datasets, such as time series datasets or the like. The “performance” of a reconciliation computing tool/model may be, for example, a measure of the accuracy of the reconciled forecast data relative to the actual data at each level of the hierarchy. The performance may be measured using various accuracy metrics, such as mean squared error, mean absolute error, symmetric mean absolute percentage error, or the like. In order to determine the performance, forecasts are produced using the underlying forecast models, the reconciliation approach is applied, and the reconciled forecasts and ground truth, e.g., actual data, are used to compute the performance metric.”; Paragraph 58, “This process may be performed over multiple epochs or iterations using the same or different time series datasets from the historical time series data 404 to thereby adjust or modify the operational parameters of the recon prediction computer model 420 until a satisfactory loss (error) is achieved, e.g., loss/error below a predetermined threshold value, or until a predetermined number of epochs are executed, i.e., convergence of the recon prediction computer model 420. It should be appreciated that while FIG. 4 shows a single recon prediction computer model 420, there may in fact be multiple different recon prediction computer models 420 that are each individually trained through a machine learning training process. 
For example, in some illustrative embodiments, a separate recon prediction computer model 420 may be provided and trained for each forecast computer model. In this way, different recon prediction computer models 420 may be trained to predict performance of reconciliation computer tools/models for different forecast computer models. Thus, one or more recon prediction computer models 420 are trained through machine learning training processes of the ML logic 425 that provide accurate predictions of performance of reconciliation computer tools/models for different combinations of features of different types of time series datasets and forecast computer models.”)
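Claim 2 recites mean absolute percentage error (MAPE), while the quoted Paragraph 40 lists percentage-error accuracy metrics such as symmetric mean absolute percentage error. Purely as an illustrative sketch (not drawn from either document), a minimal MAPE computation is as follows; names are the editor's assumptions.

```python
# Illustrative only: mean absolute percentage error over a set of
# forecast/actual pairs. Actual values must be nonzero. Names are
# hypothetical, not from the claim or the reference.

def mape(forecast, actual):
    """Mean absolute percentage error, expressed in percent."""
    errors = [abs(f - a) / abs(a) for f, a in zip(forecast, actual)]
    return 100.0 * sum(errors) / len(errors)

# Two products: one over-forecast by 10%, one under-forecast by 10%.
result = mape([110.0, 90.0], [100.0, 100.0])
```

Note that because MAPE takes the absolute value of each percentage error, offsetting over- and under-forecasts do not cancel; capturing the direction of error is the role the claim assigns to the f-Bias ratio.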
Claim(s) 3 –
Yanchenko discloses the limitations of claims 1-2
Yanchenko further discloses the following:
wherein the MAPE is weighted MAPE (Yanchenko: Paragraph 61, “The predicted performance may be used to rank the reconciliation computer tools/models relative to one another for the given time series dataset and forecast computer model. The ranking engine 430 may utilize these predicted performance measures to rank the reconciliation computer tools/models relative to one another, but may, in some illustrative embodiments, utilize additional factors to perform the relative ranking. For example, in addition to the predicted performance, the ranking engine 430 may also utilize factors such as computing resource requirements for the various reconciliation computer tools/models, storage requirements, and the like. In some illustrative embodiments, a weighted combination of these features may be utilized to determine a final ranking score for each reconciliation computing tool/model which is then used to generate a final relative ranking, e.g., from highest ranking to lowest ranking scores.”; Paragraph 62, “The recon selection engine 450 may operate based on the relative ranking generated by the ranking engine 430 to select one or more reconciliation computer tools/models for recommending to the end user. For example, in some cases, a top-k selection process may be specified for selecting the top-k ranked reconciliation computer tools/models for use with the time series dataset and forecast computer model specified in the runtime data 408. 
In other implementations, different selection criteria may be used to select from the ranking of reconciliation computer tools/models, e.g., using a ranking which balances anticipated performance, e.g., represented by MSE, MAE, or other metrics, as well as computational complexity, e.g., bottom-up reconciliation being simpler than MinT.”; Paragraph 63, “The results generated by the ranking engine 430 and the selection engine 450 may be used by the output engine 440 to generate a dashboard 470 output or other output interface that informs the end user of the recommended reconciliation computer tool(s)/model(s) for the time series dataset and forecast computer model. This dashboard 470 may specify the recommendation, the relative ranking, the reasoning for the relative ranking, e.g., ranking scores and weighted factors for the various components of the ranking scores, predicted performance, etc.”)
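As an illustration only (not part of the record or the reference), the weighted ranking score described in quoted Paragraph 61 — a weighted combination of predicted performance with additional factors such as computing resource and storage requirements — could be sketched as follows. The specific weights and factor names are the editor's assumptions.

```python
# Illustrative only: a weighted ranking score combining predicted
# performance with resource-cost factors, per quoted Paragraph 61.
# Weights, factor names, and values are hypothetical.

def ranking_score(predicted_perf, cpu_cost, storage_cost,
                  weights=(0.7, 0.2, 0.1)):
    """Higher is better; resource costs are weighted and subtracted."""
    w_perf, w_cpu, w_store = weights
    return w_perf * predicted_perf - w_cpu * cpu_cost - w_store * storage_cost

# A more accurate but costlier candidate vs. a simpler, cheaper one.
score_mint      = ranking_score(0.9, cpu_cost=0.8, storage_cost=0.5)
score_bottom_up = ranking_score(0.8, cpu_cost=0.1, storage_cost=0.1)
```

Under this toy weighting, the simpler bottom-up candidate outranks MinT despite lower predicted performance, illustrating Paragraph 62's point that selection may balance anticipated performance against computational complexity.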
Claim(s) 4 –
Yanchenko discloses the limitations of claim 1
Yanchenko further discloses the following:
wherein in the second displaying process, the at least one processor displays a list of the plurality of products sorted in descending order of the index (Yanchenko: Paragraph 27, “FIG. 2A is an example diagram demonstrating a base or simple reconciliation computing tool/model's mapping matrix P and a corresponding hierarchy of time series data. As shown in FIG. 2A, assume that there is an input time series dataset having a hierarchy as shown in the tree data structure 210 of FIG. 2A, where each node represents a classification of the data at different levels of the hierarchy and edges represent relationships between the classifications of the data, e.g., a hierarchy of “Tops” (T)->“Short Sleeved” (A)->“Item1” (AA), “Item2” (AB), “Item3” (AC). Portions of the input dataset are associated with the nodes, where each parent node is an aggregate of its child nodes. The tree data structure 210 may be determined from the structure of the time series datasets that is input, a temporal hierarchy, or the like.”; Paragraph 33, “Contrary to this brute force method, the illustrative embodiments provide an artificial intelligence (AI) based solution that involves a machine learning training of a machine learning computer model, e.g., neural network, deep neural network, random forest, support vector machine, a light gradient boosting machine (LightGBM), or the like, to thereby cause the computer model to learn from historical data and prior reconciliation performance, to predict the performance of particular reconciliation computer tool/model performance for given reconciliation tools/models. That is, given a set of input dataset features, forecast computer model features, and reconciliation computing tool/model features, the trained machine learning computer model predicts the performance of the set of reconciliation computing tools/models. 
From this prediction of performance, the reconciliation computing tools/models may be ranked relative to one another, e.g., highest to lowest performance, and one or more of the highest ranked reconciliation computing tools/models may be selected for use in performing reconciliation of forecasts for a given combination of time series dataset and forecast computer model features. The ranking and selection may be used to generate an output, such as a dashboard or the like for presentation to an authorized user, may be used to automatically select a reconciliation computing tool/model that is applied to the forecast data generated by the forecast computer model based on the input time series dataset, or the like. In the case of a dashboard, various views of the data used to generate the ranking of reconciliation computing tools/models as well as the basis for the rankings may be provided in the dashboard.”; Paragraph 35, “Once the machine learning computer model of the illustrative embodiments is trained on historical data, the trained machine learning computer model may be applied to new data to predict the performance of reconciliation computing tools/models and thereby identify which reconciliation computing tools/models should be applied to perform reconciliation of forecast data. That is, a user may specify a particular hierarchy of an underlying time series dataset, such as by providing a hierarchical tree data structure, applying an analysis tool that generates such a hierarchical tree data structure, or otherwise specifying the hierarchy, such as by specifying a summing matrix data structure or the like. The time series dataset is processed by a forecast computer model to generate forecast data for the hierarchical time series dataset. 
Moreover, a time series dataset feature extractor analyzes and extracts specific features of the time series dataset, including, for example, a seasonality of the time series dataset, domain and metadata features, noise across the hierarchy of the time series dataset, noise across time, trend/stationarity of the data in the time series dataset, and other hierarchy characteristics. For example, seasonality refers to the length of a seasonal period, e.g., as determined by looking at a frequency spectrum for the data, where the seasonality may be represented as a seasonal pattern vector, such as a vector of length 4 for quarterly average values, or vector of length 7 for day or week patterns. The trend/stationarity information indicates whether a trend exists or not (i.e., a trend indicator) and may report the result of a test statistic for a stationarity test, e.g., augmented Dickey Fuller test.”; Paragraph 56, “For example, the feature extractor 412 may extract time series dataset features, such as seasonality features, domain/metadata features, noise across hierarchy features, noise across time features, trend/stationarity features, and hierarchy characteristics features, for example. The feature extractor 414 may extract reconciliation computing tool/model features such as the tool/model encoding method used and additional parameter encoding method. The feature extractor 416 may extract forecast computing model features such as the forecasting approach used, the base forecast error properties, and base forecast properties, for example.”; Paragraph 68, “Thus, the mechanisms provide an automated computing tool for predicting the performance of reconciliation computer tools/models with regard to features of a hierarchical dataset, e.g., a time series dataset, and a forecast computer model which operates on the hierarchical dataset. 
The mechanisms provide one or more specifically trained machine learning trained computer models to make such predictions which take into account the features of the time series dataset, the features of the forecast computer model, and the features of the reconciliation computer tools/models to determine the predicted performance. Based on the predicted performance, the candidate reconciliation computer tools/models may be ranked relative to one another and a selection of a candidate reconciliation computer tool/model to use to perform reconciliation of forecast data is made. This recommendation and corresponding supporting data may be presented through a dashboard interface, may be used for automatic execution of the selected reconciliation computer tool/model, and the like. In some cases, the performance predictions, rankings, and selections may be used along with the input features to automatically generate rule sets for forecast computer models to correlate features of a time series dataset with a selection of a reconciliation computer tool/model to use with the forecast data of the forecast computer model to generate reconciled forecast data.”)
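Claim 4 recites displaying a list of products sorted in descending order of the index, and the quoted Paragraph 33 likewise ranks candidates "highest to lowest." Purely as an illustrative sketch (not from either document), a descending sort over per-product index values is as follows; the product names and index values are hypothetical.

```python
# Illustrative only: products listed in descending order of an index
# value, per the claim 4 limitation. Names and values are hypothetical.

products = [("Item 1", 0.42), ("Item 2", 0.87), ("Item 3", 0.65)]

# Sort on the index value (second tuple element), highest first.
by_index_desc = sorted(products, key=lambda p: p[1], reverse=True)
```

The resulting order places the product most likely to require attention at the top of the displayed list.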
Claim(s) 5 –
Yanchenko discloses the limitations of claim 1
Yanchenko further discloses the following:
a fourth displaying process of displaying, based on results of the demand predictions of the plurality of products, information which indicates a product eligible for an alert regarding a prediction reassessment; (Yanchenko: Paragraph 22, “Many times, these time series datasets exhibit hierarchical structure, either cross-section, temporal, or both. For example, consider the product hierarchy example show in FIG. 1. The example of FIG. 1 shows a product hierarchy 100 in which a highest depicted level 110 is the category “apparel” which has lower levels of the hierarchy 120-140 nested in a tree-like hierarchical arrangement. That is, for example, the level 110 of “Category:Apparel” can be further categorized into “bottoms” and “tops” at lower level 120. The “tops” category of products can further be categorized into “short-sleeve” and “long-sleeve” in the lower level 130. The “short-sleeve” category may be further categorized into “Style A” and “Style B” in the lower level 140. Thereafter, the “Style B” category may be further categorized into specific products “Item 1”, “Item 2”, and “Item 3” in the lowest depicted level 150. Thus, time series datasets may be provided with regard to one or more of these levels and forecasting computer models need to be coherent regardless of the levels at which the time series datasets are provided. That is, the same forecasts, or at least consistent forecasts, should be made if the time series datasets provide data at level 150 as well as at level 110.”; Paragraph 23, “Generating forecasts at different levels of the hierarchy can provide different insights into potential future states. For example, forecasting at higher levels of the hierarchy aggregates the lower levels of the hierarchy and can be more accurate at capturing aggregate patterns and generating forecasts based on aggregate patterns. 
For example, the large scale trend data 160 can identify seasonality and larger aggregate patterns to better forecast patterns when the time series data is aggregated bottom-up to higher hierarchy levels. Forecasting at lower levels of the hierarchy can more accurately incorporate local effects and drive decisions, such as inventory management and the like. However, these forecasts need to be coherent in order to have coherent decision making across the various levels of the hierarchical time series datasets. For example, the sales data 170 at the lowest level 150 of the hierarchy can better represent intermittent sales data and capture localized effects to identify local trends.”; Paragraph 28, “The structure of the hierarchy is captured through a summing matrix S 212, applied to b.sub.t 214 which is the lowest-level observations, e.g., custom-character, etc. Y.sub.t 216 are all the observations in the hierarchy, e.g., Y.sub.T, Y.sub.A, Y.sub.B, Y.sub.AA, etc., where these observations are a combination of subsets of the lowest-level observations. For example, the values of S for Y.sub.T are all “1” indicating that Y.sub.T is a combination of all of the lowest-level observations. However, the values of S for Y.sub.B are 1's only for custom-character and custom-character. Thus, when the summing matrix S is applied to the lowest-level observations, one gets the entire structure of the hierarchical tree data structure 210, i.e., Y.sub.t=SŶ.sub.t. When comparing the result of applying the summing matrix to base forecasts, the results may differ from forecasts made directly at higher levels. 
This may be due to the noisiness of the forecasts at the various levels, e.g., data tends to be more well behaved at higher levels of the hierarchy whereas at lower levels of the hierarchy, data may be more erratic, e.g., sales may be intermittent/sparse.”; Paragraph 41, “The reconciliation computing tools/models may vary in complexity as well as corresponding resource costs, e.g., processor and/or memory resource requirements and computation time. As noted above, in some cases less complex, and thus less resource costly, reconciliation computing tools/models may be able to provide sufficiently similar performance to more complex reconciliation computing tools/models. With the mechanisms of the illustrative embodiments, these situations may be identified automatically using the ranking generated by the illustrative embodiments so that in such cases, the less complex reconciliation computing tools may be selected, i.e., when a simpler method provides comparable performance, or even out-performs, a more sophisticated method, then the simpler method may be selected. Moreover, this ranking can identify situations where a more complex reconciliation computing tool/model is needed to provide sufficient performance. This evaluation may be codified into automatic selection of a reconciliation computing tool/model, automatic generation of a set of reconciliation computing tool/model selection rules for a forecast computer model, or output to an authorized user that performs the selection based on the relative ranking of reconciliation computing tools/models.”; Paragraph 50, “As described above, the illustrative embodiments of the present invention are specifically directed to an improved computing tool that automatically identify hierarchical reconciliation processes for producing coherent forecasts. 
In particular, the illustrative embodiments provide an artificial intelligence (AI) based computing tool that automatically predicts the performance of reconciliation computing tools/models on particular time series datasets and forecast computing models so as to relatively rank them to one another and, in some cases, generate recommendation outputs, automatically execute a selected reconciliation computing tool/model, and/or generate a set of reconciliation computing tool/model selection rules that may be executed on features of time series datasets to select a reconciliation computing tool/model to utilize with forecast data generated by a forecast computing model. All of the functions of the illustrative embodiments as described herein are intended to be performed using automated processes without human intervention. While a human being may initiate operation of the illustrative embodiments and/or provide some of the input data used by the mechanisms of the illustrative embodiments, the illustrative embodiments of the present invention are not directed to actions performed by the human being but rather logic and functions performed specifically by the improved computing tool based on the machine learning training of the machine learning computer model and/or automated rule generation engine. Moreover, even though the present invention may provide an output that ultimately assists human beings with regard to making decisions based on forecasts, the illustrative embodiments of the present invention are not directed to actions performed by the human being viewing the results of the processing performed by the illustrative embodiments, but rather to the specific operations performed by the specific improved computing tool of the present invention. Thus, the illustrative embodiments are not organizing any human activity, but are in fact directed to the automated logic and functionality of an improved computing tool.”)
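For illustration of the hierarchical structure relied upon above, the summing-matrix relationship described in the quoted Paragraph 28 (Y.sub.t = S b.sub.t) may be sketched as follows. The hierarchy, numeric values, and code below are hypothetical and do not appear in Yanchenko; they merely demonstrate how a summing matrix recovers every series in a hierarchy from the lowest-level observations:

```python
# Sketch of the summing matrix S described in Yanchenko's Paragraph 28:
# Y_t = S b_t, where b_t holds the lowest-level observations and Y_t
# holds every series in the hierarchy.
# Hypothetical hierarchy: Total -> A, B; A -> AA, AB.
# Lowest-level series: AA, AB, B.
b_t = [4.0, 6.0, 5.0]  # observations for AA, AB, B

# Rows of S: Total, A, B, AA, AB. A "1" marks which lowest-level
# series are summed into each row of the hierarchy.
S = [
    [1, 1, 1],  # Total = AA + AB + B
    [1, 1, 0],  # A     = AA + AB
    [0, 0, 1],  # B     = B
    [1, 0, 0],  # AA
    [0, 1, 0],  # AB
]

# Applying S to the lowest-level observations reconstructs the entire
# hierarchical tree, i.e., Y_t = S b_t.
Y_t = [sum(s * b for s, b in zip(row, b_t)) for row in S]
print(Y_t)  # [15.0, 10.0, 5.0, 4.0, 6.0]
```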
a third accepting process of accepting a selection made by the user with respect to the product displayed in the fourth displaying process; and (Yanchenko: Paragraph 22, “Many times, these time series datasets exhibit hierarchical structure, either cross-section, temporal, or both. For example, consider the product hierarchy example show in FIG. 1. The example of FIG. 1 shows a product hierarchy 100 in which a highest depicted level 110 is the category “apparel” which has lower levels of the hierarchy 120-140 nested in a tree-like hierarchical arrangement. That is, for example, the level 110 of “Category:Apparel” can be further categorized into “bottoms” and “tops” at lower level 120. The “tops” category of products can further be categorized into “short-sleeve” and “long-sleeve” in the lower level 130. The “short-sleeve” category may be further categorized into “Style A” and “Style B” in the lower level 140. Thereafter, the “Style B” category may be further categorized into specific products “Item 1”, “Item 2”, and “Item 3” in the lowest depicted level 150. Thus, time series datasets may be provided with regard to one or more of these levels and forecasting computer models need to be coherent regardless of the levels at which the time series datasets are provided. That is, the same forecasts, or at least consistent forecasts, should be made if the time series datasets provide data at level 150 as well as at level 110.”; Paragraph 23, “Generating forecasts at different levels of the hierarchy can provide different insights into potential future states. For example, forecasting at higher levels of the hierarchy aggregates the lower levels of the hierarchy and can be more accurate at capturing aggregate patterns and generating forecasts based on aggregate patterns. For example, the large scale trend data 160 can identify seasonality and larger aggregate patterns to better forecast patterns when the time series data is aggregated bottom-up to higher hierarchy levels. 
Forecasting at lower levels of the hierarchy can more accurately incorporate local effects and drive decisions, such as inventory management and the like. However, these forecasts need to be coherent in order to have coherent decision making across the various levels of the hierarchical time series datasets. For example, the sales data 170 at the lowest level 150 of the hierarchy can better represent intermittent sales data and capture localized effects to identify local trends.”; Paragraph 28, “The structure of the hierarchy is captured through a summing matrix S 212, applied to b.sub.t 214 which is the lowest-level observations, e.g., custom-character, etc. Y.sub.t 216 are all the observations in the hierarchy, e.g., Y.sub.T, Y.sub.A, Y.sub.B, Y.sub.AA, etc., where these observations are a combination of subsets of the lowest-level observations. For example, the values of S for Y.sub.T are all “1” indicating that Y.sub.T is a combination of all of the lowest-level observations. However, the values of S for Y.sub.B are 1's only for custom-character and custom-character. Thus, when the summing matrix S is applied to the lowest-level observations, one gets the entire structure of the hierarchical tree data structure 210, i.e., Y.sub.t=SŶ.sub.t. When comparing the result of applying the summing matrix to base forecasts, the results may differ from forecasts made directly at higher levels. This may be due to the noisiness of the forecasts at the various levels, e.g., data tends to be more well behaved at higher levels of the hierarchy whereas at lower levels of the hierarchy, data may be more erratic, e.g., sales may be intermittent/sparse.”; Paragraph 41, “The reconciliation computing tools/models may vary in complexity as well as corresponding resource costs, e.g., processor and/or memory resource requirements and computation time. 
As noted above, in some cases less complex, and thus less resource costly, reconciliation computing tools/models may be able to provide sufficiently similar performance to more complex reconciliation computing tools/models. With the mechanisms of the illustrative embodiments, these situations may be identified automatically using the ranking generated by the illustrative embodiments so that in such cases, the less complex reconciliation computing tools may be selected, i.e., when a simpler method provides comparable performance, or even out-performs, a more sophisticated method, then the simpler method may be selected. Moreover, this ranking can identify situations where a more complex reconciliation computing tool/model is needed to provide sufficient performance. This evaluation may be codified into automatic selection of a reconciliation computing tool/model, automatic generation of a set of reconciliation computing tool/model selection rules for a forecast computer model, or output to an authorized user that performs the selection based on the relative ranking of reconciliation computing tools/models.”; Paragraph 50, “As described above, the illustrative embodiments of the present invention are specifically directed to an improved computing tool that automatically identify hierarchical reconciliation processes for producing coherent forecasts. 
In particular, the illustrative embodiments provide an artificial intelligence (AI) based computing tool that automatically predicts the performance of reconciliation computing tools/models on particular time series datasets and forecast computing models so as to relatively rank them to one another and, in some cases, generate recommendation outputs, automatically execute a selected reconciliation computing tool/model, and/or generate a set of reconciliation computing tool/model selection rules that may be executed on features of time series datasets to select a reconciliation computing tool/model to utilize with forecast data generated by a forecast computing model. All of the functions of the illustrative embodiments as described herein are intended to be performed using automated processes without human intervention. While a human being may initiate operation of the illustrative embodiments and/or provide some of the input data used by the mechanisms of the illustrative embodiments, the illustrative embodiments of the present invention are not directed to actions performed by the human being but rather logic and functions performed specifically by the improved computing tool based on the machine learning training of the machine learning computer model and/or automated rule generation engine. Moreover, even though the present invention may provide an output that ultimately assists human beings with regard to making decisions based on forecasts, the illustrative embodiments of the present invention are not directed to actions performed by the human being viewing the results of the processing performed by the illustrative embodiments, but rather to the specific operations performed by the specific improved computing tool of the present invention. Thus, the illustrative embodiments are not organizing any human activity, but are in fact directed to the automated logic and functionality of an improved computing tool.”)
a fifth displaying process of displaying an analysis result regarding demand for a product corresponding to the selection accepted in the third accepting process. (Yanchenko: Paragraph 22, “Many times, these time series datasets exhibit hierarchical structure, either cross-section, temporal, or both. For example, consider the product hierarchy example show in FIG. 1. The example of FIG. 1 shows a product hierarchy 100 in which a highest depicted level 110 is the category “apparel” which has lower levels of the hierarchy 120-140 nested in a tree-like hierarchical arrangement. That is, for example, the level 110 of “Category:Apparel” can be further categorized into “bottoms” and “tops” at lower level 120. The “tops” category of products can further be categorized into “short-sleeve” and “long-sleeve” in the lower level 130. The “short-sleeve” category may be further categorized into “Style A” and “Style B” in the lower level 140. Thereafter, the “Style B” category may be further categorized into specific products “Item 1”, “Item 2”, and “Item 3” in the lowest depicted level 150. Thus, time series datasets may be provided with regard to one or more of these levels and forecasting computer models need to be coherent regardless of the levels at which the time series datasets are provided. That is, the same forecasts, or at least consistent forecasts, should be made if the time series datasets provide data at level 150 as well as at level 110.”; Paragraph 23, “Generating forecasts at different levels of the hierarchy can provide different insights into potential future states. For example, forecasting at higher levels of the hierarchy aggregates the lower levels of the hierarchy and can be more accurate at capturing aggregate patterns and generating forecasts based on aggregate patterns. 
For example, the large scale trend data 160 can identify seasonality and larger aggregate patterns to better forecast patterns when the time series data is aggregated bottom-up to higher hierarchy levels. Forecasting at lower levels of the hierarchy can more accurately incorporate local effects and drive decisions, such as inventory management and the like. However, these forecasts need to be coherent in order to have coherent decision making across the various levels of the hierarchical time series datasets. For example, the sales data 170 at the lowest level 150 of the hierarchy can better represent intermittent sales data and capture localized effects to identify local trends.”; Paragraph 28, “The structure of the hierarchy is captured through a summing matrix S 212, applied to b.sub.t 214 which is the lowest-level observations, e.g., custom-character, etc. Y.sub.t 216 are all the observations in the hierarchy, e.g., Y.sub.T, Y.sub.A, Y.sub.B, Y.sub.AA, etc., where these observations are a combination of subsets of the lowest-level observations. For example, the values of S for Y.sub.T are all “1” indicating that Y.sub.T is a combination of all of the lowest-level observations. However, the values of S for Y.sub.B are 1's only for custom-character and custom-character. Thus, when the summing matrix S is applied to the lowest-level observations, one gets the entire structure of the hierarchical tree data structure 210, i.e., Y.sub.t=SŶ.sub.t. When comparing the result of applying the summing matrix to base forecasts, the results may differ from forecasts made directly at higher levels. 
This may be due to the noisiness of the forecasts at the various levels, e.g., data tends to be more well behaved at higher levels of the hierarchy whereas at lower levels of the hierarchy, data may be more erratic, e.g., sales may be intermittent/sparse.”; Paragraph 41, “The reconciliation computing tools/models may vary in complexity as well as corresponding resource costs, e.g., processor and/or memory resource requirements and computation time. As noted above, in some cases less complex, and thus less resource costly, reconciliation computing tools/models may be able to provide sufficiently similar performance to more complex reconciliation computing tools/models. With the mechanisms of the illustrative embodiments, these situations may be identified automatically using the ranking generated by the illustrative embodiments so that in such cases, the less complex reconciliation computing tools may be selected, i.e., when a simpler method provides comparable performance, or even out-performs, a more sophisticated method, then the simpler method may be selected. Moreover, this ranking can identify situations where a more complex reconciliation computing tool/model is needed to provide sufficient performance. This evaluation may be codified into automatic selection of a reconciliation computing tool/model, automatic generation of a set of reconciliation computing tool/model selection rules for a forecast computer model, or output to an authorized user that performs the selection based on the relative ranking of reconciliation computing tools/models.”; Paragraph 50, “As described above, the illustrative embodiments of the present invention are specifically directed to an improved computing tool that automatically identify hierarchical reconciliation processes for producing coherent forecasts. 
In particular, the illustrative embodiments provide an artificial intelligence (AI) based computing tool that automatically predicts the performance of reconciliation computing tools/models on particular time series datasets and forecast computing models so as to relatively rank them to one another and, in some cases, generate recommendation outputs, automatically execute a selected reconciliation computing tool/model, and/or generate a set of reconciliation computing tool/model selection rules that may be executed on features of time series datasets to select a reconciliation computing tool/model to utilize with forecast data generated by a forecast computing model. All of the functions of the illustrative embodiments as described herein are intended to be performed using automated processes without human intervention. While a human being may initiate operation of the illustrative embodiments and/or provide some of the input data used by the mechanisms of the illustrative embodiments, the illustrative embodiments of the present invention are not directed to actions performed by the human being but rather logic and functions performed specifically by the improved computing tool based on the machine learning training of the machine learning computer model and/or automated rule generation engine. Moreover, even though the present invention may provide an output that ultimately assists human beings with regard to making decisions based on forecasts, the illustrative embodiments of the present invention are not directed to actions performed by the human being viewing the results of the processing performed by the illustrative embodiments, but rather to the specific operations performed by the specific improved computing tool of the present invention. 
Thus, the illustrative embodiments are not organizing any human activity, but are in fact directed to the automated logic and functionality of an improved computing tool.”; Paragraph 74, “A determination is made as to whether a matching rule is found (step 750). If a matching rule is found, the reconciliation computer tool/model specified in the matched rule is selected (step 760). If a matching rule is not found, then a default reconciliation computer tool/model may be selected (step 770). The selected reconciliation computer tool/model is then used as a basis for generating an output (step 780). The output may be a dashboard output specifying a recommendation of a reconciliation computer tool/model for use with the forecast computer model and time series dataset. The output may be an automated execution of the selected reconciliation computer tool/model on forecast data generated by the forecast computer model executed on the time series dataset. The operation then terminates.”)
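The rule-matching selection flow quoted from Paragraph 74 (steps 750-780) may be sketched as follows. The rule contents, feature names, and reconciliation tool names below are hypothetical and are supplied only to illustrate the match-or-default logic of the quoted passage:

```python
# Sketch of the selection flow of Yanchenko's Paragraph 74: if a rule
# matches the time series dataset's features, select the reconciliation
# tool/model it specifies (steps 750-760); otherwise fall back to a
# default tool/model (step 770).
rules = [
    {"when": {"intermittent": True}, "tool": "bottom-up"},
    {"when": {"levels_gt": 4},       "tool": "MinT"},
]
DEFAULT_TOOL = "OLS"  # step 770: default when no rule matches

def select_tool(features):
    for rule in rules:
        cond = rule["when"]
        if "intermittent" in cond and features.get("intermittent") == cond["intermittent"]:
            return rule["tool"]
        if "levels_gt" in cond and features.get("levels", 0) > cond["levels_gt"]:
            return rule["tool"]
    return DEFAULT_TOOL

print(select_tool({"intermittent": True}))  # bottom-up (rule matched)
print(select_tool({"levels": 2}))           # OLS (no rule matched)
```

The selected tool is then used as the basis for generating the output of step 780, e.g., a dashboard recommendation or automated execution.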
Claim(s) 6 –
Yanchenko discloses the limitations of claim 1.
Yanchenko further discloses the following:
wherein the weight value is determined according to an actual outcome value of sales of a product. (Yanchenko: Paragraph 20, “Forecasting computer models many times utilize hierarchical datasets, e.g., time series datasets, to provide historical data upon which the forecasting computer model operates to generate forecast predictions. A time series dataset is a collection of data representing observations or measurements obtained through repeated measurements over time, and may be represented as a sequence of these observations/measurements with corresponding timestamps. For example, sales data may represent number of sales of one or more products at various points or ranges in time, e.g., seasons or time periods such as quarterly, monthly, etc., or even event based time periods, such as during a particular annual sale, during a promotion period, or the like. The time series datasets, e.g., sale data, may have various characteristics, such as sales price, numbers of units sold, geographic region sold, and a plethora of other data pertinent to evaluating the state of an organization, business, or the like, with regard to that product or product(s).”; Paragraph 57, “These features may be encoded, such as by generating vector representations of these features, e.g., one-hot vector encoding, that are input to the recon prediction computer model 420. The recon prediction computer model 420 may operate on the encoded features for the time series dataset 404, the forecast computing model, and a reconciliation computing tool/model from the catalog 407. The recon prediction computer model 420 generates a prediction of the performance of the reconciliation computing tool/model. This predicted performance may then be compared to the historical performance data 402 for this time series dataset, forecast computing model, and reconciliation computing tool/model to determine a loss between the prediction and the actual historical data. 
This loss may be calculated using a predetermined loss function and the ML logic 425 may operate to determine a modification to operational parameters of the recon prediction computer model 420 to reduce this loss, or error, such as by using linear regression and stochastic gradient descent or any other suitable machine learning methodology.”; Paragraph 58, “This process may be performed over multiple epochs or iterations using the same or different time series datasets from the historical time series data 404 to thereby adjust or modify the operational parameters of the recon prediction computer model 420 until a satisfactory loss (error) is achieved, e.g., loss/error below a predetermined threshold value, or until a predetermined number of epochs are executed, i.e., convergence of the recon prediction computer model 420. It should be appreciated that while FIG. 4 shows a single recon prediction computer model 420, there may in fact be multiple different recon prediction computer models 420 that are each individually trained through a machine learning training process. For example, in some illustrative embodiments, a separate recon prediction computer model 420 may be provided and trained for each forecast computer model. In this way, different recon prediction computer models 420 may be trained to predict performance of reconciliation computer tools/models for different forecast computer models. 
Thus, one or more recon prediction computer models 420 are trained through machine learning training processes of the ML logic 425 that provide accurate predictions of performance of reconciliation computer tools/models for different combinations of features of different types of time series datasets and forecast computer models.”; Paragraph 63, “The results generated by the ranking engine 430 and the selection engine 450 may be used by the output engine 440 to generate a dashboard 470 output or other output interface that informs the end user of the recommended reconciliation computer tool(s)/model(s) for the time series dataset and forecast computer model. This dashboard 470 may specify the recommendation, the relative ranking, the reasoning for the relative ranking, e.g., ranking scores and weighted factors for the various components of the ranking scores, predicted performance, etc.”; Paragraph 70, “FIG. 5 is a flowchart outlining an example operation for training a machine learning computer model for reconciliation computing tool selection in accordance with some illustrative embodiments. As shown in FIG. 5, the operation starts by receiving historical data and ground truth data (step 510). Feature extraction is performed on the historical data which includes historical time series datasets, corresponding forecast computer models characteristics, and corresponding reconciliation computer tool/model characteristics, which may be retrieved from a catalog data structure or the like (step 520). The features are encoded and input to a machine learning computer model (step 530) which is then trained on the input features using a machine learning training logic, the ground truth data, and a predefined loss function minimization methodology (step 540). 
The machine learning computer model predicts a reconciliation (recon) computer tool/model performance for each candidate recon computer tool/model (step 550), e.g., an error between a reconciled forecast data should the reconciliation computer tool/model be applied to the forecast data for the given forecast computer model operating on the time series data, and an actual forecast data that represents a correct forecast. By attempting to minimization of the error between these two, a modification of operational parameters (step 560) of the machine learning computer model to minimize the error is performed and the process is iterated until convergence (step 570). After convergence, the trained machine learning computer model is then deployed for runtime operation (step 580). In some illustrative embodiments, the trained machine learning computer model may be executed again on the historical data to generate correlations between input features and predicted performance to thereby generate rule sets for forecast computer models (step 590) which may then be deployed (step 595) in the service or to end user computing systems. The operation then terminates.”)
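The iterative training described in the quoted Paragraphs 57-58 and FIG. 5 (predict performance, measure loss against historical ground-truth data, adjust operational parameters, and repeat until convergence) may be sketched as follows. The data, model form, learning rate, and threshold below are hypothetical:

```python
# Sketch of the training loop of Yanchenko's Paragraphs 57-58 / FIG. 5:
# predict reconciliation performance from encoded features, compute a
# loss against historical (ground-truth) performance, and update the
# operational parameters by gradient descent until the loss falls below
# a threshold or a maximum number of epochs is reached (convergence).
features = [0.0, 1.0, 2.0, 3.0]     # encoded dataset/model features
actual_perf = [1.0, 3.0, 5.0, 7.0]  # historical performance (ground truth)

w, b = 0.0, 0.0                     # operational parameters
lr, threshold, max_epochs = 0.05, 1e-4, 10_000

for epoch in range(max_epochs):
    preds = [w * x + b for x in features]
    # Mean-squared-error loss between predictions and historical data.
    loss = sum((p - a) ** 2 for p, a in zip(preds, actual_perf)) / len(preds)
    if loss < threshold:            # satisfactory loss: convergence
        break
    # Stochastic-gradient-descent style parameter modification.
    grad_w = 2 * sum((p - a) * x for p, a, x in zip(preds, actual_perf, features)) / len(preds)
    grad_b = 2 * sum((p - a) for p, a in zip(preds, actual_perf)) / len(preds)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # parameters approach 2.0 and 1.0 (the toy relation is y = 2x + 1)
```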
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Willemain (US 2013/0166350 A1) discloses a method for cluster-based processing for forecasting demand.
Magelinic (US 2020/0167869 A1) discloses a method for a real-time predictive analytics engine.
Lee (US 2022/0374827 A1) discloses a method for automatic replenishment of retail enterprise stores.
Paul (US 2024/0119470 A1) discloses a method for generating a forecast of a time series.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip N Warner whose telephone number is (571) 270-7407. The examiner can normally be reached Monday-Friday, 7:00 am-4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Philip N Warner/Examiner, Art Unit 3624
/Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624