DETAILED ACTION
Status of the Application
Claims 1-37 have been examined in this application. This communication is the first action on the merits. The information disclosure statement (IDS) submitted on 03/20/2025 was filed with this application. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This action is a Non-Final Action on the merits in response to the application filed on 12/18/2024.
Claims 1-37 remain pending in this application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are directed towards a method, and claims 18-37 are directed towards a system, all of which are among the statutory categories of invention.
Claims 1-37 are rejected under 35 U.S.C. 101 because the claims are directed to a judicial exception without significantly more.
Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claim recites at least one step or act, including forecasting data. Thus, the claim is to a process, which is one of the statutory categories of invention. (Step 1: YES).
Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
With respect to claims 1-37, the independent claims (claims 1, 8, 12, 18, 23, 27, and 32) are directed to the managing of data. In independent claim 1, the limitations emphasized below correspond to the abstract ideas of the claimed invention:
a method for performing a meta-prediction of at least one time series by using a set of forecasting models, each forecasting model being associated with a respective forecasting theme,
generating, by using the set of forecasting models, based on each of the at least one time series data, a set of forecast signals, each respective forecast signal of the set of forecast signals predicting at least one future value derived from the time series according to the respective forecasting theme;
generating, by at least one signal and feature processing model, based on the time series data, a set of features;
determining, by a trained meta-learner on historical time series data, based on the time series data and the set of features, a set of weights, the set of weights comprising a respective weight for each respective forecast signal of the set of forecast signals, the respective weight being indicative of a relative importance of the respective theme of the respective forecasting model; and
generating, using the set of weights and the set of forecast signals, a meta- prediction for the time series data.
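For purposes of illustration only, the weighted combination recited in the limitations above may be sketched as follows. This sketch is not part of the claims or of the record; all function names and values are hypothetical.

```python
# Illustrative sketch only: a hypothetical weighted combination of per-theme
# forecast signals into a single meta-prediction, mirroring the claim language
# above (a set of weights, one per forecast signal, indicating relative
# importance of each forecasting theme).
import numpy as np

def meta_prediction(forecast_signals, weights):
    """Combine per-theme forecast signals using meta-learner weights."""
    signals = np.asarray(forecast_signals, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so weights express relative importance
    return float(w @ signals)

# Hypothetical example: three forecasting themes, three weights.
print(meta_prediction([10.0, 12.0, 14.0], [1.0, 2.0, 1.0]))  # 12.0
```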
These steps fall within and recite abstract ideas because they are directed to a method of organizing human activity, which includes commercial interactions such as business relations, and mental processes, which include concepts performed in the human mind such as observation and evaluation (See MPEP 2106.04(a)(2), subsection II).
If a claim limitation, under its broadest reasonable interpretation, covers commercial interactions or observation and evaluation, then it falls within the “method of organizing human activity” and “mental processes” groupings of abstract ideas. Therefore, because the identified limitations fall within the groupings of abstract ideas enumerated in MPEP 2106, the analysis proceeds to Prong Two. (Step 2A, Prong One: YES).
Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). The claim recites the additional elements of the models, storage medium, processor, signal, meta-learner, unsupervised machine learning, and client device. The claims recite that the steps are performed by the models, storage medium, processor, signal, meta-learner, unsupervised machine learning, and client device.
The limitations of
the method being executed by at least one processor operatively connected to at least one non-transitory storage medium, the at least one processor having access to the set of forecasting models, the method comprising:
receiving, from the at least one non-transitory storage medium, at least one time series data;
are mere data gathering recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”). In addition, all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05.
Further, the limitations are recited as being performed by the models, storage medium, processor, signal, meta-learner, unsupervised machine learning, and client device, each of which is recited at a high level of generality. In limitation (a), the meta-learner and unsupervised machine learning are used as tools to perform the generic computer function of receiving data. See MPEP 2106.05(f). The meta-learner and unsupervised machine learning are used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that they amount to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). Additionally, claim 1 recites the meta-learner and unsupervised machine learning; the general use of a machine learning technique does not provide a meaningful limitation to transform the abstract idea into a practical application.
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. (Step 2A: YES).
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. As explained with respect to Step 2A, Prong Two, the additional elements are the models, storage medium, processor, signal, meta-learner, unsupervised machine learning, and client device. The additional elements were found to be insignificant extra-solution activity in Step 2A, Prong Two, because they were determined to be insignificant limitations amounting to necessary data gathering. Further, the meta-learner and unsupervised machine learning are machine learning techniques that are recited in the claim at a high level of generality (see at least Specification [0150]: “regimes in time series data are identified by leveraging unsupervised machine learning techniques 1101 and combining these regimes with additional signals to improve the accuracy of the results. Each regime has specific characteristics that can be exploited, and combining regimes with additional signals, such as a binary or ternary signal indicating the direction of the trend can be useful to further provide context to the eventual meta-prediction signal FN”; [0154]: “A person skilled in the art will recognize that unsupervised machine learning models are well known, and that selecting one is within the skill of this person, based on the needs of a particular application. In addition, one or more models can be used alone or in combination, whether stacked or running in parallel.”) and do not amount to significantly more than the abstract idea.
However, a conclusion that an additional element is insignificant extra solution activity in Step 2A, Prong Two should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). As discussed in Step 2A, Prong Two above, the recitations of
the method being executed by at least one processor operatively connected to at least one non-transitory storage medium, the at least one processor having access to the set of forecasting models, the method comprising:
receiving, from the at least one non-transitory storage medium, at least one time series data;
are recited at a high level of generality. These elements amount to receiving and generating data, which are well-understood, routine, and conventional activities. See MPEP 2106.05(d), subsection II. As discussed in Step 2A, Prong Two above, the recitation of the models, storage medium, processor, signal, meta-learner, unsupervised machine learning, and client device to perform the limitations amounts to no more than mere instructions to apply the exception using a generic computer component. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. (Step 2B: NO).
Dependent claims 2-7, 9-11, 13-17, 19-22, 24-26, 28-31, and 33-37 do not contain any new additional elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims. In this case, the claims are rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Thus, the claims are not patent eligible.
Regarding the dependent claims: dependent claim 2 recites forecasting models to receive data; claims 4, 10, 11, 13, 14, 19, 21, 25, 26, 28, and 29 recite a signal and feature processing model to process data; claims 15, 22, and 37 recite machine learning to discover regimes; claims 16, 30, and 35 recite a client device to output data; and claims 17 and 31 recite a model to generate an explanation. Dependent claims 2-7, 9-11, 13-17, 19-22, 24-26, 28-31, and 33-37 recite limitations that are not technological in nature and merely limit the abstract idea to a particular environment. Claims 2-7, 9-11, 13-17, 19-22, 24-26, 28-31, and 33-37 recite the models, storage medium, processor, signal, meta-learner, unsupervised machine learning, and client device, which are considered insignificant extra-solution activities of collecting and analyzing data; see MPEP 2106.05(g). These claims also recite the models, storage medium, processor, signal, meta-learner, unsupervised machine learning, and client device in a manner that merely recites an instruction to apply the abstract idea using a generic computer component; see MPEP 2106.05(f). Additionally, claims 2-7, 9-11, 13-17, 19-22, 24-26, 28-31, and 33-37 recite steps that further narrow the abstract idea. No additional elements are disclosed in the dependent claims that were not considered in independent claims 1, 8, 12, 18, 23, 27, and 32. Therefore, claims 2-7, 9-11, 13-17, 19-22, 24-26, 28-31, and 33-37 do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-37 are rejected under 35 U.S.C. 103 as being unpatentable over United States Patent Publication US 20220058669 to Yin et al. (hereinafter Yin) in view of United States Patent Publication US 20240386015 to Crabtree et al. (hereinafter Crabtree).
Referring to Claim 1, Yin teaches a method for performing a meta-prediction of at least one time series by using a set of forecasting models, each forecasting model being associated with a respective forecasting theme, the method being executed by at least one processor operatively connected to at least one non-transitory storage medium, the at least one processor having access to the set of forecasting models, the method comprising:
Yin: Sec. 0083, The computer system 1000 may include a memory 1004, such as a memory 1004 that can communicate via a bus 1008. The memory 1004 may be a main memory, a static memory, or a dynamic memory. The memory 1004 may include, but is not limited to computer readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. In one example, the memory 1004 includes a cache or random access memory for the processor 1002. In alternative examples, the memory 1004 is separate from the processor 1002, such as a cache memory of a processor, the system memory, or other memory. The memory 1004 may be an external storage device or database for storing data. Examples include a hard drive, compact disc (“CD”), digital video disc (“DVD”), memory card, memory stick, floppy disc, universal serial bus (“USB”) memory device, or any other device operative to store data. The memory 1004 is operable to store instructions executable by the processor 1002. The functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor 1002 executing the instructions stored in the memory 1004. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firm-ware, micro-code and the like, operating alone or in combination.
receiving, from the at least one non-transitory storage medium, at least one time series data (
Yin: Sec. 0040, The method further comprises predicting (step 306) a plurality of intermediate prediction results based on a plurality of demand forecasting models from at-least one transformed dataset; and at least one input data set other than the transformed data set. The predicting of plurality of intermediate prediction results from the transformed data set comprises executing a second plurality of machine-learning and time series models over the at least one transformed data set and the at least one input data set to obtain said intermediate prediction results.
Yin: Sec. 0041, The method further comprises generating an aggregated prediction-result (step 308) from the plurality of the intermediate prediction results based on an ensemble-learning model. The generation of an aggregated prediction-result based on the ensemble-learning model comprises selecting at least a subset said intermediate prediction results based on any function of training error, validation error, derivatives of said errors, combinations thereof, and a percentile setting associated with said second plurality of machine learning models and time series models. Thereafter, a high time scale ensembled forecast and a low time scale ensembled forecast are generated from the selected prediction result. Further, a final high time-scale forecast result is generated based on adjustment of the high time scale ensembled forecast by the low time scale ensembled forecast.
Yin: Sec. 0042, In an example, such generating of said final high time-scale forecast result comprises integrating the high time scale ensembled forecast into a corresponding low-time scale ensembled forecast. One or more weights are determined based on any function of one or more of a training error, a validation error, the derivatives, the combinations thereof associated with the second plurality of machine learning models and time series models. Thereafter, the high time scale ensembled forecast is adjusted based on one or more low time scale ensembled forecasts and said one or more weights. Accordingly, the final high time-scale forecast is generated as the adjusted high time scale ensembled forecast.);
generating, by using the set of forecasting models, based on each of the at least one time series data, a set of forecast signals, each respective forecast signal of the set of forecast signals predicting at least one future value derived from the time series according to the respective forecasting theme (See Crabtree) (
Yin: Sec. 0076, FIG. 12 illustrates example results of Forecast Improvement through the present subject matter. As depicted in the figures, the machine learning based forecast for the index closely follows the actual-measurements with the training, validating and testing error falling in the range defined by 1.9% to 2.6% as defined in the below mentioned Table 6.);
generating, by at least one signal and feature processing model, based on the time series data, a set of features (
Yin: Sec. 0004, the art predictive analytics as depicted in FIG. 1a , waveforms are based on identical time-series values of historically captured objective variables as earlier measured.
Yin: Claim. 1, receiving a plurality of input data-sets associated with time-series data, wherein each of said data-sets refers a time-based variation of one or more variables in accordance with a designated time-interval;
generate at-least one transformation-result by transforming time-intervals of at least one input dataset based on a plurality of time interval transformation models;);
determining, by a trained meta-learner on historical time series data, based on the time series data and the set of features, a set of weights, the set of weights comprising a respective weight for each respective forecast signal of the set of forecast signals, the respective weight being indicative of a relative importance of the respective theme (See Crabtree) of the respective forecasting model (See Crabtree) (
Yin: Sec. 0042, In an example, such generating of said final high time-scale forecast result comprises integrating the high time scale ensembled forecast into a corresponding low-time scale ensembled forecast. One or more weights are determined based on any function of one or more of a training error, a validation error, the derivatives, the combinations thereof associated with the second plurality of machine learning models and time series models. Thereafter, the high time scale ensembled forecast is adjusted based on one or more low time scale ensembled forecasts and said one or more weights. Accordingly, the final high time-scale forecast is generated as the adjusted high time scale ensembled forecast.
Yin: Sec. 0065, As a part of ensemble Step 2, averaging is performed with respect to the each of the shortlisted time-domain forecasts in Ensemble step 1 to output one or more averaged time domain forecast that again may correspond to high time domain or low time domain. The averaging denotes computing a weighted-average based on a) validation error, b) sophisticated functions on training error, validation error, forecast difference (if applicable), or any combination, by the model ensemble based on weights for each selected model results at each data point. Overall, the generation of aggregated result comprises calculating a weighted-average of said shortlisted or the second intermediate forecast results to generate the final prediction result through ensemble step 3 as described later. The generated results as a part of present ensemble step 2 corresponds to generating a high time scale ensembled forecast and a low time scale ensembled forecast from the selected results.);
and
generating, using the set of weights and the set of forecast signals, a meta-prediction for the time series data (
Yin: Sec. 0041, The method further comprises generating an aggregated prediction-result (step 308) from the plurality of the intermediate prediction results based on an ensemble-learning model. The generation of an aggregated prediction-result based on the ensemble-learning model comprises selecting at least a subset said intermediate prediction results based on any function of training error, validation error, derivatives of said errors, combinations thereof, and a percentile setting associated with said second plurality of machine learning models and time series models. Thereafter, a high time scale ensembled forecast and a low time scale ensembled forecast are generated from the selected prediction result. Further, a final high time-scale forecast result is generated based on adjustment of the high time scale ensembled forecast by the low time scale ensembled forecast.
Yin: Sec. 0065, As a part of ensemble Step 2, averaging is performed with respect to the each of the shortlisted time-domain forecasts in Ensemble step 1 to output one or more averaged time domain forecast that again may correspond to high time domain or low time domain. The averaging denotes computing a weighted-average based on a) validation error, b) sophisticated functions on training error, validation error, forecast difference (if applicable), or any combination, by the model ensemble based on weights for each selected model results at each data point. Overall, the generation of aggregated result comprises calculating a weighted-average of said shortlisted or the second intermediate forecast results to generate the final prediction result through ensemble step 3 as described later. The generated results as a part of present ensemble step 2 corresponds to generating a high time scale ensembled forecast and a low time scale ensembled forecast from the selected results.
Yin: Sec. 0068, FIG. 11 illustrates an operation of ensemble learning module 408 and thereby depicts an ensemble step 3 based operation in continuation to ensemble step 2 operation of FIG. 10. More specifically, ensemble step 3 corresponds to a Low granularity forecast to adjust high time scale forecast (S1) from FIG. 10b based on training error, validation error, or both to decide the weight.
Yin: Claim. 10, The method as claimed in claim 8, wherein generating the aggregated result comprises calculated a weighted-average of the second intermediate forecast results to generate the final prediction result.).
Yin does not explicitly teach a theme or a forecasting theme.
However, Crabtree teaches these limitations.
theme (
Crabtree: Sec. 0401, This involves the use of unsupervised learning algorithms, such as clustering and topic modeling, to discover hidden patterns and themes in the data, as well as supervised learning methods, such as relevance feedback and click-through data analysis, to learn from user interactions and preferences.)
forecasting theme (
Crabtree: Sec. 0210, the model has a vocabulary of 10,000 unique tokens. The linear transformation would project the Decoder's hidden states into a 10,000-dimensional vector space. Each element in this vector represents the model's predicted probability or score for the corresponding token in the vocabulary.
Crabtree: Sec. 0211, A softmax function is applied to the projected values (vectors) to generate output probabilities over the vocabulary. The softmax function normalizes the values so that they sum up to 1, representing a probability distribution over the vocabulary. Each probability indicates the likelihood of a specific token being the next output token. The token with the highest probability is selected as the next output token. During the model's training, the objective is to maximize the probability of the correct next token given the input sequence and the previously generated tokens. The model learns to assign higher probabilities to the tokens that are more likely to appear based on the context.
Crabtree: Sec. 0276, These may include the multidimensional time series data store 1020 with its robust scripting features which may include a distributive friendly, fault-tolerant, real-time, continuous run prioritizing, programming platform such as, but not limited to Erlang/OTP 1121 and a compatible but comprehensive and proven library of math functions of which the C″ math libraries are an example 1122, data formalization and ability to capture time series data including irregularly transmitted, burst data;
Crabtree: Sec. 0302, According to an embodiment, the subsystem 3230 can use machine learning approaches (e.g., deep learning, probabilistic graphical models, etc.) to learn ontology mappings and perform automated ontology integration.)
Crabtree describes a forecast signal by teaching the modeling of probability and a mathematical transformation.
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as the time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin in view of Crabtree, which teaches detecting and repairing information technology problems, to efficiently apply the analysis of machine learning, enhancing the capability to create various types of models such as forecast, large language, and neural network models (See Crabtree at 0091, 0148, 0458, 0459).
Referring to Claim 2, Yin teaches the method of claim 1, wherein said at least one time series includes a plurality of time series, and wherein each forecasting model of the set of forecasting models receives a different time series (
Yin: Sec. 0004, The plurality of time series patterns are integrated and an index ID is generated for searching the integrated time series patterns at high speed. In addition, data is generated that manages transitions between integrated time series patterns.
Yin: Sec. 0010, a method for forecasting demand with respect to an entity. The method comprises receiving a plurality of input data-sets associated with time-series data, wherein each of said data-sets refers a time-based variation of one or more variables in accordance with a designated time-interval.
Yin: Sec. 0038, forecasting time-series based dataset in accordance with another embodiment of the subject matter. The method comprises receiving (302) a plurality of input data-sets associated with time-series data, wherein each of said data set refers a time-based variation of one or more variables in accordance with a designated time-scale.
Yin: Sec. 0040, a plurality of intermediate prediction results based on a plurality of demand forecasting models from at-least one transformed dataset; and at least one input data set other than the transformed data set. The predicting of plurality of intermediate prediction results from the transformed data set comprises executing a second plurality of machine-learning and time series models over the at least one transformed data set and the at least one input data set to obtain said intermediate prediction results.
Yin: Sec. 0061, multiple-forecasts for each time-series or time domain input are considered as first intermediate results as rendered from FIG. 9b.).
Referring to Claim 3, Yin teaches the method of claim 1, wherein said at least one time series includes a transformed time series (
Yin: Sec. 0039, The transforming of the time-scale of the at least one input data set through the transformation model comprises executing a first plurality of machine-learning and time series models over the at least one input data set to obtain a plurality of intermediate transformation data sets.).
Referring to Claim 4, Yin teaches the method of claim 1. Yin does not explicitly teach wherein said generating, by at least one signal and feature processing model, based on the time series data, the set of features includes applying a latent space transformation on the time series data to obtain at least a subset of the set of features.
However, Crabtree teaches wherein said generating, by at least one signal and feature processing model, based on the time series data, the set of features includes applying a latent space transformation on the time series data to obtain at least a subset of the set of features (
Crabtree: Sec. 0156, The hyperparameter optimization system 2126 uses Bayesian optimization to search for the best combination of latent space dimensionality, regularization strength, and decoder architecture. The optimization is guided by an information-theoretic objective that maximizes the mutual information between the latent space and the generated sentences, ensuring that the VAE captures meaningful and interpretable representations.
Crabtree: Sec. 0198, The vectors exist in a continuous high-dimensional space, where each dimension represents a latent feature or aspect of the word or token.
Crabtree: Sec. 0215, The encoder takes the input data and maps it to a lower-dimensional representation, often referred to as the latent space or bottleneck. The decoder takes the latent representation and tries to reconstruct the original input data. Autoencoders can be used for dimensionality reduction by learning a compressed representation of the input data in the latent space. The latent space has a lower dimensionality than the input data, capturing the most salient features or patterns. The training objective of an autoencoder is to minimize the reconstruction error between the original input and the reconstructed output.
Crabtree: Sec. 0259, Periodically, (e.g., hourly, daily, weekly, etc.) platform 400 may collect (e.g., aggregate) model parameters, encrypted data, and/or the like from all of, or a subset of, edge devices 410 a-n and apply the aggregated model parameters as an update to a master or global model (e.g., context classification, neuro-symbolic GenAI model, etc.). The updated global model or just its parameters, may be transmitted to all of, or a subset of, the edge devices 410 a-n where they may be applied to the local models operating thereon. Similarly, platform 400 can aggregate obtained training data, which may or may not be encrypted, and apply the training data to global models. These updated models may be transmitted to edge devices as described above.).
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as the time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin in view of Crabtree, which teaches detecting and repairing information technology problems, to efficiently apply the analysis of machine learning, enhancing the capability to create various types of models such as forecast, large language, and neural network models (See Crabtree at 0091, 0148, 0458, 0459).
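For purposes of illustration only, a latent space transformation of the kind recited in claim 4 may be sketched as follows. This sketch is not part of the claims or of the record; it uses a PCA-style linear projection as one hypothetical example of such a transformation, and all names and values are hypothetical.

```python
# Illustrative sketch only: a hypothetical latent space transformation
# (PCA-style linear projection) applied to sliding windows of a time series
# to obtain a lower-dimensional set of features.
import numpy as np

def latent_features(series, window=4, dim=2):
    """Project sliding windows of a time series into a low-dimensional
    latent space spanned by the top principal components."""
    x = np.asarray(series, dtype=float)
    # Build a matrix of overlapping windows (one row per window).
    rows = np.stack([x[i:i + window] for i in range(len(x) - window + 1)])
    rows = rows - rows.mean(axis=0)  # center before projecting
    # The top `dim` right singular vectors span the latent space.
    _, _, vt = np.linalg.svd(rows, full_matrices=False)
    return rows @ vt[:dim].T  # latent feature vectors, shape (n_windows, dim)

# Hypothetical example: 8 observations, 4-point windows, 2 latent features.
feats = latent_features([1, 2, 3, 5, 8, 13, 21, 34], window=4, dim=2)
print(feats.shape)  # (5, 2)
```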
Referring to Claim 5, Yin teaches the method of claim 4, wherein said applying the latent space transformation on the time series data to obtain at least the subset of the set of features comprises generating a synthetic time series based on the time series data and extracting at least the subset of features therefrom (
Yin: Sec. 0007, disclosure related to the down sampling and feature engineering is concerned, the same at-least fails to refer any time series or time domain based up-scaling of data-transform.
Yin: Sec. 0010, At least one transformation-result is generated by transforming time-intervals of at least one input dataset based on a plurality of time interval transformation models.
Yin: Sec. 0011, The method comprises receiving a plurality of input data-sets associated with time-series data, wherein each of said data set refers a time-based variation of one or more variables in accordance with a designated time-scale. A time-scale of at-least one of said plurality of data sets is transformed based on at least one time-scale transformation model to generate at-least one transformed dataset.
Yin: Sec. 0039, The transforming of the time-scale of the at least one input data set through the transformation model comprises executing a first plurality of machine-learning and time series models over the at least one input data set to obtain a plurality of intermediate transformation data sets.
Yin: Sec. 0041, The generation of an aggregated prediction-result based on the ensemble-learning model comprises selecting at least a subset said intermediate prediction results based on any function of training error, validation error, derivatives of said errors, combinations thereof, and a percentile setting associated with said second plurality of machine learning models and time series models).
Yin does not explicitly teach applying the latent space transformation.
However, Crabtree teaches applying the latent space transformation (
Crabtree: Sec. 0156, The hyperparameter optimization system 2126 uses Bayesian optimization to search for the best combination of latent space dimensionality, regularization strength, and decoder architecture. The optimization is guided by an information-theoretic objective that maximizes the mutual information between the latent space and the generated sentences, ensuring that the VAE captures meaningful and interpretable representations.
Crabtree: Sec. 0215, The encoder takes the input data and maps it to a lower-dimensional representation, often referred to as the latent space or bottleneck. The decoder takes the latent representation and tries to reconstruct the original input data. Autoencoders can be used for dimensionality reduction by learning a compressed representation of the input data in the latent space. The latent space has a lower dimensionality than the input data, capturing the most salient features or patterns.)
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as the time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin, which teaches detecting and repairing information technology problems, in view of Crabtree to efficiently apply machine learning analysis, enhancing the capability to create various types of models such as forecast, large language, and neural network models (See Crabtree at 0091, 0148, 0458, 0459).
Referring to Claim 6, Yin teaches the method of claim 2, wherein one of said at least one time series data comprises a set of time series (
Yin: Sec. 0002, Such models usually receive input data set of time series index data comprising independent, predictable variables (e.g. historical sales) as input to forecast the sales of target product, wherein the sales of target product acts as a predicted-variable.
Yin: Sec. 0051, Specifically, the forecast step generates multiple monthly forecast from monthly records by a set of machine learning, time series models, deep learning (LSTM, RNN, . . . ) etc.);
and wherein said generating, by the at least one signal and feature processing model, based on the time series data, the set of features comprises:
determining interactions between at least a first time series and a second time series of the set of time series to obtain a further subset of features (
Yin: Sec. 0040, The predicting of plurality of intermediate prediction results from the transformed data set comprises executing a second plurality of machine-learning and time series models over the at least one transformed data set and the at least one input data set to obtain said intermediate prediction results.
Yin: Sec. 0064, the selection basis may be a percentile setting associated with said second plurality of machine learning models and time series models.).
Referring to Claim 7, Yin teaches the method of claim 1. Yin does not explicitly teach wherein said at least one future value includes a fixed value, a tendency, a binary value and a combination thereof.
However, Crabtree teaches wherein said at least one future value includes a fixed value, a tendency, a binary value and a combination thereof (
Crabtree: Sec. 0221, One-hot encoding is a common technique used to represent categorical variables, such as words in a vocabulary, as binary vectors.
Crabtree: Sec. 0274, Results of the transformative analysis process may then be combined with further client directives, and additional business rules and practices relevant to the analysis and situational information external to the already available data in the automated planning service module 1030 which also runs powerful information theory 1030 a based predictive statistics functions and machine learning algorithms to allow future trends and outcomes to be rapidly forecast based upon the current system derived results and choosing each a plurality of possible business decisions).
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as the time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin, which teaches detecting and repairing information technology problems, in view of Crabtree to efficiently apply machine learning analysis, enhancing the capability to create various types of models such as forecast, large language, and neural network models (See Crabtree at 0091, 0148, 0458, 0459).
Referring to Claim 8, Yin teaches a method for performing a meta-prediction of time series by using a set of forecasting models, each forecasting model being associated with a respective forecasting theme, the method being executed by at least one processor operatively connected to at least one non-transitory storage medium, the at least one processor having access to the set of forecasting models, the method comprising:
receiving, from the at least one non-transitory storage medium, endogenous data comprising endogenous time series data associated with endogenous metadata (
Yin: Sec. 0059, FIG. 9 illustrates an example operation of data forecast module 406 executing the steps 208 and 308. FIG. 9a represents the transformed result of FIG. 7c and comprises one or multiple set of high timescale dataset. Other data set may be one or multiple set of low timescale dataset as originally present in FIG. 5a . Accordingly, FIG. 9b represents obtaining multiple intermediate forecast results from FIG. 9a based on multiple or different types of forecast models (machine-learning and time series models) such as linear models, Random forest, Gradient boost, Deep learning (LTSM, RNN) for results generation.
Yin: Sec. 0061, As a part of ensemble step 1, multiple-forecasts for each time-series or time domain input are considered as first intermediate results as rendered from FIG. 9b . Thereafter a model-selection criteria is applied to filter forecasts or select second intermediate results from the first intermediate results, wherein the training error (TE) or validation error (FE) or forecast difference (FD) may be considered as the criteria.
Yin describes endogenous data by the use of mathematical models for processing time series.
Yin: Sec. 0040, The method further comprises predicting (step 306) a plurality of intermediate prediction results based on a plurality of demand forecasting models from at-least one transformed dataset; and at least one input data set other than the transformed data set. The predicting of plurality of intermediate prediction results from the transformed data set comprises executing a second plurality of machine-learning and time series models over the at least one transformed data set and the at least one input data set to obtain said intermediate prediction results.
Yin: Sec. 0041, The method further comprises generating an aggregated prediction-result (step 308) from the plurality of the intermediate prediction results based on an ensemble-learning model. The generation of an aggregated prediction-result based on the ensemble-learning model comprises selecting at least a subset said intermediate prediction results based on any function of training error, validation error, derivatives of said errors, combinations thereof, and a percentile setting associated with said second plurality of machine learning models and time series models. Thereafter, a high time scale ensembled forecast and a low time scale ensembled forecast are generated from the selected prediction result. Further, a final high time-scale forecast result is generated based on adjustment of the high time scale ensembled forecast by the low time scale ensembled forecast.
Yin: Sec. 0042, In an example, such generating of said final high time-scale forecast result comprises integrating the high time scale ensembled forecast into a corresponding low-time scale ensembled forecast. One or more weights are determined based on any function of one or more of a training error, a validation error, the derivatives, the combinations thereof associated with the second plurality of machine learning models and time series models. Thereafter, the high time scale ensembled forecast is adjusted based on one or more low time scale ensembled forecasts and said one or more weights. Accordingly, the final high time-scale forecast is generated as the adjusted high time scale ensembled forecast.);
receiving, from the at least one non-transitory storage medium, exogenous data characterizing an environment of the time series (
Yin: Sec. 0004, valuable data in one model for the forecast and accordingly and employ different models based on different timescales. In an example of state of the art predictive analytics as depicted in FIG. 1a , waveforms are based on identical time-series values of historically captured objective variables as earlier measured
Yin: Sec. 0005, a time interval of the time-series data is selected from a group consisting of the time intervals of the data sets. The method further includes “down-sampling” the observations of the first data set, and converting the time interval of the first data set to the time interval of the time-series data. Overall, this disclosure refers determining time interval of input data, perform down-sampling, feature engineering, and forecasting results.
Yin describes exogenous data by the use of quantities for processing time series.
Yin: Sec. 0040, The method further comprises predicting (step 306) a plurality of intermediate prediction results based on a plurality of demand forecasting models from at-least one transformed dataset; and at least one input data set other than the transformed data set. The predicting of plurality of intermediate prediction results from the transformed data set comprises executing a second plurality of machine-learning and time series models over the at least one transformed data set and the at least one input data set to obtain said intermediate prediction results.
Yin: Sec. 0041, The method further comprises generating an aggregated prediction-result (step 308) from the plurality of the intermediate prediction results based on an ensemble-learning model. The generation of an aggregated prediction-result based on the ensemble-learning model comprises selecting at least a subset said intermediate prediction results based on any function of training error, validation error, derivatives of said errors, combinations thereof, and a percentile setting associated with said second plurality of machine learning models and time series models. Thereafter, a high time scale ensembled forecast and a low time scale ensembled forecast are generated from the selected prediction result. Further, a final high time-scale forecast result is generated based on adjustment of the high time scale ensembled forecast by the low time scale ensembled forecast.
Yin: Sec. 0042, In an example, such generating of said final high time-scale forecast result comprises integrating the high time scale ensembled forecast into a corresponding low-time scale ensembled forecast. One or more weights are determined based on any function of one or more of a training error, a validation error, the derivatives, the combinations thereof associated with the second plurality of machine learning models and time series models. Thereafter, the high time scale ensembled forecast is adjusted based on one or more low time scale ensembled forecasts and said one or more weights. Accordingly, the final high time-scale forecast is generated as the adjusted high time scale ensembled forecast.);
generating, by using the set of forecasting models, based on the endogenous and exogenous data, a set of forecast signals, each respective forecast signal of the set of forecast signals predicting at least one future value in the time series according to the respective forecasting theme (See Crabtree) (
Yin: Sec. 0004, valuable data in one model for the forecast and accordingly and employ different models based on different timescales. In an example of state of the art predictive analytics as depicted in FIG. 1a , waveforms are based on identical time-series values of historically captured objective variables as earlier measured
Yin: Sec. 0005, a time interval of the time-series data is selected from a group consisting of the time intervals of the data sets. The method further includes “down-sampling” the observations of the first data set, and converting the time interval of the first data set to the time interval of the time-series data. Overall, this disclosure refers determining time interval of input data, perform down-sampling, feature engineering, and forecasting results.
Yin describes exogenous data by the use of quantities for processing time series.
Yin: Sec. 0059, FIG. 9 illustrates an example operation of data forecast module 406 executing the steps 208 and 308. FIG. 9a represents the transformed result of FIG. 7c and comprises one or multiple set of high timescale dataset. Other data set may be one or multiple set of low timescale dataset as originally present in FIG. 5a . Accordingly, FIG. 9b represents obtaining multiple intermediate forecast results from FIG. 9a based on multiple or different types of forecast models (machine-learning and time series models) such as linear models, Random forest, Gradient boost, Deep learning (LTSM, RNN) for results generation.
Yin: Sec. 0061, As a part of ensemble step 1, multiple-forecasts for each time-series or time domain input are considered as first intermediate results as rendered from FIG. 9b . Thereafter a model-selection criteria is applied to filter forecasts or select second intermediate results from the first intermediate results, wherein the training error (TE) or validation error (FE) or forecast difference (FD) may be considered as the criteria.
Yin describes endogenous data by the use of mathematical models for processing time series.
Yin: Claim. 1, receiving a plurality of input data-sets associated with time-series data, wherein each of said data-sets refers a time-based variation of one or more variables in accordance with a designated time-interval;
generate at-least one transformation-result by transforming time-intervals of at least one input dataset based on a plurality of time interval transformation models;);
generating, by at least one signal and feature processing model, based on the endogenous data and the exogenous data, a set of features (
Yin: Sec. 0004, valuable data in one model for the forecast and accordingly and employ different models based on different timescales. In an example of state of the art predictive analytics as depicted in FIG. 1a , waveforms are based on identical time-series values of historically captured objective variables as earlier measured
Yin: Sec. 0005, a time interval of the time-series data is selected from a group consisting of the time intervals of the data sets. The method further includes “down-sampling” the observations of the first data set, and converting the time interval of the first data set to the time interval of the time-series data. Overall, this disclosure refers determining time interval of input data, perform down-sampling, feature engineering, and forecasting results.
Yin describes exogenous data by the use of quantities for processing time series.
Yin: Sec. 0047, FIG. 6 illustrates the modules 406 and 408 as illustrated in FIG. 4. FIG. 6a depicts the demand forecasting module 406, which is executing the steps 206, 306 depicted in FIG. 6a conducts forecast by multiple forecast model or different forecast models. The different forecast models include one or multiple forecast results with each forecast model (hyperparameter selection, featuring engineering, etc). One set of the same time-scales index data generate the same time-scale target forecast.);
determining, by a trained meta-learner on historical time series data, based on the endogenous time series data and the set of features, a respective weight for each respective forecast signal, the respective weight being indicative of a relative importance of the respective theme (See Crabtree) of the respective forecasting model (
Yin: Sec. 0059, FIG. 9 illustrates an example operation of data forecast module 406 executing the steps 208 and 308. FIG. 9a represents the transformed result of FIG. 7c and comprises one or multiple set of high timescale dataset. Other data set may be one or multiple set of low timescale dataset as originally present in FIG. 5a . Accordingly, FIG. 9b represents obtaining multiple intermediate forecast results from FIG. 9a based on multiple or different types of forecast models (machine-learning and time series models) such as linear models, Random forest, Gradient boost, Deep learning (LTSM, RNN) for results generation.
Yin: Sec. 0061, As a part of ensemble step 1, multiple-forecasts for each time-series or time domain input are considered as first intermediate results as rendered from FIG. 9b . Thereafter a model-selection criteria is applied to filter forecasts or select second intermediate results from the first intermediate results, wherein the training error (TE) or validation error (FE) or forecast difference (FD) may be considered as the criteria.
Yin describes endogenous data by the use of mathematical models for processing time series.
Yin: Sec. 0042, In an example, such generating of said final high time-scale forecast result comprises integrating the high time scale ensembled forecast into a corresponding low-time scale ensembled forecast. One or more weights are determined based on any function of one or more of a training error, a validation error, the derivatives, the combinations thereof associated with the second plurality of machine learning models and time series models. Thereafter, the high time scale ensembled forecast is adjusted based on one or more low time scale ensembled forecasts and said one or more weights. Accordingly, the final high time-scale forecast is generated as the adjusted high time scale ensembled forecast.
Yin: Sec. 0065, As a part of ensemble Step 2, averaging is performed with respect to the each of the shortlisted time-domain forecasts in Ensemble step 1 to output one or more averaged time domain forecast that again may correspond to high time domain or low time domain. The averaging denotes computing a weighted-average based on a) validation error, b) sophisticated functions on training error, validation error, forecast difference (if applicable), or any combination, by the model ensemble based on weights for each selected model results at each data point. Overall, the generation of aggregated result comprises calculating a weighted-average of said shortlisted or the second intermediate forecast results to generate the final prediction result through ensemble step 3 as described later. The generated results as a part of present ensemble step 2 corresponds to generating a high time scale ensembled forecast and a low time scale ensembled forecast from the selected results.);
generating, using the set of weights and the set of forecast signals, a meta- prediction (
Yin: Sec. 0041, The method further comprises generating an aggregated prediction-result (step 308) from the plurality of the intermediate prediction results based on an ensemble-learning model. The generation of an aggregated prediction-result based on the ensemble-learning model comprises selecting at least a subset said intermediate prediction results based on any function of training error, validation error, derivatives of said errors, combinations thereof, and a percentile setting associated with said second plurality of machine learning models and time series models. Thereafter, a high time scale ensembled forecast and a low time scale ensembled forecast are generated from the selected prediction result. Further, a final high time-scale forecast result is generated based on adjustment of the high time scale ensembled forecast by the low time scale ensembled forecast.
Yin: Sec. 0065, As a part of ensemble Step 2, averaging is performed with respect to the each of the shortlisted time-domain forecasts in Ensemble step 1 to output one or more averaged time domain forecast that again may correspond to high time domain or low time domain. The averaging denotes computing a weighted-average based on a) validation error, b) sophisticated functions on training error, validation error, forecast difference (if applicable), or any combination, by the model ensemble based on weights for each selected model results at each data point. Overall, the generation of aggregated result comprises calculating a weighted-average of said shortlisted or the second intermediate forecast results to generate the final prediction result through ensemble step 3 as described later. The generated results as a part of present ensemble step 2 corresponds to generating a high time scale ensembled forecast and a low time scale ensembled forecast from the selected results.
Yin: Sec. 0068, FIG. 11 illustrates an operation of ensemble learning module 408 and thereby depicts an ensemble step 3 based operation in continuation to ensemble step 2 operation of FIG. 10. More specifically, ensemble step 3 corresponds to a Low granularity forecast to adjust high time scale forecast (S1) from FIG. 10b based on training error, validation error, or both to decide the weight.
Yin: Claim. 10, The method as claimed in claim 8, wherein generating the aggregated result comprises calculated a weighted-average of the second intermediate forecast results to generate the final prediction result.).
Yin does not explicitly teach the following limitations: theme; forecasting theme.
However, Crabtree teaches these limitations:
theme (
Crabtree: Sec. 0401, This involves the use of unsupervised learning algorithms, such as clustering and topic modeling, to discover hidden patterns and themes in the data, as well as supervised learning methods, such as relevance feedback and click-through data analysis, to learn from user interactions and preferences.)
forecasting theme (
Crabtree: Sec. 0210, the model has a vocabulary of 10,000 unique tokens. The linear transformation would project the Decoder's hidden states into a 10,000-dimensional vector space. Each element in this vector represents the model's predicted probability or score for the corresponding token in the vocabulary.
Crabtree: Sec. 0211, A softmax function is applied to the projected values (vectors) to generate output probabilities over the vocabulary. The softmax function normalizes the values so that they sum up to 1, representing a probability distribution over the vocabulary. Each probability indicates the likelihood of a specific token being the next output token. The token with the highest probability is selected as the next output token. During the model's training, the objective is to maximize the probability of the correct next token given the input sequence and the previously generated tokens. The model learns to assign higher probabilities to the tokens that are more likely to appear based on the context.
Crabtree: Sec. 0276, These may include the multidimensional time series data store 1020 with its robust scripting features which may include a distributive friendly, fault-tolerant, real-time, continuous run prioritizing, programming platform such as, but not limited to Erlang/OTP 1121 and a compatible but comprehensive and proven library of math functions of which the C″ math libraries are an example 1122, data formalization and ability to capture time series data including irregularly transmitted, burst data;
Crabtree: Sec. 0302, According to an embodiment, the subsystem 3230 can use machine learning approaches (e.g., deep learning, probabilistic graphical models, etc.) to learn ontology mappings and perform automated ontology integration.)
Crabtree describes a forecast signal by teaching the modeling of probability and a mathematical transformation.
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as the time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin, which teaches detecting and repairing information technology problems, in view of Crabtree to efficiently apply machine learning analysis, enhancing the capability to create various types of models such as forecast, large language, and neural network models (See Crabtree at 0091, 0148, 0458, 0459).
Referring to Claim 9, Yin teaches the method of claim 8, wherein the exogenous data comprises exogenous time series data and exogenous alternative data representative of the environment of the time series data (
Yin: Sec. 0004, valuable data in one model for the forecast and accordingly and employ different models based on different timescales. In an example of state of the art predictive analytics as depicted in FIG. 1a , waveforms are based on identical time-series values of historically captured objective variables as earlier measured
Yin: Sec. 0005, a time interval of the time-series data is selected from a group consisting of the time intervals of the data sets. The method further includes “down-sampling” the observations of the first data set, and converting the time interval of the first data set to the time interval of the time-series data. Overall, this disclosure refers determining time interval of input data, perform down-sampling, feature engineering, and forecasting results.
Yin describes exogenous data by the use of quantities for processing time series.
Yin: Sec. 0002, Machine-learning (ML) models have been developed as predictive analysis criteria for drawing predictions such as sales-forecast. Such models usually receive input data set of time series index data comprising independent, predictable variables (e.g. historical sales) as input to forecast the sales of target product, wherein the sales of target product acts as a predicted-variable.
Yin: Sec. 0004, The plurality of time series patterns are integrated and an index ID is generated for searching the integrated time series patterns at high speed. In addition, data is generated that manages transitions between integrated time series patterns. At-least based on said index and data described above, it is possible to predict the trend of an event in real-time with high prediction accuracy.
Yin: Sec. 0040, The predicting of plurality of intermediate prediction results from the transformed data set comprises executing a second plurality of machine-learning and time series models over the at least one transformed data set and the at least one input data set to obtain said intermediate prediction results.
Yin: Sec. 0075, The present subject matter accordingly renders comprehensive-system architecture to transform and unify the predictors data on different timescale through the transformation module 404. Instead of a point forecast, the proposed approach makes a forecast interval that generates a probability distribution of demand forecast through an ensemble of many predictive models as provided by the demand forecast module 406, whereby the optimal forecast percentile is chosen by the ensemble learning module 408. As a result, the present subject matter is robust, adaptive and addresses the uncertainties generated in the forecast module 406 due to unification of regressors at different timescales. Moreover, such an approach enables incorporation of the domain expertise).
Yin describes exogenous data by disclosing that exogenous alternative data is a broad set of data that may provide predictive values; the Applicant's specification at 0177 and 0181 teaches that exogenous data may include alternative exogenous data.
Referring to Claim 10, Yin teaches the method of claim 8, wherein said generating, by the at least one signal and feature processing model, based on the endogenous data and the exogenous data, the set of features comprises at least one of:
generating a first subset of features indicative of regime changes in the endogenous time series data (
Yin: Sec. 0004, data is generated that manages transitions between integrated time series patterns. At-least based on said index and data described above, it is possible to predict the trend of an event in real-time with high prediction accuracy.
Yin: Sec. 0059, FIG. 9 illustrates an example operation of data forecast module 406 executing the steps 208 and 308. FIG. 9a represents the transformed result of FIG. 7c and comprises one or multiple set of high timescale dataset. Other data set may be one or multiple set of low timescale dataset as originally present in FIG. 5a . Accordingly, FIG. 9b represents obtaining multiple intermediate forecast results from FIG. 9a based on multiple or different types of forecast models (machine-learning and time series models) such as linear models, Random forest, Gradient boost, Deep learning (LTSM, RNN) for results generation.
Yin: Sec. 0061, As a part of ensemble step 1, multiple-forecasts for each time-series or time domain input are considered as first intermediate results as rendered from FIG. 9b . Thereafter a model-selection criteria is applied to filter forecasts or select second intermediate results from the first intermediate results, wherein the training error (TE) or validation error (FE) or forecast difference (FD) may be considered as the criteria. ),
Yin describes endogenous data by the use of mathematical models for processing time series and by the managing of changes to the time-series.
generating a second subset of features of the endogenous time series data, and generating a third subset of features by performing a transformation based on the endogenous data and the exogenous data (
Yin: Sec. 0035, predicting (step 206) a plurality of first intermediate forecast results based on a plurality of demand forecasting models from the at-least one transformation result. In an implementation, the plurality of demand prediction models predicts the plurality of first intermediate forecast results based on at-least one of: the transformation result and a third-input dataset having the same or different time interval than the transformation result.).
Referring to Claim 11, Yin teaches the method of claim 10, wherein said generating, by the at least one signal and feature processing model, based on the endogenous data and the exogenous data, the third subset of the set of features comprises determining at least one of correlations, co-integrations and conditional relationships between the endogenous time series data and the exogenous time series data (
Yin: Sec. 0004, valuable data in one model for the forecast and accordingly employ different models based on different timescales. In an example of state of the art predictive analytics as depicted in FIG. 1a , waveforms are based on identical time-series values of historically captured objective variables as earlier measured
Yin: Sec. 0005, a time interval of the time-series data is selected from a group consisting of the time intervals of the data sets. The method further includes “down-sampling” the observations of the first data set, and converting the time interval of the first data set to the time interval of the time-series data. Overall, this disclosure refers determining time interval of input data, perform down-sampling, feature engineering, and forecasting results.
Yin: Sec. 0035, predicting (step 206) a plurality of first intermediate forecast results based on a plurality of demand forecasting models from the at-least one transformation result. In an implementation, the plurality of demand prediction models predicts the plurality of first intermediate forecast results based on at-least one of: the transformation result and a third-input dataset having the same or different time interval than the transformation result.
Yin: Sec. 0059, FIG. 9 illustrates an example operation of data forecast module 406 executing the steps 208 and 308. FIG. 9a represents the transformed result of FIG. 7c and comprises one or multiple set of high timescale dataset. Other data set may be one or multiple set of low timescale dataset as originally present in FIG. 5a . Accordingly, FIG. 9b represents obtaining multiple intermediate forecast results from FIG. 9a based on multiple or different types of forecast models (machine-learning and time series models) such as linear models, Random forest, Gradient boost, Deep learning (LTSM, RNN) for results generation.
Yin: Sec. 0061, As a part of ensemble step 1, multiple-forecasts for each time-series or time domain input are considered as first intermediate results as rendered from FIG. 9b . Thereafter a model-selection criteria is applied to filter forecasts or select second intermediate results from the first intermediate results, wherein the training error (TE) or validation error (FE) or forecast difference (FD) may be considered as the criteria.).
Yin does not explicitly teach performing a latent space representation transformation.
However, Crabtree teaches performing a latent space representation transformation (
Crabtree: Sec. 0156, The hyperparameter optimization system 2126 uses Bayesian optimization to search for the best combination of latent space dimensionality, regularization strength, and decoder architecture. The optimization is guided by an information-theoretic objective that maximizes the mutual information between the latent space and the generated sentences, ensuring that the VAE captures meaningful and interpretable representations.
Crabtree: Sec. 0215, The encoder takes the input data and maps it to a lower-dimensional representation, often referred to as the latent space or bottleneck. The decoder takes the latent representation and tries to reconstruct the original input data. Autoencoders can be used for dimensionality reduction by learning a compressed representation of the input data in the latent space. The latent space has a lower dimensionality than the input data, capturing the most salient features or patterns.)
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin in view of Crabtree, which teaches detecting and repairing information technology problems, to efficiently apply machine-learning analysis and enhance the capability to create various types of models, such as forecast, large language, and neural network models. (See Crabtree at 0091, 0148, 0458, 0459).
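For illustration only, the encoder/decoder ("latent space") transformation described by Crabtree at 0215 may be sketched as follows. The matrices, dimensions, and sample input below are hypothetical and hand-picked for clarity; a real autoencoder would learn the encoder and decoder weights from data.

```python
# Conceptual sketch of a latent-space transformation: an encoder maps a
# 4-dimensional input to a 2-dimensional latent vector (the "bottleneck"),
# and a decoder reconstructs the input from that latent representation.
# All values here are hypothetical, chosen so the math is easy to follow.

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Encoder: 4 inputs -> 2 latent dimensions.
ENCODER = [
    [0.5, 0.5, 0.5, 0.5],
    [0.5, 0.5, -0.5, -0.5],
]
# Decoder: 2 latent dimensions -> 4 reconstructed outputs.
DECODER = [
    [0.5, 0.5],
    [0.5, 0.5],
    [0.5, -0.5],
    [0.5, -0.5],
]

x = [1.0, 1.0, 2.0, 2.0]        # original input
z = matvec(ENCODER, x)          # latent representation: [3.0, -1.0]
x_hat = matvec(DECODER, z)      # reconstruction: [1.0, 1.0, 2.0, 2.0]
```

Because this toy input lies in the span of the decoder's columns, the reconstruction is exact; in general an autoencoder only approximates the input while capturing its most salient features in the lower-dimensional latent space.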
Claim 12 recites limitations that stand rejected via the art citations and rationale applied to claims 1, 4, and 9. Regarding outputting, to a client device, at least one of an interpretation and an explanation of the meta-prediction based on the set of weights and an indication of the respective themes (See Crabtree) of the set of forecasting engines (
Yin: Sec. 0038, FIG. 3 illustrates a method for forecasting time-series based dataset in accordance with another embodiment of the subject matter. The method comprises receiving (302) a plurality of input data-sets associated with time-series data, wherein each of said data set refers a time-based variation of one or more variables in accordance with a designated time-scale.
Yin: Sec. 0041, The method further comprises generating an aggregated prediction-result (step 308) from the plurality of the intermediate prediction results based on an ensemble-learning model. The generation of an aggregated prediction-result based on the ensemble-learning model comprises selecting at least a subset said intermediate prediction results based on any function of training error, validation error, derivatives of said errors, combinations thereof, and a percentile setting associated with said second plurality of machine learning models and time series models. Thereafter, a high time scale ensembled forecast and a low time scale ensembled forecast are generated from the selected prediction result. Further, a final high time-scale forecast result is generated based on adjustment of the high time scale ensembled forecast by the low time scale ensembled forecast.
Yin: Sec. 0065, As a part of ensemble Step 2, averaging is performed with respect to the each of the shortlisted time-domain forecasts in Ensemble step 1 to output one or more averaged time domain forecast that again may correspond to high time domain or low time domain. The averaging denotes computing a weighted-average based on a) validation error, b) sophisticated functions on training error, validation error, forecast difference (if applicable), or any combination, by the model ensemble based on weights for each selected model results at each data point. Overall, the generation of aggregated result comprises calculating a weighted-average of said shortlisted or the second intermediate forecast results to generate the final prediction result through ensemble step 3 as described later. The generated results as a part of present ensemble step 2 corresponds to generating a high time scale ensembled forecast and a low time scale ensembled forecast from the selected results.
Yin: Sec. 0068, FIG. 11 illustrates an operation of ensemble learning module 408 and thereby depicts an ensemble step 3 based operation in continuation to ensemble step 2 operation of FIG. 10. More specifically, ensemble step 3 corresponds to a Low granularity forecast to adjust high time scale forecast (S1) from FIG. 10b based on training error, validation error, or both to decide the weight.
Yin: Claim. 10, The method as claimed in claim 8, wherein generating the aggregated result comprises calculated a weighted-average of the second intermediate forecast results to generate the final prediction result.).
theme (
Crabtree: Sec. 0401, This involves the use of unsupervised learning algorithms, such as clustering and topic modeling, to discover hidden patterns and themes in the data, as well as supervised learning methods, such as relevance feedback and click-through data analysis, to learn from user interactions and preferences.)
Referring to Claim 13, Yin teaches the method of claim 12, further comprising generating the at least one of the interpretation and the explanation of the meta-prediction by performing at least one of:
generating an interpretation signal based on the forecast signals relative to a reference forecast (
Yin: Sec. 0064, TABLE 3 (Model i error definitions):
Training Error (TE) at point k: e_k^i = |forecast_k_i − actual_k|
Validation Error (VE) at point m: e_m^i = |forecast_m_i − actual_m|
Forecast Difference (FD) at point n: e_n^i = |forecast_n_i − forecast_n_ref|, where forecast_n_ref can be provided as a reference to the AI forecast in some scenarios, such as customer-provided purchase forecast data.),
expressing context related to a regime signal discovered by at least one unsupervised learning algorithm,
expressing the set of forecast signals relative to a respective reference value, and
determining a distribution of possible outcomes associated with respective probabilities based on historical forecast signals (
Yin: Sec. 0037, selecting of the plurality of second intermediate prediction results comprises generating a first type of distribution for each time interval from the plurality of first intermediate forecast results. Optionally, a second type of distribution is also generated from the first distribution. Based on said first or second distribution, the second intermediate forecast results are selected from the plurality of first intermediate prediction results based on one or more of: a training error, a validation error, derivatives of said training and validation errors comprising an error variance, said errors and derivatives being associated with the plurality of first intermediate prediction results.
Yin: Sec. 0064, Overall, as a part of ensemble step 1, a first type of distribution as box plots is generated for each time interval from the plurality of first intermediate forecast results. Optionally, a second type of distribution “histograms” may be generated from the first distribution. Based on said first or second distribution, the second intermediate forecast results are selected from the plurality of first intermediate prediction results based on a training error, a validation error, a forecast difference (if available), derivatives of said training and validation errors and forecast difference comprising an error variance, said errors and derivatives being associated with the plurality of first intermediate prediction results.
Yin: Sec. 0067, the present ensemble steps 1 and 2 refer an ensemble of machine learning models to generate an empirical cumulative probability distribution of the forecast. Thereafter, an optimal range of percentile is chosen based on Table 3 and the forecasts of different time scales are computed through Table 4 by a weighted average of the different percentiles of the empirical cumulative probability distribution.).
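The error-weighted ensemble averaging described by Yin at 0064-0065 may be sketched as follows. The model names, forecast values, validation errors, and the inverse-error weighting scheme below are hypothetical illustrations, not values taken from Yin.

```python
# Hypothetical sketch of a weighted-average ensemble: each model's
# forecast is weighted by the inverse of its validation error, so
# models that validated better contribute more to the final result.

forecasts = {"model_a": 102.0, "model_b": 95.0, "model_c": 110.0}
validation_error = {"model_a": 2.0, "model_b": 4.0, "model_c": 8.0}

# Weight each model by 1 / validation error, then normalize to sum to 1.
raw = {m: 1.0 / validation_error[m] for m in forecasts}
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}

ensemble_forecast = sum(weights[m] * forecasts[m] for m in forecasts)
# model_a carries weight 4/7, model_b 2/7, model_c 1/7,
# giving an ensemble forecast of 708/7 (about 101.14).
```

Yin's disclosure also contemplates more sophisticated weighting functions combining training error, validation error, and forecast difference; the inverse-validation-error weighting above is only one simple instance of that family.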
Referring to Claim 14, Yin teaches the method of claim 13, further comprising:
Yin does not explicitly teach receiving historical forecast signals associated with respective historical features and respective historical weight vectors; clustering the historical weight vectors to obtain historical weight clusters; clustering the historical features to obtain historical feature clusters; associating at least one historical weight cluster with at least one historical feature cluster to obtain an associated historical weight-feature cluster, historical weights in the historical weight-feature cluster being indicative of a relative importance of the historical forecast signals; and generating, based on the associated historical weight-feature cluster, the set of forecast signals and the set of weights, at least one of a further explanation and a further interpretation of the meta-prediction.
However, Crabtree teaches these limitations:
receiving historical forecast signals associated with respective historical features and respective historical weight vectors (
Crabtree: Sec. 0091, The composite AI platform comprises a set of neural network models that generate vector embeddings representing input data elements. The embeddings are stored in databases (or in block storage like AWS S3 or Ceph). Additional indices linking vectorized data element representations to ontology elements are created and iteratively refined using contextual information from comparisons between ontological data from knowledge graphs containing facts, entities, and relations using at least vector similarity comparison as part of a comparative objective function for relevance.
Crabtree: Sec. 0093, Contextual information, such as user preferences, search history, device from which a query or recommendation is being sought, recent history of environmental conditions and movement (e.g., just ran through the rain), and location (historical, present and planned-such as from an upcoming calendar invite), plays a role in guiding the reasoning and inference process the system can employ to maximize search or recommendation relevance with minimal user interaction requirements.);
clustering the historical weight vectors to obtain historical weight clusters; clustering the historical features to obtain historical feature clusters; associating at least one historical weight cluster with at least one historical feature cluster to obtain an associated historical weight-feature cluster, historical weights in the historical weight-feature cluster being indicative of a relative importance of the historical forecast signals (
Crabtree: Sec. 0148, financial forecasting AI system blends the predictions of several certified models, each specializing in different asset classes or market conditions. The blending weights are adjusted based on each model's historical performance and current market challenges.
Crabtree: Sec. 0301, the subsystem 3220 can apply unsupervised learning methods such as clustering (e.g., K-means, hierarchical clustering) and topic modeling (e.g., Latent Dirichlet Allocation) to discover semantic categories and hierarchies. In some implementations, various rule-based and statistical approaches may be used for relation extraction, such as pattern-based methods (e.g., Hearst patterns) and deep learning models (e.g., convolutional neural networks, recurrent neural networks).
Crabtree: Sec. 0401, The system incorporates advanced machine learning and data mining techniques to continuously improve the quality and efficiency of the semantic search process. This involves the use of unsupervised learning algorithms, such as clustering and topic modeling, to discover hidden patterns and themes in the data, as well as supervised learning methods, such as relevance feedback and click-through data analysis, to learn from user interactions and preferences.);
and generating, based on the associated historical weight-feature cluster, the set of forecast signals and the set of weights, at least one of a further explanation and a further interpretation of the meta-prediction (
Crabtree: Sec. 0331, The final intent prediction may be obtained by taking the weighted average of individual model predictions. As another example, the platform can employ stacking or meta-learning approaches, where a higher-level model (e.g., logistic regression, random forest) is trained to learn the optimal combination of base model predictions. Additionally, or alternatively, the system can utilize ensemble methods, such as bagging or boosting, to create multiple instances of each model and combine their predictions through voting or averaging. ).
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin in view of Crabtree, which teaches detecting and repairing information technology problems, to efficiently apply machine-learning analysis and enhance the capability to create various types of models, such as forecast, large language, and neural network models. (See Crabtree at 0091, 0148, 0458, 0459).
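The clustering of historical weight vectors recited in claim 14 may be illustrated with a minimal k-means sketch. The sample weight vectors, cluster count, and iteration limit below are hypothetical; they merely show how weight vectors with similar importance profiles group together.

```python
# Hypothetical sketch of clustering historical weight vectors with a
# plain k-means loop (naive initialization, fixed iteration count).

def kmeans(points, k, iters=10):
    centers = points[:k]                      # naive init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Update step: each center moves to its cluster's mean.
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Historical weight vectors forming two obvious groups:
# vectors dominated by the first signal vs. the second signal.
weights = [(0.9, 0.1), (1.0, 0.0), (0.95, 0.05), (0.1, 0.9), (0.0, 1.0)]
centers, clusters = kmeans(weights, k=2)
# Converges to centers near (0.05, 0.95) and (0.95, 0.05).
```

In the claimed method the resulting weight clusters would then be associated with feature clusters obtained the same way, so that each weight-feature cluster indicates which historical forecast signals mattered under which feature conditions.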
Referring to Claim 15, Yin teaches the method of claim 13, further comprising:
providing at least one of said at least one time series to an unsupervised machine learning algorithm (See Crabtree) to discover regimes in said time series; and
generating, based on at least one regime, the set of forecast signals and the set of weights, at least one of a further explanation and a further interpretation of the meta-prediction (
Yin: Sec. 0042, One or more weights are determined based on any function of one or more of a training error, a validation error, the derivatives, the combinations thereof associated with the second plurality of machine learning models and time series models. Thereafter, the high time scale ensembled forecast is adjusted based on one or more low time scale ensembled forecasts and said one or more weights.
Yin: Sec. 0065, forecasts in Ensemble step 1 to output one or more averaged time domain forecast that again may correspond to high time domain or low time domain. The averaging denotes computing a weighted-average based on a) validation error, b) sophisticated functions on training error, validation error, forecast difference (if applicable), or any combination, by the model ensemble based on weights for each selected model results at each data point. Overall, the generation of aggregated result comprises calculating a weighted-average of said shortlisted or the second intermediate forecast results to generate the final prediction result through ensemble step 3 as described later.
Yin: Sec. 0072, Specifically, the step 1102 corresponds to determining one or more weights based on any function of one or more of a training error, a validation error, the derivatives, and the combinations thereof associated with the corresponding machine learning models and time series models.).
Yin does not explicitly teach unsupervised machine learning algorithm.
However, Crabtree teaches unsupervised machine learning algorithm (
Crabtree: Sec. 0401, The system incorporates advanced machine learning and data mining techniques to continuously improve the quality and efficiency of the semantic search process. This involves the use of unsupervised learning algorithms, such as clustering and topic modeling, to discover hidden patterns and themes in the data, as well as supervised learning methods, such as relevance feedback and click-through data analysis, to learn from user interactions and preferences.)
Referring to Claim 16, Yin teaches the method of claim 12. Yin does not explicitly teach further comprising generating, based on the set of forecast signals and historical forecast signals, a set of conviction scores associated with at least one of the set of forecast signals and the meta-prediction, each respective conviction score being indicative of a respective likelihood of a forecast signal being realized; and outputting, to the client device, based on the set of conviction scores, an indication of a level of trust in the meta-prediction.
However, Crabtree teaches further comprising generating, based on the set of forecast signals and historical forecast signals, a set of conviction scores associated with at least one of the set of forecast signals and the meta-prediction, each respective conviction score being indicative of a respective likelihood of a forecast signal being realized; and outputting, to the client device, based on the set of conviction scores, an indication of a level of trust in the meta-prediction (
Crabtree: Sec. 0330, Additionally, the platform 2120 may develop multiple semantic matching models that measure the semantic similarity between the user query and ontological concepts/relationships. This may comprise the use of different similarity measures, such as cosine similarity, Jaccard similarity, or semantic distance metrics (e.g., path-based, information content-based). For example, the platform 2120 can train these models on labeled query-concept pairs or query-relationship pairs, where each pair is assigned a relevance score indicating the semantic relatedness.
Crabtree: Sec. 0331, The platform can implement various model blending techniques to combine the predictions from different intent classification, query expansion, and semantic matching models. As an example, the use of weighted averaging, where each model's prediction is assigned a weight based on its performance or domain expertise. The final intent prediction may be obtained by taking the weighted average of individual model predictions. As another example, the platform can employ stacking or meta-learning approaches, where a higher-level model (e.g., logistic regression, random forest) is trained to learn the optimal combination of base model predictions. Additionally, or alternatively, the system can utilize ensemble methods, such as bagging or boosting, to create multiple instances of each model and combine their predictions through voting or averaging. The platform can continuously evaluate the performance of individual models and blending strategies using evaluation metrics such as precision, recall, F1-score, or normalized discounted cumulative gain (NDCG). Model selection techniques, such as cross-validation or Bayesian optimization, may be used to identify the best-performing models or blending strategies for different query types or domains. In some implementations, the system can leverage online learning or incremental learning approaches to adapt the models in real-time based on user feedback and evolving search patterns.).
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin in view of Crabtree, which teaches detecting and repairing information technology problems, to efficiently apply machine-learning analysis and enhance the capability to create various types of models, such as forecast, large language, and neural network models. (See Crabtree at 0091, 0148, 0458, 0459).
Referring to Claim 17, Yin teaches the method of claim 12, further comprising:
Yin does not explicitly teach generating, using a large language model (LLM), an explanation of the meta-prediction based on the weight vector, the set of features, and the respective themes of the set of forecasting models.
However, Crabtree teaches generating, using a large language model (LLM), an explanation of the meta-prediction based on the weight vector, the set of features, and the respective themes of the set of forecasting models (
Crabtree: Sec. 0326, the user interface 3370 may implement knowledge graph visualization techniques, such as node-link diagrams or hierarchical layouts, to provide interactive exploration and navigation of search results within the ontological structure, and develop natural language generation (NLG) models, such as sequence-to-sequence models or template-based approaches or LLMs, to generate human-readable summaries or explanations of search results based on the knowledge graph information.
Crabtree: Sec. 0331, The final intent prediction may be obtained by taking the weighted average of individual model predictions. As another example, the platform can employ stacking or meta-learning approaches, where a higher-level model (e.g., logistic regression, random forest) is trained to learn the optimal combination of base model predictions. Additionally, or alternatively, the system can utilize ensemble methods, such as bagging or boosting, to create multiple instances of each model and combine their predictions through voting or averaging. ).
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin in view of Crabtree, which teaches detecting and repairing information technology problems, to efficiently apply machine-learning analysis and enhance the capability to create various types of models, such as forecast, large language, and neural network models. (See Crabtree at 0091, 0148, 0458, 0459).
Claims 18-21 recite limitations that stand rejected via the art citations and rationale applied to claims 1, 4, 5, and 6, respectively. Regarding performing a meta-prediction (
Crabtree: Sec. 0331, The final intent prediction may be obtained by taking the weighted average of individual model predictions. As another example, the platform can employ stacking or meta-learning approaches, where a higher-level model (e.g., logistic regression, random forest) is trained to learn the optimal combination of base model predictions. Additionally, or alternatively, the system can utilize ensemble methods, such as bagging or boosting, to create multiple instances of each model and combine their predictions through voting or averaging.)
Referring to Claim 22, Yin teaches the system of claim 18. Yin does not explicitly teach wherein said at least one processor is further configured to generate, by an unsupervised machine learning module, at least two regimes expressing behavioral characteristics of said at least one time series.
However, Crabtree teaches wherein said at least one processor is further configured to generate, by an unsupervised machine learning module, at least two regimes expressing behavioral characteristics of said at least one time series (
Crabtree: Sec. 0401, The system incorporates advanced machine learning and data mining techniques to continuously improve the quality and efficiency of the semantic search process. This involves the use of unsupervised learning algorithms, such as clustering and topic modeling, to discover hidden patterns and themes in the data, as well as supervised learning methods, such as relevance feedback and click-through data analysis, to learn from user interactions and preferences.
Crabtree: Sec. 0107, By combining these different mathematical analysis and knowledge and semantic representation approaches, the platform aims to create a more comprehensive and expressive semantic representation of knowledge, rules, or models that can handle the complexities of language and reasoning across both deterministic and heuristic exploration regimes and across extrapolative and generative modeling techniques to include simulation modeling.
Crabtree: Sec. 0389, A DCG orchestrated model which employs a hierarchical classification and model selection regime for content (either in whole or in part) can enable much more accurate ultimate semantic performance.).
Claims 23-26 recite limitations that stand rejected via the art citations and rationale applied to claims 8-11.
Claims 27-31 recite limitations that stand rejected via the art citations and rationale applied to claims 12-14, 16, and 17, respectively.
Claim 32 recites limitations that stand rejected via the art citations and rationale applied to claims 8 and 18. Regarding an unsupervised machine learning module (
Crabtree: Sec. 0186, The training can take multiple steps, usually starting with an unsupervised learning approach. In that approach, the model is trained on unstructured data and unlabeled data. The benefit of training on unlabeled data is that there is often vastly more data available. At this stage, the model begins to derive relationships between different words and concepts.),
generating, using the set of weights and the set of forecast signals, a meta-prediction, said meta-prediction being further conditioned by identifying a probability that said time series is in a regime of said at least two regimes (
Crabtree: Sec. 0331, The platform can implement various model blending techniques to combine the predictions from different intent classification, query expansion, and semantic matching models. As an example, the use of weighted averaging, where each model's prediction is assigned a weight based on its performance or domain expertise. The final intent prediction may be obtained by taking the weighted average of individual model predictions. As another example, the platform can employ stacking or meta-learning approaches, where a higher-level model (e.g., logistic regression, random forest) is trained to learn the optimal combination of base model predictions. Additionally, or alternatively, the system can utilize ensemble methods, such as bagging or boosting, to create multiple instances of each model and combine their predictions through voting or averaging.
Crabtree: Sec. 0389, A DCG orchestrated model which employs a hierarchical classification and model selection regime for content (either in whole or in part) can enable much more accurate ultimate semantic performance.).
Referring to Claim 33, Yin teaches a system according to claim 32, wherein said at least two regimes represent contextual information relating to the time series (
Yin: Sec. 0041, The generation of an aggregated prediction-result based on the ensemble-learning model comprises selecting at least a subset said intermediate prediction results based on any function of training error, validation error, derivatives of said errors, combinations thereof, and a percentile setting associated with said second plurality of machine learning models and time series models.).
Yin describes settings related to the time series, which constitute contextual information.
Yin does not explicitly teach said at least two regimes.
However, Crabtree teaches said at least two regimes (
Crabtree: Sec. 0107, By combining these different mathematical analysis and knowledge and semantic representation approaches, the platform aims to create a more comprehensive and expressive semantic representation of knowledge, rules, or models that can handle the complexities of language and reasoning across both deterministic and heuristic exploration regimes and across extrapolative and generative modeling techniques to include simulation modeling.
Crabtree: Sec. 0389, A DCG orchestrated model which employs a hierarchical classification and model selection regime for content (either in whole or in part) can enable much more accurate ultimate semantic performance.)
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin in view of Crabtree, which teaches detecting and repairing information technology problems, to efficiently apply machine-learning analysis and enhance the capability to create various types of models, such as forecast, large language, and neural network models. (See Crabtree at 0091, 0148, 0458, 0459).
Referring to Claim 34, Yin teaches a system according to claim 33, wherein said contextual information is graphically illustrated on a graph identifying each of the at least two regimes of the time series, and a probability that the time series is currently in one or another of the at least two regimes (
Yin: Sec. 0045, The input index data may have mixed time-scale interval, e. g, with monthly and quarterly records. For example, GDP index is commonly recorded in quarterly, PMI in monthly, while IHS Market car data in mixed of monthly, quarterly, and yearly. In the present example as depicted in FIG. 5a , the input data or index has a time scale or time series as 1) monthly, 2) quarterly and 2) a mixed time series of monthly and quarterly spanning across a time period defined from January 2017 till December 2019.
Yin: Sec. 0046, Further, as shown in FIG. 5b , the transform module 404 executing the method steps 204, 304 acts as a Transform for index or time-scale to transform index data with mixed high and low time-scale into uniformed high granularity.
Yin: Sec. 0067, Overall, the present ensemble steps 1 and 2 refer an ensemble of machine learning models to generate an empirical cumulative probability distribution of the forecast. Thereafter, an optimal range of percentile is chosen based on Table 3 and the forecasts of different time scales are computed through Table 4 by a weighted average of the different percentiles of the empirical cumulative probability distribution.).
Claim 16 recites limitations similar to those of claim 34 and stands rejected via the art citations and rationale applied to claim 34 above.
Referring to Claim 36, Yin teaches a system according to claim 32. Yin does not explicitly teach wherein each of said regimes is assigned a regime score, said regime score being based at least in part on performance characteristics of each of said regimes.
However, Crabtree teaches wherein each of said regimes is assigned a regime score, said regime score being based at least in part on performance characteristics of each of said regimes (
Crabtree: Sec. 0210, When the Decoder's final hidden states are passed through a linear transformation, they are projected into a vector space with the same dimensionality as the size of the vocabulary. Each dimension in this space corresponds to a specific token in the vocabulary. For example, the model has a vocabulary of 10,000 unique tokens. The linear transformation would project the Decoder's hidden states into a 10,000-dimensional vector space. Each element in this vector represents the model's predicted probability or score for the corresponding token in the vocabulary.
Crabtree: Sec. 0269, An expert may be registered by providing proof of identity and qualifications, and creating an expert profile which can store a variety of information about the expert such as their name, industry, credentials, scores (e.g., scores that the expert has assigned to data sources, models/algorithms, model outputs, and/or the like), and reputation. For example, a university professor who specializes in transformer-based algorithms can register as an expert in the realm of generative algorithms. As another example, a virologist could register as an expert and provide scores for academic papers which disclose a new methodology for viral spread modelling.).
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin, which teaches detecting and repairing information technology problems, in view of Crabtree, to efficiently apply machine learning analysis and thereby enhance the capability to create various types of models, such as forecast, large language, and neural network models. (See Crabtree at 0091, 0148, 0458, 0459).
Referring to Claim 37, Yin teaches a system according to claim 27, further conditioned by identifying a probability that said time series is in a regime of said at least two regimes (
Yin: Sec. 0045, The input index data may have mixed time-scale interval, e. g, with monthly and quarterly records. For example, GDP index is commonly recorded in quarterly, PMI in monthly, while IHS Market car data in mixed of monthly, quarterly, and yearly. In the present example as depicted in FIG. 5a , the input data or index has a time scale or time series as 1) monthly, 2) quarterly and 2) a mixed time series of monthly and quarterly spanning across a time period defined from January 2017 till December 2019.
Yin: Sec. 0046, Further, as shown in FIG. 5b , the transform module 404 executing the method steps 204, 304 acts as a Transform for index or time-scale to transform index data with mixed high and low time-scale into uniformed high granularity.
Yin: Sec. 0067, Overall, the present ensemble steps 1 and 2 refer an ensemble of machine learning models to generate an empirical cumulative probability distribution of the forecast. Thereafter, an optimal range of percentile is chosen based on Table 3 and the forecasts of different time scales are computed through Table 4 by a weighted average of the different percentiles of the empirical cumulative probability distribution.).
Yin does not explicitly teach said system is further adapted to generate, by an unsupervised machine learning module, at least two regimes expressing behavioral characteristics of said at least one time series and said meta-prediction.
However, Crabtree teaches said system is further adapted to generate, by an unsupervised machine learning module, at least two regimes expressing behavioral characteristics of said at least one time series and said meta-prediction (
Crabtree: Sec. 0331, The final intent prediction may be obtained by taking the weighted average of individual model predictions. As another example, the platform can employ stacking or meta-learning approaches, where a higher-level model (e.g., logistic regression, random forest) is trained to learn the optimal combination of base model predictions.
Crabtree: Sec. 0401, The system incorporates advanced machine learning and data mining techniques to continuously improve the quality and efficiency of the semantic search process. This involves the use of unsupervised learning algorithms, such as clustering and topic modeling, to discover hidden patterns and themes in the data, as well as supervised learning methods, such as relevance feedback and click-through data analysis, to learn from user interactions and preferences.
Crabtree: Sec. 0107, By combining these different mathematical analysis and knowledge and semantic representation approaches, the platform aims to create a more comprehensive and expressive semantic representation of knowledge, rules, or models that can handle the complexities of language and reasoning across both deterministic and heuristic exploration regimes and across extrapolative and generative modeling techniques to include simulation modeling.
Crabtree: Sec. 0389, A DCG orchestrated model which employs a hierarchical classification and model selection regime for content (either in whole or in part) can enable much more accurate ultimate semantic performance.).
Yin and Crabtree are both directed to the analysis of machine learning (See Yin at 0041, 0042, 0051, 0072; Crabtree at 0032, 0040, 0146). Yin discloses that additional elements, such as time series models, can be considered (See Yin at 0051). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Yin, which teaches detecting and repairing information technology problems, in view of Crabtree, to efficiently apply machine learning analysis and thereby enhance the capability to create various types of models, such as forecast, large language, and neural network models. (See Crabtree at 0091, 0148, 0458, 0459).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Cella et al., U.S. Pub. 20190146474, (discussing the operation and management of a neural network).
Cella et al., WO Pub. 2022133330, (discussing the operation and management of a neural network).
Panimalar et al., A Review Of Churn Prediction Models Using Different Machine Learning And Deep Learning Approaches In Cloud Environment, https://ph04.tci-thaijo.org/index.php/JCST/article/download/211/12, Journal of Current Science and Technology, 2023 (discussing the use of machine learning in different environments).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UCHE BYRD whose telephone number is (571)272-3113. The examiner can normally be reached Mon.-Fri.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UCHE BYRD/Examiner, Art Unit 3624