DETAILED ACTION
This action is responsive to Applicant’s reply filed 19 November 2025. This action is made final.
Status of the Claims
Claims 1, 8, 11, 15-16 and 20 are currently amended.
Claims 1-20 are currently pending and under examination, of which claims 1, 11, and 16 are independent.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In regard to the rejection of claims 1-20 under 35 U.S.C. 101 for being directed towards an abstract idea without significantly more, Applicant argues the claims are not directed to a judicial exception but rather to an improvement on the technical problem of time-series data processing and predictive modeling (see Applicant’s response, page 10). On Pages 10-11, Applicant argues that the claimed system improves computer functionality and operation to “enable processing of complex ‘wide’ datasets”. However, Applicant’s argument is not persuasive since the improvements of computer functionality and operation are not reflected in the claims. On Page 12, Applicant argues that computing “correlation values between a first product and other products …” produces data structures that cannot be obtained manually since this operation uses “algorithmic computation.” However, Applicant’s argument is not persuasive since “algorithmic computation” is not recited or reflected in the claims. Computing a correlation value is recited at a high level, such that the step is not required to have any specific level of complexity or execution requirements that would preclude the step from being performed entirely in the human mind or with the use of a physical aid.
On Page 12, Applicant argues the selection step improves computer operation and enables “a single forecasting engine to self-configure for different dataset dimensions without human intervention.” Applicant’s arguments are not persuasive since improved computer operation and a self-configuring forecasting engine are not reflected or recited in the claims. On Page 12, Applicant argues that claim 1’s operations cannot be mental processes because they require “large-scale computation of correlation matrices across thousands of time-value pairs, iterative training of algorithmic models, and adaptive adjustment of model architectures”. Applicant’s arguments are not persuasive since the large-scale computation of correlation matrices across thousands of time-value pairs is not recited or reflected in the claims. The computing correlation values step is recited at a high level, such that the step is not required to have any specific level of complexity or execution requirements that would preclude the step from being performed entirely in the human mind or with the use of a physical aid. It is unclear where iterative training is implemented in the claims. It appears that the Applicant is importing limitations from the specification into the claim; therefore, Applicant’s arguments are not persuasive. The adaptive adjustment of model architectures is not recited or reflected in the claims. Selecting a type of machine learning model is recited at a high level, such that the step is not required to have any specific level of complexity or execution requirements that would preclude the step from being performed entirely in the human mind or with the use of a physical aid.
On Pages 12-13, Applicant argues the claims recite the specific improvement of improving computer functionality so that time-series forecasting can automatically adapt to dataset characteristics. However, Applicant’s arguments are not persuasive since the improvement is not reflected or recited in the claims. On Page 13, Applicant argues lag correlation computation, adaptive model selection, and model execution provide improvements in computational efficiency and forecast accuracy. Applicant’s arguments are not persuasive since the improvements are not reflected in the claims. On Page 13, Applicant argues the present claims are directed to the technological improvements of time-series data processing, computational efficiency, expanding model adaptability, and improving forecasting accuracy. However, Applicant’s arguments are not persuasive since the improvements are not reflected or recited in the claims. Thus, the rejections of claims 1-20 as being directed towards an abstract idea without significantly more are maintained.
In regard to the rejection of claims 1-4, 9, 11-13, and 16-18 under 35 U.S.C. 103 as being unpatentable over Wu in view of Valdyanathan, Applicant argues that the examiner has failed to establish a proper prima facie case of obviousness because there is a lack of suggestion to combine the references. Arguments alleging a lack of suggestion to combine references are addressed in MPEP § 2145(X). Per MPEP § 2143, KSR Rationale G can be used to establish a rationale for obviousness if there is “(1) a finding that there was some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine reference teachings; (2) a finding that there was reasonable expectation of success.” On Pages 15-16 of Applicant’s reply, Applicant argues that the rationale for combining Wu and Valdyanathan to use a machine learning model to discover underlying patterns and make accurate predictions is “insufficient”. Applicant’s argument is not persuasive since the motivation to use machine learning models to discover underlying patterns and make accurate predictions arises from the recognized benefit of using a machine learning model to automate decision making and improve prediction performance. Furthermore, a person of ordinary skill in the art would have a reasonable expectation of success combining the time-lag pairs (time series) of Wu with the time-series forecasting machine learning model of Valdyanathan without undue experimentation. Therefore, under KSR Rationale G, the combination of Wu in view of Valdyanathan is sufficient.
On Page 16, Applicant’s argument that the “general desire for accuracy does not constitute a specific motivation to modify Wu’s correlation engine to include adaptive model selection based on dataset size” is not persuasive since the Examiner did not rely on combining Wu and Valdyanathan to include adaptive model selection in the previous office action. On Page 16, Applicant argues that the combination of Wu in view of Valdyanathan would “require substantial redesign of Wu’s statistical engine and Valdyanathan’s ML pipeline.” Applicant’s argument is not persuasive since the Applicant has not demonstrated that the combination would require substantial redesign, nor has the Applicant shown that the combination would alter Wu’s operation. On Page 16, Applicant’s argument that the examiner’s conclusion of obviousness is based on improper hindsight reasoning is not persuasive. The rejection is based on the teachings of Wu and Valdyanathan and knowledge generally available to one of ordinary skill in the art. A person of ordinary skill in the art would have a reasonable expectation of success combining the time-lag pairs (time series) of Wu with the time-series forecasting machine learning model of Valdyanathan without undue experimentation to achieve a predictable result. Therefore, Applicant’s arguments are not persuasive and the rejection of original claim 1 as being unpatentable over Wu in view of Valdyanathan is maintained.
With respect to the arguments provided at pages 14-16 regarding model selection, claim 1’s amendments are newly presented and have been addressed in the rejection below.
Applicant’s arguments regarding the art rejections for claims 6-7 are moot in view of the new grounds of rejection necessitated by Applicant’s amendment.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Independent Claims 1, 11, and 16
Step 2A Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, independent claim 1, under the broadest reasonable interpretation, recites the following limitations that are abstract ideas:
select a first product from the set of products; (mental process)
compute a correlation value between the first product and a plurality of other products from the set of products and for one or more degrees of lag to obtain a set of correlation values representing correlations between the first product and the plurality of other products assessed at prior times; (mental process)
select a subset of products from the set of products based at least in part on the correlation values; (mental process)
select, based on data dimensions of the time-series data comprising a number of time points per series, a type of machine learning model from among a plurality of model types comprising at least one of regression-based model, a decision-tree-based model, or a deep-learning model; (mental process)
The “select a first product” step involves identifying a first product, which amounts to no more than observations, evaluations, and judgments that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper). The claim recites the step of selecting a first product at a high degree of generality, thus the step is not required to have any specific level of complexity that would preclude the step from being a mental process. Therefore, the “select a first product” step is considered to be a mental process, see MPEP § 2106.04(a)(2)(III).
The “compute” step involves calculating correlation values between products, which amounts to no more than observations, evaluations, and judgments that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper). The claim recites the step of computing a correlation value at a high degree of generality, thus the step is not required to have any specific level of complexity that would preclude the step from being a mental process. Therefore, the “compute” step is considered to be a mental process, see MPEP § 2106.04(a)(2)(III).
The “select a subset of products” step involves identifying a subset of products based on correlation values, which amounts to no more than observations, evaluations, and judgments that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper). The claim recites the step of selecting a subset of products at a high degree of generality, thus the step is not required to have any specific level of complexity that would preclude the step from being a mental process. Therefore, the “select a subset of products” step is considered to be a mental process, see MPEP § 2106.04(a)(2)(III).
The “select, based on data dimensions …” step involves identifying a machine learning model to use based on data dimensions, which amounts to no more than observations, evaluations, and judgments that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper). The claim recites the step of selecting a type of machine learning model at a high degree of generality, thus the step is not required to have any specific level of complexity that would preclude the step from being a mental process. Therefore, the step is considered to be a mental process, see MPEP § 2106.04(a)(2)(III).
Therefore, the independent claims recite a judicial exception. Independent claims 11 and 16 recite similar limitations corresponding to claim 1, therefore the same subject matter eligibility analysis is applied.
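For illustration only, the recited lag-correlation computation can be expressed as a short, generic sketch. This sketch is illustrative, is not drawn from Applicant’s disclosure, and uses an ordinary Pearson correlation at an offset of k periods:

```python
# Illustrative sketch of a lag-k correlation between two time series.
# Generic example only; not a reconstruction of the claimed method.

def lag_correlation(x, y, k):
    """Pearson correlation between x[t-k] and y[t] for lag k >= 0."""
    # Align x shifted back by k periods against y.
    xs = x[:len(x) - k] if k > 0 else x
    ys = y[k:]
    n = len(ys)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# A series that leads another by one period correlates perfectly at k=1.
lead = [1, 2, 3, 4, 5]
follow = [0, 1, 2, 3, 4]  # follow[t] == lead[t-1]
```

As the sketch shows, the computation reduces to aligning two value lists and evaluating a standard correlation formula, which is consistent with the high level of generality at which the step is claimed.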
Step 2A Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application?
No, the judicial exception recited above is not integrated into a practical application. The claims recite the following additional elements, but these additional elements are not sufficient to integrate the judicial exception into a practical application:
memory storing computer program instructions; (MPEP § 2106.05(f) mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea)
and one or more processors configured to execute the computer program instructions (MPEP § 2106.05(f) mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea)
retrieve, for each product of a set of products, the time-series data including a plurality of time-value pairs; (MPEP § 2106.05(g) necessary data gathering and insignificant extra-solution activity to the judicial exception)
provide the time-series data associated with each product from the subset of products and the first product to the selected machine learning model trained to predict a future value of the first product based on values of the subset of products at the prior times; (MPEP § 2106.05(f) mere instructions to implement an abstract idea on a computer, or generally links exception to a technological environment)
and obtain, from the machine learning model, prediction data representing a set of predicted values for the first product at one or more future times (MPEP § 2106.05(f) mere instructions to implement an abstract idea on a computer, or generally links exception to a technological environment)
a non-transitory computer readable medium having instructions recorded thereon for generating a prediction of time-series data from data sets (claim 11) (MPEP § 2106.05(f) mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea)
at least one programmable processor cause operations (claims 11 and 16) (MPEP § 2106.05(f) mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea)
The “retrieve” step amounts to mere data gathering and is recited at a high level of generality, thus adding insignificant extra-solution activity to the judicial exception – see MPEP § 2106.05(g). Under MPEP § 2106.05(d), such additional elements have been found by the courts to not integrate a judicial exception into a practical application.
The “provide” step is recited at a high level of generality such that the limitation amounts to no more than mere instructions to “apply” the judicial exception on a computer. It can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of computers, see MPEP § 2106.05(f).
The “obtain” step is recited at a high level of generality such that the limitation amounts to no more than mere instructions to “apply” the judicial exception on a computer. It can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of computers, see MPEP § 2106.05(f).
The remaining additional elements are recited at a high level of generality such that they amount to no more than mere instructions to “apply” an exception using a generic component. Adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea does not integrate the exception into a practical application, see MPEP § 2106.05(f).
Therefore, the above limitations do not integrate the judicial exception into a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The claims do not include additional elements that are sufficient for the claims to amount to significantly more than the judicial exception.
In regard to the “retrieve” step, this step adds insignificant extra-solution activity. The extra-solution activity is a well-understood, routine, and conventional (WURC) activity per MPEP § 2106.05(d)(II): “the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.” The “retrieve” step does not integrate the judicial exception into a practical application and does not amount to significantly more.
In regard to the “provide” and “obtain” steps and the remaining additional elements, the limitations are recited so generically that they amount to no more than mere instructions to “apply” the judicial exception on a computer using generic computer components. Mere instructions to apply a judicial exception cannot provide an inventive concept. See MPEP § 2106.05(f).
Therefore, independent claims 1, 11, and 16 are not patent eligible.
Dependent Claims 2-10, 12-15, and 17-20
The remaining dependent claims being rejected do not recite additional elements, whether considered individually or in combination, that are sufficient to integrate the judicial exception into a practical application or amount to significantly more than a judicial exception.
Dependent claim 2 recites the further limitation of “wherein the machine learning model is configured to generate the prediction data for only the first product.” The step is recited at a high level of generality such that the limitation amounts to no more than mere instructions to “apply” the judicial exception on a computer. It can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of computers; adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea does not integrate the exception into a practical application, see MPEP § 2106.05(f). The limitation does not integrate the judicial exception into a practical application and does not amount to significantly more.
Dependent claim 3 recites the further limitation of “wherein the machine learning model is configured to generate the prediction data for the first product and one or more of the plurality of other products but not for all of the plurality of products.” The step is recited at a high level of generality such that the limitation amounts to no more than mere instructions to “apply” the judicial exception on a computer. It can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of computers; adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea does not integrate the exception into a practical application, see MPEP § 2106.05(f). The limitation does not integrate the judicial exception into a practical application and does not amount to significantly more.
Dependent claim 4 recites the further limitation of “determining a ranking of correlation values from the set of correlation values, wherein the ranking of correlation values indicates which products from the set of products have data trends that are most strongly correlated with a first data trend of the first product, wherein the subset of products have the top N correlation values from the ranking.” This step involves sorting products and identifying subsets based on the identified ranking, which amounts to no more than observations, evaluations, and judgments that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper). The claim recites the step of determining a ranking of correlation values at a high degree of generality, thus the step is not required to have any specific level of complexity that would preclude the step from being a mental process. Therefore, the step is considered to be a mental process, see MPEP § 2106.04(a)(2)(III). This claim does not recite any non-abstract additional elements.
Dependent claim 5 recites the further limitation of “wherein at least one product of the set of products comprises an environmental, social, and governance (ESG) metric, and wherein the ESG metric is one of a carbon metric, an ESG fund ratings metric, or an ESG product involvement metric.” This limitation represents mere necessary data gathering and is recited at a high level of generality, thus adding insignificant extra-solution activity to the judicial exception - see MPEP § 2106.05(g). The extra-solution activity is a well-understood, routine and conventional (WURC) activity per MPEP § 2106.05(d)(II), “the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.” The limitation does not integrate the judicial exception into a practical application and does not amount to significantly more.
Dependent claim 6 recites the further limitation of “wherein a regression model that predicts the future values as implemented by the machine learning model includes a random error term.” The step is recited at a high level of generality such that the limitation amounts to no more than mere instructions to “apply” the judicial exception on a computer. It can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of computers; adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea does not integrate the exception into a practical application, see MPEP § 2106.05(f). The limitation does not integrate the judicial exception into a practical application and does not amount to significantly more.
Dependent claim 7 recites the further limitation of “wherein the correlation value is computed using Spearman's correlation coefficient.” The computing step involves calculating a correlation value by using Spearman's correlation coefficient, which is a mathematical formula that measures the strength of a relationship between two variables. Therefore, the step is considered to represent mathematical calculations and is considered to be an abstract idea of a mathematical concept, see MPEP § 2106.04(a)(2)(I). This claim does not recite any non-abstract additional elements.
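For reference, Spearman’s coefficient is the Pearson correlation computed on the ranks of the data. The following is a minimal, generic sketch (ties are not handled; it is an illustration of the mathematical concept, not Applicant’s implementation):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.

    Minimal sketch; assumes no tied values (tied ranks are not averaged).
    """
    def ranks(v):
        # Rank 1 = smallest value; r[i] is the rank of v[i].
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because the coefficient depends only on ranks, any pair of series related by a monotone trend yields a value of 1 (or -1 for a decreasing trend), underscoring that the claimed computation is a standard mathematical formula.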
Dependent claim 8 recites the following limitations:
when the data dimensions of the time-series data have 2-40 time points per series, select LASSO
when the data dimensions of the time-series data have 40-5000 time points per series, select Random Forests
and when the data dimensions of the time-series data have 5000 or more time points per series, select Deep Learning
The “selecting” steps involve identifying which machine learning model to use based on a range of time points per series, which amounts to no more than observations, evaluations, and judgments that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper). The claim recites the steps of selecting a machine learning model at a high degree of generality, thus the steps are not required to have any specific level of complexity that would preclude the steps from being mental processes. Therefore, the “selecting” steps are considered to be mental processes, see MPEP § 2106.04(a)(2)(III). This claim does not recite any non-abstract additional elements.
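As recited, the selection reduces to simple threshold comparisons on the number of time points per series. Note that the claimed ranges share their endpoints (40 appears in both “2-40” and “40-5000”; 5000 in both “40-5000” and “5000 or more”), so any implementation must adopt a boundary convention. The following generic sketch assigns each shared endpoint to the larger range as an illustrative assumption:

```python
def select_model_type(time_points_per_series: int) -> str:
    """Map dataset length to a model family per the recited ranges.

    The claim's ranges share endpoints (40 and 5000); this sketch
    assigns each boundary to the larger range as an assumed convention.
    """
    if time_points_per_series >= 5000:
        return "deep_learning"   # 5000 or more time points
    if time_points_per_series >= 40:
        return "random_forest"   # 40-5000 time points
    if time_points_per_series >= 2:
        return "lasso"           # 2-40 time points
    raise ValueError("fewer than 2 time points per series")
```

The mapping is a small lookup over three intervals, consistent with the analysis above that the selecting steps amount to simple evaluations and judgments.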
Dependent claim 9 recites the following limitations:
the time-series data includes first time-series data associated with the first product and second time-series data associated with a second product from the set of products;
the first time-series data comprises a first plurality of time-value pairs, wherein each time-value pair of the first plurality of time-value pairs represents a value associated with the first product at each of a first set of times;
the second time-series data comprises a second plurality of time-value pairs, wherein each time-value pair of the second plurality of time-value pairs represents a value associated with the second product at each of a second set of times;
the first set of times being discrete and captured at a first temporal frequency;
the second set of times being discrete and captured at a second temporal frequency;
the first temporal frequency and the second temporal frequency differ
The limitations represent mere necessary data gathering and are recited at a high level of generality, thus adding insignificant extra-solution activity to the judicial exception - see MPEP § 2106.05(g). The extra-solution activity is a well-understood, routine and conventional (WURC) activity per MPEP § 2106.05(d)(II), “the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.” The limitations do not integrate the judicial exception into a practical application and do not amount to significantly more.
Dependent claim 10 recites the further limitation of “generate intermediate values for the second product at each of the first set of times of which there is no corresponding value for the second product from the second plurality of time-value pairs, wherein the intermediate values are determined by interpolating the second plurality of time-value pairs at each of the first set of times of which there is no corresponding value for the second product from the second plurality of time-value pairs.” The “generate” step involves identifying and calculating intermediate values and determining where intermediate values belong, which amounts to no more than observations, evaluations, and judgments that can be performed in the human mind or with the use of a physical aid (e.g., pen and paper). The claim recites the step of generating intermediate values at a high degree of generality, thus the step is not required to have any specific level of complexity that would preclude the step from being a mental process. Therefore, the “generate” step is considered to be a mental process, see MPEP § 2106.04(a)(2)(III). This claim does not recite any non-abstract additional elements.
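For illustration, the recited interpolation can be sketched with ordinary linear interpolation between the nearest recorded observations. Linear interpolation is an assumption made for this sketch only; the claim does not limit the interpolation method:

```python
def fill_intermediate(first_times, second_pairs):
    """Return (time, value) pairs for the second product at each time in
    first_times, linearly interpolating where no recorded value exists.

    Generic sketch; assumes each requested time falls within the span
    of the recorded times in second_pairs.
    """
    known = dict(second_pairs)
    ts = sorted(known)
    out = []
    for t in first_times:
        if t in known:
            out.append((t, known[t]))  # recorded value exists; keep it
            continue
        lo = max(u for u in ts if u < t)  # nearest earlier observation
        hi = min(u for u in ts if u > t)  # nearest later observation
        frac = (t - lo) / (hi - lo)
        out.append((t, known[lo] + frac * (known[hi] - known[lo])))
    return out
```

Each missing value is obtained by a proportional calculation between two recorded values, consistent with the characterization above of the step as calculations performable with a physical aid.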
Dependent claims 12 and 17 recite similar limitations corresponding to claim 2, therefore the same subject matter eligibility analysis is applied.
Dependent claims 13 and 18 recite similar limitations corresponding to claim 4, therefore the same subject matter eligibility analysis is applied.
Dependent claims 14 and 19 recite similar limitations corresponding to claim 5, therefore the same subject matter eligibility analysis is applied.
Dependent claims 15 and 20 recite similar limitations corresponding to claim 8, therefore the same subject matter eligibility analysis is applied.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 9, 11-13, 16-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 20080147486 A1) in view of Valdyanathan et al. (US 20180053255 A1), hereinafter Valdyanathan, in further view of Mariia et al. (“Model selection approach for time series forecasting”), hereinafter Mariia.
With respect to claim 1, Wu teaches:
a system for generating a prediction of time-series data from data sets, the system comprising (Wu discloses “a computerized system for generating a prediction. The system includes a computerized database stored on a computer memory medium. A processor in communication with the database is configured to control the system to receiving a plurality of data streams, determine a strength of one or more of the plurality of data streams, identify at least one of the one or more data streams having a strength greater than a threshold value as a leading indicator, and generate predicted values for the plurality of data streams based on the at least one of the one or more data streams identified as a leading indicator” [0010].
Wu further discloses “our experiments were conducted using monthly demand data that covered the 26-month period from December 2001 to January 2004. The data set included 3,500 semiconductor (IC) products across eight business entities” [0049]. See Fig. 2A depicting time series comprised of (time, quantity) pairs that predict demand quantity.):
memory storing computer program instructions; and one or more processors configured to execute the computer program instructions to (Wu discloses “a computerized system for generating a prediction. The system includes a computerized database stored on a computer memory medium. A processor in communication with the database is configured to control the system to receiving a plurality of data streams” [0010]. Computer program instructions are implied by the use of a processor.):
retrieve, for each product of a set of products, the time-series data including a plurality of time-value pairs (Wu discloses “the exemplary engine is created such that it is convenient to test demand data provided by any semiconductor manufacturer so long they provide their data in a standardized format. A description of this exemplary implementation is described below with reference to the flow chart 300 in FIG. 3A. 1. The user identifies a product group of interest and sets a threshold specifying the minimum time lag and correlation required in step 302. To initialize the procedure, in an exemplary embodiment all products in the group are placed into one common cluster … Given a cluster C of products, select a product i from the cluster and set time lag k=1 in step 304 … Compute the correlation in step 306 between (i) the demand time series associated with product i where the time series is offset by (t-k) and (ii) the demand time series associated with the cluster excluding i (set C\{i})” [0035-0039]. See Fig. 2A depicting time series comprised of (time, quantity) pairs that predict demand quantity.);
select a first product from the set of products (Wu discloses “the user identifies a product group of interest and sets a threshold specifying the minimum time lag and correlation required in step 302. To initialize the procedure, in an exemplary embodiment all products in the group are placed into one common cluster … Given a cluster C of products, select a product i from the cluster and set time lag k=1 in step 304” [0036-0038].);
compute a correlation value between the first product and a plurality of other products from the set of products and for one or more degrees of lag to obtain a set of correlation values representing correlations between the first product and the plurality of other products assessed at prior times (Wu discloses “the user identifies a product group of interest and sets a threshold specifying the minimum time lag and correlation required” [0036].
Wu further discloses “Given a cluster C of products, select a product i from the cluster and set time lag k=1 in step 304 … compute the correlation in step 306 between (i) the demand time series associated with product i where the time series is offset by (t-k) and (ii) the demand time series associated with the cluster excluding i (set C\{i})” [0038-0039].
Wu discloses “We calculate the correlation between the product's demand series (offset by the time lag) and the cluster's demand (excluding the product under consideration). We then rank all of the product-time lag pairs by their absolute correlation over the [estimation period]” [0063].);
select a subset of products from the set of products based at least in part on the correlation values (Wu discloses a leading indicator (‘first product’) and its corresponding cluster (‘subset of products’) are chosen based on their correlation value, “we calculate the correlation between the product's demand series (offset by the time lag) and the cluster's demand (excluding the product under consideration). We then rank all of the product-time lag pairs by their absolute correlation over the EP. For the top 100 product-time lag pairs (leading indicators), we produce a leading indicator-based forecast for months 16 through 26 using the procedure described above, and we compute the forecasting error (in MAPE) using the actual shipment data from the VP” [0063].
Wu discloses “given a product group of interest, the leading indicator engine can often find one or more indicator(s) that predicts the group demand pattern two to eight months ahead of time with a correlation ranging from 0.51 to 0.95. More importantly, these leading indicators are capable of producing reliable forecasts for the larger product group” [0047].);
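For illustration only, the lagged-correlation and leading-indicator ranking procedure attributed to Wu above (offset product i's series by lag k, correlate it against the cluster demand excluding i, then rank product-lag pairs by absolute correlation) can be sketched as follows. This is a hypothetical reconstruction, not code from the cited reference; the function names, data layout, and choice of Pearson correlation are assumptions.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    # Pearson correlation of two equal-length sequences.
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx, sy = pstdev(xs), pstdev(ys)
    return cov / (sx * sy) if sx and sy else 0.0

def lagged_correlations(demand, product, max_lag):
    """For one product, correlate its series offset by lag k against the
    summed demand of the cluster excluding that product (Wu's C\\{i})."""
    cluster = [sum(demand[p][t] for p in demand if p != product)
               for t in range(len(demand[product]))]
    results = []
    for k in range(1, max_lag + 1):
        lead = demand[product][:-k]   # product series shifted earlier by k
        follow = cluster[k:]          # cluster demand at the later times
        results.append((product, k, pearson(lead, follow)))
    return results

def top_indicators(demand, max_lag, n):
    # Rank all (product, lag) pairs by absolute correlation; keep the top n.
    pairs = []
    for p in demand:
        pairs.extend(lagged_correlations(demand, p, max_lag))
    return sorted(pairs, key=lambda r: abs(r[2]), reverse=True)[:n]
```

Ranking by absolute correlation mirrors the quoted passage: a strongly negative leading indicator is as useful for forecasting as a strongly positive one.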
and obtain … prediction data representing a set of predicted values for the first product at one or more future times (Wu discloses Figure 2A (reproduced below) depicting a leading indicator (‘first product’) that predicts demand quantity three months into the future.
[media_image1.png (greyscale): Figure 2A of Wu, reproduced in the original action]
).
However, Wu does not teach providing time-series data to a machine learning model and obtaining prediction data from a machine learning model, which Valdyanathan does:
provide the time-series data associated with each product from the subset of products and the first product to [a] … machine learning model trained to predict a future value of the first product based on values of the subset of products at the prior times (The Examiner interprets “product” according to its broadest reasonable interpretation (in view of the Applicant’s specification at Paragraph 0032) as encompassing a financial asset and impacting factors as disclosed by Valdyanathan.
Valdyanathan discloses “method to perform analysis of asset values and factors, predict the future prices of assets, provide recommendations, and preform actions. A method may include analyzing and forecasting the performance of at least one asset against one or more impacting factors. A financial asset includes, but not limited to a company stock price and asset factors include revenue, sales, EBITDA etc. The impacting factors include, but not limited to a comprehensive set of structured and un-structured data such as SEC filings, company reports, business graphs, news and social media, and economic and non-economic indicators … The method employs self-learning deep machine learning techniques that eliminate human bias, emotions, and conflicts of interest” [Abstract].
Valdyanathan further discloses “the 1-layer prediction is based on the belief that today's market was a result of the past economic conditions. So the model is trained with a historical time series of market prices against the variety of signals whose timestamps lag behind by a canonical time period. While this lag period is user-defined, it is also automatically determined by running the model against various time periods and identifying the one with the least error” [0046].
Valdyanathan discloses “signals include, but not limited to market data as shown in box 11 in FIG. 1, company SEC filings (10K/10Q) … economic indicators from around the world … general data including weather, health, and demographics … unstructured data from social and news media … Signals include economic and market data from developed, emerging, and frontier markets as well” [0013].);
and obtain, from the machine learning model, prediction data representing a set of predicted values for the first product at one or more future times (Valdyanathan discloses “the premise behind the 1-layer prediction is based on the belief that today's market was a result of the past economic conditions. So the model is trained with a historical time series of market prices against the variety of signals whose timestamps lag behind by a canonical time period. While this lag period is user-defined, it is also automatically determined by running the model against various time periods and identifying the one with the least error. Given this approach, the future market prices are simply predicted based on the current impacting factors such as the economic indicators” [0046]. See Figure 4 which depicts generated forecast values at future times.).
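The automatic lag determination Valdyanathan describes, running the model against various candidate lag periods and keeping the one with the least error, can be sketched as follows. The one-variable least-squares fit and in-sample mean squared error are simplifying assumptions; the cited reference does not specify its model or error measure at this level of detail, and all names here are hypothetical.

```python
def mse(pred, actual):
    # Mean squared error over paired observations.
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

def best_lag(signal, target, candidate_lags):
    """Try each candidate lag, fit a naive one-variable linear model on the
    lagged signal, and return the lag yielding the smallest in-sample MSE."""
    best = None
    for k in candidate_lags:
        xs, ys = signal[:-k], target[k:]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        # least-squares slope a and intercept b for y ~ a*x + b
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
        b = my - a * mx
        err = mse([a * x + b for x in xs], ys)
        if best is None or err < best[1]:
            best = (k, err)
    return best[0]
```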
Valdyanathan teaches training a machine learning model to predict a future value of a financial asset (‘product’) at one or more future times is a known method in the art. Before the effective filing date of the claimed invention, it would have been obvious to combine the method of Wu with the machine learning model disclosed by Valdyanathan to discover underlying patterns. By using and training a machine learning model, underlying correlations and patterns between data can be discovered since machine learning models can process large datasets and learn intricate relationships. A trained machine learning model can then use the patterns it has learned to make accurate predictions on unseen data, thereby yielding reliable predictions to use in decision making.
Furthermore, the combination of Wu in view of Valdyanathan does not teach selecting, based on data dimensions of a time-series, a type of machine learning model from among a plurality of model types and providing time-series data to the selected machine learning model trained to predict a future value, which is taught by Mariia:
A system for generating a prediction of time-series data from data sets (Mariia discloses “we check the correctness of the recommendations for choosing a forecasting model from Table 2 depending on data properties for developing an automated meta-algorithm for predicting time series. The main advantage of this method would be simplicity and the absence of the need for preprocessing and decomposition of the time series, which is used in other approaches to meta-learning [see, for example, 16] to obtain information for the preliminary classifier” (P. 2, Sec. II-B, First Paragraph).
Mariia discloses “We have used 250 monthly randomly selected time series from different fields (see Table II) from M4 Kaggle competition with different characteristics” (P. 2, Sec. III, First Paragraph).),
the system comprising: memory storing computer program instructions (Mariia discloses “For the linear regression model and RNN model with 256 LSTM cells we have applied transformation to the supervised learning task with the sliding window of 12 time steps” (P. 3, Sec. III-C, Last Paragraph). Training linear regression and RNN models implies the use of a computer, which further implies a memory storing computer program instructions.);
and one or more processors configured to execute the computer program instructions to (Mariia discloses “For the linear regression model and RNN model with 256 LSTM cells we have applied transformation to the supervised learning task with the sliding window of 12 time steps” (P. 3, Sec. III-C, Last Paragraph). Training linear regression and RNN models implies the use of a computer, which further implies a processor configured to execute programming instructions.):
retrieve … the time-series data including a plurality of time-value pairs (Mariia discloses “The dataset contains of various time series with different length (Table III). The major part of time series set has average size: between 100 and 300 time steps” (P. 2, Sec. III-A, First Paragraph).
Mariia discloses “For measuring the strength of the trend (1) and the strength of the seasonality (2) the approach described in [17] was applied. The core idea of this approach is to measure the proportion of residuals variance in variance of de-trended (deseasonalized) time series after STL decomposition” (P. 2, Sec. III-A, ¶2). A trend of a time series represents the direction or movement of data over time, therefore, a time series consisting of time-values pairs is implied.);
select, based on data dimensions of the time-series data comprising a number of time points per series, a type of machine learning model from among a plurality of model types comprising at least one of regression-based model, a decision-tree-based model, or a deep-learning model (Mariia discloses “the model selection aims to estimate the performance of different model candidates in order to choose the most appropriate one. In this study we suggest exploiting specific features of time series for the optimal forecasting model selection such as length, seasonality, trend strength and others. To demonstrate reliability of feature-based approach, forecasting error distribution of LSTM Recurrent Neural Network, Linear Regression model, Holt-Winters model and ARIMA model trained on 250 time series with various characteristics were compared. Results of statistical experiments have demonstrated a significant dependence of a forecasting model on the characteristics of a series. Proposed model selection approach allows formulating a priori recommendations for choosing the optimal forecasting model for the specific time series” (P. 1, Abstract).
Mariia discloses Table 1 (reproduced below) on P. 2 depicting a table of model selection approaches based on time series length (‘time points per series’) and other characteristics. Table 1 shows that for a time series with a length of 200 or more, a recurrent neural network (‘a deep learning model’) should be selected. Furthermore, Table 1 depicts that for a time series with a length of less than 200, a linear regression model, Holt-Winters model, or an ARIMA model should be selected.
[media_image2.png (greyscale): Table 1 of Mariia, reproduced in the original action]
);
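The length-based recommendation attributed to Mariia's Table 1 (a recurrent neural network for series of 200 or more time steps; regression-family models otherwise) amounts to a simple dispatch on series length. A hypothetical sketch, with string labels standing in for trained models:

```python
def select_model_type(series):
    """Pick a forecasting-model family from series length alone,
    following the 200-step cut-off reported by Mariia."""
    n_time_points = len(series)
    if n_time_points >= 200:
        return "recurrent_neural_network"   # deep-learning model
    return "linear_regression_or_arima"     # regression-based model
```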
provide the time-series data … to the selected machine learning model trained to predict a future value … (Mariia discloses “the model selection process may be seen through the measures of forecasting errors, such as Mean Absolute Percentage Error (MAPE) [8], Mean Squared Error (MSE) [10], Symmetric Mean Absolute Percent Error (sMAPE) [7], adjusted R-squared [5]. For this study MAPE was used to measure model performance because it allows to compare metrics calculated for different time series” (P. 1, Sec. II-A, ¶2).
Mariia discloses “The experiments illustrate significant difference in different model performance for various time series characteristics: such as length, trend and seasonal strength, forecasting horizon length. It gives a ground to select the most appropriate model for data with specific characteristics (Table VIII). The distributions of MAPE errors for different types of time series are presented in Figs. 3,4,5. … As expected, for time series with a length of more than 200 time steps, RNN showed a minimum MAPE value on average. For short time series, models such as linear regression and ARIMA are more preferable” (P. 4, Sec. III-D, ¶4-5).);
and obtain, from the machine learning model, prediction data representing a set of predicted values … at one or more future times (Mariia discloses Table VIII on P. 4 (reproduced below) depicting the calculated mean MAPE (Mean Absolute Percentage Error) of each model. The MAPE is calculated to demonstrate how a model performs depending on time series length and other time series characteristics.
[media_image3.png (greyscale): Table VIII of Mariia, reproduced in the original action]
Mariia discloses MAPE is used to measure forecasting errors, “the model selection process may be seen through the measures of forecasting errors, such as Mean Absolute Percentage Error (MAPE) … For this study MAPE was used to measure model performance because it allows to compare metrics calculated for different time series” (P. 1, Sec. II-A, ¶2). To evaluate a model’s forecasting predictions, predicted future events and values must be obtained, therefore, obtaining prediction data representing a set of predicted values at one or more future times is implied.).
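For reference, the MAPE measure Mariia relies on is the mean of the absolute percentage deviations between actual and predicted values. A minimal sketch (zero-valued actuals are skipped here as a simplifying assumption, since the percentage term is undefined for them):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent, over paired observations.
    Zero-valued actuals are skipped to avoid division by zero."""
    terms = [abs((a - p) / a) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 * sum(terms) / len(terms)
```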
Mariia teaches selecting models based on time series length and measuring forecasting errors is a known method in the art. Before the effective filing date of the claimed invention, it would have been obvious to combine the method of Wu and the machine learning model of Valdyanathan with the model selection approach of Mariia to choose an optimal forecasting model for a time series based on time series length. By choosing an optimal forecasting model based on time series length, forecasting errors can be minimized, thereby improving a model’s ability to generate reliable predictions.
With respect to claim 2, claim 2 is an obvious extension of claim 1. The Examiner finds that it would have been obvious before the effective filing date of the claimed invention to generate prediction data for only a first product with a reasonable expectation of success since there are finite quantities of predictions that can be made. Given that the combination of Wu in view of Valdyanathan and in further view of Mariia teaches the system of claim 1, there are only three possible quantities of predictions that can be made: (1) make predictions for all products; (2) make predictions for some products only; or (3) make no predictions. The advantage of making predictions for all products is that it would result in some predictions that are useful. The advantage of making no predictions is that no computing resources are used. Making only one prediction offers system engineers a way to balance these two concerns, and therefore is obvious. See MPEP § 2143(I)(E) "Obvious to try" rationale. For the foregoing reasons, claim 2 is obvious in view of the combination of Wu in view of Valdyanathan and in further view of Mariia.
With respect to claim 3, claim 3 is an obvious extension of claim 1. The Examiner finds that it would have been obvious before the effective filing date of the claimed invention to generate prediction data for only some products with a reasonable expectation of success since there are finite quantities of predictions that can be made. Given that the combination of Wu in view of Valdyanathan and in further view of Mariia teaches the system of claim 1, there are only three possible quantities of predictions that can be made: (1) make predictions for all products; (2) make predictions for some products only; or (3) make no predictions. The advantage of making predictions for all products is that it would result in some predictions that are useful. The advantage of making no predictions is that no computing resources are used. Making predictions for only some products offers system engineers a way to balance these two concerns, and therefore is obvious. See MPEP § 2143(I)(E) "Obvious to try" rationale. For the foregoing reasons, claim 3 is obvious in view of the combination of Wu in view of Valdyanathan and in further view of Mariia.
With respect to claim 4, the combination of Wu in view of Valdyanathan and in further view of Mariia teaches:
the system of claim 1, further comprising: determining a ranking of correlation values from the set of correlation values (Wu discloses “we calculate the correlation between the product's demand series (offset by the time lag) and the cluster's demand (excluding the product under consideration). We then rank all of the product-time lag pairs by their absolute correlation over the EP. For the top 100 product-time lag pairs (leading indicators), we produce a leading indicator-based forecast for months 16 through 26 using the procedure described above” [0063].),
wherein the ranking of correlation values indicates which products from the set of products have data trends that are most strongly correlated with a first data trend of the first product (Wu discloses Figure 2A (reproduced above) depicting a time series of a leading indicator (‘first product’) that predicts demand quantity three months into the future. A time series of a cluster’s (‘set of products’) demand quantity over time is also depicted on the same plot. The time series (‘trends’) show the cluster and leading indicator have a correlation value of 0.95 and are closely aligned.),
wherein the subset of products have the top N correlation values from the ranking (Wu discloses “we then rank all of the product-time lag pairs by their absolute correlation over the EP. For the top 100 product-time lag pairs (leading indicators), we produce a leading indicator-based forecast for months 16 through 26 using the procedure described above” [0063].).
With respect to claim 9, the combination of Wu in view of Valdyanathan and in further view of Mariia teaches:
the system of claim 1, wherein: the time-series data includes first time-series data associated with the first product and second time-series data associated with a second product from the set of products (Wu discloses “all products in the group are placed into one common cluster … Given a cluster C of products, select a product i from the cluster and set time lag k=1 in step 304 … Compute the correlation in step 306 between (i) the demand time series associated with product i where the time series is offset by (t-k) and (ii) the demand time series associated with the cluster excluding i (set C\{i})” [0035-0039].);
the first time-series data comprises a first plurality of time-value pairs, wherein each time-value pair of the first plurality of time-value pairs represents a value associated with the first product at each of a first set of times (Wu discloses Fig. 2A (reproduced above) depicting a time series comprised of (time, quantity) pairs for a leading indicator (‘first product’).);
the second time-series data comprises a second plurality of time-value pairs, wherein each time-value pair of the second plurality of time-value pairs represents a value associated with the second product at each of a second set of times (Wu discloses Fig. 2A depicting a time series comprised of (time, quantity) pairs for a cluster (‘second product’).);
the first set of times being discrete and captured at a first temporal frequency (See Fig. 2A depicting a time series (‘first set of times’) for a leading indicator (‘first product’) with a time lag of three months (‘first temporal frequency’).);
the second set of times being discrete and captured at a second temporal frequency (See Fig. 2A depicting a time series (‘second set of times’) for a cluster (‘second product’) with no time lag (‘second temporal frequency’) and demand quantity captured each month.);
the first temporal frequency and the second temporal frequency differ (Wu discloses “all products in the group are placed into one common cluster … Given a cluster C of products, select a product i from the cluster and set time lag k=1 in step 304 … Compute the correlation in step 306 between (i) the demand time series associated with product i where the time series is offset by (t-k) and (ii) the demand time series associated with the cluster excluding i (set C\{i})” [0035-0039]. See Fig. 2A showing a first and second time series with differing time lags (‘temporal frequencies’).).
With respect to claim 11, the rejection of claim 1 is incorporated. The difference in scope is:
a non-transitory computer readable medium having instructions recorded thereon for generating a prediction of time-series data from data sets, the instructions when executed by a computer having at least one programmable processor cause operations comprising (Wu discloses “a computerized system for generating a prediction. The system includes a computerized database stored on a computer memory medium. A processor in communication with the database is configured to control the system to receiving a plurality of data streams” [0010]. Computer program instructions are implied by the use of a processor.)
With respect to claim 12, the claim recites similar limitations corresponding to claim 2, therefore the same rationale of rejection is applicable.
With respect to claim 13, the claim recites similar limitations corresponding to claim 4, therefore the same rationale of rejection is applicable.
With respect to claim 16, the rejection of claim 1 is incorporated. The difference in scope is:
a method for implementation by at least one programmable processor (Wu discloses “a computerized system for generating a prediction. The system includes a computerized database stored on a computer memory medium. A processor in communication with the database is configured to control the system to receiving a plurality of data streams” [0010].)
With respect to claim 17, the claim recites similar limitations corresponding to claim 2, therefore the same rationale of rejection is applicable.
With respect to claim 18, the claim recites similar limitations corresponding to claim 4, therefore the same rationale of rejection is applicable.
With respect to claim 20, the combination of Wu in view of Valdyanathan and in further view of Mariia teaches:
the method of claim 16, wherein:
when the data dimensions of the time-series data have 2-40 time points per series, select LASSO,
when the data dimensions of the time-series data have 40-5000 time points per series, select Random Forests,
and when the data dimensions of the time-series data have 5000 or more time points per series, select Deep Learning (Mariia discloses “the model selection aims to estimate the performance of different model candidates in order to choose the most appropriate one. In this study we suggest exploiting specific features of time series for the optimal forecasting model selection such as length, seasonality, trend strength and others. To demonstrate reliability of feature-based approach, forecasting error distribution of LSTM Recurrent Neural Network, Linear Regression model, Holt-Winters model and ARIMA model trained on 250 time series with various characteristics were compared. Results of statistical experiments have demonstrated a significant dependence of a forecasting model on the characteristics of a series. Proposed model selection approach allows formulating a priori recommendations for choosing the optimal forecasting model for the specific time series” (P. 1, Abstract).
Mariia discloses Table 1 (reproduced below) on P. 2 depicting a table of model selection approaches based on time series length (‘time points’) and other features. Table 1 shows that for a time series with a length of 200 or more, a recurrent neural network (a deep learning model) should be selected.
[media_image2.png (greyscale): Table 1 of Mariia, reproduced in the original action]
).
Mariia teaches performing machine learning model selection based on time series length (‘time points’) is a known method in the art. Before the effective filing date of the claimed invention, it would have been obvious to combine the method of Wu with the technique disclosed by Mariia to train an optimal machine learning model. By selecting a machine learning model to train based on time series length, an appropriate model can be chosen to capture underlying patterns and trends since model complexity affects how complex relationships and patterns are learned. Therefore, training a more complex model for longer time series data can yield a more accurate model with reliable predictions.
The claims of the instant application recite contingent limitations. Per MPEP § 2111.04(II), “the broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met. For example, assume a method claim requires step A if a first condition happens and step B if a second condition happens. If the claimed invention may be practiced without either the first or second condition happening, then neither step A or B is required by the broadest reasonable interpretation of the claim.” The claimed invention may be practiced without selecting each of the LASSO, Random Forest, and Deep Learning models. If time-series data with 5000 or more time points per series is acquired, the invention can still be practiced without having to select a LASSO or a Random Forest model. In another example, for time-series data with 2 time points per series, a LASSO model is selected and the claimed invention can still be performed without selecting a Random Forest or Deep Learning model. When one model is selected, the other steps are not required to occur to practice the invention; therefore, the method steps of claim 20 are not required to be performed under a broadest reasonable interpretation of the claim and are obvious over the prior art.
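Although the contingent steps need not all be performed under the broadest reasonable interpretation, the three branches of claim 20 reduce to a single dispatch on series length in which only one branch can execute for any given input. A hypothetical sketch (the claim's overlapping endpoints at 40 and 5000 are resolved upward here; the labels stand in for actual model implementations):

```python
def select_claimed_model(n_time_points):
    # Only one branch runs for a given series length, so the other
    # branches' steps never occur -- the contingent-limitation point above.
    if n_time_points >= 5000:
        return "deep_learning"
    if n_time_points >= 40:
        return "random_forest"
    if n_time_points >= 2:
        return "lasso"
    raise ValueError("a time series needs at least 2 time points")
```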
Claims 5, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Valdyanathan, in further view of Mariia and Polizzotto (US 10810637 B2).
With respect to claim 5, the combination of Wu in view of Valdyanathan and in further view of Mariia teaches the system of claim 1, however, the combination does not teach a product comprising an environmental, social, and governance metric, which Polizzotto does:
wherein at least one product of the set of products comprises an environmental, social, and governance (ESG) metric, and wherein the ESG metric is one of a carbon metric, an ESG fund ratings metric, or an ESG product involvement metric (Polizzotto discloses “a responsibility score may define the manner in which a client (e.g. a corporation or business entity) is viewed with respect to their social responsibility. For example, a current responsibility score associated with the client may include one or more of: an environmental score; a social score; and a governance score, wherein one example of such a responsibility score is an ESG score. As is known in the art, an ESG score is defined using various ESG scoring criteria. Further and as discussed above, social platform promotion process 10 may recommend social platforms (chosen from social platform pool 56) that may address perceived social responsibility issues associated with a client, wherein these social responsibility issues may often be identified by social platform promotion process 10 examining a responsibility score. Accordingly, social platform promotion process 10 may be configured to predict how a responsibility score may change when a client contributes to one of the social platforms recommended by social platform promotion process 10” (Col. 17, line 62 to Col. 18, line 14).
Polizzotto further discloses “examples of ESG criteria used by investors include determining a company's impact on climate change or carbon emissions, water use or conservation efforts, anti-corruption policies, board diversity, human rights efforts and community development” (Col. 10, lines 4-8).).
Polizzotto teaches that using an environmental, social, and governance (ESG) score (‘ESG metric’) to predict a company’s social perception is a known method in the art. Before the effective filing date of the claimed invention, it would have been obvious to combine the method of Wu with the ESG score disclosed by Polizzotto to predict the effect social perception has on a product’s performance. Social perception of a company can affect the performance of its products if the company is deemed socially or environmentally irresponsible; by calculating an ESG score, a company’s perceived social responsibility can be quantified and used to predict a product’s future value. Therefore, including an ESG score in the data analysis would yield a more accurate prediction, since social perceptions influence a product’s performance.
With respect to claims 14 and 19, the claims recite similar limitations corresponding to claim 5, therefore the same rationale of rejection is applicable.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Valdyanathan, in further view of Mariia and Maeser (US 20200327434 A1).
With respect to claim 6, the combination of Wu in view of Valdyanathan and in further view of Mariia teaches the system of claim 1, however, the combination does not teach a regression model with a random error term, which Maeser does:
wherein a regression model that predicts the future values as implemented by the machine learning model includes a random error term (Maeser discloses “the relationship between the response (Y) and predictor variables (X1, X2, X3, X4, X5) can be approximated by the regression models of Y=f(X1)+E for simple regression and Y=f(X1, X2, X3, X4, X5)+E for multiple regression. “E is assumed to be a random error representing the discrepancy in the approximation” and accounts for the “failure of the model to fit the data exactly” [1]” [107].).
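The quoted decomposition Y=f(X1)+E can be illustrated by an ordinary least-squares fit whose residuals play the role of the realized random error term E. A hypothetical sketch, not taken from Maeser:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b, residuals).
    The residuals are the realized values of the error term E in y = f(x) + E,
    i.e., the amount by which the model fails to fit the data exactly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b = my - a * mx
    residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
    return a, b, residuals
```

Large or patterned residuals would suggest, as the rejection rationale notes, that influential variables are missing from the model.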
Maeser teaches that using a regression model with a random error term to make predictions is a known method in the art. Before the effective filing date of the claimed invention, it would have been obvious to combine the method of Wu with the regression model disclosed by Maeser to capture missing variables. By calculating the random error in a regression model, the extent to which the model fails to fit a set of data can be measured. The random error can then be used to draw conclusions about the model, such as that influential variables are missing and the model should be retrained, thereby increasing model accuracy and performance upon retraining.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Valdyanathan, in further view of Mariia and Harwalkar et al. (“Analytical Study of Correlation Between Demand and Renewable Energy Forecasting Using Data Mining/Analytics”), hereinafter Harwalkar.
With respect to claim 7, the combination of Wu in view of Valdyanathan and in further view of Mariia teaches the system of claim 1, however, the combination does not teach calculating a correlation value using Spearman’s correlation coefficient, which Harwalkar does:
wherein the correlation value is computed using Spearman's correlation coefficient (Harwalkar discloses “Spearman's rank correlation coefficient … is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables). It assesses how well the relationship between two variables can be described using a monotonic function. … Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other. Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of −1) rank between the two variables. Spearman's coefficient is appropriate for both continuous and discrete ordinal variables” (P. 74-75, Sec. 4, Last Paragraph).).
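Spearman's coefficient as characterized by Harwalkar, i.e., Pearson correlation applied to the ranks of the observations (with tied values receiving averaged ranks), can be sketched as follows; this implementation is illustrative only.

```python
def ranks(values):
    # Average ranks (1-based); tied values share the mean of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of the tied 1-based positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    # Pearson correlation computed on the rank vectors.
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den if den else 0.0
```

Consistent with the quoted passage, any monotone (not merely linear) relationship yields a coefficient of +1 or -1.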
Harwalkar teaches that using Spearman’s correlation coefficient to rank and correlate variables is a known method in the art. Before the effective filing date of the claimed invention, it would have been obvious to combine the method of Wu with the technique disclosed by Harwalkar to correlate and rank continuous and discrete ordinal variables. By correlating and ranking continuous and discrete ordinal variables, underlying relationships between variables can be discovered and variables can be ranked in order of their influence. Therefore, a better understanding of the data can be achieved to make well-informed decisions.
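The rank-correlation property quoted from Harwalkar above can be sketched with the classic Spearman formula for data without repeated values. This is an illustrative example only, with hypothetical data; it is not Harwalkar’s implementation or the claimed method.

```python
# Illustrative sketch of Spearman's rank correlation coefficient for
# two variables with no repeated values, using the classic formula
#   rho = 1 - 6 * sum(d_i**2) / (n * (n**2 - 1))
# where d_i is the difference between the ranks of the i-th pair.
# Hypothetical example; not the implementation of Harwalkar.

def ranks(values):
    """Assign each value its rank 1..n (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    n = len(xs)
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# A perfectly monotone (though nonlinear) relationship yields rho = +1.
xs = [1, 2, 3, 4, 5]
ys = [1, 8, 27, 64, 125]          # y = x**3, monotone in x
```

As the quoted passage notes, any perfectly monotone relationship, whether linear or not, produces a coefficient of +1, and a fully opposed ranking produces −1.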
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wu in view of Valdyanathan, in further view of Mariia and Yao et al. (US 20190050711 A1), hereinafter Yao.
With respect to claim 10, the combination of Wu in view of Valdyanathan and in further view of Mariia teaches “the system of claim 9, wherein the one or more processors are further caused to”; however, the combination does not teach generating intermediate values by interpolation, which is taught by Yao:
generate intermediate values for the second product at each of the first set of times of which there is no corresponding value for the second product from the second plurality of time-value pairs (Yao discloses “when different sensors acquire and record data, hardware faults or signal transmission faults may happen, the data acquisition frequencies of different sensors may also be different, thus, if the timestamps of the time series data from different sensors are different, missing values are filled into the time series data via a linear interpolation compensation method” [0078].
Yao further discloses “in step S81, when the timestamps of the time series data from different sensors are different, linear interpolation compensation is performed on the time series data with a low sampling frequency. For example, the sampling frequency of the data from the sensor 1 is 10 Hz, and the sampling frequency of the data from the sensor 2 is 100 Hz, so that the timestamps are different. The data with the sampling frequency of 10 Hz is interpolated to the high frequency of 100 Hz first, so that the data from the sensor 1 and the data from the sensor 2 are both 100 Hz and have the same timestamp” [0080-0081].),
wherein the intermediate values are determined by interpolating the second plurality of time-value pairs at each of the first set of times of which there is no corresponding value for the second product from the second plurality of time-value pairs (Yao discloses “in step S81, when the timestamps of the time series data from different sensors are different, linear interpolation compensation is performed on the time series data with a low sampling frequency. For example, the sampling frequency of the data from the sensor 1 is 10 Hz, and the sampling frequency of the data from the sensor 2 is 100 Hz, so that the timestamps are different. The data with the sampling frequency of 10 Hz is interpolated to the high frequency of 100 Hz first, so that the data from the sensor 1 and the data from the sensor 2 are both 100 Hz and have the same timestamp” [0080-0081].).
Yao teaches that filling missing values of time series with low sampling frequencies by using linear interpolation is a known method in the art. Before the effective filing date of the claimed invention, it would have been obvious to combine the method of Wu with the interpolation method disclosed by Yao to create uniformly sampled data. By using linear interpolation to fill in missing values of a time series with a low sampling frequency, time series can be synchronized and given consistent time intervals. By using uniformly sampled time series in data analysis, more consistent and accurate results can be achieved, since filling in missing data can remove biases and incorrect assumptions about underlying patterns.
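The interpolation rationale above, filling in values for a low-frequency series at the timestamps of a higher-frequency grid so both series are synchronized, can be sketched as follows. The data and function names are hypothetical; this is not Yao’s implementation.

```python
# Illustrative sketch: linearly interpolate a low-frequency time
# series at the timestamps of a higher-frequency grid, so that both
# series share uniform, synchronized timestamps. Hypothetical data;
# not the implementation of Yao.

from bisect import bisect_right

def interpolate_at(times, values, t):
    """Linearly interpolate the series (times, values) at timestamp t."""
    i = bisect_right(times, t)
    if i == 0:
        return values[0]          # before the first sample
    if i == len(times):
        return values[-1]         # after the last sample
    t0, t1 = times[i - 1], times[i]
    v0, v1 = values[i - 1], values[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Low-frequency series sampled every 0.4 s; target grid every 0.1 s.
low_t = [0.0, 0.4, 0.8]
low_v = [0.0, 4.0, 8.0]
high_t = [i * 0.1 for i in range(9)]          # 0.0 .. 0.8
filled = [interpolate_at(low_t, low_v, t) for t in high_t]
```

After interpolation, the low-frequency series has a value at every timestamp of the high-frequency grid, mirroring Yao’s 10 Hz-to-100 Hz example at a smaller scale.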
Claims Novel/Non-obvious over Prior Art
After reviewing the prior art, the examiner finds that the claims are not anticipated or rendered obvious by any combination of prior art that the examiner was able to find. Although publications exist that disclose some of the features claimed in claims 8 and 15, no art was found that teaches all of the limitations claimed by the applicant. Please note that claim 20, although containing similar limitations, has a different BRI due to being a method claim containing contingent limitations. See MPEP § 2111.04(ii). The examiner notes these claims are subject to a 35 U.S.C. 101 rejection as described above.
The examiner notes that while most of what the applicant has claimed is individually known in the art, the examiner did not find that it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to arrive at the invention in the specific manner claimed by the applicant. Further, as noted above, the claims are directed to an abstract idea without reciting significantly more.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bledsoe et al. (US 20180300737 A1) teaches a server that selects forecasting models based on time series characteristics, such as number of samples per model.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEDRO J MORALES whose telephone number is (571)272-6106. The examiner can normally be reached 8:30 AM - 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MIRANDA M HUANG can be reached at (571)270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PEDRO J MORALES/Examiner, Art Unit 2124
/VINCENT GONZALES/Primary Examiner, Art Unit 2124