Prosecution Insights
Last updated: April 19, 2026
Application No. 18/061,899

Selecting Influencer Variables in Time Series Forecasting

Non-Final OA (§101, §103, §112)
Filed: Dec 05, 2022
Examiner: SHALABY, AHMAD HUSSAM
Art Unit: 2187
Tech Center: 2100 — Computer Architecture & Software
Assignee: SAP SE
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 17 total applications across all art units; 17 currently pending

Statute-Specific Performance

§101: 27.4% (-12.6% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 11.3% (-28.7% vs TC avg)
§112: 19.4% (-20.6% vs TC avg)

Deltas are vs. the Tech Center average (estimate); based on career data from 0 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Responsive to communications filed on 12/05/2022. Claims 1-20 are pending in the application. Claims 1-20 are rejected.

Priority

No claim to foreign or domestic priority was made in the application data sheet received on 12/05/2022 for the claims filed on 12/05/2022.

Information Disclosure Statement

No IDS form has been received or considered by the examiner as of 02/12/2026.

Drawings

The drawings received on 12/05/2022 are accepted by the examiner.

Specification

The abstract received on 12/05/2022 is 150 words and contains no legal or implied phraseology. The abstract is accepted by the examiner. The specification received on 12/05/2022 is accepted by the examiner.

The use of the term “SAP Analytics Cloud Time Series Forecasting Model”, which is a trade name or a mark used in commerce, has been noted in this application. The term should be accompanied by the generic terminology; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ®, following the term. Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 8, 10, 11, and 16 recite “the new time series model.” There is a lack of antecedent basis in the claims for this term. “The new time series model” should refer either to the “first new time series model” or to the “another new time series model” to provide antecedent basis and clarity as to which time series model is being referred to in the claims. Claims 2-7, 9, 12-15, and 17-20 are rejected based on their dependence on the above claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6 and 11-15 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception (an abstract idea) that has not been integrated into a practical application, and the claims do not recite significantly more than the judicial exception.

Claim 1

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites “A method comprising,” which is a process.

Step 2A Prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES.
Claim 1 recites "calculating contributions of the original set of variables to the original time series model." MPEP 2106.04(a)(2)(I)(C) states: “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.” Calculating contributions of variables to a model is a mathematical calculation performed by an individual to obtain a contribution result (a number) from some mathematical formula (such as a p-value).

The claim further recites "excluding variables falling below a cumulative contribution threshold." Under the broadest reasonable interpretation, excluding variables falling below a contribution threshold is crossing out variables from a list that do not have a minimum contribution value. MPEP 2106.04(a)(2)(III)(B) states: “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea.” An individual can reasonably record a list of variables on a piece of paper alongside contribution values and exclude those variables that they judge do not meet a minimum contribution threshold.

"Creating a first new time series model from remaining variables": under the broadest reasonable interpretation, creating a time series model covers creating a regression equation from the remaining variables on the list. For example, if the model is used to predict the weather, the model equation may use average rainfall for the past 30 days as a parameter. When creating a model, these variables are used to gauge a prediction for the weather. This involves determining an equation that maps a mathematical relationship. MPEP 2106.04(a)(2)(I)(A) states: “A mathematical relationship is a relationship between variables or numbers. A mathematical relationship may be expressed in words or using mathematical symbols. For example, pressure (p) can be described as the ratio between the magnitude of the normal force (F) and area of the surface on contact (A), or it can be set forth in the form of an equation such as p = F/A.”

"If the new time series model is valid based upon a performance horizon, iterating to further reduce a number of variables and generate another new time series model": determining whether a new time series model is valid based upon a performance horizon, under the broadest reasonable interpretation, covers testing the model with known variables. This is an evaluation in the mind of the individual based on what they consider to be “valid” for the model. For example, if the model gives a value within a predetermined margin of error, an individual can interpret this model as valid. MPEP 2106.04(a)(2)(III) states: “Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” Furthermore, the steps of reducing variables and generating a new time series model were already discussed above as falling under abstract ideas.

"If the new time series model is not valid based upon the performance horizon, lowering the cumulative contribution threshold to exclude fewer of the original set of variables in order to generate another new time series model": as already stated, determining whether a new time series model is valid is the abstract idea of an evaluation. Lowering the cumulative contribution threshold is simply changing the threshold contribution value (e.g., changing a 5% contribution minimum to a 2% contribution minimum) and then adding back variables that were crossed out on the list. Lastly, as already stated, generating another new time series model is a recitation of math.

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO.

Claim 1 additionally recites "receiving an original time series model and an original set of variables." MPEP 2106.05(f)(2) states: “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more.” Under the broadest reasonable interpretation, an original time series model is a regression equation, and an original set of variables is a list. This is the use of a computer to perform a routine task, namely receiving information.

"Storing the first new time series model in a non-transitory computer readable storage medium": under the same MPEP 2106.05(f)(2) passage, this limitation pertains to storing the time series model, which is the use of a computer in its ordinary capacity.

"And outputting a selected set of influencer variables … for the another new time series model": under the same MPEP 2106.05(f)(2) passage, this limitation pertains to transmitting (outputting) the selected set of variables.

The presence of the limitation “time series model” in the claims is a field-of-use limitation, in which the abstract ideas above are applied to the field of “time series modeling.” MPEP 2106.05(h) states: “limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.”

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO.
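For orientation, the claim 1 loop analyzed above (calculate contributions, exclude the variables whose cumulative contribution falls below a threshold, refit, test validity over a performance horizon, then either iterate or lower the threshold to exclude fewer variables) can be sketched in a few lines of numpy. This is an illustrative reconstruction only: the OLS model, the contribution proxy (|coefficient| scaled by variable spread), the MAPE validity test, and all names are assumptions, not taken from the application.

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares with intercept (a stand-in for the
    application's unspecified time series model)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # coef[0] is the intercept

def ols_predict(coef, X):
    return coef[0] + X @ coef[1:]

def select_influencers(X, y, horizon, threshold=0.05, max_mape=0.2, step=0.02):
    """Sketch of the claimed loop: drop the weakest variables whose
    cumulative contribution stays below `threshold`; if the reduced model
    is not valid over the performance horizon, lower the threshold so
    fewer variables are excluded on the next pass."""
    X_tr, y_tr = X[:-horizon], y[:-horizon]        # training window
    X_te, y_te = X[-horizon:], y[-horizon:]        # performance horizon
    cols = np.arange(X.shape[1])
    while True:
        coef = ols_fit(X_tr[:, cols], y_tr)
        # Contribution proxy: |coefficient| scaled by variable spread.
        contrib = np.abs(coef[1:]) * X_tr[:, cols].std(axis=0)
        share = contrib / contrib.sum()
        order = np.argsort(share)                  # weakest contributors first
        excluded = order[np.cumsum(share[order]) < threshold]
        if len(excluded) == 0:
            return cols                            # nothing left to exclude
        kept = np.delete(cols, excluded)
        new_coef = ols_fit(X_tr[:, kept], y_tr)
        mape = np.mean(np.abs((y_te - ols_predict(new_coef, X_te[:, kept])) / y_te))
        if mape <= max_mape:
            cols = kept                            # valid: iterate with fewer variables
        else:
            threshold = max(threshold - step, 0.0) # not valid: exclude fewer
```

The loop terminates because each valid pass strictly shrinks the variable set and each invalid pass strictly lowers the threshold toward zero, at which point nothing is excluded and the current set is returned.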
As stated in Step 2A Prong 2, the claim does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 1 is not eligible under 35 U.S.C. 101.

Claim 2

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites “A method as in claim 1,” which is a process.

Step 2A Prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES. Claim 2 recites "wherein the original time series model is elastic-net linear regression." An elastic-net linear regression model is a mathematical model. This does not alter the mathematical calculations performed in claim 1, and claim 2 therefore still recites judicial exceptions.

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 2 does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Claim 2 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 2 is not eligible under 35 U.S.C. 101.

Claim 3

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites “A method as in claim 1,” which is a process.

Step 2A Prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES. Claim 3 recites "wherein the original time series model is L1 trend filtering." An L1 trend filtering model is a mathematical model. This does not alter the mathematical calculations performed in claim 1, and claim 3 therefore still recites judicial exceptions.

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 3 does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Claim 3 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 3 is not eligible under 35 U.S.C. 101.

Claim 4

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites “A method as in claim 1,” which is a process.

Step 2A Prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES. Claim 4 recites "further comprising subjecting variables to regularization prior to calculating the contributions." Subjecting variables to regularization means using a mathematical formula to influence the contribution calculation, which falls within the "mathematical concepts" grouping under MPEP 2106.04(a)(2)(I)(C), quoted above in the claim 1 analysis.

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 4 does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. As stated in Step 2A Prong 2, claim 4 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 4 is not eligible under 35 U.S.C. 101.

Claim 5

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites “A method as in claim 4,” which is a process.

Step 2A Prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES. Claim 5 recites "wherein the regularization is lasso." Subjecting variables to lasso regularization means using a mathematical formula to influence the coefficients of the model, which falls within the "mathematical concepts" grouping under MPEP 2106.04(a)(2)(I)(C), quoted above.

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 5 does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Claim 5 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 5 is not eligible under 35 U.S.C. 101.

Claim 6

Step 1: Is the claimed invention one of the four statutory categories? YES. The claim recites “A method as in claim 4,” which is a process.

Step 2A Prong 1: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES. Claim 6 recites "wherein the regularization is ridge." Subjecting variables to ridge regularization means using a mathematical formula to influence the coefficients of the model, which falls within the "mathematical concepts" grouping under MPEP 2106.04(a)(2)(I)(C), quoted above.

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. Claim 6 does not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. Claim 6 does not recite additional elements that amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 6 is not eligible under 35 U.S.C. 101.

Claim 11

Claim 11 contains all the limitations of claim 4, with the difference being that this claim pertains to an article of manufacture rather than a method.
Therefore, this claim is directed to an abstract idea. Claim 11 differs from claim 4 in that it additionally recites "A non-transitory computer readable storage medium embodying a computer program for performing a method, said method comprising." MPEP 2106.05(f)(2) states: “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more.” Therefore, the presence of a non-transitory computer readable storage medium embodying a computer program for performing the method of claim 4 does not integrate the judicial exception into a practical application and does not amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 11 is not eligible under 35 U.S.C. 101.

Claim 12

Claim 12 is an effective duplicate of claim 5, with the only difference being that it depends on claim 11. For the reasons discussed for claims 11 and 5, this claim is directed to an abstract idea, does not integrate the judicial exception into a practical application, and does not amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 12 is not eligible under 35 U.S.C. 101.

Claim 13

Claim 13 is an effective duplicate of claim 6, with the only difference being that it depends on claim 11. For the reasons discussed for claims 11 and 6, this claim is directed to an abstract idea, does not integrate the judicial exception into a practical application, and does not amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 13 is not eligible under 35 U.S.C. 101.

Claim 14

Claim 14 is an effective duplicate of claim 2, with the only difference being that it depends on claim 11. For the reasons discussed for claims 11 and 2, this claim is directed to an abstract idea, does not integrate the judicial exception into a practical application, and does not amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 14 is not eligible under 35 U.S.C. 101.

Claim 15

Claim 15 is an effective duplicate of claim 3, with the only difference being that it depends on claim 11. For the reasons discussed for claims 11 and 3, this claim is directed to an abstract idea, does not integrate the judicial exception into a practical application, and does not amount to significantly more than the judicial exception. Based on the above facts, the Office concludes that claim 15 is not eligible under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over US 20210027108 A1 (Ishiguro_2021) in view of “Variable selection strategies and its importance in clinical prediction modelling” (Chowdhury_2020).
Claim 1: Ishiguro_2021 makes obvious "A method comprising" (par. 1: “The present invention relates to a data processing apparatus, a data processing method,”).

"receiving an original time series model" (par. 26: “In the present embodiment, more specifically, a chemical mechanical polishing (CMP) apparatus, which is one of the semiconductor manufacturing apparatuses, will be described as the manufacturing apparatus. Here, the data serving as the explanatory variable is a monitor value of a processing condition during the processing, such as a rotation speed of a wafer and a slurry amount, and data related to a state of the apparatus itself, such as a time of using a polishing pad. On the other hand, the data serving as the objective variable is a polishing amount (removal rate: RR) of the wafer by processing in the CMP apparatus. A model formula to be created is a regression formula for predicting the RR based on the above apparatus data.”) Examiner note: the regression formula is the original time series model. See also par. 3: “An average value, a standard deviation value, or the like of the time-series signal is one of the feature amounts.” The model takes in time-series signal data.

"and an original set of variables;" (par. 27: “The apparatus data recorded in the recording unit is computed by the computing unit, and is subjected to pre-analysis processing such as elimination of apparently abnormal data and extraction of the feature amounts.”) Examiner note: the extraction of feature amounts from the time series variable is the original set of variables. See also par. 3: “In normal data processing, a time-series signal obtained from measurement performed by a sensor is not used as it is, and a feature amount that well represents a feature of the signal or a value referred to as a feature is often used.”

"calculating contributions of the original set of variables to the original time series model;" (See Fig. 1, step S101, “create feature amount ranking.” Par. 29: “In the flowchart of FIG. 1, in the first step in the feature amount selection unit, firstly, significance of the feature amount, (Examiner note: a measure of contributions) that is, a feature amount ranking is created by using Fisher criterion which is one of filter methods (S101).”) Examiner note: from the flowchart in Fig. 1 it can be seen that this step occurs before any iterations, and it is therefore understood to be with respect to the original variables and model.

"excluding variables falling below a cumulative contribution threshold;" (See Fig. 1, step S105, “Search optimal value of AIC (o(i)) of evolution index and delete feature amount based on optimal value.” Par. 33: “FIG. 2 shows an example of a graph in which numbers of the respective subsets are plotted on a horizontal axis and the corresponding AICs are plotted on a vertical axis. In the fourth step of the present embodiment, unnecessary feature amounts are determined and deleted based on the AIC calculated in the third step (S105). (Examiner note: a measure of contribution) In the graph 201 of FIG. 2, the AIC has a smallest AIC at a No. 84 subset. This indicates that feature amounts whose ranking is from the first to the 84th contribute to improving the prediction performance, but the 85th and subsequent feature amounts do not contribute to improving the prediction performance. Therefore, in the fourth step, the feature amount corresponding to a ranking of the subset number following the subset number with the smallest AIC is deleted. That is, in the present embodiment, the feature amounts in the 85th and higher rankings are deleted.”) Examiner note: an exclusion of variables 85-100, which fall below the minimal contribution threshold.
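The ranking, AIC-based deletion, and iteration workflow that the rejection attributes to Ishiguro_2021 (pars. 29-38) can be paraphrased in numpy. This is a hedged sketch, not the reference's actual implementation: the OLS model, the |correlation| ranking standing in for the Fisher criterion, and all names are illustrative assumptions.

```python
import numpy as np

def aic(X, y, cols):
    """AIC of an OLS fit on the chosen columns: n*ln(RSS/n) + 2k."""
    A = np.column_stack([np.ones(len(y)), X[:, cols]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ coef) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * A.shape[1]

def rank_features(X, y, cols):
    """Rank surviving features by |correlation| with the target
    (a simple stand-in for the Fisher criterion of step S101)."""
    corr = [abs(np.corrcoef(X[:, c], y)[0, 1]) for c in cols]
    return [c for _, c in sorted(zip(corr, cols), reverse=True)]

def best_prefix(X, y, ranking):
    """Steps S104-S105 paraphrased: evaluate every nested subset of the
    ranking by AIC and delete all features ranked after the subset with
    the smallest AIC."""
    aics = [aic(X, y, ranking[: k + 1]) for k in range(len(ranking))]
    best = int(np.argmin(aics))
    return aics[best], ranking[: best + 1]

def select_by_aic(X, y):
    """Step S102 / par. 38 paraphrased: iterate until the minimum AIC
    stops improving between iterations."""
    cols, prev = list(range(X.shape[1])), np.inf
    while True:
        cur, subset = best_prefix(X, y, rank_features(X, y, cols))
        if cur >= prev:
            return cols          # converged: the AIC no longer decreases
        prev, cols = cur, subset # iterate on the surviving features
```

Because adding an uninformative feature leaves the residual sum of squares nearly unchanged while the 2k penalty grows, the AIC minimum lands just before the uninformative tail of the ranking, which is exactly the deletion rule the reference describes.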
"creating a first new time series model from remaining variables;" (par. 32: “In the third step in the feature amount selection unit, for all the subsets created in the second step, an evaluation index, which is a value serving as an index for evaluating prediction performance in a regression or classification problem, that is, a model formula evaluation index, is calculated and created (S104). The present embodiment involves a regression problem of estimating the RR, and the above-described AIC is adopted as an index for evaluating the prediction performance. The third step is to calculate the respective AIC for all subsets.”) Examiner note: see Fig. 1, steps S104-S107. Step S104, “create model formula evaluation index for each subset,” implies that each subset (the remaining variables after each iteration) is used to create a new model, which is evaluated in each iteration. Although the deleting step S105 occurs after S104, it is understood that each subset necessarily excludes variables from the original set of variables, and that this process occurs iteratively and therefore repeats.

"storing the first new time series model in a non-transitory computer readable storage medium" (par. 24: “The data processing apparatus and the processing apparatus method according to the first embodiment will be described with reference to FIGS. 1 to 5. The data processing apparatus of the present embodiment includes, although not shown, a recording unit that records electronic data and a computing unit that computes the recorded electronic data. Since such a data processing apparatus can be implemented by a general computer that is represented by a personal computer (PC) and includes a central processing unit (CPU) for computing the electronic data, a storage unit for storing the electronic data and various processing programs, an input/output unit including a keyboard and a display, and a communication interface, the data processing apparatus is not shown.” Par. 39: “Using the subset including the selected feature amount in the above procedure, the computing unit creates a regression model for estimating the RR. Information on the regression model and the subset including the feature amount is stored in the recording unit. As described above, a step of acquiring both the apparatus data and the RR data for creating a model for a desired period or a desired amount and selecting the feature amount to create the model is generally referred to as a training step.”) Examiner note: although not expressed in the workflow of Fig. 1, it is understood, and would be obvious to one of ordinary skill in the art, that the regression model is stored. The presence of a “storage unit” makes obvious a non-transitory computer readable storage medium.

"if the new time series model is valid based upon a performance horizon," (par. 38: “The iteration from the second step to the fifth step is performed until a minimum value of an AIC obtained in the third step in an m-th iteration is equal to a minimum value of an AIC obtained in the third step in an (m−1)th iteration. After the iteration is ended, a subset having a lowest AIC becomes a subset including the selected feature amount.”) Examiner note: a determination of an AIC is a check of the validity of the model based on an AIC performance horizon.

"iterating to further reduce a number of variables and generate another new time series model;" (par. 37: “After the fifth step is ended, in the present embodiment, the second to fifth steps are iterated (S102). FIG. 3 shows graphs of AIC values in respective subsets obtained in the third step of respective iterations by iteration. It can be seen that the feature amounts selected by the iteration decrease, and a minimum value of the AIC also decreases, that is, the prediction performance improves.”) Examiner note: as stated, step S104 in the iteration implies a model being generated.

"if the new time series model is not valid based upon the performance horizon," (par. 38, quoted above) Examiner note: a determination of an AIC is a check of the validity of the model based on an AIC performance horizon.

"lowering the cumulative contribution threshold to" (par. 37, quoted above) Examiner note: as stated, step S104 in the iteration implies a model being generated. See also Fig. 2, which depicts the AIC of the No. 50 subset as being too low and the threshold number of features being increased to 84.

"and outputting a selected set of influencer variables" (par. 39: “Using the subset including the selected feature amount in the above procedure, the computing unit creates a regression model for estimating the RR. Information on the regression model and the subset including the feature amount is stored in the recording unit.”)

"for the another new time series model."
(Par 39-40: “As described above, a step of acquiring both the apparatus data and the RR data for creating a model for a desired period or a desired amount and selecting the feature amount to create the model is generally referred to as a training step. On the other hand, an operation step of acquiring only the apparatus data and predicting the RR based on the data is referred to as a testing step.”) Examiner note: The feature amount is used for the another new model in the testing step. Ishiguro_2021 does not expressly recite exclude fewer of the original set of variables. Chowdhury_2019, however, makes obvious exclude fewer of the original set of variables (page 4 col 2 par 3: “Stepwise selection methods are a widely used variable selection technique, particularly in medical applications. This method is a combination of forward and backward selection procedures that allows moving in both directions, adding and removing variables at different steps. … if stepwise selection starts with backward elimination, the variables are deleted from the full model based on statistical significance and then added back if they later appear significant.”) Ishiguro_2021 and Chowdhury_2019 are analogous art to the claimed invention because they are from the same field of endeavor called variable selection. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021 and Chowdhury_2019. The rationale for doing so would have been to follow a teaching and motivation proposed by Chowdhury_2019. Ishiguro_2021 performs feature selection by creating subsets, and then deleting feature amounts that do not improve those subsets. This is similar to the method of backward elimination as taught by Chowdhury_2019, where page 3 col 2 par 3 states: “This method starts with a full model that considers all of the variables to be included in the model.
Variables then are deleted one by one from the full model until all remaining variables are considered to have some significant contribution to the outcome.” Where backward elimination is being done to each subset. One benefit of using stepwise selection, as stated by Chowdhury_2019 at page 5 col 1 par 3, is: “This method (examiner note: stepwise) allows researchers to examine models with different combinations of variables that otherwise may be over looked.6 The method is also comparatively objective as the same variables are generally selected from the same data set even though different persons are conducting the analysis. This helps reproduce the results and validate in model.” Also, as stated by Chowdhury_2019 at page 5 col 1 par 3: “The stepwise selection method is perhaps the most widely used method of variable selection.” Therefore, it would have been obvious to combine the feature elimination and subset workflow of Ishiguro_2021 with the method of stepwise selection of Chowdhury_2019 for the benefit of having an objective model and allowing researchers to examine different combinations to obtain the invention as specified in the claims. One reasonably skilled in the art would know to use the concepts of stepwise selection, as they are “widely used.” Claims 2, 4-6, and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ishiguro_2021, Chowdhury_2019, and further in view of US 20210124089 A1 (Sarwat_2021).
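The backward-elimination procedure quoted above from Chowdhury_2019 (start from the full model, delete the least significant variable, repeat until every survivor contributes significantly) can be sketched as follows. The p-values and the `p_value` callback are hypothetical stand-ins; in practice each deletion would trigger a refit of the model.

```python
def backward_eliminate(variables, p_value, alpha=0.05):
    """Backward elimination per Chowdhury_2019: start with the full model and
    repeatedly drop the least significant variable until all remaining
    variables fall at or below the significance level alpha."""
    kept = list(variables)
    while kept:
        pvals = p_value(kept)                      # refit the model on `kept`
        worst = max(kept, key=lambda v: pvals[v])  # least significant survivor
        if pvals[worst] <= alpha:
            break                                  # everyone left is significant
        kept.remove(worst)
    return kept

# hypothetical p-values that, for simplicity, do not change when the model
# is refit on a smaller subset
TABLE = {"dose": 0.001, "age": 0.03, "shoe_size": 0.70, "zodiac": 0.40}
selected = backward_eliminate(TABLE, lambda kept: {v: TABLE[v] for v in kept})
```

The two insignificant variables are deleted one by one, mirroring the quoted passage; a full stepwise variant would additionally re-test deleted variables for re-entry at each step.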
Claim 2: Ishiguro_2021 makes obvious A method as in claim 1 wherein the original time series model (See claim 1) Ishiguro_2021 and Chowdhury_2019 do not expressly recite is elastic-net linear regression. Sarwat_2021, however, makes obvious is elastic-net linear regression (Sarwat_2021 par 35-36: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection. In datasets with high correlation between the predictors, LASSO's variable selection performs poorly. Also, for datasets where the dimensionality, p, is much less when compared to the number of observations n, ridge regularization outperforms LASSO. An elastic net is a combination of ridge and LASSO regularization techniques that applies an elastic net penalty. Given the tuning parameter λ that controls the penalty's magnitude, the model solves the objective function, F(⋅) defined in Equation (2.1) over its entire grid space. Let f(y, η) denote the negative log-likelihood function for the i-th record. If the response is of type Gaussian, then f(y, η) = (y−η)². The variable α controls the elastic net penalty, with α=0 denoting ridge, α=1 denoting LASSO, and α∈(0, 1) denoting elastic net.”) Ishiguro_2021, Chowdhury_2019 and Sarwat_2021 are analogous art to the claimed invention because they are from the same field of endeavor called machine learning. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019 and Sarwat_2021.
The rationale for doing so would have been to follow a teaching and motivation proposed in the prior art. Sarwat_2021 teaches at par 35-36: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection. In datasets with high correlation between the predictors, LASSO's variable selection performs poorly. Also, for datasets where the dimensionality, p, is much less when compared to the number of observations n, ridge regularization outperforms LASSO. An elastic net is a combination of ridge and LASSO regularization techniques that applies an elastic net penalty.” Ishiguro_2021 likewise seeks to decrease overfitting; indeed, reducing it is a focus of Ishiguro_2021. See par 11: “As mentioned above, the AIC may be used for this evaluation index. However, the AIC is effective as an index for securing the generalization performance of the model and preventing the over-learning, but there are cases where the over-learning occurs even when optimization is performed by evaluation using the AIC.” Therefore, it would have been obvious to combine the feature extraction workflow of Ishiguro_2021 and Chowdhury_2019 with an elastic-net model of Sarwat_2021 for the benefit of reducing overfitting to have a more accurate model to obtain the invention as specified in the claims.
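The elastic-net penalty quoted above from Sarwat_2021, with α=0 giving ridge and α=1 giving LASSO, can be written out directly in the standard form λ·Σ(α|βⱼ| + (1−α)βⱼ²/2); the soft-threshold helper illustrates why the L1 term performs the “automatic variable selection” the quotation mentions. This is a generic sketch of the standard penalty, not code from Sarwat_2021.

```python
def elastic_net_penalty(beta, lam, alpha):
    """lam * sum(alpha*|b| + (1-alpha)/2 * b^2): alpha=1 is pure LASSO (L1),
    alpha=0 is pure ridge (L2), anything in between blends the two."""
    return lam * sum(alpha * abs(b) + (1 - alpha) / 2 * b * b for b in beta)

def soft_threshold(z, t):
    """The L1 part of the penalty shrinks a coefficient toward zero and can
    set it exactly to zero, which is how LASSO deselects variables; ridge,
    by contrast, only ever scales a coefficient down (z / (1 + lam))."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0
```

For β = (1, −2) and λ = 1 the penalty evaluates to 3 under pure LASSO and 2.5 under pure ridge, and a coefficient smaller in magnitude than the threshold is zeroed outright, matching the selection-versus-shrinkage contrast the quoted passage draws.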
Claim 4: Ishiguro_2021 makes obvious A method as in claim 1 (see claim 1) Ishiguro_2021 and Chowdhury_2019 do not expressly recite further comprising subjecting variables to regularization. Sarwat_2021, however, makes obvious further comprising subjecting variables to regularization (par 35: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection.”) Ishiguro_2021, Chowdhury_2019 and Sarwat_2021 are analogous art to the claimed invention because they are from the same field of endeavor called regression and time series data forecasting. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019 and Sarwat_2021. The rationale for doing so would have been to follow a teaching and motivation proposed in the prior art. Sarwat_2021 teaches at par 35-36: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection.” Ishiguro_2021 likewise seeks to decrease overfitting; indeed, reducing it is a focus of Ishiguro_2021. See par 11: “As mentioned above, the AIC may be used for this evaluation index.
However, the AIC is effective as an index for securing the generalization performance of the model and preventing the over-learning, but there are cases where the over-learning occurs even when optimization is performed by evaluation using the AIC.” Therefore, it would have been obvious to combine the feature extraction workflow of Ishiguro_2021 and Chowdhury_2019 with the regularization of variables of Sarwat_2021 for the benefit of reducing overfitting to have a more accurate model to obtain the invention as specified in the claims. Claim 5: A method as in claim 4 (see claim 4) Ishiguro_2021 and Chowdhury_2019 do not expressly recite wherein the regularization is lasso. Sarwat_2021 makes obvious wherein the regularization is lasso (par 35: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection.”) As stated in claim 4, it would have been obvious to combine the feature extraction workflow of Ishiguro_2021 and Chowdhury_2019 with the lasso regularization of variables of Sarwat_2021 for the same benefit of reducing overfitting to have a more accurate model to obtain the invention as specified in the claims. Claim 6: A method as in claim 4 (see claim 4) Ishiguro_2021 and Chowdhury_2019 do not expressly recite wherein the regularization is ridge. Sarwat_2021 makes obvious wherein the regularization is ridge.
(par 35: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection.”) As stated in claim 4, it would have been obvious to combine the feature extraction workflow of Ishiguro_2021 and Chowdhury_2019 with the ridge regularization of variables of Sarwat_2021 for the same benefit of reducing overfitting to have a more accurate model to obtain the invention as specified in the claims. Claim 11: The limitations of claim 11 are substantially the same as those of claim 4 and are therefore rejected due to the same reasons as outlined above for claim 4. Additionally, Ishiguro_2021 makes obvious the additional limitations of A non-transitory computer readable storage medium embodying a computer program for performing a method (par 24: “The data processing apparatus and the processing apparatus method according to the first embodiment will be described with reference to FIGS. 1 to 5. The data processing apparatus of the present embodiment includes, although not shown, a recording unit that records electronic data and a computing unit that computes the recorded electronic data.
Since such a data processing apparatus can be implemented by a general computer that is represented by a personal computer (PC) and includes a central processing unit (CPU) for computing the electronic data, a storage unit for storing the electronic data and various processing programs, an input/output unit including a keyboard and a display, and a communication interface, the data processing apparatus is not shown.”) Claim 12: The limitations of claim 12 are substantially the same as those of claim 5 except that it depends from claim 11 and is therefore rejected due to the same reasons as outlined above for claims 5 and 11. Claim 13: The limitations of claim 13 are substantially the same as those of claim 6 except that it depends from claim 11 and is therefore rejected due to the same reasons as outlined above for claims 6 and 11. Claim 14: Ishiguro_2021 and Chowdhury_2019 make obvious A non-transitory computer readable storage medium as in claim 11 wherein the time series model (see claim 11) Ishiguro_2021 and Chowdhury_2019 do not expressly recite is elastic-net linear regression. Sarwat_2021, however, makes obvious is elastic-net linear regression, Sarwat_2021 par 35-36: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection. In datasets with high correlation between the predictors, LASSO's variable selection performs poorly. Also, for datasets where the dimensionality, p, is much less when compared to the number of observations n, ridge regularization outperforms LASSO.
An elastic net is a combination of ridge and LASSO regularization techniques that applies an elastic net penalty. Given the tuning parameter λ that controls the penalty's magnitude, the model solves the objective function, F(⋅) defined in Equation (2.1) over its entire grid space. Let f(y, η) denote the negative log-likelihood function for the i-th record. If the response is of type Gaussian, then f(y, η) = (y−η)². The variable α controls the elastic net penalty, with α=0 denoting ridge, α=1 denoting LASSO, and α∈(0, 1) denoting elastic net.” Ishiguro_2021, Chowdhury_2019 and Sarwat_2021 are analogous art to the claimed invention because they are from the same field of endeavor called machine learning. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019 and Sarwat_2021. The rationale for doing so would have been to follow a teaching and motivation proposed in the prior art. Sarwat_2021 teaches at par 35-36: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection. In datasets with high correlation between the predictors, LASSO's variable selection performs poorly. Also, for datasets where the dimensionality, p, is much less when compared to the number of observations n, ridge regularization outperforms LASSO.
An elastic net is a combination of ridge and LASSO regularization techniques that applies an elastic net penalty.” Ishiguro_2021 likewise seeks to decrease overfitting; indeed, reducing it is a focus of Ishiguro_2021. See par 11: “As mentioned above, the AIC may be used for this evaluation index. However, the AIC is effective as an index for securing the generalization performance of the model and preventing the over-learning, but there are cases where the over-learning occurs even when optimization is performed by evaluation using the AIC.” Therefore, it would have been obvious to combine the feature extraction workflow of Ishiguro_2021 and Chowdhury_2019 with an elastic-net model of Sarwat_2021 for the benefit of reducing overfitting to have a more accurate model to obtain the invention as specified in the claims. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Ishiguro_2021, Chowdhury_2019 and US 20210064998 A1 (Cheng_2021). Claim 3: Ishiguro_2021 makes obvious A method as in claim 1 wherein the original time series model (see claim 1). Ishiguro_2021 and Chowdhury_2019 do not expressly recite is L1 trend filtering. Cheng_2021, however, makes obvious is L1 trend filtering. Par 28: “In one example of single time series trend detection, the exemplary embodiments detect a trend in each time period. For each local trend, exemplary embodiments need to detect a time length and a slope. The exemplary embodiments can have threshold on length and slope to maintain only a subset of trends. The multivariate time series in the same group, e.g., stocks in the same sector or vehicle speed in the same road segment during a period time, usually has similar trend patterns. The challenge is how to detect the trend of the group as a whole characteristic for group behavior analysis. To address such issue, the exemplary embodiments use an ℓ1 trend filtering method on the whole multi-variate time series.
The exemplary embodiments learn the piecewise linear trends for all the time series jointly using the following equation:” Ishiguro_2021, Chowdhury_2019 and Cheng_2021 are analogous art to the claimed invention because they are from the same field of endeavor called machine learning. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019 and Cheng_2021. The rationale for doing so would have been to follow a motivation proposed in the art by Cheng_2021. Cheng_2021 par 28 states: “The multivariate time series in the same group, e.g., stocks in the same sector or vehicle speed in the same road segment during a period time, usually has similar trend patterns. The challenge is how to detect the trend of the group as a whole characteristic for group behavior analysis. To address such issue, the exemplary embodiments use an ℓ1 trend filtering method on the whole multi-variate time series. The exemplary embodiments learn the piecewise linear trends for all the time series jointly using the following equation:” The inventor of Ishiguro_2021 discusses time series signal measurement from semiconductor manufacturing apparatus. These time series signal measurements may differ based on trends across time, e.g., increased production for seasonal demand. In such a scenario, the inventor of Ishiguro_2021 would be motivated to use L1 trend filtering in order to solve similar multivariate time series problems to detect the trend of a group as a whole, as it is a known time series model for that use. Therefore, it would have been obvious to combine the time series model workflow of Ishiguro_2021 and Chowdhury_2019 with the L1 trend filtering model of Cheng_2021 for the benefit of trend detection to obtain the invention as specified in the claims. Claims 8-10 and 16 are rejected under 35 U.S.C.
103 as being unpatentable over Ishiguro_2021, Chowdhury_2019, and “Impact of Managed Acceleration on In-Memory Database Analytic Workloads” (O’Neill_2011). Claim 8: Ishiguro_2021 makes obvious A method as in claim 1 wherein: the non-transitory computer readable storage medium comprises an (par 24: “The data processing apparatus and the processing apparatus method according to the first embodiment will be described with reference to FIGS. 1 to 5. The data processing apparatus of the present embodiment includes, although not shown, a recording unit that records electronic data and a computing unit that computes the recorded electronic data. Since such a data processing apparatus can be implemented by a general computer that is represented by a personal computer (PC) and includes a central processing unit (CPU) for computing the electronic data, a storage unit for storing the electronic data and various processing programs, an input/output unit including a keyboard and a display, and a communication interface, the data processing apparatus is not shown.”) and (Par 38: “The iteration from the second step to the fifth step is performed until a minimum value of an AIC obtained in the third step in an m-th iteration is equal to a minimum value of an AIC obtained in the third step in an (m−1)th iteration. After the iteration is ended, a subset having a lowest AIC becomes a subset including the selected feature amount.” Examiner note: Where a determination of an AIC is a check of the validity of the model.) Ishiguro_2021 and Chowdhury_2019 do not expressly recite in-memory database … an in-memory database engine of the in-memory database. O’Neill_2011, however, makes obvious in-memory database … an in-memory database engine of the in-memory database (abstract: “In-Memory Databases, such as SAP HANA, enable new levels of database performance by removing the disk bottleneck and by compressing data in memory.
The consequence of this improved performance means that reports and analytic queries can now be processed on demand.”) Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 are analogous art to the claimed invention because they are from the same field of endeavor called machine learning. Where O’Neill_2011 focuses on systems to perform the workflows outlined in Ishiguro_2021 and Chowdhury_2019. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019, and O’Neill_2011. The rationale for doing so would have been to follow a teaching proposed by O’Neill_2011. O’Neill_2011 page 1 par 1 states: “The emergence of In-Memory Databases (IMDBs) has helped to erode the bottle necks of disk access latency and thus speed-up query processing times. For analytic queries, having data in-memory avoids the need to read tables from disk and the data is always available for immediate processing.” The inventors of Ishiguro_2021 and Chowdhury_2019 would recognize that their workflows rely on memory, and would likely have used an in-memory database to process the regression analysis results and the AIC for each subset more quickly. Therefore, it would have been obvious to combine the workflow and variable extraction techniques of Ishiguro_2021 and Chowdhury_2019 with the in-memory databases of O’Neill_2011 for the benefit of faster processing to obtain the invention as specified in the claims. Claim 9: Ishiguro_2021 makes obvious A method as in claim 1 wherein: the non-transitory computer readable storage medium comprises an in-memory par 24: “The data processing apparatus and the processing apparatus method according to the first embodiment will be described with reference to FIGS. 1 to 5.
The data processing apparatus of the present embodiment includes, although not shown, a recording unit that records electronic data and a computing unit that computes the recorded electronic data. Since such a data processing apparatus can be implemented by a general computer that is represented by a personal computer (PC) and includes a central processing unit (CPU) for computing the electronic data, a storage unit for storing the electronic data and various processing programs, an input/output unit including a keyboard and a display, and a communication interface, the data processing apparatus is not shown.” See Fig. 1 step S101 “create feature amount ranking”. Par 29: “In the flowchart of FIG. 1, in the first step in the feature amount selection unit, firstly, significance of the feature amount, (Examiner note: a measure of contributions) that is, a feature amount ranking is created by using Fisher criterion which is one of filter methods (S101).” Ishiguro_2021 and Chowdhury_2019 do not expressly recite in-memory database … an in-memory database engine of the in-memory database. O’Neill_2011, however, makes obvious in-memory database … an in-memory database engine of the in-memory database (abstract: “In-Memory Databases, such as SAP HANA, enable new levels of database performance by removing the disk bottleneck and by compressing data in memory. The consequence of this improved performance means that reports and analytic queries can now be processed on demand.”) Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 are analogous art to the claimed invention because they are from the same field of endeavor called machine learning. Where O’Neill_2011 focuses on systems to perform the workflows outlined in Ishiguro_2021 and Chowdhury_2019. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019, and O’Neill_2011.
The rationale for doing so would have been to follow a teaching proposed by O’Neill_2011. O’Neill_2011 page 1 par 1 states: “The emergence of In-Memory Databases (IMDBs) has helped to erode the bottle necks of disk access latency and thus speed-up query processing times. For analytic queries, having data in-memory avoids the need to read tables from disk and the data is always available for immediate processing.” The inventors of Ishiguro_2021 and Chowdhury_2019 would recognize that their workflows rely on memory, and would likely have used an in-memory database to process the regression analysis results and the AIC for each subset more quickly. Therefore, it would have been obvious to combine the workflow and variable extraction techniques of Ishiguro_2021 and Chowdhury_2019 with the in-memory databases of O’Neill_2011 for the benefit of faster processing to obtain the invention as specified in the claims. Claim 10: Ishiguro_2021 makes obvious A method as in claim 1 wherein: the non-transitory computer readable storage medium comprises an in-memory (par 24: “The data processing apparatus and the processing apparatus method according to the first embodiment will be described with reference to FIGS. 1 to 5. The data processing apparatus of the present embodiment includes, although not shown, a recording unit that records electronic data and a computing unit that computes the recorded electronic data.
Since such a data processing apparatus can be implemented by a general computer that is represented by a personal computer (PC) and includes a central processing unit (CPU) for computing the electronic data, a storage unit for storing the electronic data and various processing programs, an input/output unit including a keyboard and a display, and a communication interface, the data processing apparatus is not shown.”) par 32: “In the third step in the feature amount selection unit, for all the subsets created in the second step, an evaluation index, which is a value serving as an index for evaluating prediction performance in a regression or classification problem, that is, a model formula evaluation index, is calculated and created (S104). The present embodiment involves a regression problem of estimating the RR, and the above-described AIC is adopted as an index for evaluating the prediction performance. The third step is to calculate the respective AIC for all subsets.” Examiner note: See Fig. 1 steps S104 – S107, where S104 “create model formula evaluation index for each subset” implies that each subset (the remaining variables after each iteration) is used to create a new model which is evaluated in each iteration. While the deleting step S105 occurs after S104, it is understood that each subset necessarily excludes variables from the original set of variables, and that this process occurs iteratively and therefore repeats. Ishiguro_2021 and Chowdhury_2019 do not expressly recite in-memory database … an in-memory database engine of the in-memory database. O’Neill_2011, however, makes obvious in-memory database … an in-memory database engine of the in-memory database (abstract: “In-Memory Databases, such as SAP HANA, enable new levels of database performance by removing the disk bottleneck and by compressing data in memory.
The consequence of this improved performance means that reports and analytic queries can now be processed on demand.”) Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 are analogous art to the claimed invention because they are from the same field of endeavor called machine learning. Where O’Neill_2011 focuses on systems to perform the workflows outlined in Ishiguro_2021 and Chowdhury_2019. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019, and O’Neill_2011. The rationale for doing so would have been to follow a teaching proposed by O’Neill_2011. O’Neill_2011 page 1 par 1 states: “The emergence of In-Memory Databases (IMDBs) has helped to erode the bottle necks of disk access latency and thus speed-up query processing times. For analytic queries, having data in-memory avoids the need to read tables from disk and the data is always available for immediate processing.” The inventors of Ishiguro_2021 and Chowdhury_2019 would recognize that their workflows rely on memory, and would likely have used an in-memory database to process the regression analysis results and the AIC for each subset more quickly. Therefore, it would have been obvious to combine the workflow and variable extraction techniques of Ishiguro_2021 and Chowdhury_2019 with the in-memory databases of O’Neill_2011 for the benefit of faster processing to obtain the invention as specified in the claims. Claim 16: The limitations of claim 16 are substantially the same as those of claim 1 and are therefore rejected due to the same reasons as outlined above for claim 1.
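The per-subset “model formula evaluation index” of step S104 (the AIC, per the par. 32 quotation in the claim 10 mapping above) can be made concrete for a simple least-squares line fit. This is a generic sketch, not code from Ishiguro_2021; the data points are hypothetical.

```python
import math

def aic_of_line_fit(xs, ys):
    """Fit y = a + b*x by ordinary least squares and return the AIC,
    n*ln(RSS/n) + 2k with k = 2 parameters; constants that do not depend
    on the model are dropped, since only AIC differences matter when
    comparing subsets."""
    n = len(ys)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    rss = sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))
    return n * math.log(rss / n) + 2 * 2

# a subset whose model fits the data better earns a lower (= better) AIC
rough = aic_of_line_fit([0, 1, 2, 3], [0.0, 1.0, 2.0, 4.0])
close = aic_of_line_fit([0, 1, 2, 3], [0.0, 1.0, 2.0, 3.1])
```

Computing such an index for every candidate subset, deleting the weakest subsets, and repeating until the minimum stops improving is the S104-S107 loop the examiner reads onto the claims.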
Additionally, Ishiguro_2021 makes obvious the additional limitations of A computer system comprising: one or more processors; a software program, executable on said computer system, the software program configured to cause an in-memory (par 24: “The data processing apparatus and the processing apparatus method according to the first embodiment will be described with reference to FIGS. 1 to 5. The data processing apparatus of the present embodiment includes, although not shown, a recording unit that records electronic data and a computing unit that computes the recorded electronic data. Since such a data processing apparatus can be implemented by a general computer that is represented by a personal computer (PC) and includes a central processing unit (CPU) for computing the electronic data, a storage unit for storing the electronic data and various processing programs, an input/output unit including a keyboard and a display, and a communication interface, the data processing apparatus is not shown.”) Ishiguro_2021 and Chowdhury_2019 do not expressly recite database engine of an in-memory database. O’Neill_2011, however, makes obvious database engine of an in-memory database (abstract: “In-Memory Databases, such as SAP HANA, enable new levels of database performance by removing the disk bottleneck and by compressing data in memory. The consequence of this improved performance means that reports and analytic queries can now be processed on demand.”) Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 are analogous art to the claimed invention because they are from the same field of endeavor called machine learning. Where O’Neill_2011 focuses on systems to perform the workflows outlined in Ishiguro_2021 and Chowdhury_2019. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019, and O’Neill_2011. The rationale for doing so would have been to follow a teaching proposed by O’Neill_2011.
O’Neill_2011, page 1, par. 1, states: “The emergence of In-Memory Databases (IMDBs) has helped to erode the bottle necks of disk access latency and thus speed-up query processing times. For analytic queries, having data in-memory avoids the need to read tables from disk and the data is always available for immediate processing.” The inventors of Ishiguro_2021 and Chowdhury_2019 would recognize that their systems had memory functionality and would likely have used an in-memory database to process the regression analysis results and the AIC for each subset more quickly. Therefore, it would have been obvious to combine the workflow and variable extraction techniques of Ishiguro_2021 and Chowdhury_2019 with the in-memory databases of O’Neill_2011 for the benefit of faster processing to obtain the invention as specified in the claims. Claims 7, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ishiguro_2021, Chowdhury_2019, Sarwat_2021, and O’Neill_2011. Claim 7: A method as in claim 4, the non-transitory computer readable storage medium (see claim 4). Ishiguro_2021 and Chowdhury_2019 do not expressly recite comprises an in-memory database; and an in-memory database engine of the in-memory database performs the regularization. Sarwat_2021, however, makes obvious that the non-transitory computer readable storage medium comprises an in-memory database. Par. 126: “The methods and processes described herein can be embodied as code and/or data. The software code and data described herein can be stored on one or more machine-readable media (e.g., computer-readable media), which may include any device or medium that can store code and/or data for use by a computer system. When a computer system and/or processor reads and executes the code and/or data stored on a computer-readable medium, the computer system and/or processor performs the methods and processes embodied as data structures and code stored within the computer-readable storage medium.
It should be appreciated by those skilled in the art that computer-readable media include removable and non-removable structures/devices that can be used for storage of information, such as computer-readable instructions, data structures, program modules, and other data used by a computing system/environment. A computer-readable medium includes, but is not limited to, volatile memory such as random access memories (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only-memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM), and magnetic and optical storage devices (hard drives, magnetic tape, CDs, DVDs); network devices; or other media now known or later developed that are capable of storing computer-readable information/data. Computer-readable media and machine-readable media should not be construed or interpreted to include any propagating signals. A computer-readable medium of the subject invention can be, for example, a compact disc (CD), digital video disc (DVD), flash memory device, volatile memory, or a hard disk drive (HDD), such as an external HDD or the HDD of a computing device, though embodiments are not limited thereto. 
A computing device can be, for example, a laptop computer, desktop computer, server, cell phone, or tablet, though embodiments are not limited thereto.” and an in-memory (par. 35: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection.”) Ishiguro_2021, Chowdhury_2019, and Sarwat_2021 do not expressly recite this limitation; O’Neill_2011, however, makes it obvious (abstract: “In-Memory Databases, such as SAP HANA, enable new levels of database performance by removing the disk bottleneck and by compressing data in memory. The consequence of this improved performance means that reports and analytic queries can now be processed on demand.”) Ishiguro_2021, Chowdhury_2019, Sarwat_2021, and O’Neill_2011 are analogous art to the claimed invention because they are from the same field of endeavor, machine learning, where O’Neill_2011 focuses on systems to perform the workflows outlined in Ishiguro_2021, Chowdhury_2019, and Sarwat_2021. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019, Sarwat_2021, and O’Neill_2011. The rationale for doing so would have been to follow a teaching proposed by O’Neill_2011. O’Neill_2011, page 1, par. 1, states: “The emergence of In-Memory Databases (IMDBs) has helped to erode the bottle necks of disk access latency and thus speed-up query processing times.
For analytic queries, having data in-memory avoids the need to read tables from disk and the data is always available for immediate processing.” The inventors of Ishiguro_2021, Chowdhury_2019, and Sarwat_2021 would recognize that their systems had memory functionality and would likely have used an in-memory database to process the regression analysis results and the AIC for each subset more quickly. Therefore, it would have been obvious to combine the workflow and variable extraction techniques of Ishiguro_2021, Chowdhury_2019, and Sarwat_2021 with the in-memory databases of O’Neill_2011 for the benefit of faster processing to obtain the invention as specified in the claims. Claim 17: Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 make obvious A computer system as in claim 16 wherein the in-memory database engine is further configured (see claim 16). Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 do not expressly recite subjecting variables to regularization. Sarwat_2021, however, makes obvious further comprising subjecting variables to regularization (par. 35: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection.”) Ishiguro_2021, Chowdhury_2019, O’Neill_2011, and Sarwat_2021 are analogous art to the claimed invention because they are from the same field of endeavor, regression and time series data forecasting. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019, O’Neill_2011, and Sarwat_2021.
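The ridge and LASSO penalties quoted from Sarwat_2021, par. 35, can be sketched in a single proximal-gradient loop. The objective below covers both, plus the elastic-net mixture via alpha; the solver choice, step size, and iteration count are illustrative assumptions, not details from Sarwat_2021.

```python
def soft_threshold(v, t):
    """Proximal operator of t*|v|: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def elastic_net(X, y, lam, alpha, step=0.01, iters=2000):
    """Proximal-gradient (ISTA-style) minimization of
    (1/2n)*||y - X b||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2).
    alpha=0 gives ridge, alpha=1 gives LASSO, 0<alpha<1 gives elastic net."""
    n, k = len(X), len(X[0])
    beta = [0.0] * k
    for _ in range(iters):
        resid = [y[i] - sum(X[i][j] * beta[j] for j in range(k)) for i in range(n)]
        # Gradient of the smooth part: squared loss plus the ridge term.
        grad = [-sum(X[i][j] * resid[i] for i in range(n)) / n
                + lam * (1 - alpha) * beta[j] for j in range(k)]
        # Gradient step, then soft-threshold for the L1 (LASSO) term.
        beta = [soft_threshold(beta[j] - step * grad[j], step * lam * alpha)
                for j in range(k)]
    return beta
```

Setting alpha=0 keeps all predictors in the model (ridge), while alpha=1 performs continuous shrinkage and automatic variable selection (LASSO), matching the roles the quoted paragraph assigns to each penalty.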
The rationale for doing so would have been to follow a teaching and motivation proposed in the prior art. Sarwat_2021, par. 35-36, teaches: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection.” Ishiguro_2021 likewise seeks to reduce overfitting. See par. 11: “As mentioned above, the AIC may be used for this evaluation index. However, the AIC is effective as an index for securing the generalization performance of the model and preventing the over-learning, but there are cases where the over-learning occurs even when optimization is performed by evaluation using the AIC.” Therefore, it would have been obvious to combine the feature extraction workflow and database engine of Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 with the regularization of variables of Sarwat_2021 for the benefit of reducing overfitting to have a more accurate model to obtain the invention as specified in the claims. Claim 18: Ishiguro_2021, Chowdhury_2019, Sarwat_2021, and O’Neill_2011 make obvious A computer system as in claim 17 wherein the regularization (see claim 17) comprises lasso or ridge. Ishiguro_2021 does not expressly recite that the regularization comprises lasso or ridge; Sarwat_2021, however, makes this obvious.
(par. 35: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection.”) Ishiguro_2021, Chowdhury_2019, O’Neill_2011, and Sarwat_2021 are analogous art to the claimed invention because they are from the same field of endeavor, regression and time series data forecasting. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019, O’Neill_2011, and Sarwat_2021. The rationale for doing so would have been to follow a teaching and motivation proposed in the prior art. Sarwat_2021, par. 35-36, teaches: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection.” Ishiguro_2021 likewise seeks to reduce overfitting. See par. 11: “As mentioned above, the AIC may be used for this evaluation index.
However, the AIC is effective as an index for securing the generalization performance of the model and preventing the over-learning, but there are cases where the over-learning occurs even when optimization is performed by evaluation using the AIC.” Therefore, it would have been obvious to combine the feature extraction workflow and database engine of Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 with the regularization of variables using lasso or ridge of Sarwat_2021 for the benefit of reducing overfitting to have a more accurate model to obtain the invention as specified in the claims. Claim 20: Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 make obvious A computer system as in claim 16 wherein the time series model (see claim 16). Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 do not expressly recite that it is elastic-net linear regression. Sarwat_2021, however, makes this obvious. Par. 35-36: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection. In datasets with high correlation between the predictors, LASSO's variable selection performs poorly. Also, for datasets where the dimensionality, p, is much less when compared to the number of observations n, ridge regularization outperforms LASSO. An elastic net is a combination of ridge and LASSO regularization techniques that applies an elastic net penalty. Given the tuning parameter λ that controls the penalty's magnitude, the model solves the objective function, F(⋅) defined in Equation (2.1) over its entire grid space.
Let f(y, η) denote the negative log-likelihood function for the i-th record. If the response is of type Gaussian, then f(y, η) = (y − η)^2. The variable α controls the elastic net penalty, with α=0 denoting ridge, α=1 denoting LASSO, and α∈(0, 1) denoting elastic net.” Ishiguro_2021, Chowdhury_2019, O’Neill_2011, and Sarwat_2021 are analogous art to the claimed invention because they are from the same field of endeavor, machine learning. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019, O’Neill_2011, and Sarwat_2021. The rationale for doing so would have been to follow a teaching and motivation proposed in the prior art. Sarwat_2021, par. 35-36, teaches: “To reduce overfitting and improve the ordinary least squares estimates of a linear regression model, two types of penalization techniques are used: ridge regularization to minimize the residual sum of squares with respect to the L2 norm of the coefficients that keeps all predictors in the model; and least absolute shrinkage and selection operator (LASSO) regularization to minimize the residual sum of squares contingent on the L1 norm of the coefficients through continuous shrinkage and automatic variable selection. In datasets with high correlation between the predictors, LASSO's variable selection performs poorly. Also, for datasets where the dimensionality, p, is much less when compared to the number of observations n, ridge regularization outperforms LASSO. An elastic net is a combination of ridge and LASSO regularization techniques that applies an elastic net penalty.” Ishiguro_2021 likewise seeks to reduce overfitting. See par. 11: “As mentioned above, the AIC may be used for this evaluation index.
However, the AIC is effective as an index for securing the generalization performance of the model and preventing the over-learning, but there are cases where the over-learning occurs even when optimization is performed by evaluation using the AIC.” Therefore, it would have been obvious to combine the feature extraction workflow and database engine of Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 with an elastic-net model of Sarwat_2021 for the benefit of reducing overfitting to have a more accurate model to obtain the invention as specified in the claims. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Ishiguro_2021, Chowdhury_2019, Sarwat_2021, and Cheng_2021. Claim 15: Ishiguro_2021, Chowdhury_2019, and Sarwat_2021 make obvious A non-transitory computer readable storage medium as in claim 11 wherein the time series model (see claim 11). Ishiguro_2021, Chowdhury_2019, and Sarwat_2021 do not expressly recite L1 trend filtering. Cheng_2021, however, makes obvious L1 trend filtering. Par. 28: “In one example of single time series trend detection, the exemplary embodiments detect a trend in each time period. For each local trend, exemplary embodiments need to detect a time length and a slope. The exemplary embodiments can have threshold on length and slope to maintain only a subset of trends. The multivariate time series in the same group, e.g., stocks in the same sector or vehicle speed in the same road segment during a period time, usually has similar trend patterns. The challenge is how to detect the trend of the group as a whole characteristic for group behavior analysis. To address such issue, the exemplary embodiments use an l1 trend filtering method on the whole multi-variate time series.
The exemplary embodiments learn the piecewise linear trends for all the time series jointly using the following equation:” Ishiguro_2021, Chowdhury_2019, Sarwat_2021, and Cheng_2021 are analogous art to the claimed invention because they are from the same field of endeavor, machine learning. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019, Sarwat_2021, and Cheng_2021. The rationale for doing so would have been to follow a motivation proposed in the art by Cheng_2021. Cheng_2021, par. 28, states: “The multivariate time series in the same group, e.g., stocks in the same sector or vehicle speed in the same road segment during a period time, usually has similar trend patterns. The challenge is how to detect the trend of the group as a whole characteristic for group behavior analysis. To address such issue, the exemplary embodiments use an l1 trend filtering method on the whole multi-variate time series. The exemplary embodiments learn the piecewise linear trends for all the time series jointly using the following equation:” The inventor of Ishiguro_2021 discusses time series signal measurements from semiconductor manufacturing apparatus. These time series signal measurements may differ based on trends across time, e.g., increased production for seasonal demand. In such a scenario, the inventor of Ishiguro_2021 would be motivated to use L1 trend filtering in order to solve similar multivariate time series problems to detect the trend of a group as a whole, as it is a known time series model for that use. Therefore, it would have been obvious to combine the time series model workflow of Ishiguro_2021, Chowdhury_2019, and Sarwat_2021 with the L1 trend filtering model of Cheng_2021 for the benefit of trend detection to obtain the invention as specified in the claims. Claim 19 is rejected under 35 U.S.C.
103 as being unpatentable over Ishiguro_2021, Chowdhury_2019, O’Neill_2011, and Cheng_2021. Claim 19: Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 make obvious A computer system as in claim 16 wherein the time series model (see claim 16). Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 do not expressly recite L1 trend filtering. Cheng_2021, however, makes obvious L1 trend filtering. Par. 28: “In one example of single time series trend detection, the exemplary embodiments detect a trend in each time period. For each local trend, exemplary embodiments need to detect a time length and a slope. The exemplary embodiments can have threshold on length and slope to maintain only a subset of trends. The multivariate time series in the same group, e.g., stocks in the same sector or vehicle speed in the same road segment during a period time, usually has similar trend patterns. The challenge is how to detect the trend of the group as a whole characteristic for group behavior analysis. To address such issue, the exemplary embodiments use an l1 trend filtering method on the whole multi-variate time series. The exemplary embodiments learn the piecewise linear trends for all the time series jointly using the following equation:” Ishiguro_2021, Chowdhury_2019, O’Neill_2011, and Cheng_2021 are analogous art to the claimed invention because they are from the same field of endeavor, machine learning. Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to combine Ishiguro_2021, Chowdhury_2019, O’Neill_2011, and Cheng_2021. The rationale for doing so would have been to follow a motivation proposed in the art by Cheng_2021. Cheng_2021, par. 28, states: “The multivariate time series in the same group, e.g., stocks in the same sector or vehicle speed in the same road segment during a period time, usually has similar trend patterns.
The challenge is how to detect the trend of the group as a whole characteristic for group behavior analysis. To address such issue, the exemplary embodiments use an l1 trend filtering method on the whole multi-variate time series. The exemplary embodiments learn the piecewise linear trends for all the time series jointly using the following equation:” The inventor of Ishiguro_2021 discusses time series signal measurements from semiconductor manufacturing apparatus. These time series signal measurements may differ based on trends across time, e.g., increased production for seasonal demand. In such a scenario, the inventor of Ishiguro_2021 would be motivated to use L1 trend filtering in order to solve similar multivariate time series problems to detect the trend of a group as a whole, as it is a known time series model for that use. Therefore, it would have been obvious to combine the time series model workflow and engine of Ishiguro_2021, Chowdhury_2019, and O’Neill_2011 with the L1 trend filtering model of Cheng_2021 for the benefit of trend detection to obtain the invention as specified in the claims. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMAD HUSSAM SHALABY, whose telephone number is (571) 272-7414. The examiner can normally be reached Mon-Fri, 7:30 am - 5 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emerson Puente, can be reached at 571-272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
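As technical background for the l1 (L1) trend filtering model that Cheng_2021 is cited for in the claim 15 and 19 rejections: the standard univariate objective minimizes (1/2)||y - x||^2 + lam*||Dx||_1, where D is the second-difference operator, and can be sketched with a small ADMM loop. The solver, penalty parameter, and iteration count below are illustrative assumptions, not details from Cheng_2021, which applies the method jointly across multivariate series.

```python
def second_diff_matrix(n):
    """(n-2) x n second-difference operator D: (Dx)_i = x_i - 2*x_{i+1} + x_{i+2}."""
    D = [[0.0] * n for _ in range(n - 2)]
    for i in range(n - 2):
        D[i][i], D[i][i + 1], D[i][i + 2] = 1.0, -2.0, 1.0
    return D

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (modifies A, b)."""
    n = len(A)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def soft(v, t):
    """Soft-thresholding: proximal operator of the absolute value."""
    return v - t if v > t else v + t if v < -t else 0.0

def l1_trend_filter(y, lam, rho=1.0, iters=200):
    """ADMM for: minimize (1/2)||y - x||^2 + lam*||D x||_1 (piecewise-linear trend)."""
    n = len(y)
    D = second_diff_matrix(n)
    m = n - 2
    # M = I + rho * D^T D stays fixed across iterations.
    M = [[(1.0 if i == j else 0.0)
          + rho * sum(D[r][i] * D[r][j] for r in range(m)) for j in range(n)]
         for i in range(n)]
    z = [0.0] * m
    u = [0.0] * m
    for _ in range(iters):
        # x-update: solve (I + rho D^T D) x = y + rho D^T (z - u).
        rhs = [y[i] + rho * sum(D[r][i] * (z[r] - u[r]) for r in range(m))
               for i in range(n)]
        x = gauss_solve([row[:] for row in M], rhs)
        Dx = [sum(D[r][i] * x[i] for i in range(n)) for r in range(m)]
        # z-update: soft-threshold; u-update: accumulate the residual.
        z = [soft(Dx[r] + u[r], lam / rho) for r in range(m)]
        u = [u[r] + Dx[r] - z[r] for r in range(m)]
    return x
```

Because large second differences are penalized in the L1 norm, the fitted x comes out piecewise linear, with kinks appearing only where the data make the penalty worth paying, which is the trend-segmentation behavior the Cheng_2021 quotation describes.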
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.H.S./Examiner, Art Unit 2187 /EMERSON C PUENTE/Supervisory Patent Examiner, Art Unit 2187

Prosecution Timeline

Dec 05, 2022 - Application Filed
Mar 03, 2026 - Non-Final Rejection (§101, §103, §112)
Mar 12, 2026 - Interview Requested


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
