Prosecution Insights
Last updated: April 19, 2026
Application No. 17/136,567

METHOD AND SYSTEM FOR DESIGNING A PREDICTION MODEL

Final Rejection §103
Filed: Dec 29, 2020
Examiner: SHINE, NICHOLAS B
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Bull SAS
OA Round: 4 (Final)
Grant Probability: 38% (At Risk)
Estimated OA Rounds: 5-6
Estimated Time to Grant: 5y 1m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 38% (14 granted / 37 resolved; -17.2% vs TC avg)
Interview Lift: +44.6% among resolved cases with interview
Avg Prosecution: 5y 1m; 25 applications currently pending
Total Applications: 62, across all art units
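The headline figures above can be reproduced from the raw counts. A minimal sketch — note the 55.0% Tech Center baseline is an assumption back-solved from the displayed -17.2% delta, not taken from any underlying dataset:

```python
# Career allow rate from raw counts, and its delta vs. the Tech Center
# average. The 55.0 baseline is an assumption back-solved from the
# displayed -17.2% delta; it does not come from source data.
granted, resolved = 14, 37
tc_avg = 55.0  # assumed TC-average baseline

allow_rate = granted / resolved * 100  # 37.84...
delta = allow_rate - tc_avg

print(f"Career allow rate: {allow_rate:.0f}%")  # Career allow rate: 38%
print(f"vs TC avg: {delta:+.1f}%")              # vs TC avg: -17.2%
```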

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 46.0% (+6.0% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Tech Center average values are estimates • Based on career data from 37 resolved cases
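The per-statute deltas are mutually consistent with a single Tech Center baseline of 40.0% — an assumption back-solved from the deltas themselves, since all four imply the same value. A quick check:

```python
# Verify each displayed delta equals rate - baseline. The 40.0% Tech
# Center baseline is an assumption back-solved from the deltas shown;
# it is not taken from any underlying dataset.
tc_avg = 40.0
rates = {"§101": 34.9, "§103": 46.0, "§102": 5.3, "§112": 13.4}

for statute, rate in rates.items():
    print(f"{statute}: {rate}% ({rate - tc_avg:+.1f}% vs TC avg)")
```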

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is responsive to amendments and remarks filed on 11/10/2025. Claim 1 is amended. Claims 2, 3, and 13 have been previously cancelled, and there are no new claims. Claims 1, 4–12, and 14–16 are pending for examination.

Response to Arguments

In reference to 35 USC § 101: Applicant’s arguments and claim amendments, filed on 11/10/2025, with respect to the § 101 rejections have been fully considered and are persuasive. Applicant argues, beginning on Pg. 6 of the Remarks, that the newly amended limitations “are not insignificant extra-solution activities, and they tie the abstract ideas of evaluating prediction models and analyzing instructions and determining if they include authorization into a practical application.” Examiner agrees. Examiner notes that while the claims recite several limitations that are abstract ideas (mental concepts), the claims as a whole are not directed to an abstract idea. Applicant has amended the claims, which recite a specific collection of steps, thereby integrating the mental concepts of evaluating and analyzing data.
These limitations (“wherein the method further comprises a step, performed by the model designer, of removing variables from the optimized business dataset that are redundant and the removed variables are sent automatically to the analyst client, who will confirm whether these variables are redundant” and “if authorization is not obtained, at least one of the instructions received from the business client includes data and proposals for changes to performance data assigned to the variables, and the designer device analyzes the instructions transmitted so as to extract performance values, to be used during a step of generating a new plurality of variables”) are not abstract ideas (see MPEP 2106.04(a)(1)). Thus, these limitations must be considered additional elements to the abstract idea. Examiner notes that these additional elements integrate the abstract idea into a practical application because the entire claim amounts to a detailed recitation of how a set of hardware processes information to achieve the claimed methods of designing a prediction model for monitoring an industrial process (as opposed to a broad recitation of calculations performed at a high level of generality), and the specific method of steps recited in the additional elements amounts to an improvement to the functioning of a technological field, as set forth by MPEP 2106.05(a), which states “the claim must include the components or steps of the invention that provide the improvement described in the specification.” Pursuant to this requirement set forth by the MPEP, Examiner points out that the Specification states in at least [0161, 0167]: “This can allow verification of changes by the one or more other clients and thus improve the collaborative design of the prediction model.” Thus, the additional elements reflect the improvement set forth and explain what the resulting improvement is. The § 101 rejections are therefore withdrawn.
In reference to 35 USC § 103: Applicant’s arguments filed on 11/10/2025, with respect to the newly amended claims, have been fully considered but are not persuasive.

Applicant argues, beginning on Pg. 7, that “Runkana fails to disclose a step of removing variables from the optimized business dataset that are redundant and the removed variables are sent automatically to the analyst client.” Examiner respectfully disagrees. Examiner points to the newly examined limitations in the § 103 rejections below. Furthermore, Examiner contends that Runkana indeed teaches a step of removing variables from the optimized dataset and sending them to the user automatically. Runkana teaches “In the preferred embodiment, the iterative process of outlier removal, imputation and clustering is stopped when the number of clusters and the number of data points in each cluster do not change. Unit-wise pre-processed datasets are obtained at the end of this step. For each variable, the number/percentage of outliers removed, the technique used for imputation, and mean, median and standard deviation before and after pre-processing are presented to the user as outputs. List of discarded variables is also presented to the user.” See Runkana ¶0034. Examiner notes that Runkana does not explicitly teach removing redundant variables. However, Examiner relies on Garvey to teach removing variables that are redundant (Garvey Fig. 2, Col. 9, Lines 58–63: “Before the training set of time-series data is used to train correlation prediction models, correlation modelling logic 132 analyzes different demands and filters out/removes demands that contain redundant information. Demand filtering reduces processing and storage overhead by eliminating the training and storage of redundant models.”)

Applicant argues, beginning on Pg. 7 of the Remarks, that “Runkana also fails to disclose receiving from the business client includes data and proposals for changes to the performance data assigned to the variables and using by the designer model the received performance values.” Examiner respectfully disagrees. Examiner points to the newly examined limitations in the § 103 rejections below. Furthermore, Examiner contends that Runkana indeed teaches the aforementioned limitation because Runkana teaches that the instructions received via the user’s option include additional features (i.e., data and change proposals to the performance data) for each dataset in at least paragraphs ¶¶0042–0043 and ¶0046: “The user is given the option to add additional features or delete existing features from the supersets.” Thus, Examiner maintains the § 103 rejections and includes newly cited portions of the cited art in response to the arguments and amended claims. See § 103 below for a detailed analysis.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1, 4–10, 12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Runkana et al., (US 20180330300 A1), hereinafter “Runkana”, in view of Garvey et al., (US 10817803 B2), hereinafter “Garvey”. Runkana teaches: a method for designing a prediction model for monitoring an industrial process, said method being implemented by a model designer device configured to operate within a computer system, said computer system comprising: the model designer device, an analyst client of an expert user in the design of prediction models and a business client of an expert user in the business domain (Runkana Abstract: “A system and method for performing data-based optimization of performance indicators of process and manufacturing plants. The system consists of modules for collecting and merging data from industrial processing units, pre-processing the data to remove outliers and missingness. Further, the system generates customized outputs from data and identifies important variables that affect a given process performance indicator. The system also builds predictive models for key performance indicators comprising the important features and determines operating points for optimizing the key performance indicators with minimum user intervention. In particular, the system receives inputs from users on the key performance indicators to be optimized and notifies the users of outputs from various steps in the analysis that help the users to effectively manage the analysis and take appropriate operational decisions”; see also Runkana ¶0003: “Indicators such as productivity, product quality, energy consumption, percentage uptime, emission levels etc. are used to monitor the performance of manufacturing industries and process plants. 
Industries today face the challenge of meeting ambitious production targets, minimizing their energy consumption, meeting emission standards and customizing their products, while handling wide variations in raw material quality and other influencing parameters such as ambient temperature, humidity etc. Industries strive to continuously improve their performance indicators by modulating few parameters that are known to influence or affect them. This is easy when a process involves limited number of variables. However, most industrial processes consists of many units in series and/or parallel and involve thousands of variables or parameters. Identification of variables that influence key performance indicators (KPIs) and (their) optimum levels in such situations is not straightforward, and doing the same requires a lot of time and expertise. Data analytics methods such as statistical techniques, machine learning and data mining have the potential to solve these complex optimization problems, and can be used to analyze industrial data and discover newer regimes of operation”; see also Runkana ¶0008: “In one aspect, the following presents a system for analyzing a plurality of data from one or more industrial processing units for optimizing the key performance indicators of the industry. The system comprises a memory with instructions, at least one processor communicatively coupled with the memory, a plurality of interfaces and a plurality of modules.”—[(emphasis added) wherein the BRI of model device is any computing system (see present disclosure Pg. 16), and the BRI of analyst client and business client is software stored on a computer device with functionality to perform analysis of the data (see present disclosure Pg. 
9 and 17) and wherein the users are experts in their respective domains because they provide key performance indicators which in-turn helps the users to effectively manage the analysis and take appropriate operational decisions]); said model designer device including a communication module configured to connect to the analyst client, the business client and a database, a data processing unit and a data memory said data memory comprising a plurality of instructions enabling the model designer device to carry out the method (Runkana, Fig. 1 – Data Management Server 128, ¶¶0066, 0069: “A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. The system further includes a user interface adapter that connects a keyboard, mouse, speaker, microphone, and/or other user interface devices such as a touch screen device (not shown) to the bus to gather user input. 
Additionally, a communication adapter connects the bus to a data processing network, and a display adapter connects the bus to a display device which may be embodied as an output device such as a monitor, printer, or transmitter, for example”—[emphasis added wherein the model designer device connects to desired device/module (e.g., the client modules, data management server (i.e., database), and/or memory) via the communication adapter and/or the system bus]); said designing method comprising: (a) receiving a business dataset by the communication module (Runkana ¶0008: “A receiving module is configured to receive the plurality of data of one or more industrial processing units, wherein the plurality of data comprising of characteristics of raw materials, characteristics of intermediate products, by-products and end products, process parameters and condition of process equipment”—[(emphasis added) wherein the BRI of business dataset is any file or parameter related to a business (see present disclosure Pg. 10)]), said business dataset comprising data generated by industrial production sensors (Runkana ¶0035: “In the preferred embodiment, referring FIG. 6, the enterprise level fusion module 116 is configured to integrate the pre-processed data of each of the one or more industrial processing units with one or more values of simulated variables of one or more physics based models and one or more domain inputs from user to obtain enterprise level dataset, wherein the unit-wise datasets are merged and synchronized taking into account the time lags due to residence times in various units, times of transportation between one or more industrial processing units and response time of one or more sensors of the processing units. If the transportation time between two process units is greater than the sampling frequency of data, then the observation IDs for one of the process units is shifted by appropriate number of time-units before integration. 
For example, if the sampling frequency is daily and it takes 2 days for material to travel from process unit A to process unit B, then all the observation IDs in the dataset of process A are shifted by 2 days before merging datasets from both the processes”—[(emphasis added)]); (b) generating, by the processing unit, at least one optimized business dataset from the business dataset (Runkana Fig. 1, ¶0030: “In the preferred embodiment, the unit level fusion module 110 is configured to merge the received plurality of data to obtain unit-wise dataset of each of the one or more industrial processing units, wherein the unit-wise dataset of each processing unit comprising of a desired sampling frequency. In the process of merging, the one or more variables from all the files or datasets are merged as per specific observation ID corresponding to the sampling frequency, e.g. date in case of daily data, hours in case of hourly data, etc. If the sampling frequency is inconsistent across various files/datasets, values of variables are averaged wherever possible. If averaging is not possible, the same data is used across, e.g. when hourly analysis is to be performed and only daily data is available, daily data value is used for all hours in that particular day. At the end of the process, unit-wise datasets with rows corresponding to the observation ID and columns corresponding to all the variables in the process unit are obtained”—[wherein the BRI of optimized business dataset is any data manipulated by the processing unit (see present disclosure Pg. 
19, 22, and 25), and wherein unit level fusion module (i.e., the processing unit; e.g., processor configure to execute this module) merges the plurality of received data (i.e., business dataset) to obtain a unit-wise dataset (i.e., optimized business dataset) of each of the industrial processes]); (c) designing, by the processing unit, a plurality of variables from the business dataset (Runkana ¶0008: “A data pre-processing module is configured to pre-processing the verified plurality of data to obtain pre-processed dataset of each of the one or more industrial processing units, wherein the pre-processing is an iterative process comprising the steps of outlier removal, imputation of missing values and clustering. An enterprise level fusion module is configured to integrate the pre-processed data of each of the one or more industrial processing units with one or more values of simulated variables of one or more physics based models, and one or more domain inputs from user to obtain enterprise level dataset”—[(emphasis added) wherein the BRI of designing a plurality of variables is merely a step of data pre-processing]); (d) generating, by the processing unit and from preselected learning models stored in the repository and the plurality of variables, at least one prediction model (Runkana ¶0009: “In another aspect, the following presents a method for analyzing a plurality of data from one or more industrial processing units for optimizing the key performance indicators of the industry. 
The method comprising steps of receiving the plurality of data of one or more industrial processing units, wherein the plurality of data comprising of characteristics of raw materials, characteristics of intermediate products, by-products and end products, process parameters and condition of process equipment, merging the received plurality of data to obtain unit-wise dataset of each of the one or more industrial processing units, verifying the merged unit-wise dataset of the one or more industrial processing units, wherein presence of junk values, percentage availability, standard deviation and inter-quartile range of all the variables of the processing unit are calculated, pre-processing the verified plurality of data to obtain pre-processed dataset of each of the one or more industrial processing units, wherein the pre-processing is an iterative process comprising the steps of outlier removal, imputation of missing values and clustering, integrating the pre-processed datasets of each of the one or more industrial processing units with one or more values of simulated variables of one or more physics-based models, and one or more domain inputs from user to obtain enterprise level dataset, wherein the unit-wise datasets are merged and synchronized taking into account the time lags due to residence times in various units, times of transportation of materials from one or more industrial processing units and response time of one or more sensors of the processing units, identifying one or more operating regimes using one or more clustering techniques on the enterprise level dataset, wherein one or more clustering techniques comprising of distance based clustering, density based clustering and hierarchical clustering, determining ranges of one or more variables corresponding to the KPIs of the enterprise level dataset based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of one or more variables is being used to 
generate one or more plots of KPIs during the time period of analysis is being carried out, selecting one or more features of the enterprise level dataset to obtain a superset of one or more selected features of the enterprise level dataset, wherein the feature selection is performed on all the regime-wise datasets as well as the enterprise level dataset, developing one or more predictive models for each KPI, wherein the one or more predictive models using enterprise level dataset and the superset of one or more selected features of the enterprise level dataset and optimizing at least one KPI based on one or more predictive models and constraints on the one or more KPIs using one or more optimization techniques, wherein one or more optimization techniques includes gradient search, linear programming, goal programming, simulated annealing and evolutionary algorithms”—[wherein the one or more industrial processing units (i.e., processing unit) and the physics-based models (i.e., pre-selected model) uses the pre-processed dataset of each of the one or more industrial processing units (i.e., plurality of variables) to develop one or more predictive models]), said generating comprising using the optimized business dataset for training the at least one prediction model for monitoring the industrial process (Runkana ¶0003: “Indicators such as productivity, product quality, energy consumption, percentage uptime, emission levels etc. are used to monitor the performance of manufacturing industries and process plants. Industries today face the challenge of meeting ambitious production targets, minimizing their energy consumption, meeting emission standards and customizing their products, while handling wide variations in raw material quality and other influencing parameters such as ambient temperature, humidity etc. Industries strive to continuously improve their performance indicators by modulating few parameters that are known to influence or affect them. 
This is easy when a process involves limited number of variables. However, most industrial processes consists of many units in series and/or parallel and involve thousands of variables or parameters. Identification of variables that influence key performance indicators (KPIs) and (their) optimum levels in such situations is not straightforward, and doing the same requires a lot of time and expertise. Data analytics methods such as statistical techniques, machine learning and data mining have the potential to solve these complex optimization problems, and can be used to analyze industrial data and discover newer regimes of operation”; see also Runkana ¶0043: “Referring FIGS. 9(a) and 9(b), the model building module 124 of the system 100 is configured to develop one or more predictive models for each KPI on the training dataset, wherein the one or more predictive models using enterprise level dataset and the superset of one or more selected features of the enterprise level dataset. It would be appreciated that a three-step model building approach is used. The first step involves building predictive models using basic model building algorithms. The one or more predictive models include stepwise regression, principal component regression, multivariate adaptive regression splines, independent component regression, lasso regression, kriging, random forest, partial least squares, gradient boosted trees, generalized linear modeling, linear and nonlinear support vector machines and artificial neural networks. The second step involves tuning the model building parameters in order to optimize the prediction performance of the models. 
The prediction performance of the models is evaluated using the test dataset and is expressed in terms of root mean square error (RMSE) of prediction, mean absolute error (MAE) of prediction, akaike information criterion (AIC), corrected akaike information criterion (AICc) and the Bayesian information criterion (BIC) and hit rate (% of points with a given predictive accuracy) as shown in FIG. 10. It would be appreciated that if in any case none of the predictive models meet the RMSE and/or MAE, the user is given the option to go back to the feature selection where additional variables or transformed variables can be added to the superset of important variables and repeat the model building step.”—[(emphasis added)]); and (e) evaluating, by the processing unit, performance of the prediction model, said evaluation including calculating performance data comprising a prediction quality indicator (Runkana Fig. 9a, 9b, 10, ¶¶0043–0045: “The prediction performance of the models is evaluated using the test dataset and is expressed in terms of root mean square error (RMSE) of prediction, mean absolute error (MAE) of prediction, akaike information criterion (AIC), corrected akaike information criterion (AICc) and the Bayesian information criterion (BIC) and hit rate (% of points with a given predictive accuracy) as shown in FIG. 10. It would be appreciated that if in any case none of the predictive models meet the RMSE and/or MAE, the user is given the option to go back to the feature selection where additional variables or transformed variables can be added to the superset of important variables and repeat the model building step. The third step involves model discrimination and selection in which for the integrated dataset and the regime-wise datasets, the top three predictive models with values of root mean square error and mean absolute error lower than user specified values are chosen. 
A robustness score (RS) is evaluated for the top three models and used for model discrimination. At least ten thousand data points containing values of all variables included in the models are randomly generated and used to predict the KPI. The robustness score for each model is then determined using an equation presented as an image in Runkana (media_image1.png; not reproduced here). The predictive models with the highest robustness score greater than 95% is selected for sensitivity analysis and optimization. Variance based sensitivity analysis is performed to assess the sensitivity of the KPI to unit changes in the variables in the model. Sensitivity scores for each of the variables in the models are obtained, with a higher score indicating a higher change in the value of the KPI with unit change in the value of the variable. It would be appreciated that if the robustness score for all of the three predictive models is lower than 95%, the user can modify the superset of important features and repeat the model building step”—[(emphasis added)]); wherein for at least two steps selected from steps (b), (c), and (d), the method further includes: transmitting, by the communication module, data to the analyst client and to the business client, said data comprising data related to the generated optimized dataset, the plurality of variables and/or performance data (Runkana ¶0027: “In the preferred embodiment, the memory 104 contains instructions that are readable by the processor 102. The plurality of interfaces 106 comprising of graphical user interface, server interface, a physics based model interface and a solver interface. The graphical user interface is used to receive inputs such as the KPIs of interest and the time period of analysis from the user and forward them to the plurality of modules.
The server interface forwards the request-for-data received from the one of the plurality of modules to the data management server 128 and the data received from the data management server 128 to the plurality of modules. The physics based model interface sends the integrated dataset received from the one of the plurality of modules after enterprise level fusion to physics-based models, if any, available for the industrial process, receives the values of simulated variables from the physics-based models and forwards them to the one of the plurality of modules”—[wherein the server interface (i.e., communication module) sends (i.e., transmits) the data to the plurality of modules (i.e., the analyst client and the business client)]), and initiating a following step by the model designer device from steps (c) and (d) if any and only if both said instructions authorize said designer device to do so (Runkana ¶0031: “In the preferred embodiment, the verification module 112 is configured to verify the merged unit-wise dataset of the one or more industrial processing units, wherein presence of absurd values, percentage availability, standard deviation and inter-quartile range of all the variables of the processing unit are calculated. Data quality verification is performed on the unit-wise datasets obtained for each of the process units. Missingness maps depicting the percentage and pattern of availability of the variables are also created for each process units. The data quality metrics and the missingness maps are sent as outputs to the user via the user interface. Depending on the availability of the data, the user can decide whether or not to proceed with the rest of the analysis. The user can also suggest deletion of some of the variables with very low availability before executing the rest of the steps”; see also Runkana ¶0045: “The predictive models with the highest robustness score greater than 95% is selected for sensitivity analysis and optimization. 
Variance based sensitivity analysis is performed to assess the sensitivity of the KPI to unit changes in the variables in the model. Sensitivity scores for each of the variables in the models are obtained, with a higher score indicating a higher change in the value of the KPI with unit change in the value of the variable. It would be appreciated that if the robustness score for all of the three predictive models is lower than 95%, the user can modify the superset of important features and repeat the model building step”—[(emphasis added) wherein the BRI of a following step is any step/action/process/calculation taken by the system after authorization and the BRI of authorization is any step/action/process that causes the process to proceed. Furthermore, this is a contingent limitation and the BRI of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met. See MPEP 2111.04]). wherein the method further includes storing the prediction model in the data memory with the instructions related thereto and the performance indicators associated therewith (Runkana Fig. 1, Memory 104—[wherein the entire model building process is depicted as being performed in the memory 104]); wherein the method further comprises a step, performed by the model designer, of removing variables from the optimized business dataset [that are redundant] (Runkana ¶0031, ¶0034: “Data quality verification is performed on the unit-wise datasets obtained for each of the process units. Missingness maps depicting the percentage and pattern of availability of the variables are also created for each process units.
The data quality metrics and the missingness maps are sent as outputs to the user via the user interface” and “In the preferred embodiment, the iterative process of outlier removal, imputation and clustering is stopped when the number of clusters and the number of data points in each cluster do not change. Unit-wise pre-processed datasets are obtained at the end of this step. For each variable, the number/percentage of outliers removed, the technique used for imputation, and mean, median and standard deviation before and after pre-processing are presented to the user as outputs. List of discarded variables is also presented to the user. The user is also provided with the option of visualizing the trends of original and pre-processed variables”—[(emphasis added)]) and the removed variables are sent automatically to the analyst client, who will confirm whether these variables are redundant (Runkana ¶0031, ¶0034: “Data quality verification is performed on the unit-wise datasets obtained for each of the process units. Missingness maps depicting the percentage and pattern of availability of the variables are also created for each process units. The data quality metrics and the missingness maps are sent as outputs to the user via the user interface. Depending on the availability of the data, the user can decide whether or not to proceed with the rest of the analysis. The user can also suggest deletion of some of the variables with very low availability before executing the rest of the steps” and “In the preferred embodiment, the iterative process of outlier removal, imputation and clustering is stopped when the number of clusters and the number of data points in each cluster do not change. Unit-wise pre-processed datasets are obtained at the end of this step. For each variable, the number/percentage of outliers removed, the technique used for imputation, and mean, median and standard deviation before and after pre-processing are presented to the user as outputs. 
List of discarded variables is also presented to the user. The user is also provided with the option of visualizing the trends of original and pre-processed variables”—[(emphasis added)]), and if authorization is not obtained, at least one of the instructions received from the business client includes data and proposals for changes to performance data assigned to the variables (Runkana ¶¶0042–0043, ¶0046: “The user is given the option to add additional features or delete existing features from the supersets. For each dataset, parallel coordinate plots are also displayed to the user … It would be appreciated that if in any case none of the predictive models meet the RMSE and/or MAE, the user is given the option to go back to the feature selection where additional variables or transformed variables can be added to the superset of important variables and repeat the model building step” and “For self-learning, original data used for developing the models and data for the newer time period are combined, and the model building step is repeated on the combined dataset. Self-learning can be triggered either automatically on a periodic basis (e.g. every week or every month) or by the user based on statistical measures related to the models or the newer dataset”—[wherein the instructions received from the user’s option include additional features (i.e., data and change proposals to the performance data) for each dataset. Examiner notes this limitation is a contingent limitation, and the broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met. 
See MPEP § 2111.04.]), and the designer device analyzes the instructions transmitted so as to extract performance values, to be used during a step of generating a new plurality of variables (Runkana ¶0046, ¶0056: “For self-learning, original data used for developing the models and data for the newer time period are combined, and the model building step is repeated on the combined dataset. Self-learning can be triggered either automatically on a periodic basis (e.g. every week or every month) or by the user based on statistical measures related to the models or the newer dataset. Statistical measures related to the models could be model performance metrics such as root mean square error, mean absolute error, akaike information criterion, corrected akaike information criterion, Bayesian information criterion or hit rate while statistical measures related to the newer dataset could be mean percentage deviation of newer data from the original data or multivariate distance between original dataset and newer dataset” and “At the step 414, the baseline statistics module determines ranges of one or more variables corresponding to the KPIs of the enterprise level dataset, based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of one or more variables is being used to generate one or more plots of KPIs during the time period of analysis is being carried out”—[wherein the device’s self-learning is triggered by the instructions from the user, the device determines (i.e., analyzes) one or more variables corresponding to the KPIs (i.e., performance values) to generate one or more plots of KPIs (i.e., generate new variables)]). 
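For readers tracing this mapping, the self-learning behavior the Examiner cites at Runkana ¶0046 (combine the original and newer-period data and repeat the model building step when a statistical measure such as RMSE degrades) can be sketched in a few lines. This is an illustrative reconstruction only, not code from either reference; the function names, the callable-model interface, and the RMSE threshold are assumptions for the sketch.

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error, one of the model metrics Runkana names."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def self_learn(model, fit, original_data, newer_data, rmse_threshold):
    """Retrain on combined data when the model degrades on newer data.

    Mirrors the quoted description: self-learning combines the original
    dataset with data for the newer time period and repeats the model
    building step, triggered here by RMSE on the newer period.
    """
    X_new, y_new = newer_data
    current_rmse = rmse(y_new, [model(x) for x in X_new])
    if current_rmse <= rmse_threshold:
        return model, current_rmse  # still accurate; keep the existing model
    X_old, y_old = original_data
    # Combine original and newer data, then repeat the model building step.
    return fit(X_old + X_new, y_old + y_new), current_rmse
```

Under this sketch, self-learning is purely a retraining trigger: the existing model is kept whenever the newer-period RMSE stays within the threshold, consistent with the periodic or metric-triggered options Runkana describes.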
Runkana does not appear to explicitly teach: said model designer device being configured to operate with a repository of preselected learning models, stored in the data memory or in the database that have been previously trained using supervised learning techniques; receiving, by the communication module, an instruction from each of the analyst client and the business client; transmitting instructions received by a given client to the other client among the business and the analyst clients; analyzing the instructions and determining if they include authorization; otherwise, initiating a step of generating a new plurality of variables or reverse engineering of the optimized business dataset, reverse engineering of the plurality of variables or reverse engineering of the prediction model, depending on the data contained in the instruction of the business client and after validation by the analyst client; and [wherein the method further comprises a step, performed by the model designer, of removing variables from the optimized business dataset] that are redundant. However, Garvey teaches: said model designer device being configured to operate with a repository of preselected learning models, stored in the data memory or in the database that have been previously trained using supervised learning techniques (Garvey Col. 16, lines 22–34: “Once the adjustments to the demands are complete, what-if analytic determines 138 whether there are any resource prediction models associated with the adjusted demands (Operation 750). For example, what-if analytic 138 may search data store 140 for a trained resource prediction model that maps an adjusted demand to a resource performance metric. In some cases, a resource prediction model may not be available. This scenario may occur if none of the adjusted demands are correlated with a relevant resource performance metric. 
If none of the adjusted demands are correlated to resource performance, then the process may proceed without making any adjustments to a resource performance metric”—[wherein the what-if system (i.e., model designer device) searches data store 140 (i.e., a database) for trained resource prediction models (i.e., stored preselected learning models)]); receiving, by the communication module, an instruction from each of the analyst client and the business client (Garvey Figs. 1, 2, Col. 27, lines 19–33: “Computer system 1100 also includes a communication interface 1118 coupled to bus 1102. Communication interface 1118 provides a two-way data communication coupling to a network link 1120 that is connected to local network 1122. For example, communication interface 1118 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 1118 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 1118 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information”; see also Garvey Col. 7, lines 55–67: “During an evaluation phase, what-if analytic 138 receives, as input, (a) scenario parameters 144, (b) trained correlation prediction models (including demand propagation and resource prediction models generated by correlation modelling logic 132), (c) trained forecast models (from forecast modelling logic 136). Scenario parameters 144 comprise a set of values that define a particular scenario to simulate. In one or more embodiments, scenario parameters 144 define at least one adjustment to a demand or resource time-series value. 
For example, a scenario may be defined as follows: ‘What if my system saw X % more user-demand’. Based on this scenario definition, historical and/or forecast simulations may be evaluated by what-if analytic 138”; see also Garvey Col. 15, lines 16–36: “the process includes receiving a set of historical simulation parameters (Operation 710). In one or more embodiments, the scenario parameters define a historical adjustment … In this scenario definition, the user identifies an adjustment/modification and a particular target resource, but does not define the timeframe. Any default timeframe (e.g., the past week, month, six months, etc.) may be selected for analysis in this case”—[wherein the BRI of communication module is any part of the device that facilitates communication and wherein the communication interface (i.e., communication module) facilitates the communication in the computer system and receives the scenario parameters (i.e., business client instruction) and the historical simulation parameters (i.e., analyst client instructions)]); transmitting instructions received by a given client to the other client among the business and the analyst clients (Garvey Col. 7, Lines 11–30: "Clients 150 a-k represent one or more clients that may access analytic services 130 to evaluate what-if scenarios. A “client” in this context may be a human user, such as an administrator, a client program, or some other application instance. A client may execute locally on the same host as time-series analytic or may execute on a different machine. If executing on a different machine, the client may communicate with analytic services 130 via one or more data communication protocols according to a client-server model, such as by submitting HTTP requests invoking one or more of the services and receiving HTTP responses comprising results generated by one or more of the services. 
Analytic services 130 may provide clients 150 a-k with an interface through which one or more of the provided services may be invoked. Example interfaces may comprise, without limitation, a graphical user interface (GUI), an application programming interface (API), a command-line interface (CLI) or some other interface that allows a user to interact with and invoke one or more of the provided services"; see also Garvey Col. 8, Lines 35–51: "Scenario outputs may be stored in data repository 140, provided to clients 150 a-k (e.g., by sending over a network to another host device or notifying a separate application executing on a host) and/or presented via an interactive display. An application or other user may process the scenario outputs to make adjustments to targets 112 a-i. For example, if a scenario simulation indicates that predicted performance metrics for a scenario will not satisfy a threshold, additional resources may be deployed and/or existing resources may be reconfigured (e.g., through load balancing, pooling etc.) to boost performance. The number of resources that are deployed may also be selected based on scenario simulations that satisfy the target performance threshold. In other cases, resources may be brought offline or other responsive actions may be taken to optimize system 100 for a particular scenario"—[(emphasis added) wherein the system sends the instructions (i.e., adjustments to targets) to the clients (e.g., by sending over a network to another host device or notifying a separate application executing on a host) and, if executing on a different machine, the client may communicate with analytic services 130 via one or more data communication protocols]); analyzing the instructions and determining if they include authorization (Garvey Figs. 1A, 1B, Col. 7, lines 23–44: “Analytic services 130 may provide clients 150 a-k with an interface through which one or more of the provided services may be invoked. 
Example interfaces may comprise, without limitation, a graphical user interface (GUI), an application programming interface (API), a command-line interface (CLI) or some other interface that allows a user to interact with and invoke one or more of the provided services. FIG. 1B illustrates an example dataflow for simulating scenarios and generating scenario outputs, in accordance with one or more embodiments. During a training phase, analytic services 130 receives, as input, demand and resource time-series data 142. Correlation modelling logic 132, seasonality modelling logic 134, and forecast modelling logic 136 each process demand and resource time-series data 142 to build respective time-series models. For example, correlation modelling logic 132 may train correlation prediction models, seasonality modelling logic 134 may train seasonal pattern models, and forecasting modelling logic 136 may train forecast models. Example operations for building and training these time-series models are described in further detail below”; see also Garvey Col. 7, lines 55–67: “During an evaluation phase, what-if analytic 138 receives, as input, (a) scenario parameters 144, (b) trained correlation prediction models (including demand propagation and resource prediction models generated by correlation modelling logic 132), (c) trained forecast models (from forecast modelling logic 136). Scenario parameters 144 comprise a set of values that define a particular scenario to simulate. In one or more embodiments, scenario parameters 144 define at least one adjustment to a demand or resource time-series value. For example, a scenario may be defined as follows: ‘What if my system saw X % more user-demand’. Based on this scenario definition, historical and/or forecast simulations may be evaluated by what-if analytic 138”; see also Garvey Col. 15, lines 16–36: “the process includes receiving a set of historical simulation parameters (Operation 710). 
In one or more embodiments, the scenario parameters define a historical adjustment … In this scenario definition, the user identifies an adjustment/modification and a particular target resource, but does not define the timeframe. Any default timeframe (e.g., the past week, month, six months, etc.) may be selected for analysis in this case”—[(emphasis added) wherein the BRI of controller client is software stored on a computer device with functionality to perform analysis of the data (see present disclosure Pg. 9 and 17), and the BRI of authorization is any step/action/process that causes the process to proceed, and wherein the analytics service analyzes the received input including the user adjustment/modifications (e.g., scenario parameters) and may or may not evaluate the scenario after receiving the input (i.e., determining if it is authorized or not)]); and otherwise, initiating a step of generating a new plurality of variables or reverse engineering of the optimized business dataset, reverse engineering of the plurality of variables or reverse engineering of the prediction model, depending on the data contained in the instruction of the business client and after validation by the analyst client (Garvey Col. 8, lines 28–51: “Based on the evaluation, what-if analytic 138 generates scenario outputs 146. In one or more embodiments, scenario outputs 146 capture adjustments made to demand and/or resource time-series data for a scenario definition. In the example scenario ‘What if my system saw X % more transactions’, for instance, the scenario output may reflect historical and/or forecasted changes to one or more resource performance metrics and/or other demand metrics. Scenario outputs may be stored in data repository 140, provided to clients 150a-k (e.g., by sending over a network to another host device or notifying a separate application executing on a host) and/or presented via an interactive display. 
An application or other user may process the scenario outputs to make adjustments to targets 112a-i. For example, if a scenario simulation indicates that predicted performance metrics for a scenario will not satisfy a threshold, additional resources may be deployed and/or existing resources may be reconfigured (e.g., through load balancing, pooling etc.) to boost performance. The number of resources that are deployed may also be selected based on scenario simulations that satisfy the target performance threshold. In other cases, resources may be brought offline or other responsive actions may be taken to optimize system 100 for a particular scenario”—[wherein the BRI of reverse engineering is updating the model based on an output of the model (see present disclosure Pg. 11), and wherein the system is reconfigured (i.e., updated) based on predicted performance metrics (i.e., an output of the system)]); and [wherein the method further comprises a step, performed by the model designer, of removing variables from the optimized business dataset] that are redundant (Garvey Fig. 2, Col. 9, Lines 58–63: “Before the training set of time-series data is used to train correlation prediction models, correlation modelling logic 132 analyzes different demands and filters out/removes demands that contain redundant information. Demand filtering reduces processing and storage overhead by eliminating the training and storage of redundant models”—[(emphasis added)]). The methods of Runkana, the teachings of Garvey, and the instant application are analogous art because they pertain to model prediction and business dataset manipulation. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the methods of Runkana with the teachings of Garvey to provide a communication link for receiving instructions from different modules. 
One would be motivated to do so to optimize system resources and configurations by making adjustments based on predicted performance (Garvey Col. 5, lines 17–30: “In one or more embodiments, the what-if analytic may be used to optimize system resources and configurations. For example, one or more simulations may be run to determine how various adjustments would affect resource performance. Based on the output of the simulations, one or more responsive actions may be taken. Example responsive actions may include deploying additional resources, bringing resources offline, and adjusting system configurations. If a system administrator is expecting additional demands on a resource, for instance, various scenarios may be evaluated to determine which scenario leads to the optimal resource configuration, which may be the configuration that maximizes performance (or satisfies a threshold level) using the fewest of resources”). Regarding claim 4, Runkana in view of Garvey teaches all the limitations of claim 1. Garvey teaches: generating graphical indicators for modeling the prediction models and their associated results, to a business user, in order to boost implementation of the prediction models (Garvey Col. 8, lines 28–51: “Based on the evaluation, what-if analytic 138 generates scenario outputs 146. In one or more embodiments, scenario outputs 146 capture adjustments made to demand and/or resource time-series data for a scenario definition. In the example scenario ‘What if my system saw X % more transactions’, for instance, the scenario output may reflect historical and/or forecasted changes to one or more resource performance metrics and/or other demand metrics. Scenario outputs may be stored in data repository 140, provided to clients 150a-k (e.g., by sending over a network to another host device or notifying a separate application executing on a host) and/or presented via an interactive display. 
An application or other user may process the scenario outputs to make adjustments to targets 112a-i. For example, if a scenario simulation indicates that predicted performance metrics for a scenario will not satisfy a threshold, additional resources may be deployed and/or existing resources may be reconfigured (e.g., through load balancing, pooling etc.) to boost performance. The number of resources that are deployed may also be selected based on scenario simulations that satisfy the target performance threshold. In other cases, resources may be brought offline or other responsive actions may be taken to optimize system 100 for a particular scenario”; see also Garvey Col. 17, lines 5–31: “Once the evaluation phase is complete, what-if analytic 138 presents the time-series datasets for one or more demands and one or more resources, including any generated adjustments (Operation 770). The presentation of the adjusted time-series datasets may vary from implementation to implementation. For example, the time-series datasets may be displayed via an interactive display that is coupled to a client computing system. The interactive display may allow a user to drill down to view adjustments to individual demands, resources, or custom groups of demands and/or resources. In one or more embodiments, an interactive display presenting the results of a historical simulation may allow a user to select responsive actions, including on or more of the responsive actions previously described, for the system to perform. For example, system 100 may adjust the configurations of 112a-i, deploy additional resources, and/or perform load balancing to replicate a simulated scenario”—[wherein the interactive display presents (i.e., generates) the outputs (i.e., graphical indicators) to a user (e.g., business user) and additional resources are deployed/configured to boost performance (i.e., boost the implementation)]). 
The same motivation that was utilized for combining Runkana with Garvey, as set forth in claim 1, is equally applicable to claim 4. Regarding claim 5, Runkana in view of Garvey teaches all the limitations of claim 1. Runkana teaches: wherein the prediction quality indicator is measured after each of steps (b), (c) and (d) (Runkana Figs. 9a, 9b, 10, ¶¶0043–0046: “Referring FIGS. 9(a) and 9(b), the model building module 124 of the system 100 is configured to develop one or more predictive models for each KPI on the training dataset, wherein the one or more predictive models using enterprise level dataset and the superset of one or more selected features of the enterprise level dataset. It would be appreciated that a three-step model building approach is used. The first step involves building predictive models using basic model building algorithms. The one or more predictive models include stepwise regression, principal component regression, multivariate adaptive regression splines, independent component regression, lasso regression, kriging, random forest, partial least squares, gradient boosted trees, generalized linear modeling, linear and nonlinear support vector machines and artificial neural networks. The second step involves tuning the model building parameters in order to optimize the prediction performance of the models. The prediction performance of the models is evaluated using the test dataset and is expressed in terms of root mean square error (RMSE) of prediction, mean absolute error (MAE) of prediction, akaike information criterion (AIC), corrected akaike information criterion (AICc) and the Bayesian information criterion (BIC) and hit rate (% of points with a given predictive accuracy) as shown in FIG. 10. 
It would be appreciated that if in any case none of the predictive models meet the RMSE and/or MAE, the user is given the option to go back to the feature selection where additional variables or transformed variables can be added to the superset of important variables and repeat the model building step. The third step involves model discrimination and selection in which for the integrated dataset and the regime-wise datasets, the top three predictive models with values of root mean square error and mean absolute error lower than user specified values are chosen. A robustness score (RS) is evaluated for the top three models and used for model discrimination. At least ten thousand data points containing values of all variables included in the models are randomly generated and used to predict the KPI. The robustness score for each model is then determined using, [robustness score equation reproduced as an image in Runkana]. The predictive models with the highest robustness score greater than 95% is selected for sensitivity analysis and optimization. Variance based sensitivity analysis is performed to assess the sensitivity of the KPI to unit changes in the variables in the model. Sensitivity scores for each of the variables in the models are obtained, with a higher score indicating a higher change in the value of the KPI with unit change in the value of the variable. It would be appreciated that if the robustness score for all of the three predictive models is lower than 95%, the user can modify the superset of important features and repeat the model building step. It would be appreciated that the predictive performance of the models is likely to decrease with time as newer/future data is used for prediction and a ‘self-learning’ option is provided to the user to improve the accuracy of the predictive models. 
For self-learning, original data used for developing the models and data for the newer time period are combined, and the model building step is repeated on the combined dataset. Self-learning can be triggered either automatically on a periodic basis (e.g. every week or every month) or by the user based on statistical measures related to the models or the newer dataset. Statistical measures related to the models could be model performance metrics such as root mean square error, mean absolute error, akaike information criterion, corrected akaike information criterion, Bayesian information criterion or hit rate while statistical measures related to the newer dataset could be mean percentage deviation of newer data from the original data or multivariate distance between original dataset and newer dataset”—[(emphasis added) wherein the models are scored (i.e., the quality indicator is measured) after the three-step model building approach (i.e., steps b, c, and d)]). Regarding claim 6, Runkana in view of Garvey teaches all the limitations of claim 1. Runkana teaches: wherein the transmission step by the communication module (Runkana ¶0027: “In the preferred embodiment, the memory 104 contains instructions that are readable by the processor 102. The plurality of interfaces 106 comprising of graphical user interface, server interface, a physics based model interface and a solver interface. The graphical user interface is used to receive inputs such as the KPIs of interest and the time period of analysis from the user and forward them to the plurality of modules. The server interface forwards the request-for-data received from the one of the plurality of modules to the data management server 128 and the data received from the data management server 128 to the plurality of modules. 
The physics based model interface sends the integrated dataset received from the one of the plurality of modules after enterprise level fusion to physics-based models, if any, available for the industrial process, receives the values of simulated variables from the physics-based models and forwards them to the one of the plurality of modules”—[wherein the BRI of controller client is software stored on a computer device with functionality to perform analysis of the data (see present disclosure Pg. 9 and 17), and wherein the BRI of a subsequent step is any step/action/process/calculation taken by the system after authorization, and the BRI of authorization is any step/action/process that causes the process to proceed, and wherein the server interface (i.e., communication module) sends (i.e., transmits) the data to the plurality of modules (i.e., the controller client) after receiving input from the user (i.e., authorization). Furthermore, this “subsequent step” limitation is a contingent limitation and the BRI of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met. See MPEP § 2111.04]). Regarding claim 7, Runkana in view of Garvey teaches all the limitations of claim 1. Runkana teaches: transmitting outliers to the business client and receiving a status for each of the transmitted outliers (Runkana Figs. 4a and 4b, ¶¶0032–0034: “The material variables with availability less than desired availability and following no specific pattern in missingness are discarded from the dataset. A univariate outlier analysis is initially carried out to detect and remove outliers in the dataset, including inconsistent values arising due to instrument failure/malfunction. In case the production of a unit is zero, all variables for the unit for that time period are neglected. 
The variables are then categorized into various subsets based on the percentage availability of the variable. While multivariate imputation is used for process parameters and non-seasonal material characteristic variables, time series imputation is used for seasonal quality variables. After the missingness in all the variables is appropriately imputed, clustering is performed on the unit-wise dataset to identify clusters, if any, present in data. These clusters are representative of different regimes of operation. Each unit-wise dataset is then divided into different datasets based on the identified clusters. The divided datasets are taken through the steps of outlier removal and imputation as shown in FIGS. 4(a) and 4(b). In the preferred embodiment, the iterative process of outlier removal, imputation and clustering is stopped when the number of clusters and the number of data points in each cluster do not change. Unit-wise pre-processed datasets are obtained at the end of this step. For each variable, the number/percentage of outliers removed, the technique used for imputation, and mean, median and standard deviation before and after pre-processing are presented to the user as outputs. List of discarded variables is also presented to the user. The user is also provided with the option of visualizing the trends of original and pre-processed variables”—[wherein the BRI of transmitting and receiving is sharing information between different software processes, and wherein the outliers are displayed to the user as outputs (i.e., transmitted to the business client) and wherein the system stops the iterative process of outlier removal when the data does not change (i.e., received status)]). Regarding claim 8, Runkana in view of Garvey teaches all the limitations of claim 6. 
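The stopping criterion the Examiner relies on for claim 7 (the iterative outlier removal, imputation and clustering loop halting once the number of clusters and the number of data points in each cluster stop changing, Runkana ¶0034) is a fixed-point iteration. A minimal sketch, assuming caller-supplied step functions; all names here are hypothetical and not drawn from either reference:

```python
def preprocess_until_stable(dataset, remove_outliers, impute, cluster, max_iter=50):
    """Iterate outlier removal, imputation and clustering until the
    clustering stabilizes, i.e. the number of clusters and the number
    of points in each cluster stop changing (the quoted criterion).
    """
    prev_signature = None
    clusters = []
    for _ in range(max_iter):
        dataset = impute(remove_outliers(dataset))
        clusters = cluster(dataset)  # list of clusters (lists of points)
        # The cluster count and per-cluster sizes summarize the clustering.
        signature = sorted(len(c) for c in clusters)
        if signature == prev_signature:
            return dataset, clusters  # fixed point reached
        prev_signature = signature
    return dataset, clusters  # safety cap, not part of the quoted criterion
```

The `max_iter` cap is an added safeguard for the sketch; Runkana's text defines only the stability condition itself.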
Runkana teaches: wherein variables selected from the business dataset are each transmitted to the controller client and the controller client returns a relevance value for each of the selected variables (Runkana Fig. 4, ¶0056: “At the step 414, the baseline statistics module determines ranges of one or more variables corresponding to the KPIs of the enterprise level dataset, based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of one or more variables is being used to generate one or more plots of KPIs during the time period of analysis is being carried out. The outputs to the user from the baseline statistics module include the percentages of time period KPIs are in the desired and undesired ranges, the ranges of variables that correspond to desired and undesired ranges of KPIs, the ranges of KPIs at different productivity levels, correlation coefficients between KPIs and other variables, trend plots and box plots of KPIs and other variables, scatter plots between KPIs and variables of interest, and heat maps of mean values of the KPIs”—[wherein the enterprise level dataset is passed to the statistics module (i.e., transmit dataset to the controller client) and the statistics module outputs to the user (i.e., returns) correlation coefficients (i.e., relevance value) between the KPIs and other variables]).

Regarding claim 9, Runkana in view of Garvey teaches all the limitations of claim 7. Runkana teaches: wherein variables selected from the business dataset are each transmitted to the controller client and the controller client returns a relevance value for each of the selected variables (Runkana Fig. 4, ¶0056: “At the step 414, the baseline statistics module determines ranges of one or more variables corresponding to the KPIs of the enterprise level dataset, based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of one or more variables is being used to generate one or more plots of KPIs during the time period of analysis is being carried out. The outputs to the user from the baseline statistics module include the percentages of time period KPIs are in the desired and undesired ranges, the ranges of variables that correspond to desired and undesired ranges of KPIs, the ranges of KPIs at different productivity levels, correlation coefficients between KPIs and other variables, trend plots and box plots of KPIs and other variables, scatter plots between KPIs and variables of interest, and heat maps of mean values of the KPIs”—[wherein the enterprise level dataset is passed to the statistics module (i.e., transmit dataset to the controller client) and the statistics module outputs to the user (i.e., returns) correlation coefficients (i.e., relevance value) between the KPIs and other variables]).

Regarding claim 10, Runkana in view of Garvey teaches all the limitations of claim 1. Runkana teaches: wherein variables selected from the business dataset are each transmitted to the business client and the business client returns a relevance value for each of the selected variables (Runkana Fig. 4, ¶0056: “At the step 414, the baseline statistics module determines ranges of one or more variables corresponding to the KPIs of the enterprise level dataset, based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of one or more variables is being used to generate one or more plots of KPIs during the time period of analysis is being carried out. The outputs to the user from the baseline statistics module include the percentages of time period KPIs are in the desired and undesired ranges, the ranges of variables that correspond to desired and undesired ranges of KPIs, the ranges of KPIs at different productivity levels, correlation coefficients between KPIs and other variables, trend plots and box plots of KPIs and other variables, scatter plots between KPIs and variables of interest, and heat maps of mean values of the KPIs”—[wherein the enterprise level dataset is passed to the statistics module (i.e., transmit dataset to the business client) and the statistics module outputs to the user (i.e., returns) correlation coefficients (i.e., relevance value) between the KPIs and other variables]).

Regarding claim 12, Runkana in view of Garvey teaches all the limitations of claim 1. Runkana teaches: wherein the business client further transmits to the designer device instructions for changing a hierarchy of the generated prediction models (Runkana ¶0038: “In the preferred embodiment, the regime identification module 118 is configured to identify one or more operating regimes using one or more clustering techniques on the enterprise level dataset, wherein one or more clustering techniques comprising of distance based clustering, density based clustering and hierarchical clustering”; see also Runkana ¶0040: “In the preferred embodiment, the feature selection module 122 is configured to select one or more features of the enterprise level dataset to obtain a superset of one or more selected features of the enterprise level dataset, wherein the feature selection is performed on all the regime-wise datasets as well as the enterprise level dataset. The integrated dataset is divided into two or more datasets depending on the number of regimes identified during the regime identification step”; see also Runkana ¶0044: “The third step involves model discrimination and selection in which for the integrated dataset and the regime-wise datasets, the top three predictive models with values of root mean square error and mean absolute error lower than user specified values are chosen. A robustness score (RS) is evaluated for the top three models and used for model discrimination. At least ten thousand data points containing values of all variables included in the models are randomly generated and used to predict the KPI”—[wherein regime identification module 118 is configured to identify hierarchy which is passed (i.e., transmitted) to the feature selection module (i.e., designer device) and is used to evaluate the models for model discrimination (i.e., changing a hierarchy)]).

Regarding claim 14, Runkana in view of Garvey teaches all the limitations of claim 1. Runkana teaches: wherein the industrial production sensors include: connected objects, machine sensors, environmental sensors and/or computing probes (Runkana ¶0035: “In the preferred embodiment, referring FIG. 6, the enterprise level fusion module 116 is configured to integrate the pre-processed data of each of the one or more industrial processing units with one or more values of simulated variables of one or more physics based models and one or more domain inputs from user to obtain enterprise level dataset, wherein the unit-wise datasets are merged and synchronized taking into account the time lags due to residence times in various units, times of transportation between one or more industrial processing units and response time of one or more sensors of the processing units. If the transportation time between two process units is greater than the sampling frequency of data, then the observation IDs for one of the process units is shifted by appropriate number of time-units before integration. For example, if the sampling frequency is daily and it takes 2 days for material to travel from process unit A to process unit B, then all the observation IDs in the dataset of process A are shifted by 2 days before merging datasets from both the processes”; see also Runkana ¶0050: “At the step 402, where the receiving module receives the plurality of data of one or more industrial processing units, wherein the plurality of data comprising of characteristics of raw materials, characteristics of intermediate products, by-products and end products, process parameters, environment, market demand, availability of raw materials and condition of process equipment”—[(emphasis added)]).

Claims 11 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Runkana in view of Garvey and further in view of Kumar et al., (US 9395262 B1), hereinafter “Kumar”.

Regarding claim 11, Runkana in view of Garvey teaches all the limitations of claim 1. Runkana teaches: wherein step (d) includes generating several prediction models, … the generated prediction models being prioritized according to their performance (Runkana Fig. 9a, 9b, 10, ¶0043–0045: “The prediction performance of the models is evaluated using the test dataset and is expressed in terms of root mean square error (RMSE) of prediction, mean absolute error (MAE) of prediction, akaike information criterion (AIC), corrected akaike information criterion (AICc) and the Bayesian information criterion (BIC) and hit rate (% of points with a given predictive accuracy) as shown in FIG. 10.
It would be appreciated that if in any case none of the predictive models meet the RMSE and/or MAE, the user is given the option to go back to the feature selection where additional variables or transformed variables can be added to the superset of important variables and repeat the model building step. The third step involves model discrimination and selection in which for the integrated dataset and the regime-wise datasets, the top three predictive models with values of root mean square error and mean absolute error lower than user specified values are chosen. A robustness score (RS) is evaluated for the top three models and used for model discrimination. At least ten thousand data points containing values of all variables included in the models are randomly generated and used to predict the KPI. The robustness score for each model is then determined using [equation image: robustness score formula not reproduced]. The predictive models with the highest robustness score greater than 95% is selected for sensitivity analysis and optimization. Variance based sensitivity analysis is performed to assess the sensitivity of the KPI to unit changes in the variables in the model. Sensitivity scores for each of the variables in the models are obtained, with a higher score indicating a higher change in the value of the KPI with unit change in the value of the variable. It would be appreciated that if the robustness score for all of the three predictive models is lower than 95%, the user can modify the superset of important features and repeat the model building step”—[(emphasis added) wherein the at least three models are generated and the robustness score is determined for the top three which is used for discriminations to select the models for sensitivity and optimization]).

Runkana in view of Garvey does not appear to explicitly teach: built via parallelization.

However, Kumar teaches: built via parallelization (Kumar col. 14, lines 38–52: “The server 20 generates a prediction model for each of the stations in the subsystem based on the historical temporal sensor measurements. The prediction model for each station may be unique. In another example, the server 20 generates the prediction models for each station in parallel. For example, the server 20 generates a prediction model for a first station in the subsystem, as shown at block 1230. The prediction model predicts sensor measurements at the first station based on the sensor measurements at each station in the subsystem. Thus, the prediction model determines a relationship between the temporal sensor measurements of the stations in the subsystem, after synchronizing the temporal sensor measurements. For example, if the sensor measurement predicted is pressure, the prediction model determines a multivariate relationship, which may be expressed as in Table 3”—[(emphasis added)]).

The methods of Runkana, the teachings of Kumar, and the instant application are analogous art because they pertain to generating prediction models for business data. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the methods of Runkana with the teachings of Kumar to provide for parallelization to generate more than one model at the same time. One would be motivated to do so to determine multivariate relationships (Kumar col. 14, lines 38–52: “The server 20 generates a prediction model for each of the stations in the subsystem based on the historical temporal sensor measurements. The prediction model for each station may be unique. In another example, the server 20 generates the prediction models for each station in parallel. For example, the server 20 generates a prediction model for a first station in the subsystem, as shown at block 1230. The prediction model predicts sensor measurements at the first station based on the sensor measurements at each station in the subsystem. Thus, the prediction model determines a relationship between the temporal sensor measurements of the stations in the subsystem, after synchronizing the temporal sensor measurements. For example, if the sensor measurement predicted is pressure, the prediction model determines a multivariate relationship, which may be expressed as in Table 3”—[(emphasis added)]).

Regarding claim 16, Runkana in view of Garvey teaches all the limitations of claim 1. Kumar teaches: generating a representation of relationships between the variables used by a prediction model (Kumar col. 14, lines 38–52: “The server 20 generates a prediction model for each of the stations in the subsystem based on the historical temporal sensor measurements. The prediction model for each station may be unique. In another example, the server 20 generates the prediction models for each station in parallel. For example, the server 20 generates a prediction model for a first station in the subsystem, as shown at block 1230. The prediction model predicts sensor measurements at the first station based on the sensor measurements at each station in the subsystem. Thus, the prediction model determines a relationship between the temporal sensor measurements of the stations in the subsystem, after synchronizing the temporal sensor measurements. For example, if the sensor measurement predicted is pressure, the prediction model determines a multivariate relationship, which may be expressed as in Table 3”—[(emphasis added)]).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Runkana in view of Garvey and further in view of Cella et al., (US 20200225655 A1), hereinafter “Cella”.

Regarding claim 15, Runkana in view of Garvey teaches all the limitations of claim 1.
Runkana teaches: wherein the industrial process is selected from: a manufacturing production process [or a process for monitoring an IT infrastructure] (Runkana ¶0062: “The embodiments of present disclosure herein addresses unresolved problem of optimization of performance indicators to monitor the performance of manufacturing industries and process plants, in addition to, the pre-processing of the received industrial data from variety of sources having different formats and recording frequencies”—[(emphasis added)]).

Garvey teaches: or a process for monitoring an IT infrastructure (Garvey Col. 3, lines 25–42: “What-if analytical models allow system administrators to simulate and observe the impact of changes within a system environment. One approach to the what-if analysis is for the system administrator to define performance capacity of a system and the performance requirements for software resources deployed within a system. In the context of a cloud service, for instance, a cloud administrator may define an actual or hypothetical host capacity and the requirements of running an instance of the cloud service on the host. The cloud administrator may then simulate adding and removing instances from the host to determine how system performance is affected. This approach allows administrators to more accurately predict how adding resources affects system performance. However, the approach relies on the administrator's domain knowledge. In complex system with variable inputs and demands, the administrator may not be aware of or be able to keep up with how changes will affect a particular resource”—[(emphasis added)]).

The same motivation that was utilized for combining Runkana with Garvey, as set forth in claim 1, is equally applicable to claim 15.

Runkana in view of Garvey does not appear to explicitly teach: data including an agri-food production process, a chemical synthesis process, or a packaging process.

However, Cella teaches: data including an agri-food production process, a chemical synthesis process, or a packaging process (Cella ¶0891: “In an illustrative and non-limiting example, torsional analysis may facilitate the understanding of the health and expected life of various components associated with the drive trains of vehicles, such as cranes, bulldozers, tractors, haulers, backhoes, forklifts, agricultural equipment, mining equipment, boring and drilling machines, digging machines, lifting machines, mixers (e.g., cement mixers), tank trucks, refrigeration trucks, security vehicles (e.g., including safes and similar facilities for preserving valuables), underwater vehicles, watercraft, aircraft, automobiles, trucks, trains and the like, as well as drive trains of moving apparatus, such as assembly lines, lifts, cranes, conveyors, hauling systems, and others. The evaluation of the sensor data with the model of the system geometry and operating conditions may be useful in identifying unexpected torsion and the transmission of that torsion from the motor and drive shaft, from the drive shaft to the universal joint and from the universal joint to one or more wheel axles”; see also Cella ¶1057: “For example, when a new pressure reactor is installed in a chemical processing facility, data from the current data collection band may not accurately predict the state or metric of operation of the system, thus, the machine learning data analysis circuit may begin to iterate to determine if a new data collection band is better at predicting a state. Based on offset system data, such as from a library or other data structure, certain sensors, frequency bands or other smart band members may be used in the smart band initially and data may be collected to assess performance”; see also Cella ¶2244: “Yet other industrial heating applications may include packaging, sterilization, and the like. Particular packaging uses may include high-speed poly-coated paperboard sealing, high-speed heat shrink installations, material heat forming, curing adhesives, sterilizing bottles and cartons (e.g., through heating water and/or steam therefore), production and packaging of pharmaceuticals, sterilization and packaging of surgical tools and hardware, replacement dental features (e.g., crowns and the like), production and sealing of packaging material, and the like”—[(emphasis added)]).

The methods of Runkana, the teachings of Cella, and the instant application are analogous art because they pertain to analyzing data with prediction models. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the methods of Runkana with the teachings of Cella to provide for model prediction including data from agriculture, packaging, and chemical industries. One would be motivated to do so to process, collect, and stream different types of data to sense aspects of industrial machines and facilitate use (Cella ¶0244: “Methods and systems described herein for industrial machine sensor data streaming, collection, processing, and storage may be configured to operate with existing data collection, processing, and storage systems while preserving access to existing format/frequency range/resolution compatible data. While the industrial machine sensor data streaming facilities described herein may collect a greater volume of data (e.g., longer duration of data collection) from sensors at a wider range of frequencies and with greater resolution than existing data collection systems, methods and systems may be employed to provide access to data from the stream of data that represents one or more ranges of frequency and/or one or more lines of resolution that are purposely compatible with existing systems. Further, a portion of the streamed data may be identified, extracted, stored, and/or forwarded to existing data processing systems to facilitate operation of existing data processing systems that substantively matches operation of existing data processing systems using existing collection-based data. In this way, a newly deployed system for sensing aspects of industrial machines, such as aspects of moving parts of industrial machines, may facilitate continued use of existing sensed data processing facilities, algorithms, models, pattern recognizers, user interfaces, and the like”).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS SHINE whose telephone number is (571) 272-2512. The examiner can normally be reached M-F, 11am–7pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.B.S./
Examiner, Art Unit 2126

/VAN C MANG/
Primary Examiner, Art Unit 2126
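The claim 9/10 mapping in the office action above reads Runkana ¶0056's "correlation coefficients between KPIs and other variables" as the claimed relevance value returned for each selected variable. The sketch below is a minimal, hypothetical illustration of that reading (an absolute Pearson correlation per variable); the function names and sample data are assumptions for illustration and do not come from the cited references or the application.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def relevance_values(kpi, variables):
    """Return one relevance value per selected variable: |correlation with the KPI|."""
    return {name: abs(pearson(series, kpi)) for name, series in variables.items()}

# Hypothetical sample data, not drawn from the record.
kpi = [1.0, 2.0, 3.0, 4.0, 5.0]
variables = {
    "temperature": [2.0, 4.0, 6.0, 8.0, 10.0],  # moves exactly with the KPI
    "pressure": [5.0, 3.0, 4.0, 1.0, 2.0],      # only loosely related
}
scores = relevance_values(kpi, variables)  # temperature ~ 1.0, pressure ~ 0.8
```

Under this reading, a variable that tracks the KPI exactly would receive the highest relevance value, which matches the examiner's characterization of the correlation coefficient output as a per-variable relevance value.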

Prosecution Timeline

Dec 29, 2020
Application Filed
Mar 29, 2024
Non-Final Rejection — §103
Jul 11, 2024
Response Filed
Dec 03, 2024
Final Rejection — §103
Feb 05, 2025
Response after Non-Final Action
Apr 03, 2025
Request for Continued Examination
Apr 11, 2025
Response after Non-Final Action
Aug 08, 2025
Non-Final Rejection — §103
Nov 10, 2025
Response Filed
Feb 05, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579449
HYDROCARBON OIL FRACTION PREDICTION WHILE DRILLING
2y 5m to grant · Granted Mar 17, 2026
Patent 12572440
AUTOMATICALLY DETECTING WORKLOAD TYPE-RELATED INFORMATION IN STORAGE SYSTEMS USING MACHINE LEARNING TECHNIQUES
2y 5m to grant · Granted Mar 10, 2026
Patent 12561554
ERROR IDENTIFICATION FOR AN ARTIFICIAL NEURAL NETWORK
2y 5m to grant · Granted Feb 24, 2026
Patent 12533800
TRAINING REINFORCEMENT LEARNING AGENTS TO LEARN FARSIGHTED BEHAVIORS BY PREDICTING IN LATENT SPACE
2y 5m to grant · Granted Jan 27, 2026
Patent 12536428
KNOWLEDGE GRAPHS IN MACHINE LEARNING DECISION OPTIMIZATION
2y 5m to grant · Granted Jan 27, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
38%
Grant Probability
82%
With Interview (+44.6%)
5y 1m
Median Time to Grant
High
PTA Risk
Based on 37 resolved cases by this examiner. Grant probability derived from career allow rate.
