Prosecution Insights
Last updated: April 19, 2026
Application No. 18/071,212

SERVERS, SYSTEMS, AND METHODS FOR FAST DETERMINATION OF OPTIMAL SETPOINT VALUES

Status: Non-Final OA (§103)
Filed: Nov 29, 2022
Examiner: GOLDBERG, IVAN R
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Aveva Software, LLC
OA Round: 4 (Non-Final)

Grant Probability: 35% (At Risk)
OA Rounds: 4-5
Time to Grant: 4y 8m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 35% (128 granted / 365 resolved; -16.9% vs TC avg)
Interview Lift: +36.9% in resolved cases with interview
Avg Prosecution: 4y 8m (typical timeline)
Total Applications: 422 across all art units (57 currently pending)

Statute-Specific Performance

§101: 27.7% (-12.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 365 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

The following is a Non-Final Office action. In response to Examiner’s Non-Final Rejection of 10/16/25, Applicant, on 2/9/26, amended claims. Claims 1-18 are pending in this application and have been rejected below.

Response to Amendment

Applicant’s amendments are acknowledged. The objections are withdrawn based on the amendments correcting the spelling. The previous 112(b) rejection of claim 2 is withdrawn based on the amendment.

Reasons for Subject Matter Eligibility under 35 USC 101

Claim 1 overcomes the 101 rejections because the claim now recites: “cause, by the one or more processors, an operational control limit for the at least one component of the one or more components in the industrial process to be modified to the corresponding optimum setpoint value in response to the determination by the pseudo process model, the modification of the operational control limit causing the operation of the at least one component to change based on the optimum setpoint value.” When viewing the claim as a whole, the claim is no longer directed to an abstract idea and is viewed as using a judicial exception in a meaningful way under MPEP 2106.05(e), similar to Diamond v. Diehr, where the “opening and closing of a mold” made the claim eligible; here we have a similar situation, with extensive analysis for controlling a setpoint of a component in an industrial process. See also Applicant’s Remarks of 9/3/25, pages 7-10 (e.g., optimizing equipment operations ensures the claims are rooted in a technological solution). The same reasons apply to claim 9, which recites similar limitations, where the determined setpoints for components are used to modify the operation of the component. All other claims depend from claims 1 and 9.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
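Editor's note: to make the eligibility-conferring control step quoted above concrete, here is a minimal Python sketch of modifying an operational control limit to a model-determined optimum setpoint. The `DcsInterface` class and its `write_limit` method are hypothetical stand-ins for a real control-system gateway; nothing in this sketch comes from the application or the cited references.

```python
from dataclasses import dataclass

@dataclass
class DcsInterface:
    """Hypothetical stand-in for a DCS gateway that accepts limit writes."""
    limits: dict  # tag -> current operational control limit

    def write_limit(self, tag: str, value: float) -> None:
        # A real deployment would issue a command to the controller here;
        # this sketch just records the new operational control limit.
        self.limits[tag] = value

def apply_optimum_setpoint(dcs: DcsInterface, tag: str, optimum: float) -> None:
    """Modify the operational control limit for a component to the optimum
    setpoint value determined by the (pseudo) process model, causing the
    component's operation to change accordingly."""
    dcs.write_limit(tag, optimum)

dcs = DcsInterface(limits={"reactor_temp_sp": 180.0})
apply_optimum_setpoint(dcs, "reactor_temp_sp", 176.5)  # illustrative value
```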
Claims 1-11 are rejected under 35 U.S.C. 103 as being unpatentable over Herring (US 11,181,872) and Lee (US 2017/0344411). Concerning claim 1, Herring discloses: A system for the execution of optimum setpoints (Herring see col. 21, lines 30-39 - At step 212, a set-point 124 is generated for each of the top-ranked data sources 120. In at least one embodiment, the trained multivariate model 118 is executed and outputs an optimal value for each data source 120, the optimal value representing a value most consistently demonstrated by the data source 120 during a period of optimal performance by the machine 104 associated therewith.) comprising: one or more processors; and a non-transitory computer readable medium storing a plurality of instructions, which when executed, cause the one or more processor (Herring See col. 36, lines 7-30 - it will be understood that various aspects of the processes described herein are software processes that execute on computer systems that form parts of the system. Embodiments within the scope of the present disclosure also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon; Col. 36, lines 47-56 - The computer will typically include one or more data storage devices for reading data from and writing data to. The data storage devices provide nonvolatile storage of computer-executable instructions, data structures, program modules, and other data for the computer.) to: receive, by the one or more processors, one or more equipment setpoints of at least one component in an industrial process (Herring – see col. 14, lines 56-67, col. 15, lines 1-4 - In at least one embodiment, the data historian 113 is configured to communicate with a Distributed Control System (DCS), such that the DCS may be provided with set-points 124 in various control schemes for controlling one or more machines 104 (for example, Proportional-Integral-Derivate (“PID”) schemes).); receive, by the one or more processors, one or more key performance indicators (KPIs) that each include a measure of the at least one component during the operational timeframe (Herring Col. 15, lines 62-67, col. 16, lines 1-28 - In various embodiments, the data historian 113 may be operatively connected to one or more data sources 120 associated with performance metrics of one or more machines 104, and the data historian may receive raw performance data from the one or more data sources 120. In at least one embodiment, the data sources 120 include the identifiers for each external sensor 102 and internal sensor 106, as well as identifiers for additional variables (e.g., combinations of sensor readings, etc.).); correlate, by the one or more processors, each setpoint to the one or more key performance indicators (KPIs) (Herring – see col. 16, lines 29-49 - In some embodiments, a data source 120 includes multiple rankings 122, in which each ranking 122 is associated with a set of external conditions (e.g., a particular time period, time of year, level of machine wear, etc.). In at least one embodiment, the data sources 120 include set-points 124 that provide ideal values at which the data sources 120 are associated with optimal runtime performance on a corresponding machine 104. col. 19, lines 1-14 - the importance scores provide a metric for assessing a degree to which deviations in values of a particular data source 120 from a set-point are predictive for a pre-break performance state of the machine 104. see col. 
19, lines 29-46 - At step 206, a correlation score is generated for each variable of the trained model 118 and is associated with the data source 120 corresponding thereto. See col. 20, lines 4-16 - the process 200 may be performed to obtain a plurality of data sources 120 associated with variables (generated from readings of the data sources 120) that are determined to be most predictive of an optimal performance state or a pre-error performance state (or other performance state) of the machine 104. See col. 28, lines 40-58 - At step 406, the plurality of optimal runtime datasets are further refined based on identifications of target time periods within the datasets in which the associated machine 104 performed within a target conditioned weight); generate, by the one or more processors, a … process model in response to receiving the one or more equipment setpoints and the one or more KPIs (Applicant’s specification, FIG. 4, shows the “pseudo process model” as including a “data validation” and a “model generation”; where in [0010] as published it states a “data validation unit” can include “curve fitting data modeling techniques”. Herring discloses the limitation based on broadest reasonable interpretation in light of the specification – See FIG. 4, Col. 27, lines 40-51 - the process 400 generates one or more training datasets from the clean data 114. In one or more embodiments, the training datasets include training data 116 that is generated by extracting various portions of the clean data 114 to obtain data associated with one or more optimal runtime intervals and data associated with one or more pre-error intervals; See col. 29, lines 64-67; col. 30, lines 1-13 - At step 410, the plurality of optimal runtime datasets are further refined by identifying and removing one or more multivariate outliers from each of the plurality of optimal runtime datasets. In at least one embodiment, the optimization system 103 may leverage Mahalanobis distances to calculate multivariate similarity. See col. 32, lines 3-46 - At step 506, classifier model 118 is generated using the Y-variable and X-variables, and is trained using training data 116 (e.g., organized into one or more training datasets via the process 400; In one example, the classifier model 118 is fitted to a first training dataset to generate a first iteration of a trained classifier model 118. In the same example, the classifier model 118 is refitted to a second training dataset (e.g., including at least a subset of training data 116 that is distinct from training data 116 of the first training dataset) to generate a second iteration of the trained classifier model 118. According to one embodiment, approximately 80% of training data 116 in a training dataset is used for model training and the approximately 20% of remaining training data 116 is used to generate test datasets for assessing performance (e.g., accuracy) of the trained classifier model 118).) Herring also discloses that if greater “failure rates” occur in colder seasons, the optimization system can compensate by lowering the minimum speed to reduce the frequency of failures (See col. 29, lines 45-63). Applicant’s specification explains that the “pseudo model” refers to a survival model/matrix (Applicant’s specification, [0011] states “setpoint optimizer unit (SOU)… is configured to automatically adjust one or more process setpoints based on the results of the pseudo model (survival model/matrix)”).
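Editor's note: the training-data refinement Herring describes above (Mahalanobis-distance outlier removal, then an approximate 80/20 train/test split) can be illustrated with a short numpy sketch. The fixed distance cutoff, function names, and stand-in data are assumptions for illustration, not details from Herring.

```python
import numpy as np

def remove_multivariate_outliers(X: np.ndarray, cutoff: float = 3.0) -> np.ndarray:
    """Drop rows whose Mahalanobis distance from the sample mean exceeds
    `cutoff`, echoing Herring's refinement of optimal runtime datasets."""
    diff = X - X.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(X, rowvar=False))  # pinv guards against singular covariance
    d = np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))  # per-row distances
    return X[d <= cutoff]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))          # stand-in "optimal runtime" data
X_clean = remove_multivariate_outliers(X)

idx = rng.permutation(len(X_clean))
split = int(0.8 * len(X_clean))        # ~80% training / ~20% test, per Herring
train, test = X_clean[idx[:split]], X_clean[idx[split:]]
```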
Lee discloses: generate, by the one or more processors, a “pseudo” process model in response to receiving the one or more equipment setpoints and the one or more KPIs (Applicant’s specification, [0011] states “setpoint optimizer unit (SOU)… is configured to automatically adjust one or more process setpoints based on the results of the pseudo model (survival model/matrix).” Lee discloses the limitation based on broadest reasonable interpretation in light of the specification – see par 32 - Referring back to FIG. 1, at 114, risk failure analysis for the equipment may be performed based on the operation features stored in the target table. For instance, based on the data in the target table, failure risk analysis may be performed. An example of a failure risk analysis may include a survival analysis such as Cox regression; see par 41 - The survivor function is the probability of survival as a function of time. It gives the probability that the survival time of an individual exceeds a defined value; see par 43, Fig. 4 - At 408, equipment failure risk analysis in one embodiment may optimally set controllable process variables that extend the life of the equipment. For example, the analysis may include computing the desired temperature of semiconductor fabricating chamber (e.g., X degrees Celsius) and pressure (e.g., Y atm). One or more signals comprising the control variable value may be sent to the process controller 410 (also referred to as an operations controller); see FIG. 4, see par 45 - One or more of the hardware processors 502 may perform failure risk analysis (e.g., survival analysis such as Cox regression) 512, for example, as described with reference to FIG. 4). Herring and Lee disclose: generate, by the one or more processors, a set of setpoint timeframe comprising one or more KPI achieved timeframes that include where one or more KPIs are above a predetermined value during the operational timeframe (Herring See col. 28, lines 59-67 - a value of the target conditioned weight is different for each grade or type of product, thus the optimization system 103 selects the proper value of the target condition weight based on the grade or type of product associated with each of the one or more optimal runtime datasets. see col. 29, lines 6-23 - In various embodiments, the optimization system 103 identifies one or more optimal speed time periods (also referred to as “at-speed” time periods) within each of the one or more optimal runtime datasets, the one or more at-speed time periods representing time intervals during which the machine 104 performed at a minimum optimal speed. In at least one embodiment, a value of the minimum optimal speed may be different for each grade or type of product, thus the optimization system 103 selects the proper value of the minimum speed based on the grade or type of product associated with each of the one or more optimal runtime datasets) and exclude one or more non-KPI achieved timeframes where the one or more KPIs are below the predetermined value during the operational timeframe (Herring – See col. 28, lines 59-67 - the optimization system 103 removes, from each of the one or more optimal runtime datasets, data falling outside of the one or more target time periods.
In one or more embodiments, wherein the optimization system 103 removes data from a dataset, the optimization system 103 may remove all data at a given timestamp (e.g., the optimization system 103 may remove entire rows from a dataset representing values for all data sources 120 at the timestamps corresponding to the rows). see col. 29, lines 6-23 - In one or more embodiments, the optimization system 103 removes, from each of the one or more optimal runtime datasets, data falling outside of the one or more at-speed time periods.); execute, by the one or more processors, the pseudo process model using the set of setpoint timeframes, the pseudo process model (Herring - See col. 30, lines 14-30 - In at least one embodiment, one or more test datasets are generated that each include, but are not limited to, a second subset of the plurality of optimal runtime datasets that excludes the first subset thereof and a second subset of the plurality of pre-error datasets that excludes the first subset thereof; see also Lee [as above, discloses “pseudo process model”] - see par 43, Fig. 4 - At 408, equipment failure risk analysis in one embodiment may optimally set controllable process variables that extend the life of the equipment. For example, the analysis may include computing the desired temperature of semiconductor fabricating chamber (e.g., X degrees Celsius) and pressure (e.g., Y atm). One or more signals comprising the control variable value may be sent to the process controller 410 (also referred to as an operations controller); see FIG. 4, see par 45 - One or more of the processors 502 may select and connect to the database nodes of the cluster for specific operation and time periods. By invoking distributed processing operation, e.g., … average, minimum, maximum, standard deviation of temperature, pressure, voltage, current associated with processed operation) are extracted and built into a target data table 508. One or more of the hardware processors 502 may perform failure risk analysis (e.g., survival analysis such as Cox regression) 512, for example, as described with reference to FIG. 4) configured to determine one or more optimum setpoint values for the one or more components that correspond to the one or more equipment setpoints during the one or more KPI achieved timeframes (Herring see col. 19, lines 48-67 - the multiplication results in a combined index of variables, which may be used for ranking and identifying a plurality of data sources 120 to be included in a visualization 126 for monitoring performance of the machine 104 and to be used in a modeling process for generating set-points 124 for the plurality of data sources 120; See col. 29, lines 45-67 - For example, one or more components of the given machine may experience greater failure rates (e.g., and, thus, more paper breaks) during colder seasons, thus the optimization system 103 may determine the minimum speed to be lower during colder seasons to compensate for the increase in component failure rates (e.g., and attempt to reduce the frequency of component failures). In at least one embodiment, to account for such drifts and shifts in machine performance, the optimization system 103 inspects current readings of one or more external sensors 102, such as, for example, weather-related sensors 106, and accordingly adjusts the determined minimum optimal speed).
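Editor's note: as one concrete reading of the "KPI achieved timeframes" limitation mapped just above, here is a short pandas sketch. The column names, the KPI threshold, and the choice of the median as the "most consistently demonstrated" setpoint value are illustrative assumptions, not language from Herring, Lee, or the claims.

```python
import pandas as pd

def kpi_achieved_frames(df: pd.DataFrame, kpi: str, floor: float) -> pd.DataFrame:
    """Keep timestamps where the KPI meets or exceeds the predetermined value;
    rows below it are excluded wholesale, as Herring drops entire rows falling
    outside the target/at-speed time periods."""
    return df[df[kpi] >= floor]

def optimum_setpoints(df: pd.DataFrame, kpi: str, floor: float,
                      setpoint_cols: list[str]) -> pd.Series:
    """Determine an optimum value per setpoint from the KPI-achieved
    timeframes only (here, the median value held during those frames)."""
    return kpi_achieved_frames(df, kpi, floor)[setpoint_cols].median()

hist = pd.DataFrame({
    "speed_sp": [100, 102, 98, 101, 99],
    "temp_sp": [180, 181, 179, 180, 182],
    "yield_kpi": [0.91, 0.95, 0.80, 0.94, 0.79],   # stand-in KPI values
})
print(optimum_setpoints(hist, "yield_kpi", 0.90, ["speed_sp", "temp_sp"]))
```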
cause, by the one or more processors, an operational control limit for the at least one component of the one or more components in the industrial process to be modified to the corresponding optimum setpoint value in response to the determination by the pseudo process model (Lee – as above - see par 43, Fig. 4 - At 408, equipment failure risk analysis in one embodiment may optimally set controllable process variables that extend the life of the equipment), the modification of the operational control limit causing the operation of the at least one component to change based on the optimum setpoint value (Herring – see col. 13, lines 35-42 - the computing device 107 includes a Distributed Control System (DCS) interface (e.g., in the form of a virtual portal and/or a set of physical controls), in which a user may automatically or manually, or by combination, adjust one or more set-points 124 or other parameters related to monitoring and control of a machine 104 or machine environment 101. see col. 14, lines 56-67, col. 15, lines 1-4 - In at least one embodiment, the data historian 113 is configured to communicate with a Distributed Control System (DCS), such that the DCS may be provided with set-points 124 in various control schemes for controlling one or more machines 104 (for example, Proportional-Integral-Derivate (“PID”) schemes); see col. 23, lines 1-19 - the corrective actions may include information providing actions that may return a machine 104 to a state of optimal performance. The corrective actions may include, but are not limited to, automatically updating system operating conditions by providing command instructions or signals to machine operating systems; see col. 34, lines 57-67 - via the visualization 126A, the optimization system 103 enables real time visualization of machine performance, wherein the visualization scheme may support a Six Sigma approach to monitoring, correcting and controlling machine performance.). Herring and Lee are analogous art as they are directed to analyzing performance/operation of equipment (see Herring Abstract – set-point recommendations; Lee Abstract). Herring also discloses that if greater “failure rates” occur in colder seasons, the optimization system can compensate by lowering the minimum speed to reduce the frequency of failures and removing outliers, periods of errors, from optimal runtime datasets and having targets for output (See col. 28-col. 29). Lee improves upon Herring by disclosing using a survival analysis for setting process variables to extend the life of equipment. One of ordinary skill in the art would be motivated to further include using survival analysis and failure risk analysis to efficiently improve upon the plan set point generation, optimal datasets, and validation of a trained model for setpoints and the changing of operating speeds to reduce failures (See col. 28-30) in Herring.
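Editor's note: Lee's survival analysis is the hinge of the combination. Purely as orientation, and not as Lee's Cox-regression method, here is a minimal Kaplan-Meier estimate of the survivor function S(t) the examiner quotes ("the probability that the survival time of an individual exceeds a defined value"); the lifetimes and function names are invented for illustration.

```python
import numpy as np

def kaplan_meier(durations: np.ndarray, observed: np.ndarray):
    """Kaplan-Meier survivor function S(t): the probability that survival
    time exceeds t. `observed` is 1 for a recorded failure, 0 for a censored
    (still-running) unit."""
    order = np.argsort(durations)
    durations, observed = durations[order], observed[order]
    times, surv = [], []
    s, at_risk = 1.0, len(durations)
    for t in np.unique(durations):
        here = durations == t
        failures = int(observed[here].sum())
        if failures:
            s *= 1.0 - failures / at_risk   # product-limit update
            times.append(float(t))
            surv.append(s)
        at_risk -= int(here.sum())          # units leave the risk set at t
    return np.array(times), np.array(surv)

# Illustrative equipment lifetimes (hours); 0 marks censored observations.
t = np.array([120.0, 340.0, 340.0, 500.0, 650.0, 650.0, 800.0])
e = np.array([1, 1, 0, 1, 1, 0, 1])
print(list(zip(*kaplan_meier(t, e))))
```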
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the optimal runtime performance for intervals of equipment in Herring to further include determination of survival/failure analysis for controllable process variables as disclosed in Lee, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success. Concerning independent claim 9, Herring and Lee disclose: A system for the execution of optimum setpoints (Herring see col. 21, lines 30-39 - At step 212, a set-point 124 is generated for each of the top-ranked data sources 120. In at least one embodiment, the trained multivariate model 118 is executed and outputs an optimal value for each data source 120, the optimal value representing a value most consistently demonstrated by the data source 120 during a period of optimal performance by the machine 104 associated therewith) comprising: one or more processors; and a non-transitory computer readable medium storing a plurality of instructions, which when executed, cause the one or more processor (Herring See col. 36, lines 7-30 - it will be understood that various aspects of the processes described herein are software processes that execute on computer systems that form parts of the system. Embodiments within the scope of the present disclosure also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon; Col. 36, lines 47-56 - The computer will typically include one or more data storage devices for reading data from and writing data to. The data storage devices provide nonvolatile storage of computer-executable instructions, data structures, program modules, and other data for the computer) to: receive, by the one or more processors, historical operational data from one or more sensors monitoring a process, the historical operational data including one or more tags and one or more setpoints associated with the one or more tags ([0032] as published –“In some embodiments, equipment tags include one or more of controllable tags (typically setpoints)” ) Herring – see col. 14, lines 56-67, col. 15, lines 1-4 - In at least one embodiment, the data historian 113 is configured to communicate with a Distributed Control System (DCS), such that the DCS may be provided with set-points 124 in various control schemes for controlling one or more machines 104 (for example, Proportional-Integral-Derivate (“PID”) schemes)); receive, by the one or more processors, one or more key performance indicators (KPIs) that each include a measure of the at least one component during the operational timeframe (Herring Col. 15, lines 62-67, col. 16, lines 1-28 - In various embodiments, the data historian 113 may be operatively connected to one or more data sources 120 associated with performance metrics of one or more machines 104, and the data historian may receive raw performance data from the one or more data sources 120. 
In at least one embodiment, the data sources 120 include the identifiers for each external sensor 102 and internal sensor 106, as well as identifiers for additional variables (e.g., combinations of sensor readings, etc); determine, by the one or more processors, one or more KPIs achieved timeframes during the operational timeframe where the KPIs are above a predetermined value (Herring See col. 28, lines 59-67 - a value of the target conditioned weight is different for each grade or type of product, thus the optimization system 103 selects the proper value of the target condition weight based on the grade or type of product associated with each of the one or more optimal runtime datasets. see col. 29, lines 6-23 - In various embodiments, the optimization system 103 identifies one or more optimal speed time periods (also referred to as “at-speed” time periods) within each of the one or more optimal runtime datasets, the one or more at-speed time periods representing time intervals during which the machine 104 performed at a minimum optimal speed. In at least one embodiment, a value of the minimum optimal speed may be different for each grade or type of product, thus the optimization system 103 selects the proper value of the minimum speed based on the grade or type of product associated with each of the one or more optimal runtime datasets); execute, by the one or more processors, … to create statistical data, the statistical data including one or more of a mean and a standard deviation for each of the one or more setpoints (Herring – see col. 21, lines 40-62 - According to one embodiment, the threshold is generated based on computation of a normal distribution about the set-point 124. In at least one embodiment, pairs of threshold values are determined for a set-point 124, each pair of threshold values representing one standard deviation from the set-point 124 (e.g., or from a previous pair of threshold values). According to one embodiment, the thresholds are stored along with the set-points 124. In at least one embodiment, threshold pairs and set-points associated therewith are used to calibrate an intensity scale 407 (see FIG. 6); see col. 30, lines 31-43 - In various embodiments, the system 100 calculates, from the optimal runtime datasets of the one or more training datasets, a median value and a standard deviation associated with each data source 120 for each grade or type of product processed by the machine 104. In one or more embodiments, the median value and standard deviation are stored by the data processing system 105 and are used to compute various metrics, such as, for example, overall machine health metrics; see col. 34, lines 35-57 - In one or more embodiments, the window 609 includes, but is not limited to, the label 603B associated with the data source 120, a timestamp 611 corresponding to the time at which readings associated with the data source 120 were sampled, a current value 613 of the data source 120, a historical value 615 computed from an average value of the data source 120 computed from one or more optimal runtime datasets, a set-point 124 associated with the data source 120, and other information, such as, for example, remedial data and one or more corrective actions associated with the data source 120). Herring discloses using standard deviation and average, and even having a data source “sampled” (See col. 34, lines 35-57).
Lee discloses: execute, by the one or more processors, “a down-sample of the historical operational data to create statistical data,” the statistical data including one or more of a mean and a standard deviation for each of the one or more setpoints (Applicant’s [0017] as published states “In some embodiments, a program step includes a command to execute, by the one or more processors, a down-sample command configured to reduce a number of time series data points in the setpoint historical data before generation of the pseudo model. In some embodiments, the one or more setpoint values includes a mean value and/or standard deviation value.” [0032] as published states “In some embodiments, the down sampled data is obtained by taking the mean and standard deviation of the historical operational data for a component operating at a setpoint.” Lee discloses the limitations based on broadest reasonable interpretation in light of the specification –See par 25 - The nodes 310, 312, 314 are selected to execute the distributed processing operation, e.g., MapReduce operation on the data in parallel, as shown at 316. The distributed processing operation extracts operation features, which are used to construct a target table. For example, a map operation may computes characteristics such as the average, minimum, maximum, and/or standard deviation values of all the products processed, and of all the operations features at each node (e.g., at 310, 312 and 314). A reduce operation may aggregate the values computed at each node, for example, compute an aggregated characteristics computed at a subset of nodes, e.g., the nodes (e.g., 310, 312 and 314). For example, the map operation may compute an average, minimum, maximum and standard deviation associated with temperature, pressure and power associated with the equipment during the operation, and the reduce operation may aggregate the average, minimum, maximum and standard deviation, computed in the plurality of nodes or servers, associated with temperature, pressure and power associated with the equipment during the operation. See par 28 - The above-described processing reduces data size in memory and allows for faster processing and detection of equipment risk failure. See par 31 - The feature calculation may also be performed for minimum, maximum, and or standard deviation. This computation of features for each record of maintenance data may be performed in one embodiment against each node of the operations data. Each node stores smaller size of operations data than before the distribution.) Herring and Lee disclose: execute, by the one or more processors, a setpoint calculation configured to determine one or more optimum setpoint values for the one or more components that correspond to the one or more setpoints during one or more KPI achieved timeframes (Applicant’s specification [0005] as published gives examples of “Non-limiting examples of KPIs may include throughput, defects, and yield according to some embodiments.”) (Herring see col. 
19, lines 48-67 - the multiplication results in a combined index of variables, which may be used for ranking and identifying a plurality of data sources 120 to be included in a visualization 126 for monitoring performance of the machine 104 and to be used in a modeling process for generating set-points 124 for the plurality of data sources 120; See col. 29, lines 45-67 - For example, one or more components of the given machine may experience greater failure rates (e.g., and, thus, more paper breaks) during colder seasons, thus the optimization system 103 may determine the minimum speed to be lower during colder seasons to compensate for the increase in component failure rates (e.g., and attempt to reduce the frequency of component failures). In at least one embodiment, to account for such drifts and shifts in machine performance, the optimization system 103 inspects current readings of one or more external sensors 102, such as, for example, weather-related sensors 106, and accordingly adjusts the determined minimum optimal speed); and cause, by the one or more processors, a current setpoint for the one or more components to be modified based on the determined one or more optimum setpoint values, the modification of the operational control limit causing the operation of the at least one component to change based on the optimum setpoint value (Herring – see col. 13, lines 35-42 - the computing device 107 includes a Distributed Control System (DCS) interface (e.g., in the form of a virtual portal and/or a set of physical controls), in which a user may automatically or manually, or by combination, adjust one or more set-points 124 or other parameters related to monitoring and control of a machine 104 or machine environment 101. Herring – see col. 14, lines 56-67, col. 15, lines 1-4 - In at least one embodiment, the data historian 113 is configured to communicate with a Distributed Control System (DCS), such that the DCS may be provided with set-points 124 in various control schemes for controlling one or more machines 104 (for example, Proportional-Integral-Derivate (“PID”) schemes); see col. 23, lines 1-19 - the corrective actions may include information providing actions that may return a machine 104 to a state of optimal performance. The corrective actions may include, but are not limited to, automatically updating system operating conditions by providing command instructions or signals to machine operating systems; see col. 34, lines 57-67 - via the visualization 126A, the optimization system 103 enables real time visualization of machine performance, wherein the visualization scheme may support a Six Sigma approach to monitoring, correcting and controlling machine performance; Lee – as above - see par 43, Fig. 4 - At 408, equipment failure risk analysis in one embodiment may optimally set controllable process variables that extend the life of the equipment). It would have been obvious to combine Herring and Lee for the same reasons as discussed with regards to claim 1. In addition, Herring discloses using standard deviation and average, and even having a data source “sampled” (See col. 34, lines 35-57). Lee improves upon Herring by further reducing data size or aggregating values based on averages and standard deviations associated with the equipment.
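Editor's note: the down-sampling limitation the examiner maps to Lee reduces raw time series to summary statistics per setpoint. Below is a minimal pandas sketch of that reading, with invented column names, and with a groupby aggregation standing in for Lee's MapReduce map/reduce phases; none of this is code from either reference.

```python
import pandas as pd

def downsample_by_setpoint(hist: pd.DataFrame, setpoint: str,
                           measure_cols: list[str]) -> pd.DataFrame:
    """Collapse raw time-series rows into one mean/std row per setpoint value,
    reducing the number of time-series data points before model generation
    (cf. the application's [0017]/[0032] as quoted in the action)."""
    return hist.groupby(setpoint)[measure_cols].agg(["mean", "std"])

hist = pd.DataFrame({
    "speed_sp": [100, 100, 100, 105, 105],          # setpoint held over time
    "temp": [180.2, 179.8, 180.5, 183.1, 182.7],    # stand-in measurements
    "pressure": [2.02, 1.98, 2.01, 2.10, 2.08],
})
print(downsample_by_setpoint(hist, "speed_sp", ["temp", "pressure"]))
```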
One of ordinary skill in the art would be motivated to further include reducing data and aggregating data based on average and standard deviation to efficiently improve upon the plan set point generation and validation of a trained model for setpoints (See col. 18, 21, 26) and the use of standard deviation and average for datasets (See col. 34) in Herring.

Concerning claim 2, Herring and Lee disclose: The system of claim 1, the one or more non-transitory computer readable media further comprising program instructions stored thereon that when executed cause the one or more computers to: display, by the one or more processors, the one or more optimum setpoint values on a graphical user interface (GUI) (Herring – see col. 13, lines 17-50 - a computing device 107 includes a display 117 for rendering various screens, interfaces, and etc. In one example, the display 117 renders one or more visualizations 126 for optimization and monitoring of a particular machine 104, the one or more visualizations 126 being generated at the optimization system 103 and retrieved from the data processing system 105; see col. 22, lines 22-43 - an intensity map represents a time-series visualization for tracking the deviation of multiple data sources 120 (y-axis) from the optimal runtime values across time intervals (X-axis), including a current time interval.).

Concerning claim 3, Herring discloses: The system of claim 2, the one or more non-transitory computer readable media further comprising program instructions stored thereon that when executed cause the one or more computers to: execute, by the one or more processors, a command to change the one or more equipment setpoints of the at least one component in the industrial process to the one or more optimum setpoint values (Herring – see col. 13, lines 35-42 - the computing device 107 includes a Distributed Control System (DCS) interface (e.g., in the form of a virtual portal and/or a set of physical controls), in which a user may automatically or manually, or by combination, adjust one or more set-points 124 or other parameters related to monitoring and control of a machine 104 or machine environment 101. see col. 14, lines 56-67, col. 15, lines 1-4 - In at least one embodiment, the data historian 113 is configured to communicate with a Distributed Control System (DCS), such that the DCS may be provided with set-points 124 in various control schemes for controlling one or more machines 104 (for example, Proportional-Integral-Derivate (“PID”) schemes); see col. 23, lines 1-19 - the corrective actions may include information providing actions that may return a machine 104 to a state of optimal performance. The corrective actions may include, but are not limited to, automatically updating system operating conditions by providing command instructions or signals to machine operating systems).

Concerning claim 4, Herring discloses: The system of claim 2, the one or more non-transitory computer readable media further comprising program instructions stored thereon that when executed cause the one or more computers to: generate, by the one or more processors, the GUI comprising an optimum setpoint limit input, the optimum setpoint limit input configured to enable a user to implement a setpoint value limit and a setpoint range limit for the one or more setpoint values (Herring – see col. 13, lines 17-50 - a computing device 107 includes a display 117 for rendering various screens, interfaces, and etc. see col.
21, lines 49-57 - In one or more embodiments, a threshold or set of thresholds or ranges is generated for each set-point 124 that represents boundary conditions for the values of readings from the data source 120 associated with the set-point 124. According to one embodiment, the threshold is generated based on computation of a normal distribution about the set-point 124. In at least one embodiment, pairs of threshold values are determined for a set-point 124, each pair of threshold values representing one standard deviation from the set-point 124; see col. 33, lines 47-67 - According to one embodiment, the intensity scale 607 relates particular wavelengths of color to deviation from a set-point 124 or from a median optimal runtime value. In at least one embodiment, sampling frames of sampled data source 120 values meeting the set-point 124 or optimal median may be colored with colors of longer wavelengths, such as blue, while those progressing through a series of standard deviations away from the determined values may be colored with colors of longer wavelengths, such as red. In one example, maximum deviation in a sampling frame 606 is denoted with a red coloring.)

Concerning claim 5, Herring discloses using standard deviation and average, and even having a data source “sampled” (See col. 34, lines 35-57). Lee discloses: The system of claim 4, the one or more non-transitory computer readable media comprising program instructions stored thereon that when executed cause the one or more computers to: execute, by the one or more processors, a down-sample command configured to reduce a number of time series data points in the setpoint historical data before generation of the pseudo model (Applicant’s [0017] as published states “In some embodiments, a program step includes a command to execute, by the one or more processors, a down-sample command configured to reduce a number of time series data points in the setpoint historical data before generation of the pseudo model. In some embodiments, the one or more setpoint values includes a mean value and/or standard deviation value.” [0032] as published states “In some embodiments, the down sampled data is obtained by taking the mean and standard deviation of the historical operational data for a component operating at a setpoint.” Lee discloses the limitations based on broadest reasonable interpretation in light of the specification – See par 25 - The nodes 310, 312, 314 are selected to execute the distributed processing operation, e.g., MapReduce operation on the data in parallel, as shown at 316. The distributed processing operation extracts operation features, which are used to construct a target table. For example, a map operation may computes characteristics such as the average, minimum, maximum, and/or standard deviation values of all the products processed, and of all the operations features at each node (e.g., at 310, 312 and 314). A reduce operation may aggregate the values computed at each node, for example, compute an aggregated characteristics computed at a subset of nodes, e.g., the nodes (e.g., 310, 312 and 314).
For example, the map operation may compute an average, minimum, maximum and standard deviation associated with temperature, pressure and power associated with the equipment during the operation, and the reduce operation may aggregate the average, minimum, maximum and standard deviation, computed in the plurality of nodes or servers, associated with temperature, pressure and power associated with the equipment during the operation. See par 28 - The above-described processing reduces data size in memory and allows for faster processing and detection of equipment risk failure. See par 31 - The feature calculation may also be performed for minimum, maximum, and or standard deviation. This computation of features for each record of maintenance data may be performed in one embodiment against each node of the operations data. Each node stores smaller size of operations data than before the distribution.) It would have been obvious to combine Herring and Lee for the same reasons as discussed with regards to claims 1 and 5.

Concerning claim 6, Herring and Lee disclose: The system of claim 2, wherein the one or more setpoint values includes a mean value and/or standard deviation value (Herring – see col. 21, lines 40-62 - According to one embodiment, the threshold is generated based on computation of a normal distribution about the set-point 124. In at least one embodiment, pairs of threshold values are determined for a set-point 124, each pair of threshold values representing one standard deviation from the set-point 124 (e.g., or from a previous pair of threshold values); see col. 30, lines 31-43 - In various embodiments, the system 100 calculates, from the optimal runtime datasets of the one or more training datasets, a median value and a standard deviation associated with each data source 120 for each grade or type of product processed by the machine 104; see col. 34, lines 35-57 - In one or more embodiments, the window 609 includes, but is not limited to, the label 603B associated with the data source 120, a timestamp 611 corresponding to the time at which readings associated with the data source 120 were sampled, a current value 613 of the data source 120, a historical value 615 computed from an average value of the data source 120 computed from one or more optimal runtime datasets, a set-point 124 associated with the data source 120, and other information, such as, for example, remedial data and one or more corrective actions associated with the data source 120).

Concerning claim 7, Herring discloses: The system of claim 6, wherein the system is configured to set an optimum setpoint range to the standard deviation value (Herring – see col. 21, lines 49-57 - In one or more embodiments, a threshold or set of thresholds or ranges is generated for each set-point 124 that represents boundary conditions for the values of readings from the data source 120 associated with the set-point 124. According to one embodiment, the threshold is generated based on computation of a normal distribution about the set-point 124. In at least one embodiment, pairs of threshold values are determined for a set-point 124, each pair of threshold values representing one standard deviation from the set-point 124).

Concerning claim 8, Herring discloses: The system of claim 7, wherein the optimum setpoint range is less than the setpoint range limit (Herring – see col.
21, lines 49-57 - In one or more embodiments, a threshold or set of thresholds or ranges is generated for each set-point 124 that represents boundary conditions for the values of readings from the data source 120 associated with the set-point 124. According to one embodiment, the threshold is generated based on computation of a normal distribution about the set-point 124. In at least one embodiment, pairs of threshold values are determined for a set-point 124, each pair of threshold values representing one standard deviation from the set-point 124; see col. 29, lines 45-63 - For example, one or more components of the given machine may experience greater failure rates (e.g., and, thus, more paper breaks) during colder seasons, thus the optimization system 103 may determine the minimum speed to be lower during colder seasons to compensate for the increase in component failure rates (e.g., and attempt to reduce the frequency of component failures). In at least one embodiment, to account for such drifts and shifts in machine performance, the optimization system 103 inspects current readings of one or more external sensors 102; see col. 33, lines 46-67, col. 34, lines 1-28 - In one example, the intensity scale 607 uses a rainbow-based color scheme in which increasing deviation corresponds to changes in color from blue (e.g., corresponding to the set-point value 124) to red, passing through the other colors of the rainbow as the deviations increases towards a magnitude associated with the red color. In one example, an intensity scale 607 ranges in value between 0.0 and 1.0, and each value in the intensity scale 607 represents a level of deviation (e.g., a standard deviation normalized to the 0.0-1.0 scale) from a set-point 124.)

Concerning claim 10, Herring discloses: The system of claim 9, wherein the one or more non-transitory computer readable media further comprise program instructions stored thereon that when executed cause the one or more computers to: correlate, by the one or more processors, each setpoint to the one or more key performance indicators (KPIs) (Herring – see col. 16, lines 29-49 - In some embodiments, a data source 120 includes multiple rankings 122, in which each ranking 122 is associated with a set of external conditions (e.g., a particular time period, time of year, level of machine wear, etc.). In at least one embodiment, the data sources 120 include set-points 124 that provide ideal values at which the data sources 120 are associated with optimal runtime performance on a corresponding machine 104. col. 19, lines 1-14 - the importance scores provide a metric for assessing a degree to which deviations in values of a particular data source 120 from a set-point are predictive for a pre-break performance state of the machine 104. see col. 19, lines 29-46 - At step 206, a correlation score is generated for each variable of the trained model 118 and is associated with the data source 120 corresponding thereto. See col. 20, lines 4-16 - the process 200 may be performed to obtain a plurality of data sources 120 associated with variables (generated from readings of the data sources 120) that are determined to be most predictive of an optimal performance state or a pre-error performance state (or other performance state) of the machine 104. See col.
28, lines 40-58 - At step 406, the plurality of optimal runtime datasets are further refined based on identifications of target time periods within the datasets in which the associated machine 104 performed within a target conditioned weight.); return, by the one or more processors, one or more setpoint values that include the one or more setpoints during the one or more setpoint timeframes (Herring see col. 18, lines 41-53 - Following step 202, the optimization system 103 and data processing system 105 execute a classification training and execution process 300 (FIG. 3). The classification training process 300 results in a trained model 118 for classifying datasets as either optimal runtime or pre-error (or, in some embodiments, pre-break); see col. 19, lines 19-67 – at step 206, correlation score is computed; at step 208, then compute combined scores; In various embodiments, the combined scores are generated by multiplying, for each variable, the mutual information correlation value by the corresponding importance value; then generate rankings of data sources 120 to be used in modeling process for generating set-points; see col. 21, lines 30-39 - At step 212, a set-point 124 is generated for each of the top-ranked data sources 120).

Concerning claim 11, Herring also discloses that if greater “failure rates” occur in colder seasons, the optimization system can compensate by lowering the minimum speed to reduce the frequency of failures (See col. 29, lines 45-63). Applicant’s specification explains that the “pseudo model” refers to a survival model/matrix (Applicant’s specification, [0011] states “setpoint optimizer unit (SOU)… is configured to automatically adjust one or more process setpoints based on the results of the pseudo model (survival model/matrix)”). Lee discloses: The system of claim 10, wherein the one or more non-transitory computer readable media further comprise program instructions stored thereon that when executed cause the one or more computers to: generate, by the one or more processors, a survival model (Applicant’s specification, [0011] states “setpoint optimizer unit (SOU)… is configured to automatically adjust one or more process setpoints based on the results of the pseudo model (survival model/matrix).” Lee discloses the limitation based on broadest reasonable interpretation in light of the specification – see par 32 - Referring back to FIG. 1, at 114, risk failure analysis for the equipment may be performed based on the operation features stored in the target table. For instance, based on the data in the target table, failure risk analysis may be performed. An example of a failure risk analysis may include a survival analysis such as Cox regression; see par 41 - The survivor function is the probability of survival as a function of time. It gives the probability that the survival time of an individual exceeds a defined value; see par 43, Fig. 4 - At 408, equipment failure risk analysis in one embodiment may optimally set controllable process variables that extend the life of the equipment. For example, the analysis may include computing the desired temperature of semiconductor fabricating chamber (e.g., X degrees Celsius) and pressure (e.g., Y atm). One or more signals comprising the control variable value may be sent to the process controller 410 (also referred to as an operations controller); see FIG.
4, see par 45 - One or more of the hardware processors 502 may perform failure risk analysis (e.g., survival analysis such as Cox regression) 512, for example, as described with reference to FIG. 4). Herring and Lee together disclose: Generate, by the one or more processors, a survival model (Lee above) that includes an optimum value that includes an optimum highest value for each of the one or more setpoints that correlate to the highest KPI values and/or an optimum longest value for each of the one or more setpoints that correlate to a longest duration of meeting or exceeding a predetermined KPI value (Herring see col. 19, lines 48-67 - the multiplication results in a combined index of variables, which may be used for ranking and identifying a plurality of data sources 120 to be included in a visualization 126 for monitoring performance of the machine 104 and to be used in a modeling process for generating set-points 124 for the plurality of data sources 120; See col. 29, lines 45-67 - For example, one or more components of the given machine may experience greater failure rates (e.g., and, thus, more paper breaks) during colder seasons, thus the optimization system 103 may determine the minimum speed to be lower during colder seasons to compensate for the increase in component failure rates (e.g., and attempt to reduce the frequency of component failures). In at least one embodiment, to account for such drifts and shifts in machine performance, the optimization system 103 inspects current readings of one or more external sensors 102, such as, for example, weather-related sensors 106, and accordingly adjusts the determined minimum optimal speed; See col. 23, lines 20-46 - In the same example, in response to detecting the deviation, the system 100 retrieves historical performance data of the paper making machine, identifies one or more time periods therein in which the historical readings of the moisture temperature significantly deviated from the set-point, and retrieves remedial data associated with the one or more time periods. Continuing this example, the remedial data indicates that corrective actions were taken to return the moisture readings to levels within the predetermined threshold; see col. 32, lines 59-65 - Referring to FIG. 6, shown is an exemplary visualization 126A displaying various details regarding current and previous performance states of a machine 104. The visualization 126A includes an intensity map 601 relating current values of data sources 120 to set-points 124 generated from one or more processes described herein. For alternative of “longest value” – see also col. 21, lines 30-40 - In at least one embodiment, the trained multivariate model 118 is executed and outputs an optimal value for each data source 120, the optimal value representing a value most consistently demonstrated by the data source 120 during a period of optimal performance by the machine 104 associated therewith. See col. 28, lines 40-50 - At step 406, the plurality of optimal runtime datasets are further refined based on identifications of target time periods within the datasets in which the associated machine 104 performed within a target conditioned weight. In various embodiments, the optimization system 103 further refines the one or more optimal runtime datasets). It would have been obvious to combine Herring and Lee for the same reasons as discussed with regards to claim 1. Claims 12-18 are rejected under 35 U.S.C. 
103 as being unpatentable over Herring (US 11,181,872) and Lee (US 2017/0344411), as applied to claims 1-3, 6, and 9-10 above, and further in view of Garrity (US 2020/0084601).

Concerning claim 12, Garrity discloses: The system of claim 11, further comprising wherein the one or more non-transitory computer readable media further comprise program instructions stored thereon that when executed cause the one or more computers to: execute, by the one or more processors, a data validation of the survival model using curve fitting data modeling techniques. (Garrity – See par 84-85 - FIG. 5A illustrates estimated base hazard rate from observed physical asset lifetimes using a Weibull distribution. The base hazard rate can be obtained by, e.g., fitting a parametric Weibull model using maximum likelihood estimation as shown in FIG. 5A. As the drawing shows, this model fit corresponds to a monotonically increasing hazard function, consistent with failure probability increasing with physical asset age. See par 90 - Metrics can be selected based on considerations for prioritizing maintenance and minimizing in-service failure rates (e.g., remediation module 128). After fitting the model on observed training data, these metrics can be calculated on held-out validation data to generate an unbiased measure of model performance (e.g., validation module 134)). Herring, Lee, and Garrity are analogous art as they are directed to analyzing performance of equipment/assets (see Herring Abstract – set-point recommendations; Lee Abstract; Garrity – par 18 – predict future performance of assets). Herring discloses that if greater “failure rates” occur in colder seasons, the optimization system can compensate by lowering the minimum speed to reduce the frequency of failures (See col. 29, lines 45-63) and also validating a model (See col. 27, lines 20-34). Lee discloses using a “survival analysis” and failure risk analysis for setting controllable process variables (See par 32, 41, 43). Garrity improves upon Herring and Lee by disclosing using a “survival model” for considering maintenance while also maximizing makespan, and managing usage in view of constraints of availability and cost and using curve fitting. One of ordinary skill in the art would be motivated to further include the use of curve fitting for validation to efficiently improve upon the validation of a model in Herring and the use of a survival analysis for setting controllable process variables in Lee.
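Editor's note: to illustrate the curve-fitting validation step attributed to Garrity, here is a small scipy sketch fitting a Weibull distribution to asset lifetimes by maximum likelihood. The lifetime values are invented, and scipy's `weibull_min` is an editorial choice of tool, not something Garrity names.

```python
import numpy as np
from scipy import stats

# Invented asset lifetimes in hours, standing in for observed failure data.
lifetimes = np.array([812.0, 1040.0, 655.0, 1290.0, 980.0, 1475.0, 720.0, 1120.0])

# Maximum-likelihood Weibull fit; floc=0 fixes the location at zero so only
# shape and scale are estimated, the usual convention for lifetime data.
shape, loc, scale = stats.weibull_min.fit(lifetimes, floc=0)

# shape > 1 implies a monotonically increasing hazard rate, i.e., failure
# probability rises with asset age, matching Garrity's FIG. 5A description.
print(f"shape={shape:.2f}, scale={scale:.1f}, increasing hazard: {shape > 1}")
```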
Concerning claim 13, Herring, Lee, and Garrity disclose: The system of claim 12, wherein the one or more non-transitory computer readable media further comprise program instructions stored thereon that when executed cause the one or more computers to: adjust, by the one or more processors, one or more process setpoints based on the results (Herring - see col. 13, lines 35-42 - the computing device 107 includes a Distributed Control System (DCS) interface (e.g., in the form of a virtual portal and/or a set of physical controls), in which a user may automatically or manually, or by combination, adjust one or more set-points 124 or other parameters related to monitoring and control of a machine 104 or machine environment 101; see col. 14, lines 56-67, and col. 15, lines 1-4 - in at least one embodiment, the data historian 113 is configured to communicate with a Distributed Control System (DCS), such that the DCS may be provided with set-points 124 in various control schemes for controlling one or more machines 104 (for example, Proportional-Integral-Derivative ("PID") schemes); see col. 23, lines 1-19 - the corrective actions may include information providing actions that may return a machine 104 to a state of optimal performance. The corrective actions may include, but are not limited to, automatically updating system operating conditions by providing command instructions or signals to machine operating systems; see col. 34, lines 57-67 - via the visualization 126A, the optimization system 103 enables real-time visualization of machine performance, wherein the visualization scheme may support a Six Sigma approach to monitoring, correcting and controlling machine performance; Lee, as above - see par 43, Fig. 4 - at 408, equipment failure risk analysis in one embodiment may optimally set controllable process variables that extend the life of the equipment). It would have been obvious to combine Herring, Lee, and Garrity for the same reasons as discussed with regards to claim 12.

Concerning claim 14, Herring, Lee, and Garrity disclose: The system of claim 12, wherein the system does not include a first-principles equation model to determine the optimum value (Herring, Lee, and Garrity do not disclose a "first-principles" model/equation). The claim is viewed as obvious in light of Herring, Lee, and Garrity not disclosing a system that uses a first-principles equation model to determine the optimum value.
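The claim 13 citations lean on Herring's mention of providing set-points to a DCS under PID control schemes (col. 14-15). For context, here is a textbook discrete PID loop driving a measurement toward a supplied set-point; the gains, plant dynamics, and numbers are invented, and this is a generic sketch rather than an implementation from Herring.

```python
# Generic discrete PID controller (textbook form, not code from the references).
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint          # e.g. an optimum value from the model
        self._integral = 0.0
        self._prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self._integral += error * dt
        derivative = 0.0 if self._prev_error is None else (error - self._prev_error) / dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative

# Toy first-order process: the control output nudges the machine speed
# toward the set-point each cycle.
pid, speed = PID(kp=0.6, ki=0.1, kd=0.05, setpoint=55.0), 50.0
for _ in range(20):
    speed += 0.5 * pid.update(speed, dt=1.0)
print(round(speed, 2))  # about 55.1 here, converging on the 55.0 set-point
```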
Concerning claim 15, Herring also discloses that if greater "failure rates" occur in colder seasons, the optimization system can compensate by lowering the minimum speed to reduce the frequency of failures (see col. 29, lines 45-63). Applicant's specification explains that "pseudo model" refers to a survival model/matrix: paragraph [0011] states "setpoint optimizer unit (SOU)… is configured to automatically adjust one or more process setpoints based on the results of the pseudo model (survival model/matrix)". Lee discloses: The system of claim 12, wherein the system does not include an iteration of a first-principles equation to determine the optimum value (Applicant's Abstract - the system is able to save computer resources by reducing processing power through the use of a survival matrix as opposed to an iterative model. Lee discloses the limitation under the broadest reasonable interpretation in light of the specification - see par 32 - referring back to FIG. 1, at 114, risk failure analysis for the equipment may be performed based on the operation features stored in the target table. For instance, based on the data in the target table, failure risk analysis may be performed. An example of a failure risk analysis may include a survival analysis such as Cox regression; see par 41 - the survivor function is the probability of survival as a function of time. It gives the probability that the survival time of an individual exceeds a defined value; see par 43, Fig. 4 - at 408, equipment failure risk analysis in one embodiment may optimally set controllable process variables that extend the life of the equipment; and Garrity - both disclose "survival" models and neither discloses "iteration" models). The claim is viewed as obvious in light of Herring, Lee, and Garrity not disclosing use of a first-principles equation model.

Concerning claim 16, Herring and Lee disclose: The system of claim 12, wherein the system does not include an iteration model to determine the optimum value (Applicant's specification - the Abstract states "In some embodiments, the system is able to save computer resources by reducing processing power through the use of a survival matrix as opposed to an iterative model," and [0022] as published states "In some embodiments, the system does not include an iteration model to determine the optimum value. In some embodiments, the survival model includes a mean and a standard deviation for each of the one or more setpoints." Lee discloses the limitations under the broadest reasonable interpretation in light of the specification - see par 25 - the nodes 310, 312, 314 are selected to execute the distributed processing operation, e.g., a MapReduce operation on the data in parallel, as shown at 316. The distributed processing operation extracts operation features, which are used to construct a target table. For example, a map operation may compute characteristics such as the average, minimum, maximum, and/or standard deviation values of all the products processed, and of all the operations features at each node (e.g., at 310, 312 and 314). A reduce operation may aggregate the values computed at each node, for example, compute aggregated characteristics at a subset of nodes (e.g., nodes 310, 312 and 314). For example, the map operation may compute an average, minimum, maximum and standard deviation associated with temperature, pressure and power associated with the equipment during the operation, and the reduce operation may aggregate the average, minimum, maximum and standard deviation, computed in the plurality of nodes or servers, associated with temperature, pressure and power associated with the equipment during the operation; see par 28 - the above-described processing reduces data size in memory and allows for faster processing and detection of equipment risk failure; see par 31 - the feature calculation may also be performed for minimum, maximum, and/or standard deviation. This computation of features for each record of maintenance data may be performed in one embodiment against each node of the operations data. Each node stores a smaller size of operations data than before the distribution). It would have been obvious to combine Herring, Lee, and Garrity for the same reasons as discussed with regards to claims 1 and 12.
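Since the claim 16 mapping rests on Lee's par 25 map/reduce feature extraction, a minimal sketch of that pattern may be useful. The node data below is invented, and the merge arithmetic (combining per-node count, sum, and sum of squares) is a standard technique rather than code taken from Lee.

```python
# Sketch of distributed feature extraction in the style Lee's par 25 describes:
# each node "maps" its slice of operations data to a small summary tuple, and
# a "reduce" step merges the tuples. avg/min/max/std can be merged exactly
# without centralizing the raw readings.
import math

def map_stats(values):                     # runs independently on each node
    return (len(values), sum(values), sum(v * v for v in values),
            min(values), max(values))

def reduce_stats(parts):                   # merges the per-node tuples
    n  = sum(p[0] for p in parts)
    s  = sum(p[1] for p in parts)
    ss = sum(p[2] for p in parts)
    mean = s / n
    std = math.sqrt(max(ss / n - mean * mean, 0.0))   # population std
    return {"avg": mean, "min": min(p[3] for p in parts),
            "max": max(p[4] for p in parts), "std": std}

# Invented temperature readings split across three nodes.
nodes = [[71.2, 70.8, 73.0], [69.5, 74.1], [72.2, 70.0, 71.7]]
print(reduce_stats([map_stats(chunk) for chunk in nodes]))
```

The design point matching Lee's par 28: only small summary tuples leave each node, which is what reduces data size in memory and speeds up downstream failure-risk analysis.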
Concerning claim 17, Lee and Garrity disclose: The system of claim 12, wherein the survival model includes a mean and a standard deviation for each of the one or more setpoints (Lee, par 45 - by invoking a distributed processing operation, e.g., MapReduce, operation features (e.g., average, minimum, maximum, standard deviation of temperature, pressure, voltage, current associated with the processed operation) are extracted and built into a target data table 508. One or more of the hardware processors 502 may perform failure risk analysis (e.g., survival analysis such as Cox regression) 512, for example, as described with reference to FIG. 4; see also Garrity, par 97 - the analytics module 122 can identify features or parameters (e.g., "standard deviations from the mean") during and/or before the period when the hazard function and degradation function first increased; par 34 - the survival module 126 can derive and/or re-derive survival functions based on the hazard function(s)). It would have been obvious to combine Herring, Lee, and Garrity for the same reasons as discussed with regards to claims 1 and 12.
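Claim 17 requires the survival model to include a mean and a standard deviation for each setpoint. Purely as an illustration of that limitation (the grouping scheme, names, and numbers below are assumptions, not drawn from the specification or the references), such a per-setpoint table could be built like this:

```python
# Hypothetical construction of a per-setpoint mean/std table ("survival
# matrix"); all names and values are invented for illustration.
import statistics
from collections import defaultdict

readings = [  # (setpoint name, value observed during KPI-achieved timeframes)
    ("drum_speed",   55.2), ("drum_speed",   54.8), ("drum_speed",   55.5),
    ("steam_temp",  181.0), ("steam_temp",  179.4), ("steam_temp",  180.2),
]

by_setpoint = defaultdict(list)
for name, value in readings:
    by_setpoint[name].append(value)

survival_matrix = {
    name: {"mean": statistics.mean(vals), "std": statistics.stdev(vals)}
    for name, vals in by_setpoint.items()
}
print(survival_matrix)
```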
Concerning claim 18, Herring discloses: The system of claim 12, wherein the one or more non-transitory computer readable media further comprise program instructions stored thereon that when executed cause the one or more computers to: generate, by the one or more processors, a graphical user interface, the graphical user interface including one or more bar charts, the one or more bar charts depicting a duration for each of the one or more optimum values (Herring - see col. 13, lines 17-50 - a computing device 107 includes a display 117 for rendering various screens, interfaces, etc.; see also col. 21, lines 30-40 - in at least one embodiment, the trained multivariate model 118 is executed and outputs an optimal value for each data source 120, the optimal value representing a value most consistently demonstrated by the data source 120 during a period of optimal performance by the machine 104 associated therewith; see col. 21, lines 49-57 - in one or more embodiments, a threshold or set of thresholds or ranges is generated for each set-point 124 that represents boundary conditions for the values of readings from the data source 120 associated with the set-point 124; see col. 28, lines 40-50 - at step 406, the plurality of optimal runtime datasets are further refined based on identifications of target time periods within the datasets in which the associated machine 104 performed within a target conditioned weight. In various embodiments, the optimization system 103 further refines the one or more optimal runtime datasets; see col. 32, lines 59-65 - referring to FIG. 6, shown is an exemplary visualization 126A displaying various details regarding current and previous performance states of a machine 104. The visualization 126A includes an intensity map 601 relating current values of data sources 120 to set-points 124 generated from one or more processes described herein; see col. 33, lines 37-60 - the intensity map includes a time axis 602 with intervals 610; for each label 603, the intensity map 601 includes a row 604 including one or more sampling frames 606. According to one embodiment, each sampling frame 606 is rendered with a color or other visual indicator (shown as patterning in FIG. 6) based on a value of the data source 120 associated with the label 603 of the row 604 as compared to an intensity scale 607. According to one embodiment, the intensity scale 607 relates particular wavelengths of color to deviation from a set-point 124 or from a median optimal runtime value; see also Lee, FIG. 3 - showing different operations with different "lifespans" (e.g., 40 days, 126 days, 260 days) in combination with different feature values; see par 16 - equipment failure risk analysis may also be used to optimally set controllable process variables that extend the life of the equipment). It would have been obvious to combine Herring and Lee for the same reasons as discussed with regards to claim 1. Lee improves upon Herring and Garrity by disclosing showing the duration (e.g., in days) for different target amounts of features that are process variables (see par 16). One of ordinary skill in the art would be motivated to further include showing features (e.g., process variables) along with the duration to efficiently improve upon FIG. 6 of Herring and the identifications of time periods (col. 21, 28) associated with setpoints in Herring.
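Claim 18's GUI limitation (bar charts depicting a duration for each optimum value) is easy to picture with a small plotting sketch. The labels and durations below are invented, loosely echoing the 40/126/260-day lifespans the Examiner points to in Lee's FIG. 3; only the matplotlib calls are real.

```python
# Illustrative bar chart of duration per optimum setpoint value
# (invented data; not a reproduction of any figure in the references).
import matplotlib.pyplot as plt

optimum_values = ["speed=55", "temp=180", "pressure=2.1"]
duration_days  = [126, 260, 40]

fig, ax = plt.subplots()
ax.bar(optimum_values, duration_days)
ax.set_ylabel("Duration at optimum value (days)")
ax.set_title("Duration per optimum setpoint")
fig.savefig("optimum_durations.png")
```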
Response to Arguments

Applicant's arguments filed 2/9/26 have been fully considered, but they are not persuasive and/or are moot in view of the new rejections.

With regards to the § 103 rejections, Applicant argues that Herring does not disclose identifying time intervals as achieved based on KPI performance, and therefore does not disclose the claim limitations "generate a set of setpoint timeframes comprising one or more KPI achieved timeframes that include where one or more KPIs are above a predetermined value during the operational timeframe" and "exclude one or more non-KPI achieved timeframes where the one or more KPIs are below the predetermined value during the operational timeframe," because "removing columns having low coefficients of variance" is "not an exclusion," does "not indicate whether a KPI meets a predetermined threshold," and is "not conditioned on KPI achievement or failure." Remarks, page 8. In response, Examiner respectfully disagrees based on the updated citations to col. 28-29, where the optimal datasets have time periods with a target conditioned weight and output, and pre-error and post-error intervals are removed.

Applicant argues that Lee does not disclose "generate, by the one or more processors, a pseudo process model in response to receiving the one or more equipment setpoints and the one or more KPIs" because "statistical aggregation is not pseudo process modeling tied to setpoint timeframes." Remarks, page 9. Examiner respectfully disagrees. The specification at [0011] states "setpoint optimizer unit (SOU)… is configured to automatically adjust one or more process setpoints based on the results of the pseudo model (survival model/matrix)". Lee discloses using "survival analysis" (see par 32) and a survivor function (see par 41), where failure risk analysis may optimally set controllable process variables that extend the life of the equipment. Herring already discloses having different periods with setpoints (col. 18, lines 15-33 - operation data of a particular machine for a period; col. 20, lines 61-67 - historical readings from the most-predictive data sources 120 that are determined to correspond to a time interval of optimal performance by the machine 104; col. 28). In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN R GOLDBERG, whose telephone number is (571) 270-7949. The examiner can normally be reached 8:30 AM - 4:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anita Coupe, can be reached at 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/IVAN R GOLDBERG/
Primary Examiner, Art Unit 3619

Prosecution Timeline

Nov 29, 2022
Application Filed
Aug 10, 2024
Non-Final Rejection — §103
Jan 15, 2025
Response Filed
Feb 26, 2025
Final Rejection — §103
Sep 03, 2025
Request for Continued Examination
Sep 19, 2025
Response after Non-Final Action
Oct 14, 2025
Non-Final Rejection — §103
Feb 09, 2026
Response Filed
Mar 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by the same examiner on similar technology

Patent 12596970
SYSTEM AND METHOD FOR INTERMODAL FACILITY MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12591826
SYSTEM FOR CREATING AND MANAGING ENTERPRISE USER WORKFLOWS
2y 5m to grant Granted Mar 31, 2026
Patent 12586020
DETERMINING IMPACTS OF WORK ITEMS ON REPOSITORIES
2y 5m to grant Granted Mar 24, 2026
Patent 12579493
SYSTEMS AND METHODS FOR CLIENT INTAKE AND MANAGEMENT USING HIERARCHICAL CONFLICT ANALYSIS
2y 5m to grant Granted Mar 17, 2026
Patent 12555055
CENTRALIZED ORCHESTRATION OF WORKFLOW COMPONENT EXECUTIONS ACROSS SOFTWARE SERVICES
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

4-5
Expected OA Rounds
35%
Grant Probability
72%
With Interview (+36.9%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 365 resolved cases by this examiner. Grant probability derived from career allow rate.
