DETAILED ACTION

This office action is in response to the application filed on October 6, 2023.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Specification

The disclosure is objected to because of the following informalities:

[0020]: Language “Each asset 110 A, 110 B, 110 C can include a machine formed of multiple components 112A, 112B, 11 2C, 11 2D. For example, assets 110 A, 110 B, 110 C can include a compressor, a pump, a pump motor, a compressor, a motor, a heat exchanger, a turbine, a turbomachinery, or other machines …” should read “Each asset 110A, 110B, 110C can include a machine formed of multiple components 112A, 112B, 112C, 112D. For example, assets 110A, 110B, 110C can include [[ a compressor, ]] a pump, a pump motor, a compressor, a motor, a heat exchanger, a turbine, a turbomachinery, or other machines …” in order to correct for minor informalities (e.g., remove repeated language).

[0021]: Language “In some embodiments, the computing system of the detection module 116 can include a data processor, and a memory storing non-transitory, computer-readable instructions, which when executed cause the data processor cause the processor to perform operations described herein” should read “In some embodiments, the computing system of the detection module 116 can include a data processor, and a memory storing non-transitory, computer-readable instructions, which when executed cause the data processor [[ cause the processor ]] to perform operations described herein” in order to correct for minor informalities.
[0037]: Language “For example, an average RSD can be calculated for each of the window using all the available features” should read “For example, an average RSD can be calculated for each of the [[ window ]] windows using all the available features” in order to correct for minor informalities.

[0051]: Language “In some implementations, a confidence level of the identified asset anomaly can be determined and if the confidence level exceed a threshold, a trigger is automatically generated to minimize a risk of asset malfunction” should read “In some implementations, a confidence level of the identified asset anomaly can be determined and if the confidence level [[ exceed ]] exceeds a threshold, a trigger is automatically generated to minimize a risk of asset malfunction” in order to correct for minor informalities. Appropriate correction is required.

Claim Objections

Claim 1 is objected to because of the following informalities: Claim language “infer a severity of an anomaly based on the deviations across the derived measurements over time, the severity being used to generate an alert” should read “infer a severity of an anomaly based on the deviations across the [[ derived ]] measurements over time, the severity being used to generate an alert” in order to provide appropriate antecedent basis. Appropriate correction is required.

Claim 2 is objected to because of the following informalities: Claim language should read “The method of claim 1, further comprising: controlling operation of the industrial asset based on one or more of the predicted asset data, the deviation data and the deviations across the measurements over time” in order to provide appropriate antecedent basis. Appropriate correction is required.
Claim 3 is objected to because of the following informalities: Claim language should read “The method of claim 1, wherein the sensor is affixed to [[ an ]] the industrial asset in an industrial environment and the data further characterizes a state of health of the industrial asset” in order to provide appropriate antecedent basis. Appropriate correction is required.

Claim 5 is objected to because of the following informalities: Claim language should read “The method of claim 1, further comprising determining one or more states of the industrial asset based on the cleaned training data” in order to provide appropriate antecedent basis. Appropriate correction is required.

Claim 9 is objected to because of the following informalities: Claim language should read “The method of claim [[ 7 ]] 8, wherein the one or more dynamic thresholds are determined based on the set standard deviation range to exclude an anomalous region” in order to provide appropriate dependency for antecedent basis. Appropriate correction is required.

Claim 14 is objected to because of the following informalities: Claim language “generate a predicted asset data using the model” should read “[[ generate ]] generating a predicted asset data using the model” in order to correct for minor informalities. Claim language “infer a severity of an anomaly based on the deviations across the derived measurements over time, the severity being used to generate an alert” should read “[[ infer ]] inferring a severity of an anomaly based on the deviations across the [[ derived ]] measurements over time, the severity being used to generate an alert” in order to correct for minor informalities and provide appropriate antecedent basis. Appropriate correction is required.
Claim 15 is objected to because of the following informalities: Claim language should read “The system of claim 14, wherein the operations comprise: controlling operation of the industrial asset based on [[ the ]] one or more of the predicted asset data, the deviation data and the deviations across the measurements over time” in order to provide appropriate antecedent basis. Appropriate correction is required.

Claim 16 is objected to because of the following informalities: Claim language should read “The system of claim 14, wherein the sensor is affixed to [[ an ]] the industrial asset in an industrial environment and the data further characterizes a state of health of the industrial asset” in order to provide appropriate antecedent basis. Appropriate correction is required.

Claim 18 is objected to because of the following informalities: Claim language “The system of claim 14, wherein the data processor is further configured to perform operations comprising:” should read “The system of claim 14, wherein the data processor is further configured to perform the operations comprising:” in order to provide appropriate antecedent basis. Claim language “determining one or more states of the asset based on the cleaned training data” should read “determining one or more states of the industrial asset based on the cleaned training data” in order to provide appropriate antecedent basis. Appropriate correction is required.
Claim 19 is objected to because of the following informalities: Claim language should read “The system of claim 18, wherein the data processor is further configured to provide one or more of the data characterizing the measurement data values, the cleaned training data, the one or more states of the industrial asset, the portion of the data for training the model, the one or more dynamic thresholds, the predicted asset data, the deviation data and the deviations across the measurements over time to a graphical user interface display” in order to provide appropriate antecedent basis. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites “processing the cleaned training data to generate derived measurements; training a model using the cleaned training data and the derived measurements” which is unclear as to what these “derived measurements” refer to. Independent claim 14 recites similar subject matter, with none of the dependent claims clarifying these features.
According to the original disclosure: “In one aspect, a method includes: receiving data characterizing measurement data values acquired by a sensor coupled to an industrial asset, processing the data to determine cleaned training data that excludes derived measurements, training a model using the cleaned training data …” ([0005]); “In some embodiments, selecting the portion of the data for training the model can include removing outliers from the data to generate the cleaned training data” (see [0007]; see also claim 7); “In another aspect, a system includes receiving data characterizing measurement data values acquired by a sensor coupled to an industrial asset, processing the data to determine cleaned training data that excludes derived measurements …” ([0008]; see also [0010]); “At 312, outliers are removed from the cleaned training data portion to generate a consistent training data portion. The mean and standard deviations are calculated and data within a set standard deviation range (e.g., +/- 6) can be selected to exclude outliers” ([0041]); “At 314, related measurements are grouped using a correlation algorithm (e.g., Pearson correlation, cross correlation or autocorrelation). The values within the consistent training data portion with a correlation coefficient above a set threshold can be grouped to generate derived data including a grouped training data portion” ([0042]); and “At 316, predictive models are trained using data tags. The grouped training data portion and the tags can be provided as input to a model to train the model to predict a correlation of each tag with each group of the grouped training data portion” ([0043]; see [0033] regarding “tags defining a data (process variable) type”).
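For illustration only, the cleaning and grouping steps quoted above (removing outliers outside a set standard deviation range, then grouping measurements whose Pearson correlation coefficient exceeds a set threshold) can be sketched as follows; the function names, the NumPy implementation, and the 0.9 correlation threshold are illustrative assumptions, not part of the record:

```python
import numpy as np

def clean_training_data(samples, n_std=6.0):
    """Keep rows whose every column lies within +/- n_std standard
    deviations of that column's mean (cf. the set standard deviation
    range, e.g. +/- 6, described at [0041])."""
    mean, std = samples.mean(axis=0), samples.std(axis=0)
    mask = np.all(np.abs(samples - mean) <= n_std * std, axis=1)
    return samples[mask]

def group_correlated_measurements(samples, threshold=0.9):
    """Group measurement columns whose pairwise Pearson correlation
    coefficient exceeds a set threshold (cf. [0042]); the greedy
    grouping strategy here is a hypothetical choice."""
    corr = np.corrcoef(samples, rowvar=False)
    groups, assigned = [], set()
    for i in range(corr.shape[0]):
        if i in assigned:
            continue
        group = [i] + [j for j in range(i + 1, corr.shape[0])
                       if j not in assigned and abs(corr[i, j]) >= threshold]
        assigned.update(group)
        groups.append(group)
    return groups
```

Under this sketch, a gross outlier inflates the per-column standard deviation but still falls outside the ±6σ band and is dropped, while strongly correlated columns end up in one group.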
Therefore, the original disclosure describes removing outliers to generate cleaned training data, which excludes derived measurements, and using the cleaned training data for training the model, while also disclosing generating derived data by grouping the training data using a correlation algorithm, and using these data together with tags as inputs for training a model. For examination purposes, the claim language is interpreted as described in the prior art of record (see below).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.

Regarding claim 1, the examiner submits that under Step 1 of the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence (see also 2019 Revised Patent Subject Matter Eligibility Guidance) for evaluating claims for eligibility under 35 U.S.C. 101, the claim is to a process, which is one of the statutory categories of invention.

Continuing with the analysis, under Step 2A - Prong One of the test: the limitation “processing the data to determine cleaned training data” is a process that, under its broadest reasonable interpretation in light of the specification, covers performance of the limitation using mental processes and/or mathematical concepts to manipulate data (e.g., remove outliers; see specification at [0007], [0041]; see also claim 7). The limitation in the context of the claim mainly refers to performing a mental evaluation and/or applying mathematical concepts to filter data.
The limitation “processing the cleaned training data to generate derived measurements” is a process that, under its broadest reasonable interpretation in light of the specification, covers performance of the limitation using mental processes and/or mathematical concepts to manipulate data (see section “Claim Rejections - 35 USC § 112”). The limitation in the context of the claim mainly refers to performing a mental evaluation and/or applying mathematical concepts to manipulate data.

The limitation “generate a predicted asset data using the model” is a process that, under its broadest reasonable interpretation in light of the specification, covers performance of the limitation using mathematical concepts to manipulate data and obtain additional information (see specification at [0043]-[0044]). The limitation in the context of the claim mainly refers to applying mathematical concepts to transform data.

The limitation “determining deviation data in a new sample of asset data based on a difference between the new sample of the asset data to the predicted asset data” is a process that, under its broadest reasonable interpretation in light of the specification, covers performance of the limitation using mental processes and/or mathematical concepts to compare data and obtain a result (see specification at [0045]-[0046]). The limitation in the context of the claim mainly refers to performing a mental evaluation and/or applying mathematical concepts to compare data and obtain a result (i.e., deviation data).

The limitation “determining, based on the deviation data and historical deviation data, deviations across measurements over time” is a process that, under its broadest reasonable interpretation in light of the specification, covers performance of the limitation using mental processes and/or mathematical concepts to compare data and obtain a result (see specification at [0047]).
The limitation in the context of the claim mainly refers to performing a mental evaluation and/or applying mathematical concepts to compare data and obtain a result (i.e., deviations across measurements over time).

The limitation “infer a severity of an anomaly based on the deviations across the derived measurements over time” is a process that, under its broadest reasonable interpretation in light of the specification, covers performance of the limitation using mental processes and/or mathematical concepts to compare data and obtain additional information (see specification at [0047]). The limitation in the context of the claim mainly refers to performing a mental evaluation and/or applying mathematical concepts to compare data and obtain a result (i.e., severity of an anomaly).

Therefore, the claim recites a judicial exception under Step 2A - Prong One of the test. Furthermore, under Step 2A - Prong Two of the test, this judicial exception is not integrated into a practical application when considering the claim as a whole.
In particular, the additional elements recited in the claim: generally link the use of the judicial exception to a particular technological environment or field of use (e.g., anomaly detection; see MPEP 2106.05(h)); “receiving data characterizing measurement data values acquired by a sensor coupled to an industrial asset” adds extra-solution activities (e.g., mere data gathering, source/type of data to be manipulated) using elements recited at a high level of generality (i.e., a sensor coupled to an industrial asset) (see MPEP 2106.05(g)); “training a model using the cleaned training data and the derived measurements” adds the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see specification at [0043]; see MPEP 2106.05(f)); and “the severity being used to generate an alert” adds extra-solution activities (e.g., mere data outputting) (see MPEP 2106.05(g)). Accordingly, these additional elements, when considered individually and in combination, do not integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea when considering the claim as a whole. The claim is directed to a judicial exception under Step 2A of the test.
Additionally, under Step 2B of the test, the claim, when considered as a whole, does not include additional elements that, when considered individually and in combination, are sufficient to amount to significantly more than the judicial exception because the additional elements: generally link the use of the judicial exception to a particular technological environment or field of use (e.g., anomaly detection), which as indicated in the MPEP: “As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible “simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use.” Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application” (see MPEP 2106.05(h)); recite extra-solution activities (i.e., mere data gathering/outputting) using elements (i.e., a sensor coupled to an industrial asset) specified at a high level of generality, which as indicated in the MPEP: “Another consideration when determining whether a claim integrates the judicial exception into a practical application in Step 2A Prong Two or recites significantly more in Step 2B is whether the additional elements add more than insignificant extra-solution activity to the judicial exception. The term “extra-solution activity” can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. Extra-solution activity includes both pre-solution and post-solution activity.
An example of pre-solution activity is a step of gathering data for use in a claimed process … An example of post-solution activity is an element that is not integrated into the claim as a whole …” (see MPEP 2106.05(g)); and “Use of a machine that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data gathering step or in a field-of-use limitation) would not provide significantly more” (see MPEP 2106.05(b), section III); and append computer implementation (e.g., training a model; see specification at [0043]), which as indicated in the MPEP: “As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “ ‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’ ”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” (see MPEP 2106.05(f)). The claim, when considered as a whole, does not provide significantly more under Step 2B of the test. Based on the analysis, the claim is not patent eligible. Similarly, independent claim 14 is directed to a judicial exception (abstract idea) without significantly more as explained above with regards to claim 1.
With regards to the dependent claims, they are also directed to non-statutory subject matter because: they just extend the abstract idea of the independent claims by additional limitations (Claims 5-9, 11-13, 18 and 20) that, under the broadest reasonable interpretation in light of the specification, cover performance of the limitations using mental processes and/or mathematical concepts; and the additional elements recited in the dependent claims, when considered individually and in combination, refer to extra-solution activities (e.g., mere data gathering/outputting using a data type or source), generic computer components/implementation and/or field of use (Claims 2-4, 10, 15-17 and 19), which as indicated in the Office’s guidance does not integrate the judicial exception into a practical application (Step 2A - Prong Two) and/or does not provide significantly more (Step 2B) when considering the claimed invention as a whole.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C.
102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20200387818 A1), hereinafter ‘Chan’, in view of Sepe (US 20230106311 A1), hereinafter ‘Sepe’.

Regarding claim 1. Chan discloses: A method (Fig. 2A, item 100; [0005], [0018], [0081]: a method for building and deploying a model to optimize assets in an industrial process is presented) comprising: receiving data characterizing measurement data values acquired by a sensor coupled to an industrial asset (Fig. 2A, item 120; Fig. 2C, item 120-1; [0081], [0089]-[0091], [0121], [0149]: process data from assets is obtained using sensors (see also Fig. 1C, [0052], [0067], [0102], [0146], [0161])); processing the data to determine cleaned training data (Fig. 2A, item 120; Fig. 2C, item 120-2; [0081], [0092]-[0102]: data is cleansed (e.g., to remove outliers) (see also Fig. 1C, [0053], [0152])); processing the cleaned training data to generate derived measurements (Fig. 2A, item 120; Fig. 2C, item 120-4; [0081], [0107]-[0111]: cross-correlation is performed in the cleansed data to incorporate highly correlated inputs (see also Fig.
1C, [0054], [0070], [0153])); training a model using the cleaned training data and the derived measurements (Fig. 2A, item 130; [0081], [0118]-[0119]: models are built using the processed data (see also Fig. 1C, [0038], [0057]-[0058], [0070])); and generate a predicted asset data using the model (Fig. 2A, item 140; [0062]-[0063], [0081]: the model generates predictions of asset failure in process industry (see [0043], [0066], [0071], [0077])). Chan does not explicitly disclose: determining deviation data in a new sample of asset data based on a difference between the new sample of the asset data to the predicted asset data; determining, based on the deviation data and historical deviation data, deviations across measurements over time; and infer a severity of an anomaly based on the deviations across the derived measurements over time, the severity being used to generate an alert. Sepe teaches: “Then, an anomaly detection sub-step 74 is executed, which identifies if a signal or a group of signals received from the gas turbines 21, 22, and 23 of the fleet 2 has an anomaly or not. In some embodiments this methodology can be performed by machine learning algorithms, implementing supervised and/or unsupervised methodologies. Anomaly detection aims at identifying which signals have an anomalous features pattern with respect to the pattern of the healthy reference features. This detection can be done by using signal reconstruction techniques like AAKR (Auto-Associative Kernel Regression), used in this case to reconstruct the expected signal features with respect to the healthy features pattern available as configuration parameters. The comparison between the reconstructed features and the measured ones is performed by using a distance metric or a similarity metric (likelihood) and comparing them with respect to a threshold …
” ([0066]: anomaly detection in industrial assets is performed by comparing model predictions and measurements using a distance/similarity metric (analogous to deviation)); “The unsupervised approach can be executed to check periodically the accuracy of the classifier in order to establish if anomaly classes are stable and/or if new anomaly type has to be added to the class list. The clustering will be executed on the features extracted over the last timeframes (number of timeframes can be any). If clusters are centered with respect to anomalous/healthy clusters assigned during model setup, the model is stable and no update is needed; otherwise a new model setup will be executed” ([0076]: after anomaly detection and as part of anomaly classification, the model is periodically checked by determining whether anomalies are stable (analogous to determining deviations across measurements over time) or the model needs recalibration); and “In case of the anomaly classification 75 detects a (or at least one) sensor malfunction, a severity assignment 771 step is executed, where a severity is assigned to the anomalies identified as sensor malfunctions. The severity assignment 771 assigns a severity score to anomalies classified as sensor malfunction basing on anomaly type and time lasting and sensor redundancy … Also, frequency and time lasting of an anomaly will be considered to evaluate the severity” ([0089]: based on results of the anomaly classification, anomalies are assigned severity scores using information such as the time lasting of the anomaly (analogous to inferring a severity of an anomaly based on deviations across the measurements over time); the examiner notes that alarms are usually generated when anomalies are determined in order to alert workers).
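The reconstruction-and-compare approach in the Sepe passages above can be sketched, purely for illustration, as follows; the absolute-difference metric and the magnitude-times-duration severity score are simplified hypothetical stand-ins, not Sepe's AAKR reconstruction or its actual scoring formula:

```python
import numpy as np

def deviation_scores(measured, predicted):
    """Per-sample deviation between new measurements and the model's
    predictions (a simple absolute-difference distance metric)."""
    return np.abs(np.asarray(measured, dtype=float) -
                  np.asarray(predicted, dtype=float))

def infer_severity(deviations, threshold):
    """Hypothetical severity score combining how far deviations exceed
    the threshold (magnitude) with how long the anomaly lasts
    (fraction of samples in exceedance; cf. Sepe at [0089])."""
    exceed = deviations > threshold
    if not exceed.any():
        return 0.0          # no anomaly, no severity
    magnitude = (deviations[exceed] / threshold).mean()
    duration = exceed.mean()  # fraction of the window spent anomalous
    return magnitude * duration
```

A deviation that never crosses the threshold yields zero severity, while a large, persistent exceedance yields a high score that could drive an alert.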
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to determine deviation data in a new sample of asset data based on a difference between the new sample of the asset data to the predicted asset data; to determine, based on the deviation data and historical deviation data, deviations across measurements over time; and to infer a severity of an anomaly based on the deviations across the derived measurements over time, the severity being used to generate an alert, in order to improve maintenance of assets by predicting any possible failure of a plant, so as to increase the profitability of the service system and reduce any downtime risks of the plant, as discussed by Sepe ([0004], [0010]).

Regarding claim 2. Chan in view of Sepe discloses all the features of claim 1 as described above. Chan further discloses: controlling operation of the asset based on one or more of the predicted asset data, the deviation data and the deviations across measurements over time ([0062]: based on the model predictions, the system executes an adjustment of plant operation (see also [0043], [0064], [0077])).

Regarding claim 3. Chan in view of Sepe discloses all the features of claim 1 as described above. Chan does not explicitly disclose: the sensor is affixed to an asset in an industrial environment and the data further characterizes a state of health of the asset. Sepe further teaches: “In some embodiments, each gas turbine 21, 22, and 23 may be equipped with a signal acquisition module, respectively indicated with the reference numbers 211, 221, and 231, each configured to receive the detection signals, usually electric signals, from the sensors installed on the gas turbine 21, 22, and 23, and eventually to process said signals, e.g.
filtering and amplifying the same before any signal is further processed” ([0037]: sensors are installed on the asset for collecting signals for anomaly detection (see also [0048], [0058], [0062])); and “As it can be seen, the online calculation step 7 performs anomaly detection and classification. Also, it adds an assessment of anomaly severity by distinguishing system and sensor malfunctions, basing on multivariate analysis of different signals anomalies. System anomalies have usually different signature from sensor anomalies and might involve more than one signal. The algorithm is able to recognize several classes of anomalies both for system than sensor behaviors” ([0067]: system and sensor malfunctioning is determined from sensor signals (see also [0063], [0077]-[0078], [0089])). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to affix the sensor to an asset in an industrial environment and to incorporate the data further characterizing a state of health of the asset, in order to improve asset data acquisition accuracy while also checking equipment operation for robust anomaly detection analysis.

Regarding claim 4. Chan in view of Sepe discloses all the features of claim 3 as described above. Chan does not explicitly disclose: the sensor is included in a sensor health monitoring system associated with the industrial environment and the data further characterizes a state of health of the sensor. Sepe further teaches: “As it can be seen, the online calculation step 7 performs anomaly detection and classification. Also, it adds an assessment of anomaly severity by distinguishing system and sensor malfunctions, basing on multivariate analysis of different signals anomalies. System anomalies have usually different signature from sensor anomalies and might involve more than one signal.
The algorithm is able to recognize several classes of anomalies both for system than sensor behaviors” ([0067]: sensor malfunctioning is determined from sensor signals (see also [0063], [0077]-[0078], [0089])). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to include the sensor in a sensor health monitoring system associated with the industrial environment and to incorporate the data further characterizing a state of health of the sensor, in order to improve asset data acquisition accuracy while also checking sensor equipment operation for robust anomaly detection analysis.

Regarding claim 5. Chan in view of Sepe discloses all the features of claim 1 as described above. Chan does not explicitly disclose: determining one or more states of the asset based on the cleaned training data. Sepe further teaches: “The signals acquired online from a dynamic system are defined timeseries. The algorithm processes all the timeseries acquired from the assets in all the operating conditions, like steady state, transients and engine not running conditions” ([0070]: signals are acquired from the assets in all operating conditions (states) (see also [0049], [0056], [0059]; see also Chan at [0094] regarding identifying bad data based on sensor interruptions and process downtime)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to determine one or more states of the asset based on the cleaned training data, in order to provide a more accurate and robust analysis of industrial equipment anomalies based on evaluations corresponding to similar asset conditions (states).

Regarding claim 6. Chan in view of Sepe discloses all the features of claim 5 as described above.
Chan does not explicitly disclose: selecting a portion of the data for training the model; and determining one or more dynamic thresholds for the selected portion of the data. Sepe further teaches: “Also, the offline model setup step 6 comprises a preprocessing sub-step 62 of the signals, wherein a filtering and/or a signal decorrelation processing is carried out, to define the window of asset operating conditions, where each signal has to be processed and/or monitored. For instance, there are signals to be observed in a certain speed range or power range, or at not running engine conditions” ([0049]: a window of asset operating conditions is defined during model setup (analogous to selecting a portion of the data for training the model) (see also [0059])); and “As it can be seen, the anomaly detection is performed dynamically. The threshold of the detection depends on the specific engine condition because it uses the signal reconstruction technique” ([0056]: anomaly detection uses thresholds that depend on specific equipment conditions (analogous to determining one or more dynamic thresholds for the selected portion of the data)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to select a portion of the data for training the model; and to determine one or more dynamic thresholds for the selected portion of the data, in order to provide a more accurate and robust analysis of industrial equipment anomalies based on evaluations corresponding to similar asset conditions. Regarding claim 7. Chan in view of Sepe discloses all the features of claim 6 as described above. Chan further discloses: selecting the portion of the data for training the model comprises removing outliers from the data to generate the cleaned training data (Fig. 2A, item 120; Fig. 2C, item 120-2; [0081], [0092]-[0102]: data is cleansed (e.g., to remove outliers) (see also Fig.
1C, [0053], [0152]; see also Sepe, Fig. 3, items 62 and 72, [0037], [0049] regarding signal filtering)). Regarding claim 8. Chan in view of Sepe discloses all the features of claim 7 as described above. Chan further discloses: selecting the portion of the data for training the model comprises resizing the portion of the data within a set standard deviation range ([0094]: bad quality data is identified based on statistics such as standard deviation for removal (see also [0011], [0104]; see also Sepe at [0052], [0065])). Regarding claim 9. Chan in view of Sepe discloses all the features of claim 7 as described above. Chan does not disclose: the one or more dynamic thresholds are determined based on the set standard deviation range to exclude an anomalous region. Sepe further teaches: “As it can be seen, the anomaly detection is performed dynamically. The threshold of the detection depends on the specific engine condition because it uses the signal reconstruction technique” ([0056]: anomaly detection uses thresholds that depend on specific equipment conditions (see also [0052], [0065]); examiner notes that if data is filtered based on a standard deviation range as described by Chan, the thresholds should be based on the filtered data for accurately evaluating asset anomalies while confirming outlier removal). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to determine the one or more dynamic thresholds based on the set standard deviation range to exclude an anomalous region, in order to provide a more accurate and robust analysis of industrial equipment anomalies. Regarding claim 10. Chan in view of Sepe discloses all the features of claim 9 as described above.
Chan further discloses: the model comprises one or more machine learning models trainable to generate the predicted asset data ([0057], [0071]: a deep learning neural network model is used for equipment failure prediction (see also [0119])). Regarding claim 11. Chan in view of Sepe discloses all the features of claim 10 as described above. Chan does not explicitly disclose: the one or more machine learning models can be recalibrated and updated based on a fit of two or more estimated new samples falling outside of the one or more dynamic thresholds. However, Chan teaches: “The system monitors its performance while generating predictions and solutions, and can perform model adaptions when model predictions and solutions become sub-optimal. In such a way, the system keeps its model and solutions updated and ensures a sustained performance” ([0063]: the model is recalibrated when predictions are not optimal (see also Sepe at [0076])). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to recalibrate and update the one or more machine learning models based on a fit of two or more estimated new samples falling outside of the one or more dynamic thresholds, in order to improve the accuracy of the model based on the current system conditions. Regarding claim 12. Chan in view of Sepe discloses all the features of claim 1 as described above. Chan does not explicitly disclose: generating data mapping based on a data validation rule. However, Chan teaches: “The loaded operations data includes continuous measurements for a number of process variables (process variable tags) for the subject production process, as, typically, measurements for hundreds or even thousands of process variables are stored in the plant historian or plant asset database over time for a production process.
The method 100, at step 120, generates a raw dataset that contains the loaded original operation data (measurements) for the process variables of the subject process, formatted as a time-series based on timestamps associated with the operations data. The method 100, at step 120, generates a raw input dataset that contains the loaded operation measurements for the selected candidate process variables of the subject process, formatted as a time-series based on the associated timestamps” ([0090]-[0091]: data for selected process variables of the subject process is used to generate the dataset for analysis (analogous to data mapping based on a data validation rule) (see also [0067])). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to generate data mapping based on a data validation rule, in order to build an accurate model based on corresponding data reflecting the system condition. Regarding claim 13. Chan in view of Sepe discloses all the features of claim 12 as described above. Chan does not explicitly disclose: the data validation rule verifies association between datatype of the data from the sensor and a pre-determined data tag. However, Chan teaches: “The loaded operations data includes continuous measurements for a number of process variables (process variable tags) for the subject production process, as, typically, measurements for hundreds or even thousands of process variables are stored in the plant historian or plant asset database over time for a production process. The method 100, at step 120, generates a raw dataset that contains the loaded original operation data (measurements) for the process variables of the subject process, formatted as a time-series based on timestamps associated with the operations data.
The method 100, at step 120, generates a raw input dataset that contains the loaded operation measurements for the selected candidate process variables of the subject process, formatted as a time-series based on the associated timestamps” ([0090]-[0091]: data for selected process variables (datatype of the data from the sensor) of the subject process is used to generate the dataset for analysis (see also [0067])). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to incorporate the data validation rule to verify association between datatype of the data from the sensor and a pre-determined data tag, in order to validate the data relevant to the particular model for improving prediction accuracy. Regarding claim 14. Chan discloses: A system (Fig. 6, items 50 and 60; [0006]: a computer system for executing a method for building and deploying a model to optimize assets in an industrial process is presented (see also [0074], [0124], [0162])) comprising: a data processor (Fig. 6, item 84 – “central processor unit”; [0006]: the computer system includes processor capabilities to execute the method (see also [0162])), and a memory (Fig. 6, items 90 and 95 – “memory” and “disk storage”) storing non-transitory, computer-readable instructions, which when executed cause the data processor to perform operations ([0006]-[0007]: the computer system includes memory capabilities to store instructions to perform the method (see also [0162]-[0163])) comprising: receiving data characterizing measurement data values acquired by a sensor coupled to an industrial asset (Fig. 2A, item 120; Fig. 2C, item 120-1; [0081], [0089]-[0091], [0121], [0149]: process data from assets is obtained using sensors (see also Fig. 1C, [0052], [0067], [0102], [0146], [0161])); processing the data to determine cleaned training data (Fig. 2A, item 120; Fig.
2C, item 120-2; [0081], [0092]-[0102]: data is cleansed (e.g., to remove outliers) (see also Fig. 1C, [0053], [0152])); processing the cleaned training data to generate derived measurements (Fig. 2A, item 120; Fig. 2C, item 120-4; [0081], [0107]-[0111]: cross-correlation is performed on the cleansed data to incorporate highly correlated inputs (see also Fig. 1C, [0054], [0070], [0153])); training a model using the cleaned training data and the derived measurements (Fig. 2A, item 130; [0081], [0118]-[0119]: models are built using the processed data (see also Fig. 1C, [0038], [0057]-[0058], [0070])); and generate a predicted asset data using the model (Fig. 2A, item 140; [0062]-[0063], [0081]: the model generates predictions of asset failure in the process industry (see [0043], [0066], [0071], [0077])). Chan does not explicitly disclose: determining deviation data in a new sample of asset data based on a difference between the new sample of the asset data to the predicted asset data; determining, based on the deviation data and historical deviation data, deviations across measurements over time; and infer a severity of an anomaly based on the deviations across the derived measurements over time, the severity being used to generate an alert. Sepe teaches: “Then, an anomaly detection sub-step 74 is executed, which identifies if a signal or a group of signals received from the gas turbines 21, 22, and 23 of the fleet 2 has an anomaly or not. In some embodiments this methodology can be performed by machine learning algorithms, implementing supervised and/or unsupervised methodologies. Anomaly detection aims at identifying which signals have an anomalous features pattern with respect to the pattern of the healthy reference features.
This detection can be done by using signal reconstruction techniques like AAKR (Auto-Associative Kernel Regression), used in this case to reconstruct the expected signal features with respect to the healthy features pattern available as configuration parameters. The comparison between the reconstructed features and the measured ones is performed by using a distance metric or a similarity metric (likelihood) and comparing them with respect to a threshold ...” ([0066]: anomaly detection in industrial assets is performed by comparing model predictions and measurements using a distance/similarity metric (analogous to deviation)); “The unsupervised approach can be executed to check periodically the accuracy of the classifier in order to establish if anomaly classes are stable and/or if new anomaly type has to be added to the class list. The clustering will be executed on the features extracted over the last timeframes (number of timeframes can be any). If clusters are centered with respect to anomalous/healthy clusters assigned during model setup, the model is stable and no update is needed; otherwise a new model setup will be executed” ([0076]: after anomaly detection and as part of anomaly classification, the model is periodically checked by determining whether anomalies are stable (analogous to determining deviations across measurements over time) or whether the model needs recalibration); and “In case of the anomaly classification 75 detects a (or at least one) sensor malfunction, a severity assignment 771 step is executed, where a severity is assigned to the anomalies identified as sensor malfunctions.
The severity assignment 771 assigns a severity score to anomalies classified as sensor malfunction basing on anomaly type and time lasting and sensor redundancy … Also, frequency and time lasting of an anomaly will be considered to evaluate the severity” ([0089]: based on results of the anomaly classification, anomalies are assigned severity scores using information such as the time lasting of the anomaly (analogous to inferring a severity of an anomaly based on deviations across the measurements over time); examiner notes that alarms are usually generated when anomalies are determined in order to alert workers). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to determine deviation data in a new sample of asset data based on a difference between the new sample of the asset data to the predicted asset data; to determine, based on the deviation data and historical deviation data, deviations across measurements over time; and to infer a severity of an anomaly based on the deviations across the derived measurements over time, the severity being used to generate an alert, in order to improve maintenance of assets by predicting any possible failure of a plant, so as to increase the profitability of the service system and reduce any downtime risks of the plant, as discussed by Sepe ([0004], [0010]). Regarding claim 15. Chan in view of Sepe discloses all the features of claim 14 as described above. Chan further discloses: the operations comprise: controlling operation of the asset based on the one or more of the predicted asset data, the deviation data and the deviations across measurements over time ([0062]: based on the model predictions, the system executes an adjustment of plant operation (see also [0043], [0064], [0077])). Regarding claim 16. Chan in view of Sepe discloses all the features of claim 14 as described above.
Chan does not explicitly disclose: the sensor is affixed to an asset in an industrial environment and the data further characterizes a state of health of the asset. Sepe further teaches: “In some embodiments, each gas turbine 21, 22, and 23 may be equipped with a signal acquisition module, respectively indicated with the reference numbers 211, 221, and 231, each configured to receive the detection signals, usually electric signals, from the sensors installed on the gas turbine 21, 22, and 23, and eventually to process said signals, e.g. filtering and amplifying the same before any signal is further processed” ([0037]: sensors are installed on the asset for collecting signals for anomaly detection (see also [0048], [0058], [0062])); and “As it can be seen, the online calculation step 7 performs anomaly detection and classification. Also, it adds an assessment of anomaly severity by distinguishing system and sensor malfunctions, basing on multivariate analysis of different signals anomalies. System anomalies have usually different signature from sensor anomalies and might involve more than one signal. The algorithm is able to recognize several classes of anomalies both for system than sensor behaviors” ([0067]: system and sensor malfunctioning is determined from sensor signals (see also [0063], [0077]-[0078], [0089])). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chan in view of Sepe to affix the sensor to an asset in an industrial environment and to incorporate data further characterizing a state of health of the asset, in order to improve asset data acquisition accuracy while also checking equipment operation for robust anomaly detection analysis. Regarding claim 17. Chan in view of Sepe discloses all the features of claim 16 as described above. Chan does not explicitly disclose: the sensor is included in a sensor health monitoring system associated wit