Prosecution Insights
Last updated: April 19, 2026
Application No. 16/235,361

SCALABLE SYSTEM AND METHOD FOR FORECASTING WIND TURBINE FAILURE WITH VARYING LEAD TIME WINDOWS

Final Rejection (§103, §112)
Filed: Dec 28, 2018
Examiner: KARAVIAS, DENISE R
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Utopus Insights, Inc.
OA Round: 8 (Final)
Grant Probability: 63% (Moderate)
OA Rounds: 9-10
To Grant: 3y 0m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 63% (grants 63% of resolved cases; 84 granted / 134 resolved; -5.3% vs TC avg)
Interview Lift: +34.9% (resolved cases with interview)
Typical Timeline: 3y 0m avg prosecution; 17 currently pending
Career History: 151 total applications across all art units

Statute-Specific Performance

§101: 16.8% (-23.2% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 24.2% (-15.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 134 resolved cases.

Office Action

Rejections under §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Application 16/235,361, filed 12/28/2018, claims no foreign priority.

Response to Amendment

This office action is in response to amendments submitted 09/02/2025, wherein claims 1-19 are pending and ready for examination.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding independent claims 1, 10, and 19: the claims recite “the deep neural network is trained using both the first historical sensor data combined with the different lead times as a first input and the first filtered historical sensor data combined with the different lead times as a second input, the deep neural network identifying anomalies from normal operation of the renewable energy assets that lead to at least the first failure based on comparing outputs from the first input and the second input” (claim 1 lines 21-26, claim 10 lines 22-27, claim 19 lines 19-24). The specification discloses historical sensor data and first filtered historical sensor data; however, the specification does not describe, show, or provide an example of training the neural network using both first historical sensor data and first filtered historical sensor data, or of basing the first failure on “comparing outputs from the first input and the second input.”

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Jannis Tautz-Weinert et al. (hereinafter Tautz), “Using SCADA data for wind turbine condition monitoring – a review,” downloaded from https://www.researchgate.net/publication/309211206_Using_SCADA_data_for_wind_turbine_condition_monitoring_-_A_review, in view of Gandhi et al. (hereinafter Gandhi), U.S. Pub. No. 2018/0180515 A1, in view of Ide et al. (hereinafter Ide), U.S. Pub. No. 2018/0095004 A1, in view of Warde-Farley et al. (hereinafter Warde-Farley), U.S. Pub. No. 2019/0354869 A1, in further view of Gandenberger, U.S. Pub. No. 2020/0103886 A1, and further in view of Andoni et al. (hereinafter Andoni), U.S. Pub. No. 2018/0314938 A1, as evidenced by Trevethan, “Sensitivity, Specificity, and Predictive Values: Foundations, Pliabilities, and Pitfalls in Research and Practice,” downloaded from https://www.frontiersin.org/articles/10.3389/fpubh.2017.00307/full.

Regarding independent claim 1, Tautz teaches: “receiving historical wind turbine component failure data and wind turbine asset data from one or more SCADA systems during a first period of time” (Tautz, Abstract, page 11 col 1 4th and 5th paragraphs: Tautz teaches “simple trending of SCADA data has demonstrated good abilities to detect anomalies” and “extensive historical failure data are required, if the methods are able to reliably diagnose failures” (page 11 col 1 4th and 5th paragraphs), where “simple trending” and “extensive” disclose data from “a first period of time,” and “historical failure data” discloses “historical wind turbine component failure data,” as the “SCADA data” is data from wind turbines (Abstract)).
“receiving first historical sensor data of the first period of time, the first historical sensor data including sensor data from one or more sensors of one or more components of any number of renewable energy assets, the first historical sensor data indicating at least one first failure associated with the one or more components of the renewable energy asset during the first period of time” (Tautz, page 2 col 2 4th paragraph, page 1 col 1 2nd paragraph-col 2 1st paragraph, page 3 col 1 last paragraph-col 2 1st paragraph: The specification states “The data extraction module 508 may optionally prepare the historical sensor data (sensor data over a past period of time)” (¶ 0113). Tautz teaches collecting “data over a long period” (page 2 col 2 4th paragraph), thereby disclosing “historical” sensor data, as “long period” discloses “a past period of time.” Tautz also teaches “all large utility scale WTs have a standard supervisory control and data acquisition (SCADA) system principally used for performance monitoring,” where the system provides “a wealth of data,” where “the range and type of signals recorded can vary widely from one turbine type to another,” and the data collected is used for “early failure detection” (page 1 col 1 2nd paragraph). Therefore, Tautz teaches “sensor data from one or more sensors of one or more components,” where the data used for “early failure detection” contains “sensor data indicating at least one first failure.” Additionally, Tautz teaches that a comparison of “results for a nine turbine onshore wind farm of 2 MW turbines were made,” where “Historical and real time analyses helped the operator to detect problems” (page 3 col 1 last paragraph-col 2 1st paragraph), thereby teaching “the first historical sensor data indicating at least one first failure associated with the one or more components of the renewable energy asset during the first period of time”).
“dividing a period of time into different classes to train failure prediction models for a first component to create multi-class classifications” (Tautz, pages 5-6 § 3.3.2 Artificial neural network: Tautz teaches “ANNs are a way of determining non-linear relationships between observations using training data” (§ 3.3.2) and “models used for fault detection were ANN, ANN ensemble, . . . k-nearest-neighbor ANN,” where the “Modelling used several time-steps of wind speed, ‘wind deviation’ (assumed to stand for yaw err), blade pitch angle, generator torque and previous time-steps of the target variable as inputs using an ARX approach” (§ 3.3.2, page 6, 1st column, 2nd paragraph), where “time-steps of wind speed . . .” and “models used for fault detection” disclose “dividing a period of time into different classes to train failure prediction models,” thereby disclosing “a first component to create multi-class classifications,” as the time-step classifications are used to determine a failure or non-failure classification).

Tautz does not teach “different lead times.” Andoni teaches using “different lead times” (Andoni, fig 6, ¶ 0057-¶ 0059, ¶ 0089: Andoni teaches the failure prediction “should provide a minimum lead time (e.g., 3 days in the illustrated example)” as seen in fig 6 (640), disclosing that the lead time may be greater than 3 days. Additionally, Andoni teaches the “neural network model may, in a particular example, increase failure lead time from 3-5 days to 30-40 days,” teaching a variety of lead times. Therefore, the combination of Tautz and Andoni discloses the limitation “dividing a period of time into different classes to train failure prediction models for a first component using different lead times to create multi-class classifications”).
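The mapped limitation, dividing a period of time into classes keyed to different lead times before a failure, can be illustrated with a short sketch. All names, window lengths, and dates below are hypothetical and are not taken from the application or the cited art; this only shows the labeling idea, not any party's implementation.

```python
from datetime import datetime, timedelta

def label_lead_time_classes(timestamps, failure_time, windows_days=(3, 10, 30)):
    """Assign each observation a class by how far before the failure it
    falls: class 0 = normal operation, class k = within the k-th
    lead-time window (windows are hypothetical, e.g. 3/10/30 days)."""
    labels = []
    for ts in timestamps:
        lead = failure_time - ts
        label = 0  # default: normal operation (outside every window)
        for k, days in enumerate(sorted(windows_days), start=1):
            if timedelta(0) <= lead <= timedelta(days=days):
                label = k
                break
        labels.append(label)
    return labels

failure = datetime(2018, 6, 30)
obs = [datetime(2018, 6, 1), datetime(2018, 6, 25), datetime(2018, 6, 29)]
print(label_lead_time_classes(obs, failure))  # [3, 2, 1]
```

Each distinct label corresponds to one lead-time window, which is one way a binary failure/no-failure problem becomes a multi-class classification.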
A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for failure detection and condition monitoring of wind turbines as taught by Tautz by including different lead times as taught by Andoni, as both Tautz and Andoni are concerned with failures in wind turbines, in order to provide a system where a longer lead time “can result in reduced downtime and monetary savings for an operator of the wind farm” (Andoni, ¶ 0089).

Tautz does not teach: “A non-transitory computer readable medium comprising executable instructions, the executable instructions being executable by one or more processors to perform a method;” “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data, the anomaly detection algorithm utilizing factor analysis and weighting factors to identify data associated with the first failure;” “training a first set of failure prediction models using a deep neural network, the deep neural network is trained using both the first historical sensor data combined with the different lead times as a first input and the first filtered historical sensor data combined with the different lead times as a second input, the deep neural network identifying anomalies from normal operation of the renewable energy assets that lead to at least the first failure based on comparing outputs from the first input and the second input, the input of the first set of failure prediction models being balanced inputs;” “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network to create the first set of failure prediction models, each of the different lead times corresponding to a different lead time window before a predicted failure;” “evaluating each of the first set of failure prediction models by applying a confusion matrix analysis to predictions made at each of the different lead time windows, the confusion matrix including metrics for true positives, false positives, true negatives, and false negatives as well as a positive predictive value;” “comparing the confusion matrix and the positive prediction value of each of the first set of failure prediction models;” “selecting at least one failure prediction model of the first set of failure prediction models based on the comparison of the confusion matrixes, the positive prediction values, and the lead time windows to create a first selected failure prediction model, the first selected failure prediction model including the lead time window before the predicted failure;” “receiving first current sensor data of a second period of time, the first current sensor data including sensor data from the one or more sensors of the one or more components of the renewable energy asset;” “applying the first selected failure prediction model to the current sensor data to generate a first failure prediction of a failure of at least one component of the one or more components;” “comparing the first failure prediction to a trigger criteria; and” “generating and transmitting a first alert based on the comparison of the failure prediction to the trigger criteria, the first alert indicating the at least one component of the one or more components and information regarding the failure prediction.”

Gandhi teaches: “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data” and “filtered historical sensor data”
(Gandhi, ¶ 0004, ¶ 0038-¶ 0041: Gandhi teaches “detecting abnormalities and failures related to the rotating equipment” (¶ 0004) by collecting historical data (¶ 0040). Gandhi teaches that “Data sets that are indicative of a failure mode can be removed from the historical data” (¶ 0040), thereby disclosing “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data”).

A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for failure detection and condition monitoring of wind turbines as taught by Tautz by including an algorithm to remove failure data from historical data as taught by Gandhi, as both Tautz and Gandhi are concerned with detecting failures in rotating equipment, in order to determine normal behavior for comparison to actual behavior when determining whether a deviation exists, where the deviation “may be indicative of a failure of the rotating equipment” (Gandhi ¶ 0041).
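The cited removal of failure-indicative data sets from historical data might be sketched as below. This is a minimal illustration only; the function name, the `halo` parameter, and the index-based bookkeeping are assumptions, not Gandhi's disclosure.

```python
def filter_failure_data(sensor_rows, failure_indices, halo=5):
    """Return 'filtered historical sensor data': drop every row that falls
    within `halo` samples of a known failure event, leaving only data
    representative of normal operation (hypothetical helper)."""
    excluded = set()
    for f in failure_indices:
        excluded.update(range(f - halo, f + halo + 1))
    return [row for i, row in enumerate(sensor_rows) if i not in excluded]

rows = list(range(20))  # stand-in for 20 sensor samples
filtered = filter_failure_data(rows, failure_indices=[10], halo=2)
print(filtered)  # rows 8-12 removed
```

The retained rows can then serve as the "normal behavior" baseline against which live data is compared for deviations.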
Ide teaches: “the anomaly detection algorithm utilizing factor analysis and weighting factors to identify data associated with the first failure” (Ide, ¶ 0005: Ide teaches using a “plurality of mixture models,” where “each mixture model is a function of the plurality of variables, learning weighting factors” (¶ 0005), and “determining a Gaussian Markov random field (GMRF) model from surviving mixture models,” where the GMRF model is used to “detect anomalous sensor data values that could be indicative of an impending system failure” (¶ 0005), disclosing use of “factor analysis” and “weighting factors to identify data associated with the first failure”). Therefore, the combination of Tautz, Gandhi, and Ide discloses the limitation “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data, the anomaly detection algorithm utilizing factor analysis and weighting factors to identify data associated with the first failure.”

Ide teaches: “the input of the first set of failure prediction models being balanced inputs” (Ide, ¶ 0005: Ide teaches “a method for detecting early indications of equipment failure in an industrial system” using “sensor training data collected from industrial equipment,” where “the sensor training data includes samples of sensor values for a plurality of variables” (¶ 0005). Moreover, the sensor data is used for detecting patterns and to “initialize a plurality of mixture models” (¶ 0005), where “unimportant models” are removed from the plurality of mixture models (¶ 0005), disclosing the “first set of failure prediction models being balanced inputs”).
A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for failure detection and condition monitoring of wind turbines as taught by Tautz by including an algorithm that utilizes factor analysis, weighting factors to identify data associated with failure, and balanced inputs as disclosed by Ide, as both Tautz and Ide are concerned with identifying abnormalities in equipment: factor analysis can find hidden patterns, and weighting factors and balanced inputs allow for control of under- or over-representation in the data, thereby providing a system where “a control operator can judge if the current operation is good or bad” (Ide, ¶ 0004).

Gandenberger teaches: “A non-transitory computer readable medium comprising executable instructions, the executable instructions being executable by one or more processors to perform a method” (Gandenberger, ¶ 0035).
“training a first set of failure prediction models using a deep neural network, the deep neural network is trained using both the first historical sensor data combined with the different lead times as a first input and the first filtered historical sensor data combined with the different lead times as a second input, the deep neural network identifying anomalies from normal operation of the renewable energy assets that lead to at least the first failure based on comparing outputs from the first input and the second input” (Gandenberger, fig 1, ¶ 0050-¶ 0053, ¶ 0083, ¶ 0144-¶ 0145, ¶ 0150-¶ 0153, ¶ 0167: Gandenberger teaches an “event prediction model,” referred to as “a predictive model” (¶ 0144), where the “predictive model” is the “data analytics operation” (¶ 0083) and the “asset data platform” performs “data analytics” including “anomaly detection” and “failure prediction” (¶ 0053), disclosing that “anomaly detection” and “failure prediction” are a result of a “predictive model.” Moreover, Gandenberger teaches “obtaining a set of training data for the event prediction model, which may comprise historical values for a set of data variables that are potentially suggestive of whether or not an event occurrence of the given type is forthcoming” (¶ 0151), where the “event prediction model” may be “artificial neural networks” (¶ 0151), which read on “a deep neural network,” disclosing a “deep neural network” that is trained using “historical sensor data” to “identify anomalies” and predict “failures.” Moreover, “some representative types of assets that may be monitored by asset data platform 102” include but are not limited to “electric power generation equipment (e.g., wind turbines)” (¶ 0050). Additionally, “different event prediction models may comprise event prediction models configured to preemptively predict event occurrences of the same given type that were created using different sets of training data,” where “different sets of training data” discloses “the first input and the second input” and “event prediction models” discloses “outputs from the first input and the second input”).

While Gandenberger teaches using different sets of training data where the training data is made up of “historical sensor data,” Gandenberger does not teach “first filtered historical sensor data.” Gandhi teaches “first filtered historical sensor data” (see above). Andoni teaches “different lead times” (see above). Therefore, the combination of Tautz, Gandenberger, Gandhi, and Andoni teaches the limitation “training a first set of failure prediction models using a deep neural network, the deep neural network is trained using both the first historical sensor data combined with the different lead times as a first input and the first filtered historical sensor data combined with the different lead times as a second input, the deep neural network identifying anomalies from normal operation of the renewable energy assets that lead to at least the first failure based on comparing outputs from the first input and the second input.”

Gandenberger teaches: “evaluating each of the first set of failure prediction models by applying a confusion matrix analysis to predictions made at each of the different lead time windows, the confusion matrix including metrics for true positives, false positives, true negatives, and false negatives as well as a positive predictive value; comparing the confusion matrix and the positive prediction value of each of the first set of failure prediction models; selecting at least one failure prediction model of the first set of failure prediction models based on the comparison of the confusion matrixes, the positive prediction values, and the lead time windows to create a first selected failure prediction model, the first selected failure prediction model including the lead time window before the predicted failure” (Gandenberger, Table 1, Eqn. 1, Eqn. 2, ¶ 0158-¶ 0163: Gandenberger teaches using a “confusion matrix” to “quickly assess the event prediction model’s performance” (¶ 0159), disclosing “evaluating each of the first set of failure prediction models by applying a confusion matrix analysis to predictions,” where Gandenberger’s “confusion matrix” includes “metrics for true positives, false positives, true negatives, and false negatives” (see Table 1, ¶ 0159). Gandenberger also teaches using “precision” to evaluate prediction models, where precision = N_TP / (N_TP + N_FP) (Eqn. 1) (precision is part of the confusion matrix (¶ 0159)), N_TP “represents the number of individual ‘true positive’ predictions output by the event prediction model” (¶ 0159), and N_FP “represents the number of individual ‘false positive’ predictions output by the event prediction model” (¶ 0159), thereby teaching use of “a positive prediction value,” as “a positive prediction value” is equal to N_TP / (N_TP + N_FP), as evidenced by Trevethan (2nd page, 2nd col, figure 1). Gandenberger teaches that the “individual predictions output by the event prediction model falling into each of these four categories may also be used to calculate metrics that characterize aspects of the event prediction model’s performance” (¶ 0160). In addition to “precision,” Gandenberger also teaches recall = N_TP / (N_TP + N_FN) (Eqn. 2), where N_FN “represents the number of individual ‘false negative predictions’ output by the event prediction model” (¶ 0159), and using both “precision” and “recall” for a reliable comparison between different prediction models (¶ 0162-¶ 0163). Gandenberger also teaches that different lead times are taken into consideration when evaluating a prediction model (¶ 0156).
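Eqn. 1 and Eqn. 2 discussed above are the standard precision (positive predictive value) and recall formulas; the following minimal sketch computes both from hypothetical confusion-matrix counts (the numbers are illustrative only and appear nowhere in the record).

```python
def precision(tp, fp):
    """Positive predictive value: N_TP / (N_TP + N_FP) (cf. Eqn. 1)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Recall: N_TP / (N_TP + N_FN) (cf. Eqn. 2)."""
    return tp / (tp + fn)

# Hypothetical confusion-matrix counts for one lead-time window
tp, fp, fn, tn = 40, 10, 20, 130
print(precision(tp, fp))  # 0.8
print(recall(tp, fn))     # 0.666...
```

Comparing candidate models on both metrics, per lead-time window, mirrors the selection step recited in the claim.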
Therefore, Gandenberger teaches “comparing the confusion matrix and the positive prediction value of each of the first set of failure prediction models”);

“receiving first current sensor data of a second period of time, the first current sensor data including sensor data from the one or more sensors of the one or more components of the renewable energy asset” (Gandenberger, fig 1, fig 6, ¶ 0050, ¶ 0168-¶ 0171: Gandenberger teaches “comparing different event prediction models that are configured to preemptively predict event occurrences of the same given type” (¶ 0169) by “applying models to test data” (fig 6 step 602), disclosing “first current sensor data of a second time period,” as the prediction models would have been developed using a different set of data. Gandenberger also teaches the data may be from “electric power generation equipment (e.g., wind turbines, …)” (¶ 0050), thereby disclosing “sensor data from the one or more sensors of the one or more components of the renewable energy asset”);

“applying the first selected failure prediction model to the current sensor data to generate a first failure prediction of a failure of at least one component of the one or more components” (Gandenberger, fig 1, fig 6, ¶ 0053, ¶ 0168-¶ 0171: Gandenberger teaches “comparing different event prediction models that are configured to preemptively predict event occurrences of the same given type” (¶ 0169) by “applying models to test data,” “evaluate predictions output by models using event windows,” and “determine ‘catch’ and ‘false flag’ numbers” (fig 6 steps 602-606), thereby disclosing “applying the first selected failure prediction model to the current sensor data,” as “applying models to test data” includes “the first selected failure prediction model,” and “event occurrences of the same given type” includes “a failure of at least one component,” as Gandenberger teaches the “asset data platform (102)” is programmed to “perform data analytics operations based on the asset-related data received from data sources (104), including but not limited to failure prediction, …” (¶ 0053));

“comparing the first failure prediction to a trigger criteria; and generating and transmitting a first alert based on the comparison of the failure prediction to the trigger criteria, the first alert indicating the at least one component of the one or more components and information regarding the failure prediction” (Gandenberger, fig 6, ¶ 0114, ¶ 0179-¶ 0187: Gandenberger teaches a “catch” and a “false flag,” where “a ‘catch’ is generally defined as a correct prediction that an event occurrence is forthcoming and a ‘false flag’ is generally defined as an incorrect prediction that an event occurrence is forthcoming” (¶ 0179), and where “an event occurrence” may be a “failure prediction” (¶ 0114). Gandenberger also teaches the prediction models may have their output grouped into “alerts” based on “criteria,” where “the criteria that is used to group individual positive predictions into alerts may take various forms” (¶ 0181), one form being that “the criteria may dictate that a new alert begins when the model changes from outputting a negative prediction to outputting a positive prediction and ends when the model changes from outputting a (positive) prediction back to outputting a negative prediction,” where the “positive prediction” indicates “the failure prediction”).
Both Tautz and Gandenberger are concerned with identifying abnormalities in wind turbines; therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for failure detection and condition monitoring of wind turbines as taught by Tautz, as modified by Andoni and Gandhi, by including training a neural network to detect anomalies and predict failures, the confusion matrix, the positive predictive value (precision), and alerts with trigger criteria in determining a failure prediction model, as disclosed by Gandenberger, in order to provide a system and method where the “primary purpose of an event prediction model is to enable a data analytics platform to preemptively notify a user that an event occurrence of a given type is forthcoming sufficiently in advance of when the event occurrence actually happens, so that action can be taken to address the event occurrence before it actually happens,” in order to “mitigate the costs that may otherwise result from an unexpected occurrence of an undesirable event like an asset failure” (¶ 0153).
Warde-Farley teaches: “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network to create the first set of failure prediction models” (Warde-Farley, fig 1, fig 2, ¶ 0005, ¶ 0067-¶ 0071: Warde-Farley teaches a “deep neural network” (¶ 0005) where “the action selection network (110) may include a sequence of one or more convolutional layers, followed by a recurrent layer” (¶ 0067) and “an embedded network (112)” “may include a sequence of one or more convolutional layers followed by a fully-connected output layer” (¶ 0069), disclosing “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network.” Additionally, Warde-Farley teaches “receiving an observation characterizing a current state of the environment and an observation characterizing a goal state of the environment” (¶ 0008) and “a training subsystem that is configured to train the action selection neural network based on the rewards generated by the reward subsystem using reinforcement learning techniques” (¶ 0010), which discloses creating “the first set of failure prediction models”).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for failure detection and condition monitoring of wind turbines as taught by Tautz by applying the well-known fully connected, recurrent, and convolutional layers of a neural network to create a failure prediction model as taught by Warde-Farley.
A person of ordinary skill in the art would understand that the well-known fully connected, recurrent, and convolutional layers could be applied to any physical application of a neural network, such as creating “a first set of failure prediction models,” and that by applying Warde-Farley’s generic mathematical algorithms associated with fully connected, recurrent, and convolutional layers, the limitation “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network to create a first set of failure prediction models” will be attained.

Andoni teaches: “each of the different lead times corresponding to a different lead time window before a predicted failure” (Andoni, fig 6, ¶ 0057-¶ 0059, ¶ 0089: Andoni teaches the failure prediction “should provide a minimum lead time (e.g., 3 days in the illustrated example)” as seen in fig 6 (640), disclosing that the lead time may be greater than 3 days. Additionally, Andoni teaches the “neural network model may, in a particular example, increase failure lead time from 3-5 days to 30-40 days,” teaching a variety of lead times. Therefore, Andoni discloses “each of the different lead times corresponding to a different lead time window before a predicted failure”).

A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for failure detection and condition monitoring of wind turbines as taught by Tautz by including different lead times as taught by Andoni, as both Tautz and Andoni are concerned with failures in wind turbines, in order to provide a system where a longer lead time “can result in reduced downtime and monetary savings for an operator of the wind farm” (Andoni, ¶ 0089).
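The claimed stack of convolutional, recurrent, and fully connected layers can be sketched at the shape level in NumPy. This is an illustrative toy forward pass under stated assumptions, not the applicant's or Warde-Farley's network: all weights, sizes, and the single-channel simplification are arbitrary, and the "convolution" is written in correlation form for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution (correlation form) over the time axis."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def rnn(seq, wx, wh):
    """Minimal recurrent layer: h_t = tanh(wx * x_t + wh * h_{t-1})."""
    h = 0.0
    for x in seq:
        h = np.tanh(wx * x + wh * h)
    return h

def dense(h, w, b):
    """Fully connected output layer producing a scalar score."""
    return w * h + b

x = rng.standard_normal(16)                   # one sensor channel, 16 time steps
features = conv1d(x, rng.standard_normal(3))  # convolutional layer
state = rnn(features, wx=0.5, wh=0.1)         # recurrent layer
score = dense(state, w=2.0, b=-0.3)           # fully connected layer
print(float(score))
```

The point is only the data flow, raw sensor sequence to convolutional features to a recurrent summary to a fully connected score, matching the layer types named in the limitation.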
Regarding claim 2 Tautz as modified teaches: “performing a quality check and applying an availability filter to the first historical sensor data” (Tautz, page 5 col 2 last paragraph, page 6 col 1 last paragraph-col 2 first paragraph: Tautz teaches “Input pre-processing was applied”(page 5 col 2 last paragraph) and then lists several types of “pre-processing” which discloses “performing a quality check.” Tautz also teaches “The selection of the training data was automated by using filtering and selection” thereby teaching “applying an availability filter to the first historical sensor data”). Regarding claim 3 Tautz as modified does not teach: “detecting missing sensor data and replacing the missing sensor data with a linear interpolation.” Andoni teaches: “detecting missing sensor data and replacing the missing sensor data with a linear interpolation” (Andoni, fig 3, ¶ 0044-¶ 0047, Andoni teaches “the data profiler (320) may drop a column that has at least a threshold percentage of missing or corrupted values” thereby teaching “detecting missing sensor data.” Additionally, Andoni teaches the “data profiler” may perform “cleaning/scaling operations (330) on data” where an “example of data cleaning operation is to perform imputation to determine missing data values” which may include “filling using a mean of valid values from surrounding rows” (¶ 0047) thereby teaching “replacing the missing sensor data with a linear interpolation”). 
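The linear-interpolation imputation addressed for claim 3 can be sketched as follows; the sensor values and gap positions are invented for the sketch.

```python
import numpy as np

# Hypothetical sensor channel; NaN marks missing readings
readings = np.array([10.0, np.nan, np.nan, 16.0, 18.0])
idx = np.arange(len(readings))
missing = np.isnan(readings)

# Detect missing sensor data and replace it by linear interpolation
# between the nearest valid neighbors
readings[missing] = np.interp(idx[missing], idx[~missing], readings[~missing])
# readings -> [10., 12., 14., 16., 18.]
```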
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including detecting and replacing missing data as taught by Andoni as both Tautz and Andoni are concerned with failures in wind turbines, to provide a valid estimate of an unknown value in order to provide “an automated data-driven model building process that enables even inexperienced users to quickly and easily build highly accurate models based on a specified data set” and to “simplify the neural network model to avoid overfitting and to reduce computing resources required to run the model” (Andoni, ¶ 0022). Regarding claim 4 Tautz as modified does not teach: “separating the first sensor data into training and validation based on failure events and separating the first filtered historical sensor data into a test set is separated based on time.” Andoni teaches: “separating the first sensor data into training and validation based on failure events and separating the first sensor data into a test set is separated based on time” (Andoni, ¶ 0049, Andoni teaches the “combined data source may be divided into training and testing sets” (¶ 0049) where “testing” discloses “validation” and “the input data set (102) may represent one or more training sets and one or more testing sets” (¶ 0049). 
Therefore “training sets” and “testing sets” may be “input data sets.” Andoni also teaches “the input data set (102) (which represents training sets and testing sets) for the AMB engine may be generated from available data sources to provide approximately a 50%-50% split between the success and failure states” (¶ 0061) thereby teaching “separating sensor data into training and validation based on failure events.” Additionally, Andoni teaches the “pre-processor (104) may perform various rule-based operation on such ‘raw’ data sources to determine the input data set (102) that is operated on by the automated model building engine” (¶ 0021) where a “rule” may be to determine “date/time data” therefore Andoni teaches “the test set is separated based on time”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including separating historical sensor data into training and validation as taught by Andoni as both Tautz and Andoni are concerned with failures in wind turbines, to evaluate how well the model makes predictions based on new data in order to provide “an automated data-driven model building process that enables even inexperienced users to quickly and easily build highly accurate models based on a specified data set” and to “simplify the neural network model to avoid overfitting and to reduce computing resources required to run the model” (Andoni, ¶ 0022). Tautz teaches “historical” sensor data. The specification states “The data extraction module 508 may optionally prepare the historical sensor data (sensor data over a past period of time” (¶ 0113). 
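The claim 4 data separation discussed above — training/validation divided on failure events, test set divided on time — can be sketched as follows; the dates, cutoff, and alternating assignment rule are assumptions invented for the sketch, not taken from Tautz or Andoni.

```python
from datetime import datetime

# Hypothetical records: (timestamp, failure_flag)
records = [(datetime(2018, 1, d), d % 3 == 0) for d in range(1, 29)]

# Test set separated based on time: hold out everything after a cutoff date
cutoff = datetime(2018, 1, 21)
test = [r for r in records if r[0] >= cutoff]
rest = [r for r in records if r[0] < cutoff]

# Training/validation separated based on failure events: alternate the
# failure records between the two sets so each sees failure examples
failures = [r for r in rest if r[1]]
normal = [r for r in rest if not r[1]]
train = failures[0::2] + normal[0::2]
validation = failures[1::2] + normal[1::2]
```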
Tautz teaches collecting “data over a long period” (page 2 col 2 4th paragraph) thereby disclosing “historical” sensor data as “long period” discloses “a past period of time.” Regarding claim 5 Tautz as modified does not teach: “creating cohort instances based on the historical wind turbine component failure data and the wind turbine asset data, each cohort instance representing a subset of the wind turbines, the subset of wind turbines including a same type of controller and a similar geographical location, the similar geographical location of the wind turbines of the subset of wind turbines being within the wind turbine asset data.” Andoni teaches: “creating cohort instances based on the historical wind turbine component failure data and the wind turbine asset data, each cohort instance representing a subset of the wind turbines, the subset of wind turbines including a same type of controller and a similar geographical location, the similar geographical location of the wind turbines of the subset of wind turbines being within the wind turbine asset data” (Andoni, ¶ 0057-¶ 0059, ¶ 0180: The specification states “A cohort may be a set of wind turbines having the same controller type and operating in a similar geography” (¶ 0180) and that “the data extraction module (504) and/or the data preparation module (506) identifies similar or same controller types based on the asset data and the geolocation to generate any number of cohorts” (¶ 0180). Andoni teaches using “timestamped data from individual wind turbines on a wind farm” therefore teaches the wind turbines have “a similar geographical location.” The “timestamped” data includes “wind turbine asset data” for use in classification (fig 6A element 640) additionally data indicating “known failures.” This data may be used to solve a “combined classification/regression problem” where the “categorical outputs may be input into a softmax function” (¶ 0059). 
Therefore, Andoni teaches “creating cohort instances based on the historical wind turbine component failure data and the wind turbine asset data” where the “categorical outputs” discloses “cohort instances” representing “a subset of wind turbines including the same type of controller and a similar geographical location”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including creating cohort instances as taught by Andoni as both Tautz and Andoni are concerned with failures in wind turbines, as cohorts can be tailored to include specific data for analysis in order to provide “an automated data-driven model building process that enables even inexperienced users to quickly and easily build highly accurate models based on a specified data set” (Andoni, ¶ 0022). Regarding claim 6 Tautz as modified does not teach: “generating an event and alarm vendor agnostic representation of event and alarm data creating a feature matrix, wherein the feature matrix includes a unique feature identifier for each feature of the event and alarm data and one or more features from the event and alarm data, and extracting patterns of events based on the feature matrix, the training of the first set of failure prediction models using a deep neural network being further based on the patterns of events.” Andoni teaches: “generating an event and alarm vendor agnostic representation of event and alarm data creating a feature matrix, wherein the feature matrix includes a unique feature identifier for each feature of the event and alarm data and one or more features from the event and alarm data, and extracting patterns of events based on the feature matrix, the training of the first set of failure prediction models using a deep neural network being further based on the patterns of events” (Andoni, 
fig 1, ¶ 0057: The specification states “The data extraction module (504) and/or the data preparation module (506) may modify the event and alarm log data from the event and alarm log and or the alarm metadata to represent the event and alarm data in a vendor agnostic and machine readable way (e.g. by structuring the event and alarm log data” (¶ 0183) and “The example feature matrix includes an event description, event code, and unique feature identifier” (¶ 0184). Andoni teaches “user may upload or manually enter know failures, such as past time periods during which individual wind turbines were known to be in a failure state” thereby disclosing “generating an event and alarm vendor agnostic representation of event and alarm data creating a feature matrix” where the “upload(ed) or manually enter(ed) know failures” data represents “generating an event and alarm vendor agnostic representation of event and alarm data” as entering the data would be “structuring the event and alarm log data” as stated in the specification. 
Additionally, Andoni teaches a “feature matrix” as the “event description” would be the failure of “individual wind turbines,” “know failures” discloses an “event code,” and “past time periods” discloses a “unique feature identifier.” Additionally, Andoni teaches “the pre-processor (104) may determine that a neural network is to be generated to solve a combined classification/regression problem that predicts, based on windfarm sensor data, a likelihood of failure at least a particular number of days in advance (e.g., the minimum lead time of the GUI (640))” (¶ 0059) thereby teaching “extracting patterns of events based on the feature matrix, the training of the first set of failure prediction models using a deep neural network being further based on the patterns of events” as solving a classification problem discloses “extracting patterns of events based on the feature matrix” as the “feature matrix” which includes the past failure data is part of the “windfarm sensor data”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including generating an event and alarm vendor agnostic representation of event and alarm data and creating a feature matrix as taught by Andoni as both Tautz and Andoni are concerned with failures in wind turbines, in order to provide “an automated data-driven model building process that enables even inexperienced users to quickly and easily build highly accurate models based on a specified data set” and to “simplify the neural network model to avoid overfitting and to reduce computing resources required to run the model” (Andoni, ¶ 0022). 
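The claim 6 vendor-agnostic feature matrix — a unique feature identifier and event code for each feature of the event and alarm data — can be sketched as follows; the vendor names, event descriptions, codes, and identifiers are all invented for the sketch.

```python
# Hypothetical vendor logs with differing wordings for the same events
logs = [
    ("VendorA", "GEARBOX OVERTEMP"),
    ("VendorB", "Gearbox overtemp"),
    ("VendorA", "PITCH FAULT"),
]

# Vendor-agnostic catalog: normalized description -> (event code, feature id)
catalog = {
    "gearbox overtemp": ("E100", "F001"),
    "pitch fault": ("E200", "F002"),
}

# Feature matrix rows: [unique feature identifier, event code, count]
counts = {}
for vendor, desc in logs:
    code, fid = catalog[desc.lower()]
    counts[(fid, code)] = counts.get((fid, code), 0) + 1
feature_matrix = [[fid, code, n] for (fid, code), n in sorted(counts.items())]
```

Mapping every vendor's wording onto one normalized code and identifier is what makes the representation vendor agnostic: downstream pattern extraction sees only the codes, never the vendor-specific text.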
Regarding claim 7 Tautz as modified does not teach: “the first set of failure prediction models is assessed through a softmax function prior to evaluation.” Andoni teaches: the first set of failure prediction models is assessed through a softmax function prior to evaluation (Andoni, ¶ 0054, Andoni teaches “The final classification output of the neural network may be based on a softmax of the probabilities” (¶ 0054) and “the automated model engine generates the neural network to have two output nodes” where the “two output nodes” indicate the classification. The “final classification output of the neural network may be based on a softmax” discloses the assessment “through a softmax function” is before or “prior to evaluation”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including assessment through a softmax function as taught by Andoni, as both Tautz and Andoni are concerned with failures in wind turbines, as softmax transforms input values into values between 0 and 1 thereby allowing these values to be interpreted as probabilities simplifying the neural network model. 
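As the claim 7 discussion notes, softmax maps raw scores into values between 0 and 1 that sum to one and can be read as class probabilities. A minimal sketch, with invented input scores:

```python
import math

def softmax(scores):
    # Shift by the max for numerical stability, then exponentiate and normalize
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # e.g., per-class failure scores
```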
Regarding claim 8 Tautz as modified does not teach: “extracting patterns of events based on the feature matrix comprises counting a number of event codes of events that occurred during a time interval using the feature matrix and sequence the event codes to include dynamics of events in a longitudinal time dimension.” Andoni teaches: “extracting patterns of events based on the feature matrix comprises counting a number of event codes of events that occurred during a time interval using the feature matrix and sequence the event codes to include dynamics of events in a longitudinal time dimension” (Andoni, ¶ 0058-¶ 0059, Andoni teaches “user may upload or manually enter know failures, such as past time periods during which individual wind turbines were known to be in a failure state” (¶ 0058) where “know failures” reads on the “event codes” and “past time” reads on “a time interval.” Andoni teaches solving a classification problem using “the feature matrix” (see claim 6 above) where the classification may be based on “the number of event codes of events that occurred during a time interval.” The “past time” or the “time interval” would be considered part of the “dynamics of events.” The time frame over which the “known failures” occurs discloses a “longitudinal time dimension” as the “known failures” all occur within the time frame.) It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including extracting patterns of events as taught by Andoni as both Tautz and Andoni are concerned with failures in wind turbines, in order to classify events according to a time frame in order to “simplify the neural network model to avoid overfitting and to reduce computing resources required to run the model” (Andoni, ¶ 0022). 
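The claim 8 operation — counting event codes per time interval and sequencing the counts along a longitudinal time dimension — can be sketched as follows; the event log, codes, and four-hour interval are invented for the sketch.

```python
# Hypothetical (hour, event_code) log entries for one turbine
events = [(0, "E100"), (1, "E200"), (1, "E100"), (5, "E100"), (6, "E200")]

interval = 4  # hours per counting window (illustrative)

# Count the number of each event code occurring within each time interval
windows = {}
for hour, code in events:
    w = hour // interval
    windows.setdefault(w, {})
    windows[w][code] = windows[w].get(code, 0) + 1

# Sequence the per-interval counts in time order so the result reflects
# the dynamics of events along the longitudinal time dimension
sequence = [windows[w] for w in sorted(windows)]
```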
Regarding claim 9 Tautz as modified does not teach: “each of the first set of failure prediction models predict failures of multiple components.” Andoni teaches: “each of the first set of failure prediction models predict failures of multiple components” (Andoni, Fig 4, Andoni teaches in the “identify goal” step (fig 4 step 430) to “Predict Target(s)” then to “Forecast Failure” thereby teaching the models “predict failures of multiple components”). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including predicting failures of multiple components as taught by Andoni as both Tautz and Andoni are concerned with failures in wind turbines, in order to produce an accurate system reliability prediction to provide “an automated data-driven model building process that enables even inexperienced users to quickly and easily build highly accurate models based on a specified data set” (Andoni, ¶ 0022). Regarding independent claim 10 Tautz teaches: “A component failure prediction system” (Tautz, Abstract) “receive historical wind turbine component failure data and wind turbine asset data from one or more SCADA systems during a first period of time” (Tautz, Abstract, page 11 col 1 4th and 5th paragraph, Tautz teaches “simple trending of SCADA data has demonstrated good abilities to detect anomalies” and “extensive historical failure data are required, if the methods are able to reliably diagnose failures” (page 11 col 1 4th and 5th paragraph) where “simple trending” and “extensive” disclose that the data is from “a first period of time” and “historical failure data” discloses “historical wind turbine component failure data” as the “SCADA data” is data from wind turbines (abstract)). 
“receive first historical sensor data of the first period of time, the first historical sensor data including sensor data from one or more sensors of one or more components of any number of renewable energy assets, the first historical sensor data indicating at least one first failure associated with the one or more components of the renewable energy asset during the first period of time” (Tautz, page 2 col 2 4th paragraph, page 1 col 1 2nd paragraph-col 2 1st paragraph, page 3 col 1 last paragraph-col 2 1st paragraph: The specification states “The data extraction module 508 may optionally prepare the historical sensor data (sensor data over a past period of time” (¶ 0113). Tautz teaches collecting “data over a long period” (page 2 col 2 4th paragraph) thereby disclosing “historical” sensor data as “long period” discloses “a past period of time.” Tautz also teaches “all large utility scale WTs have a standard supervisory control and data acquisition (SCADA) system principally used for performance monitoring” where the system provides “a wealth of data” where “the range and type of signals recorded can vary widely from one turbine type to another” and the data collected is used for “early failure detection” (page 1 col 1 2nd paragraph). Therefore, Tautz teaches “sensor data from one or more sensors of one or more components” where the data used for “early failure detection” contains “sensor data indicating at least one first failure.” Additionally, Tautz teaches that a comparison of “results for a nine turbine onshore wind farm of 2 MW turbines were made” where “Historical and real time analyses helped the operator to detect problems” (page 3 col 1 last paragraph-col 2 1st paragraph) thereby teaching “the first historical sensor data indicating at least one first failure associated with the one or more components of the renewable energy asset during the first period of time”). 
“divide a period of time into different classes to train failure prediction models for a first component to create multi-class classifications” (Tautz, page 5-6 § 3.3.2 Artificial neural network: Tautz teaches “ANNs are a way of determining non-linear relationships between observations using training data” (§ 3.3.2) and “models used for fault detection were ANN, ANN ensemble, . . . k-nearest-neighbor ANN” where the “Modelling used several time-steps of wind speed, ‘wind deviation’ (assumed to stand for yaw err), blade pitch angle, generator torque and previous time-steps of the target variable as inputs using an ARX approach” (§ 3.3.2, page 6, 1st column, 2nd paragraph) where “time-steps of wind speed . . .” and “models used for fault detection” disclose “dividing a period of time into different classes to train failure prediction models” and thereby disclose “a first component to create multi-class classifications” as the time-step classifications are used to determine a failure or non-failure classification). Tautz does not teach “different lead times.” Andoni teaches using “different lead times” (Andoni, fig 6, ¶ 0057-¶ 0059, ¶ 0089: Andoni teaches the failure prediction “should provide a minimum lead time (e.g., 3 days in the illustrated example)” as seen in fig 6 (640), disclosing the minimum lead time may be greater than 3 days. Additionally, Andoni teaches the “neural network model may, in a particular example, increase failure lead time from 3-5 days to 30-40 days” teaching a variety of lead times. Therefore, the combination of Tautz and Andoni discloses the limitation “dividing a period of time into different classes to train failure prediction models for a first component using different lead times to create multi-class classifications”). 
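Dividing the period before a known failure into different classes keyed to different lead-time windows, as mapped above, can be sketched as follows; the 3-day and 30-day boundaries echo Andoni's cited examples, but the labeling scheme, day numbers, and class names are assumptions invented for the sketch.

```python
FAILURE_DAY = 100  # hypothetical day on which the component failed

def lead_time_class(day):
    # Multi-class label based on the lead-time window before the failure
    lead = FAILURE_DAY - day
    if lead <= 0:
        return "failed"
    if lead <= 3:
        return "imminent"  # within the 3-day minimum lead time
    if lead <= 30:
        return "warning"   # within the 30-day extended lead time
    return "normal"

labels = [lead_time_class(d) for d in (50, 90, 98, 100)]
```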
A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including different lead times as taught by Andoni as both Tautz and Andoni are concerned with failures in wind turbines, in order to provide a system where longer lead time “can result in reduced downtime and monetary savings for an operator of the wind farm” (Andoni, ¶ 0089). Tautz does not teach: “at least one processor; and” “memory containing instructions, the instructions being executable by the at least one processor to:” “apply an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data, the anomaly detection algorithm utilizing factor analysis and weighting factors to identify data associated with the first failure;” “train a first set of failure prediction models using a deep neural network, the deep neural network is trained using both the first historical sensor data combined with the different lead times as a first input and the first filtered historical sensor data combined with the different lead times as a second input, the deep neural network identifying anomalies from normal operations of the renewable energy assets that lead to at least the first failure based on comparing outputs from the first input and the second input, the input of the first set of failure prediction models being balanced inputs, the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network 
to create the first set of failure prediction models, each of the different lead times corresponding to a different lead time before a predicted failure;” “evaluate each of the first set of failure prediction models by applying a confusion matrix analysis to predictions made at each of the different lead time windows, the confusion matrix including metrics for true positives, false positives, true negatives, and false negatives as well as a positive prediction value;” “compare the confusion matrix and the positive prediction value of each of the first set of failure prediction models;” “select at least one failure prediction model of the first set of failure prediction models based on the comparison of the confusion matrixes, the positive prediction values, and the lead time windows to create a first selected failure prediction model, the first selected failure prediction model including the lead time window before the predicted failure;” “receive first current sensor data of a second period of time, the first current sensor data including sensor data from the one or more sensors of the one or more components of the renewable energy asset;” “apply the first selected failure prediction model to the current sensor data to generate a first failure prediction of a failure of at least one component of the one or more components; and” “compare the first failure prediction to a trigger criteria; and” “generate and transmitting a first alert based on the comparison of the failure prediction to the trigger criteria, the first alert indicating the at least one component of the one or more components and information regarding the failure prediction.” Gandhi teaches: “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data” and “filtered historical sensor data” (Gandhi, ¶ 0004, ¶ 0038-¶ 0041: Gandhi teaches “detecting 
abnormalities and failures related to the rotating equipment” (¶ 0004) by collecting historical data (¶ 0040). Gandhi teaches the “Data sets that are indicative of a failure mode can be removed from the historical data” (¶ 0040) thereby disclosing “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data”). A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including an algorithm to remove failure data from historical data as taught by Gandhi as both Tautz and Gandhi are concerned with detecting failures in rotating equipment, in order to determine normal behavior for comparison to actual behavior when determining whether a deviation exists or not where the deviation “may be indicative of a failure of the rotating equipment” (Gandhi ¶ 0041). 
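The Gandhi-mapped step — removing data sets indicative of a failure mode from the historical data to obtain filtered historical data — can be sketched as follows; the row layout and the set of failure days are invented for the sketch.

```python
# Hypothetical historical rows: (day, sensor_value)
historical = [(d, 100.0 + d) for d in range(10)]

# Days known to fall inside a failure mode (illustrative)
failure_days = {3, 4, 7}

# Remove data associated with the failure to obtain the
# first filtered historical sensor data
filtered = [row for row in historical if row[0] not in failure_days]
```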
Ide teaches: “the anomaly detection algorithm utilizing factor analysis and weighting factors to identify data associated with the first failure” (Ide, ¶ 0005: Ide teaches using a “plurality of mixture models” where “each mixture model is a function of the plurality of variables, learning weighting factors” (¶ 0005) and “determining a Gaussian Markov random field (GMRF) model from surviving mixture models” etc., where the GMRF model is used to “detect anomalous sensor data values that could be indicative of an impending system failure” (¶ 0005) disclosing using “factor analysis” and “weighting factors to identify data associated with the first failure”; therefore, the combination of Tautz, Gandhi, and Ide discloses the limitation “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data the anomaly detection algorithm utilizing factor analysis and weighting factors to identify data associated with the first failure.”) Ide teaches: “the input of the first set of failure prediction models being balanced inputs,” (Ide, ¶ 0005: Ide teaches “a method for detecting early indications of equipment failure in an industrial system” using “sensor training data collected from industrial equipment” where “the sensor training data includes samples of sensor values for a plurality of variables” (¶ 0005). Moreover, the sensor data is used for detecting patterns and to “initialize a plurality of mixture models” (¶ 0005) where “unimportant models” are removed from the plurality of mixture models (¶ 0005) disclosing the “first set of failure prediction models being balanced inputs”). 
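Balanced inputs of the kind discussed here — and the approximately 50%-50% failure/success split Andoni describes for claim 4 above — can be sketched by downsampling the majority class; the class counts and labels are invented for the sketch.

```python
import random

random.seed(0)  # deterministic sketch

# Hypothetical labeled samples: 1 = failure state, 0 = success state
samples = [1] * 10 + [0] * 90

failures = [s for s in samples if s == 1]
successes = [s for s in samples if s == 0]

# Downsample the majority class for an approximate 50%-50% split,
# controlling over-representation of the success class in the input
k = min(len(failures), len(successes))
balanced = failures[:k] + random.sample(successes, k)
```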
A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including an algorithm that utilizes factor analysis, a weighting factor to identify data associated with failure and balanced inputs as disclosed by Ide, as factor analysis can find hidden patterns while weighting factors and balanced inputs allow for control of under- or over-representation in the data, thereby providing a system where “a control operator can judge if the current operation is good or bad” (Ide, ¶ 0004). Gandenberger teaches: “at least one processor; and memory containing instructions, the instructions being executable by the at least one processor” (Gandenberger, ¶ 0035): “train a first set of failure prediction models using a deep neural network, wherein the deep neural network is trained using both the first historical sensor data combined with the different lead times as a first input and the first filtered historical sensor data combined with the different lead times as a second input, the deep neural network identifying anomalies from normal operation of the renewable energy assets that lead to at least the first failure based on comparing outputs from the first input and the second input, (Gandenberger, fig 1, ¶ 0050-¶ 0053, ¶ 0083, ¶ 0144-¶ 0145, ¶ 0150-¶ 0153, ¶ 0167: Gandenberger teaches an “event prediction model” referred to as “a predictive model” (¶ 0144) where the “predictive model” is the “data analytics operation” (¶ 0083) and where the “asset data platform” performs “data analytics” including “anomaly detection” and “failure prediction” (¶ 0053) disclosing “anomaly detection” and “failure prediction” are a result of a “predictive 
model.” Moreover “obtaining a set of training data for the event prediction model, which may comprise historical values for a set of data variables that are potentially suggestive of whether or not an event occurrence the given type is forth coming” (¶ 0151) where the “event prediction model” may be “artificial neural networks” (¶ 0151) which read on “a deep neural network” disclosing a “deep neural network” that is trained using “historical sensor data” to “identify anomalies” and predict “failures.” Moreover, “some representative types of assets that may be monitored by asset data platform 102” include but are not limited to “electric power generation equipment (e.g., wind turbines” (¶ 0050) Additionally, “different event prediction models may comprise event prediction models configured to preemptively predict event occurrences of the same given type that were created using different sets of training data” where “different sets of training data” disclose “the first input and the second input” and “event prediction models” discloses “outputs from the first input and the second input.”) While Gandenberger teaches using different sets of training data where the training data is made up of “historical sensor data” Gandenberger does not teach “first filtered historical sensor data.” Gandhi teaches: “first filtered historical sensor data” (see above). Andoni teaches: “different lead times” (see above). 
Therefore the combination of Tautz, Gandenberger, Gandhi, and Andoni teaches the limitation “train a first set of failure prediction models using a deep neural network, wherein the deep neural network is trained using both the first historical sensor data combined with the different lead times as a first input and the first filtered historical sensor data combined with the different lead times as a second input, the deep neural network identifying anomalies from normal operation of the renewable energy assets that lead to at least the first failure based on comparing outputs from the first input and the second input.” Gandenberger teaches: “evaluate each of the first set of failure prediction models by applying a confusion matrix analysis to predictions made at each of the different lead time windows, the confusion matrix including metrics for true positives, false positives, true negatives, and false negatives as well as a positive predictive value; compare the confusion matrix and the positive prediction value of each of the first set of failure prediction models; select at least one failure prediction model of the first set of failure prediction models based on the comparison of the confusion matrixes, the positive prediction values, and the lead time windows to create a first selected failure prediction model, the first selected failure prediction model including the lead time window before the predicted failure” (Gandenberger, Table 1, Eqn. 1, Eqn. 
2, ¶ 0158-¶ 0163, Gandenberger teaches using a “confusion matrix” to “quickly assess the event prediction model’s performance” (¶ 0159) disclosing “evaluating each of the first set of failure prediction models by applying a confusion matrix analysis to predictions” where Gandenberger’s “confusion matrix” includes “metrics for true positives, false positives, true negatives, and false negatives” (see Table 1, ¶ 0159). Gandenberger also teaches using “precision” to evaluate prediction models, where precision = N_TP / (N_TP + N_FP) (Eqn. 1) (precision is part of the confusion matrix (¶ 0159)), where N_TP “represents the number of individual ‘true positive’ predictions output by the event prediction model” (¶ 0159) and N_FP “represents the number of individual ‘false positive’ predictions output by the event prediction model” (¶ 0159), thereby teaching using “a positive prediction value,” as “a positive prediction value” is equal to N_TP / (N_TP + N_FP) as evidenced by Trevethan (2nd page, 2nd col, figure 1). Gandenberger teaches “individual predictions output by the event prediction model falling into each of these four categories may also be used to calculate metrics that characterize aspects of the event prediction model’s performance” (¶ 0160). In addition to “precision,” Gandenberger also teaches recall = N_TP / (N_TP + N_FN) (Eqn. 2), where N_FN “represents the number of individual ‘false negative’ predictions output by the event prediction model” (¶ 0159), and using both “precision” and “recall” for a reliable comparison between different prediction models (¶ 0162-¶ 0163). Gandenberger also teaches that different lead times are taken into consideration when evaluating a prediction model (¶ 0156). 
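The precision and recall metrics discussed above (Eqns. 1 and 2) can be sketched in a few lines; the confusion-matrix counts used here are hypothetical, not drawn from Gandenberger or Trevethan.

```python
def precision(n_tp: int, n_fp: int) -> float:
    """Eqn. 1: positive predictive value = N_TP / (N_TP + N_FP)."""
    return n_tp / (n_tp + n_fp)

def recall(n_tp: int, n_fn: int) -> float:
    """Eqn. 2: recall = N_TP / (N_TP + N_FN)."""
    return n_tp / (n_tp + n_fn)

# Hypothetical counts for one failure prediction model at one lead time window.
print(precision(8, 2))  # 0.8
print(recall(8, 8))     # 0.5
```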
Therefore, Gandenberger teaches “compare the confusion matrix and the positive prediction value of each of the first set of failure prediction models.”) “receive first current sensor data of a second period of time, the first current sensor data including sensor data from the one or more sensors of the one or more components of the renewable energy asset” (Gandenberger, fig 1, fig 6, ¶ 0050, ¶ 0168-¶ 0171, Gandenberger teaches “comparing different event prediction models that are configured to preemptively predict event occurrences of the same given type” (¶ 0169) by “applying models to test data” (fig 6 step 602), disclosing “first current sensor data of a second time period” as the prediction models would have been developed using a different set of data. Gandenberger also teaches the data may be from “electric power generation equipment (e.g., wind turbines, …)” (¶ 0050), thereby disclosing “sensor data from the one or more sensors of the one or more components of the renewable energy asset”); “apply the first selected failure prediction model to the current sensor data to generate a first failure prediction of a failure of at least one component of the one or more components” (Gandenberger, fig 1, fig 6, ¶ 0053, ¶ 0168-¶ 0171, Gandenberger teaches “comparing different event prediction models that are configured to preemptively predict event occurrences of the same given type” (¶ 0169) by “applying models to test data,” “evaluate predictions output by models using event windows,” and “determine ‘catch’ and ‘false flag’ numbers” (fig 6 steps 602-606), thereby disclosing “apply the first selected failure prediction model to the current sensor data” as “applying models to test data” includes “the first selected failure prediction model” and “event occurrences of the same given type” includes “a failure of at least one component” as Gandenberger teaches the “asset data platform (102)” is programmed to “perform data analytics operations based on the asset-related data 
received from data sources (104), including but not limited to failure prediction, …” (¶ 0053)); “compare the first failure prediction to a trigger criteria; and generate and transmit a first alert based on the comparison of the failure prediction to the trigger criteria, the first alert indicating the at least one component of the one or more components and information regarding the failure prediction” (Gandenberger, fig 6, ¶ 0114, ¶ 0179-¶ 0187, Gandenberger teaches a “catch” and a “false flag” where “a ‘catch’ is generally defined as a correct prediction that an event occurrence is forthcoming and a ‘false flag’ is generally defined as an incorrect prediction that an event occurrence is forthcoming” (¶ 0179) where “an event occurrence” may be a “failure prediction” (¶ 0114). Gandenberger also teaches the prediction models may have their output grouped into “alerts” based on “criteria” where “the criteria that is used to group individual positive predictions into alerts may take various forms” (¶ 0181), one form being “the criteria may dictate that a new alert begins when the model changes from outputting a negative prediction to outputting a positive prediction and ends when the model changes from outputting a (positive) prediction back to outputting a negative prediction” where the “positive prediction” indicates “the failure prediction”). 
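The alert-grouping criterion quoted from ¶ 0181 (a new alert opens on a negative-to-positive transition and closes on the reverse transition) can be sketched as follows; the function name and the 0/1 encoding of model outputs are illustrative assumptions, not Gandenberger's implementation.

```python
def group_alerts(predictions):
    """Group per-time-step 0/1 model outputs into alert spans.

    A new alert begins when the output flips from negative (0) to
    positive (1) and ends when it flips back to negative.
    Returns inclusive (start, end) index pairs.
    """
    alerts, start = [], None
    for i, p in enumerate(predictions):
        if p and start is None:
            start = i                      # negative -> positive: open an alert
        elif not p and start is not None:
            alerts.append((start, i - 1))  # positive -> negative: close it
            start = None
    if start is not None:                  # alert still open at end of data
        alerts.append((start, len(predictions) - 1))
    return alerts

print(group_alerts([0, 1, 1, 0, 0, 1, 0]))  # [(1, 2), (5, 5)]
```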
Both Tautz and Gandenberger are concerned with identifying abnormalities in wind turbines therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz as modified by Andoni and Gandhi by including training a neural network to detect anomalies and predict failures, the confusion matrix, the positive predictive value (precision), and alerts with trigger criteria in determining a failure prediction model as disclosed by Gandenberger in order to provide a system and method where the “primary purpose of an event prediction model is to enable a data analytics platform to preemptively notify a user that an event occurrence of a given type is forthcoming sufficiently in advance of when the event occurrence actually happens, so that action can be taken to address the event occurrence before it actually happens” in order to “mitigate the costs that may otherwise result from an unexpected occurrence of an undesirable event like an asset failure” (¶ 0153). 
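The claimed dual-input arrangement (raw historical sensor data as one input, failure-filtered data as the other, with anomalies flagged by comparing the outputs) can be illustrated with a minimal sketch; the scoring rule, names, and toy data are assumptions for illustration, not the applicant's or Gandenberger's implementation.

```python
def anomaly_scores(model, raw_input, filtered_input):
    """Score each raw sample by its deviation from the average output
    the model produces on the failure-filtered ("normal") baseline."""
    baseline = sum(model(x) for x in filtered_input) / len(filtered_input)
    return [abs(model(x) - baseline) for x in raw_input]

# Toy model and data: the second raw sample departs from normal operation.
model = lambda x: sum(x)
raw = [[1.0, 1.0], [5.0, 5.0]]
filtered = [[1.0, 1.0], [1.0, 1.1]]
scores = anomaly_scores(model, raw, filtered)
```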
Warde-Farley teaches: “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network to create the first set of failure prediction models” (Warde-Farley, fig 1, fig 2, ¶ 0005, ¶ 0067-¶ 0071: Warde-Farley teaches a “deep neural network” (¶ 0005) where “the action selection network (110) may include a sequence of one or more convolutional layers, followed by a recurrent layer” (¶ 0067) and “an embedded network (112)” [that] “may include a sequence of one or more convolutional layers followed by a fully-connected output layer” (¶ 0069) disclosing “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network.” Additionally, Warde-Farley teaches “receiving an observation characterizing a current state of the environment and an observation characterizing a goal state of the environment” (¶ 0008) and “a training subsystem that is configured to train the action selection neural network based on the rewards generated by the reward subsystem using reinforcement learning techniques” (¶ 0010) which discloses creating “the first set of failure prediction models”); It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including applying the well-known layers of a neural network system of fully connected, recurrent, and convolutional to create a failure prediction model as taught by Warde-Farley. 
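The layer stack Warde-Farley is cited for (convolutional, recurrent, and fully connected layers within one deep network) can be sketched as a toy forward pass; the weights, sizes, and layer parameters below are arbitrary assumptions, and no training is shown.

```python
import math
import random

def conv1d(xs, kernel):
    # Convolutional layer: valid 1-D cross-correlation over time steps.
    k = len(kernel)
    return [sum(xs[i + j] * kernel[j] for j in range(k))
            for i in range(len(xs) - k + 1)]

def rnn(xs, w_in, w_rec):
    # Recurrent layer: a tanh hidden state carried across the sequence.
    h = 0.0
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
    return h

def dense(h, w, b):
    # Fully connected head with sigmoid -> failure probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-(w * h + b)))

random.seed(0)
signal = [random.gauss(0, 1) for _ in range(32)]  # one sensor channel
features = conv1d(signal, [0.2, 0.5, 0.2])        # convolutional layer
state = rnn(features, 0.8, 0.5)                   # recurrent layer
prob = dense(state, 1.2, -0.1)                    # fully connected layer
```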
A person of ordinary skill in the art would understand that the well-known fully connected, recurrent, and convolutional layers could be applied to any physical application of a neural network, such as creating “a first set of failure prediction models,” and that by applying Warde-Farley’s generic mathematical algorithms associated with fully connected, recurrent, and convolutional layers the limitation “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network to create a first set of failure prediction models” will be attained. Andoni teaches: “each of the different lead times corresponding to a different lead time before a predicted failure” (Andoni, fig 6, ¶ 0057-¶ 0059, ¶ 0089: Andoni teaches the failure prediction “should provide a minimum lead time (e.g., 3 days in the illustrated example)” as seen in fig 6 (640), disclosing the minimum lead time may be greater than 3 days. Additionally, Andoni teaches the “neural network model may, in a particular example, increase failure lead time from 3-5 days to 30-40 days,” teaching a variety of lead times. Therefore, Andoni discloses “each of the different lead times corresponding to a different lead time window before a predicted failure”). A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including different lead times as taught by Andoni, as both Tautz and Andoni are concerned with failures in wind turbines, in order to provide a system where longer lead time “can result in reduced downtime and monetary savings for an operator of the wind farm” (Andoni, ¶ 0089). 
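Andoni's varying lead times, combined with the claimed division of a period of time into classes, can be illustrated by labeling each time step with the lead-time window it falls in before a known failure; the window boundaries (1, 3, and 7 days) are hypothetical values, not taken from Andoni.

```python
def lead_time_class(t, failure_t, windows=(1, 3, 7)):
    """Return 0 for normal data, else 1..len(windows), where class 1
    is the lead-time window closest to the failure at time failure_t."""
    lead = failure_t - t
    if lead < 0 or lead > max(windows):
        return 0                      # outside every lead-time window
    for cls, bound in enumerate(sorted(windows), start=1):
        if lead <= bound:
            return cls
    return 0

labels = [lead_time_class(t, failure_t=10) for t in range(11)]
# days 0-2 -> normal (0); days 3-6 -> class 3; days 7-8 -> class 2;
# days 9-10 -> class 1 (closest to the failure)
```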
Regarding claim 11: Claim 11 recites analogous limitations to claim 2 above and is therefore rejected on the same premise. 
Regarding claim 12: Claim 12 recites analogous limitations to claim 3 above and is therefore rejected on the same premise. 
Regarding claim 13: Claim 13 recites analogous limitations to claim 4 above and is therefore rejected on the same premise. 
Regarding claim 14: Claim 14 recites analogous limitations to claim 5 above and is therefore rejected on the same premise. 
Regarding claim 15: Claim 15 recites analogous limitations to claim 6 above and is therefore rejected on the same premise. 
Regarding claim 16: Claim 16 recites analogous limitations to claim 7 above and is therefore rejected on the same premise. 
Regarding claim 17: Claim 17 recites analogous limitations to claim 8 above and is therefore rejected on the same premise. 
Regarding claim 18: Claim 18 recites analogous limitations to claim 9 above and is therefore rejected on the same premise. 
Regarding independent claim 19: Tautz teaches: “A method comprising: receiving historical wind turbine component failure data and wind turbine asset data from one or more SCADA systems during a first period of time” (Tautz, Abstract, page 11 col 1 4th and 5th paragraphs, Tautz teaches “simple trending of SCADA data has demonstrated good abilities to detect anomalies” and “extensive historical failure data are required, if the methods are able to reliably diagnose failures” (page 11 col 1 4th and 5th paragraphs) where “simple trending” and “extensive” disclose data is from “a first period of time” and “historical failure data” discloses “historical wind turbine component failure data” as the “SCADA data” is from wind turbines (abstract)). 
“receiving first historical sensor data of the first period of time, the first historical sensor data including sensor data from one or more sensors of one or more components of any number of renewable energy assets, the first historical sensor data indicating at least one first failure associated with the one or more components of the renewable energy asset during the first period of time” (Tautz, page 2 col 2 4th paragraph, page 1 col 1 2nd paragraph-col 2 1st paragraph, page 3 col 1 last paragraph-col 2 1st paragraph: The specification states “The data extraction module 508 may optionally prepare the historical sensor data (sensor data over a past period of time)” (¶ 0113). Tautz teaches collecting “data over a long period” (page 2 col 2 4th paragraph) thereby disclosing “historical” sensor data as “long period” discloses “a past period of time.” Tautz also teaches “all large utility scale WTs have a standard supervisory control and data acquisition (SCADA) system principally used for performance monitoring” where the system provides “a wealth of data” where “the range and type of signals recorded can vary widely from one turbine type to another” and the data collected is used for “early failure detection” (page 1 col 1 2nd paragraph). Therefore, Tautz teaches “sensor data from one or more sensors of one or more components” where the data used for “early failure detection” contains “sensor data indicating at least one first failure.” Additionally, Tautz teaches that a comparison of “results for a nine turbine onshore wind farm of 2 MW turbines were made” where “Historical and real time analyses helped the operator to detect problems” (page 3 col 1 last paragraph-col 2 1st paragraph) thereby teaching “the first historical sensor data indicating at least one first failure associated with the one or more components of the renewable energy asset during the first period of time”). 
“dividing a period of time into different classes to train failure prediction models for a first component to create multi-class classifications” (Tautz, page 5-6 § 3.3.2 Artificial neural network: Tautz teaches “ANNs are a way of determining non-linear relationships between observations using training data” (§ 3.3.2) and “models used for fault detection were ANN, ANN ensemble, . . . k-nearest-neighbor ANN” where the “Modelling used several time-steps of wind speed, ‘wind deviation’ (assumed to stand for yaw err), blade pitch angle, generator torque and previous time-steps of the target variable as inputs using an ARX approach” (§ 3.3.2, page 6, 1st column, 2nd paragraph) where “time-steps of wind speed . . .” and “models used for fault detection” discloses “dividing a period of time into different classes to train failure prediction models” and thereby disclosing “a first component to create multi-class classifications” as the time-step classifications are used to determine a failure or non-failure classification). Tautz does not teach “different lead times.” Andoni teaches using “different lead times” (Andoni, fig 6, ¶ 0057-¶ 0059, ¶ 0089: Andoni teaches the failure prediction “should provide a minimum lead time (e.g., 3 days in the illustrated example)” as seen in fig 6 (640), disclosing the minimum lead time may be greater than 3 days. Additionally, Andoni teaches the “neural network model may, in a particular example, increase failure lead time from 3-5 days to 30-40 days” teaching a variety of lead time. Therefore, the combination of Tautz and Andoni discloses the limitation “dividing a period of time into different classes to train failure prediction models for a first component using different lead times to create multi-class classifications”). 
A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including different lead times as taught by Andoni, as both Tautz and Andoni are concerned with failures in wind turbines, in order to provide a system where longer lead time “can result in reduced downtime and monetary savings for an operator of the wind farm” (Andoni, ¶ 0089). Tautz does not teach: “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data, the anomaly detection algorithm utilizing factor analysis and weighting factors to identify data associated with the first failure;” “training a first set of failure prediction models using a deep neural network, wherein the deep neural network is trained using both the first historical sensor data combined with the different lead times as a first input and the first filtered historical sensor data with the different lead times as a second input, the deep neural network identifying anomalies from normal operations of the renewable energy assets that lead to at least the first failure based on comparing outputs from the first input and the second input, the input of the first set of failure prediction models being balanced inputs, the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network to create the first set of failure prediction models, each of the different lead times corresponding to a different lead time 
window before a predicted failure; “evaluating each of the first set of failure prediction models by applying a confusion matrix analysis to predictions made at each of the different lead time windows, the confusion matrix including metrics for true positives, false positives, true negatives, and false negatives as well as a positive prediction value; “comparing the confusion matrix and the positive prediction value of each of the first set of failure prediction models;” “selecting at least one failure prediction model of the first set of failure prediction models based on the comparison of the confusion matrixes, the positive prediction values, and the lead time windows to create a first selected failure prediction model, the first selected failure prediction model including the lead time window before the predicted failure;” “receiving first current sensor data of a second period of time, the first current sensor data including sensor data from the one or more sensors of the one or more components of the renewable energy asset;” “applying the first selected failure prediction model to the current sensor data to generate a first failure prediction of a failure of at least one component of the one or more components;” “comparing the first failure prediction to a trigger criteria; and” “generating and transmitting a first alert based on the comparison of the failure prediction to the trigger criteria, the first alert indicating the at least one component of the one or more components and information regarding the failure prediction.” Gandhi teaches: “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove 
data associated with first failure to obtain a first filtered historical sensor data” and “filtered historical sensor data” (Gandhi, ¶ 0004, ¶ 0038-¶ 0041: Gandhi teaches “detecting abnormalities and failures related to the rotating equipment” (¶ 0004) by collecting historical data (¶ 0040). Gandhi teaches the “Data sets that are indicative of a failure mode can be removed from the historical data” (¶ 0040) thereby disclosing “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data”). A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including an algorithm to remove failure data from historical data as taught by Gandhi, as both Tautz and Gandhi are concerned with detecting failures in rotating equipment, in order to determine normal behavior for comparison to actual behavior when determining whether a deviation exists or not, where the deviation “may be indicative of a failure of the rotating equipment” (Gandhi ¶ 0041). 
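Gandhi's removal of failure-indicative data sets from the historical record can be sketched as a simple time-window filter; the (timestamp, value) record structure and the two-step margin are assumptions for illustration only.

```python
def filter_failure_data(records, failure_times, margin=2):
    """Return records whose timestamps are more than `margin` time
    steps away from every known failure, i.e. a "filtered historical
    sensor data" set usable as a normal-behavior baseline."""
    return [(t, v) for t, v in records
            if all(abs(t - f) > margin for f in failure_times)]

# Toy history with a known failure at time step 1.
history = [(0, 1.0), (1, 9.5), (2, 9.7), (5, 1.1), (6, 1.0)]
filtered = filter_failure_data(history, failure_times=[1])  # [(5, 1.1), (6, 1.0)]
```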
Ide teaches: “the anomaly detection algorithm utilizing factor analysis and weighting factors to identify data associated with the first failure” (Ide, ¶ 0005: Ide teaches using a “plurality of mixture models” where “each mixture model is a function of the plurality of variables, learning weighting factors” (¶ 0005) and “determining a Gaussian Markov random field (GMRF) model from surviving mixture models” etc., where the GMRF model is used to “detect anomalous sensor data values that could be indicative of an impending system failure” (¶ 0005) disclosing using “factor analysis” and “weighting factors to identify data associated with the first failure”; therefore the combination of Tautz, Gandhi, and Ide discloses the limitation “applying an anomaly detection algorithm to the received first historical sensor data of the first period of time to remove data associated with first failure to obtain a first filtered historical sensor data, the anomaly detection algorithm utilizing factor analysis and weighting factors to identify data associated with the first failure.”) Ide teaches: “the input of the first set of failure prediction models being balanced inputs,” (Ide, ¶ 0005: Ide teaches “a method for detecting early indications of equipment failure in an industrial system” using “sensor training data collected from industrial equipment” where “the sensor training data includes samples of sensor values for a plurality of variables” (¶ 0005). Moreover, the sensor data is used for detecting patterns and to “initialize a plurality of mixture models” (¶ 0005) where “unimportant models” are removed from the plurality of mixture models (¶ 0005) disclosing the “first set of failure prediction models being balanced inputs”). 
A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including an algorithm that utilizes factor analysis, a weighting factor to identify data associated with failure, and balanced inputs as disclosed by Ide, as factor analysis can find hidden patterns, and weighting factors and balanced inputs allow for control of under- or over-representation in the data, thereby providing a system where “a control operator can judge if the current operation is good or bad” (Ide, ¶ 0004). Gandenberger teaches: “training a first set of failure prediction models using a deep neural network, wherein the deep neural network is trained using both the first historical sensor data combined with the different lead times as a first input and the first filtered historical sensor data combined with the different lead times as a second input, the deep neural network identifying anomalies from normal operation of the renewable energy assets that lead to at least the first failure based on comparing outputs from the first input and the second input,” (Gandenberger, fig 1, ¶ 0050-¶ 0053, ¶ 0083, ¶ 0144-¶ 0145, ¶ 0150-¶ 0153, ¶ 0167: Gandenberger teaches an “event prediction model” referred to as “a predictive model” (¶ 0144) where the “predictive model” is the “data analytics operation” (¶ 0083) and where the “asset data platform” performs “data analytics” including “anomaly detection” and “failure prediction” (¶ 0053) disclosing “anomaly detection” and “failure prediction” are a result of a “predictive model.” Moreover “obtaining a set of training data for the event prediction model, which may comprise historical values for a set of data variables 
that are potentially suggestive of whether or not an event occurrence of the given type is forthcoming” (¶ 0151) where the “event prediction model” may be “artificial neural networks” (¶ 0151) which read on “a deep neural network” disclosing a “deep neural network” that is trained using “historical sensor data” to “identify anomalies” and predict “failures.” Moreover, “some representative types of assets that may be monitored by asset data platform 102” include but are not limited to “electric power generation equipment (e.g., wind turbines” (¶ 0050). Additionally, “different event prediction models may comprise event prediction models configured to preemptively predict event occurrences of the same given type that were created using different sets of training data” where “different sets of training data” disclose “the first input and the second input” and “event prediction models” discloses “outputs from the first input and the second input.”) While Gandenberger teaches using different sets of training data where the training data is made up of “historical sensor data,” Gandenberger does not teach “first filtered historical sensor data.” Gandhi teaches: “first filtered historical sensor data” (see above). Andoni teaches: “different lead times” (see above). 
Therefore the combination of Tautz, Gandenberger, Gandhi, and Andoni teaches the limitation “training a first set of failure prediction models using a deep neural network, wherein the deep neural network is trained using both the first historical sensor data combined with the different lead times as a first input and the first filtered historical sensor data combined with the different lead times as a second input, the deep neural network identifying anomalies from normal operation of the renewable energy assets that lead to at least the first failure based on comparing outputs from the first input and the second input.” Gandenberger teaches: “evaluating each of the first set of failure prediction models by applying a confusion matrix analysis to predictions made at each of the different lead time windows, the confusion matrix including metrics for true positives, false positives, true negatives, and false negatives as well as a positive predictive value; comparing the confusion matrix and the positive prediction value of each of the first set of failure prediction models; selecting at least one failure prediction model of the first set of failure prediction models based on the comparison of the confusion matrixes, the positive prediction values, and the lead time windows to create a first selected failure prediction model, the first selected failure prediction model including the lead time window before the predicted failure” (Gandenberger, Table 1, Eqn. 1, Eqn. 
2, ¶ 0158-¶ 0163, Gandenberger teaches using a “confusion matrix” to “quickly assess the event prediction model’s performance” (¶ 0159) disclosing “evaluating each of the first set of failure prediction models by applying a confusion matrix analysis to predictions” where Gandenberger’s “confusion matrix” includes “metrics for true positives, false positives, true negatives, and false negatives” (see Table 1, ¶ 0159). Gandenberger also teaches using “precision” to evaluate prediction models, where precision = N_TP / (N_TP + N_FP) (Eqn. 1) (precision is part of the confusion matrix (¶ 0159)), where N_TP “represents the number of individual ‘true positive’ predictions output by the event prediction model” (¶ 0159) and N_FP “represents the number of individual ‘false positive’ predictions output by the event prediction model” (¶ 0159), thereby teaching using “a positive prediction value,” as “a positive prediction value” is equal to N_TP / (N_TP + N_FP) as evidenced by Trevethan (2nd page, 2nd col, figure 1). Gandenberger teaches “individual predictions output by the event prediction model falling into each of these four categories may also be used to calculate metrics that characterize aspects of the event prediction model’s performance” (¶ 0160). In addition to “precision,” Gandenberger also teaches recall = N_TP / (N_TP + N_FN) (Eqn. 2), where N_FN “represents the number of individual ‘false negative’ predictions output by the event prediction model” (¶ 0159), and using both “precision” and “recall” for a reliable comparison between different prediction models (¶ 0162-¶ 0163). Gandenberger also teaches that different lead times are taken into consideration when evaluating a prediction model (¶ 0156). 
Therefore, Gandenberger teaches “comparing the confusion matrix and the positive prediction value of each of the first set of failure prediction models”); “receiving first current sensor data of a second period of time, the first current sensor data including sensor data from the one or more sensors of the one or more components of the renewable energy asset” (Gandenberger, fig 1, fig 6, ¶ 0050, ¶ 0168-¶ 0171, Gandenberger teaches “comparing different event prediction models that are configured to preemptively predict event occurrences of the same given type” (¶ 0169) by “applying models to test data” (fig 6 step 602), disclosing “first current sensor data of a second time period” as the prediction models would have been developed using a different set of data. Gandenberger also teaches the data may be from “electric power generation equipment (e.g., wind turbines, …)” (¶ 0050), thereby disclosing “sensor data from the one or more sensors of the one or more components of the renewable energy asset”); “applying the first selected failure prediction model to the current sensor data to generate a first failure prediction of a failure of at least one component of the one or more components” (Gandenberger, fig 1, fig 6, ¶ 0053, ¶ 0168-¶ 0171, Gandenberger teaches “comparing different event prediction models that are configured to preemptively predict event occurrences of the same given type” (¶ 0169) by “applying models to test data,” “evaluate predictions output by models using event windows,” and “determine ‘catch’ and ‘false flag’ numbers” (fig 6 steps 602-606), thereby disclosing “applying the first selected failure prediction model to the current sensor data” as “applying models to test data” includes “the first selected failure prediction model” and “event occurrences of the same given type” includes “a failure of at least one component” as Gandenberger teaches the “asset data platform (102)” is programmed to “perform data analytics operations based on the 
asset-related data received from data sources (104), including but not limited to failure prediction, …” (¶ 0053)); “comparing the first failure prediction to a trigger criteria; and generating and transmitting a first alert based on the comparison of the failure prediction to the trigger criteria, the first alert indicating the at least one component of the one or more components and information regarding the failure prediction” (Gandenberger, fig 6, ¶ 0114, ¶ 0179-¶ 0187, Gandenberger teaches a “catch” and a “false flag” where “a ‘catch’ is generally defined as a correct prediction that an event occurrence is forthcoming and a ‘false flag’ is generally defined as an incorrect prediction that an event occurrence is forthcoming” (¶ 0179) where “an event occurrence” may be a “failure prediction” (¶ 0114). Gandenberger also teaches the prediction models may have their output grouped into “alerts” based on “criteria” where “the criteria that is used to group individual positive predictions into alerts may take various forms” (¶ 0181), one form being “the criteria may dictate that a new alert begins when the model changes from outputting a negative prediction to outputting a positive prediction and ends when the model changes from outputting a (positive) prediction back to outputting a negative prediction” where the “positive prediction” indicates “the failure prediction”). 
Both Tautz and Gandenberger are concerned with identifying abnormalities in wind turbines therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz as modified by Andoni and Gandhi by including training a neural network to detect anomalies and predict failures, the confusion matrix, the positive predictive value (precision), and alerts with trigger criteria in determining a failure prediction model as disclosed by Gandenberger in order to provide a system and method where the “primary purpose of an event prediction model is to enable a data analytics platform to preemptively notify a user that an event occurrence of a given type is forthcoming sufficiently in advance of when the event occurrence actually happens, so that action can be taken to address the event occurrence before it actually happens” in order to “mitigate the costs that may otherwise result from an unexpected occurrence of an undesirable event like an asset failure” (¶ 0153). 
Warde-Farley teaches: “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network to create the first set of failure prediction models” (Warde-Farley, fig 1, fig 2, ¶ 0005, ¶ 0067-¶ 0071: Warde-Farley teaches a “deep neural network” (¶ 0005) where “the action selection network (110) may include a sequence of one or more convolutional layers, followed by a recurrent layer” (¶ 0067) and “an embedded network (112)” (that) “may include a sequence of one or more convolutional layers followed by a fully-connected output layer” (¶ 0069) disclosing “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network.” Additionally, Warde-Farley teaches “receiving an observation characterizing a current state of the environment and an observation characterizing a goal state of the environment” (¶ 0008) and “a training subsystem that is configured to train the action selection neural network based on the rewards generated by the reward subsystem using reinforcement learning techniques”(¶ 0010) which discloses creating “the first set of failure prediction models”); It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including applying the well-known layers of a neural network system of fully connected, recurrent, and convolutional to create a failure prediction model as taught by Warde-Farley. 
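The claimed layer stack (convolutional, recurrent, and fully connected) can be sketched as a bare untrained forward pass in NumPy (a minimal illustration under assumed shapes and random weights; none of this code comes from Warde-Farley or the application):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    """Valid 1-D convolution over a single sensor channel."""
    return np.array([x[i:i + len(kernel)] @ kernel
                     for i in range(len(x) - len(kernel) + 1)])

def rnn_final_state(seq, w_x, w_h):
    """Minimal recurrent layer: tanh state update, return final state."""
    h = np.zeros(w_h.shape[0])
    for x_t in seq:
        h = np.tanh(w_x @ np.atleast_1d(x_t) + w_h @ h)
    return h

def dense_sigmoid(h, w, b):
    """Fully connected output layer producing a failure probability."""
    return 1.0 / (1.0 + np.exp(-(w @ h + b)))

# Forward pass: convolutional -> recurrent -> fully connected output.
signal = rng.standard_normal(32)                     # one sensor channel
feat = conv1d(signal, rng.standard_normal(3))        # 30 conv features
state = rnn_final_state(feat,
                        rng.standard_normal((4, 1)),
                        rng.standard_normal((4, 4)))
p_fail = dense_sigmoid(state, rng.standard_normal(4), 0.0)
```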
A person of ordinary skill in the art would understand that the well-known fully connected, recurrent, and convolutional layers could be applied to any physical application of a neural network, such as creating “a first set of failure prediction models,” and that by applying Warde-Farley’s generic mathematical algorithms associated with fully connected, recurrent, and convolutional layers the limitation “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network to create a first set of failure prediction models” will be attained. Andoni teaches: “each of the different lead times corresponding to a different lead time window before a predicted failure” (Andoni, fig 6, ¶ 0057-¶ 0059, ¶ 0089: Andoni teaches the failure prediction “should provide a minimum lead time (e.g., 3 days in the illustrated example)” as seen in fig 6 (640), disclosing the minimum lead time may be greater than 3 days. Additionally, Andoni teaches the “neural network model may, in a particular example, increase failure lead time from 3-5 days to 30-40 days” teaching a variety of lead times. Therefore, Andoni discloses “each of the different lead times corresponding to a different lead time window before a predicted failure”). A person of ordinary skill in the art would understand that an algorithm is a mathematical manipulation of data regardless of how the data was generated; therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the methods for determining failure detection and condition monitoring of wind turbines as taught by Tautz by including different lead times as taught by Andoni as both Tautz and Andoni are concerned with failures in wind turbines, in order to provide a system where longer lead time “can result in reduced downtime and monetary savings for an operator of the wind farm” (Andoni, ¶ 0089). 
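Andoni's varying lead times amount to re-labeling the same sensor history with different windows before a recorded failure; this can be sketched as follows (the indices and window lengths are hypothetical, chosen only to mirror the 3-day and larger windows discussed above):

```python
def label_with_lead_time(num_samples, failure_idx, lead):
    """Label each (e.g., daily) sample 1 if it falls within the lead-time
    window immediately before the recorded failure, else 0."""
    start = max(0, failure_idx - lead)
    return [1 if start <= i < failure_idx else 0 for i in range(num_samples)]

# The same 10-day history labeled with a 3-day and a 5-day lead window.
labels_3day = label_with_lead_time(10, failure_idx=8, lead=3)  # days 5-7 positive
labels_5day = label_with_lead_time(10, failure_idx=8, lead=5)  # days 3-7 positive
```

A separate model trained per labeling would then correspond to a set of models with different lead time windows before a predicted failure.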
Response to Arguments Applicant’s arguments (remarks) filed on 09/02/2025 have been fully considered. Regarding Claim Rejections - 35 U.S.C. § 103, pages 10-19 of Applicant’s remarks, Applicant argues “Tautz discloses a system which utilizes data from a turbine supervisory control and data acquisition (SCADA) system to monitor wind turbines. While Tautz discusses using SCADA systems for wind turbine monitoring and mentions neural networks for fault detection, Tautz does not teach the specific approach of training a neural network using both filtered and unfiltered historical sensor data combined with different lead times as separate inputs” (remarks page 11). Examiner respectfully disagrees. Tautz teaches “ANNs are a way of determining non-linear relationships between observations using training data” (§ 3.3.2) and “two different ANN model configurations in a study of up to 14 months’ SCADA data from ten 2MW offshore WTs” (§ 3.3.2 page 5) where “14 months’ SCADA data” discloses historical data as the Specification teaches “The data extraction module 508 may optionally prepare the historical sensor data (sensor data over a past period of time) for training failure predictions modules” (Spec, ¶ 0113). Moreover, “extensive historical failure data are required, if the methods are able to reliably diagnose failures” (Tautz, page 11 col 1 4th and 5th paragraph) where an ANN (an algorithm) is used to detect faults (failures) as stated above. Tautz is not relied on to teach “filtered historical sensor data.” Applicant argues “While Andoni mentions that ‘the neural network model may, in a particular example, increase failure lead time from 3-5 days to 30-40 days,’ Andoni does not disclose the specific training approach of comparing neural network outputs from training with filtered versus unfiltered data combined with these lead times. Andoni, paragraph [0089]” (remarks, page 12). Examiner respectfully disagrees. 
Andoni is not relied on to teach “the specific training approach of comparing neural network outputs from training with filtered versus unfiltered data combined with these lead times. Andoni, paragraph [0089].” Andoni is used to disclose “different lead times” and the “neural network model may, in a particular example, increase failure lead time from 3-5 days to 30-40 days” (Andoni, ¶ 0059) teaching a variety of lead times (see Andoni, fig. 6, ¶ 0057-¶ 0059). Applicant argues “Gandhi discloses systems and methods for vibration analysis of rotating equipment. Although Gandhi discusses removing failure data from historical data to establish normal baselines, Gandhi does not teach the specific approach of training a neural network using both filtered and unfiltered historical sensor data combined with different lead times as separate inputs” (remarks page 13). Examiner respectfully disagrees. Gandhi is not relied on to teach “the specific approach of training a neural network using both filtered and unfiltered historical sensor data combined with different lead times as separate inputs.” Gandhi teaches “detecting abnormalities and failures related to the rotating equipment” (Gandhi, ¶ 0004) similar to Tautz as wind turbines are rotating equipment, by collecting historical data (Gandhi, ¶ 0040). Moreover “Data sets that are indicative of a failure mode can be removed from the historical data” (Gandhi, ¶ 0040) thereby disclosing “filtered historical sensor data” (see the rejection above). Applicant argues “Ide discloses a method of early indications of equipment failure. Ide appears to disclose using mixture models and weighting factors for anomaly detection, Ide does not teach the specific approach of training a neural network using both filtered and unfiltered historical sensor data combined with different lead times as separate inputs” (remarks page 14). Examiner respectfully disagrees. 
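Gandhi's removal of failure-mode data sets from the historical record, yielding a filtered (normal-behavior) training set alongside the original unfiltered one, can be sketched as a simple filter (illustrative only; the sample values and flags are hypothetical):

```python
def filter_failure_windows(samples, failure_flags):
    """Return historical samples with failure-mode records removed,
    leaving a normal-behavior training set; the original `samples`
    list is the unfiltered counterpart."""
    return [s for s, is_failure in zip(samples, failure_flags)
            if not is_failure]

history = [0.9, 1.1, 4.7, 5.2, 1.0]            # e.g., vibration readings
flags = [False, False, True, True, False]      # readings tagged as failure mode
filtered = filter_failure_windows(history, flags)
```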
Ide is not relied on to disclose “the specific approach of training a neural network using both filtered and unfiltered historical sensor data combined with different lead times as separate inputs.” Ide discloses a system for detecting indications of failures in equipment using a “plurality of mixture models” where “each mixture model is a function of the plurality of variables, learning weighting factors” (¶ 0005) and “determining a Gaussian Markov random field (GMRF) model from surviving mixture models” etc., where the GMRF model is used to “detect anomalous sensor data values that could be indicative of an impending system failure” (¶ 0005) disclosing “weighting factors” and “factor analysis.” The specification states “the component failure prediction system 104 may utilize factor analysis to identify the importance of features within sensor data” (PG Pub. ¶ 0075). The “mixture models” are “a function of the plurality of variables, learning weighting factors” and “surviving mixture models” are used to determine a GMRF model therefore “mixture models” disclose “factor analysis” as “surviving mixture models” indicate an “importance of features.” Applicant argues “Warde-Farley discloses a system which includes a training subsystem configured to train an action selection neural network based on rewards generated by a reward subsystem using reinforcement learning techniques. Warde-Farley does not teach the specific approach of training a neural network using both filtered and unfiltered historical sensor data combined with different lead times as separate inputs” (remarks page 15). Examiner respectfully disagrees. 
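Ide's weighting factors over candidate mixture models, with low-weight models pruned so that only “surviving” models remain, can be sketched in simplified single-variable form (this is an illustration of the weighting-and-pruning idea only, not Ide's actual GMRF procedure; all names and thresholds are hypothetical):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)))

def learn_weighting_factors(data, models):
    """Score each candidate (mu, sigma) model by its average likelihood
    over the data, then normalize the scores into weighting factors."""
    scores = [sum(gaussian_pdf(x, mu, s) for x in data) / len(data)
              for mu, s in models]
    total = sum(scores)
    return [sc / total for sc in scores]

def surviving_models(models, weights, threshold=0.1):
    """Prune models whose weighting factor falls below the threshold."""
    return [m for m, w in zip(models, weights) if w >= threshold]

data = [0.1, -0.2, 0.05]                 # observations clustered near zero
candidates = [(0.0, 1.0), (10.0, 1.0)]   # one plausible, one implausible model
weights = learn_weighting_factors(data, candidates)
kept = surviving_models(candidates, weights)
```

In this toy setting the model centered far from the data receives a near-zero weighting factor and is pruned, mirroring the idea that surviving models indicate the important structure in the data.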
Warde-Farley is not relied on to teach “the specific approach of training a neural network using both filtered and unfiltered historical sensor data combined with different lead times as separate inputs.” Warde-Farley teaches a “deep neural network” (Warde-Farley, ¶ 0005) where “the action selection network (110) may include a sequence of one or more convolutional layers, followed by a recurrent layer” (Warde-Farley, ¶ 0067) and “an embedded network (112)” (that) “may include a sequence of one or more convolutional layers followed by a fully-connected output layer” (Warde-Farley, ¶ 0069) disclosing “the deep neural network including layers of a fully connected neural network, convolutional neural network, and a recurrent neural network.” Additionally, Warde-Farley teaches “receiving an observation characterizing a current state of the environment and an observation characterizing a goal state of the environment” (Warde-Farley, ¶ 0008) and “a training subsystem that is configured to train the action selection neural network based on the rewards generated by the reward subsystem using reinforcement learning techniques” (Warde-Farley, ¶ 0010) which discloses creating “the first set of failure prediction models.” Applicant argues “Gandenberger discloses a system which evaluates different event prediction modules. Gandenberger discusses event prediction models and classification metrics, explaining how predictions can be classified as ‘true positive,’ ‘false positive,’ ‘true negative,’ and ‘false negative,’ such as in paragraph [0157]. Gandenberger fails to teach training using both filtered and unfiltered historical data as separate inputs” (remarks, pages 16-17). Examiner respectfully disagrees. Gandenberger is not relied on to teach “training using both filtered and unfiltered historical data as separate inputs.” Applicant argues “Trevethan discloses methods to avoid misconceptions about sensitivity, specificity, and predictive values of screening tests. 
Trevethan does not teach the specific approach of training a neural network using both filtered and unfiltered historical sensor data combined with different lead times as separate inputs” (remarks page 18). Examiner respectfully disagrees. Trevethan is not relied on to teach “the specific approach of training a neural network using both filtered and unfiltered historical sensor data combined with different lead times as separate inputs.” Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Graham, III et al., U.S. Pub. No. 2012/0143565 A1, teaches a system and method for predicting wind turbine component failures using operational data to determine probability of a failure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Denise R Karavias whose telephone number is (469)295-9152. The examiner can normally be reached 7:00 - 3:00 M-F. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arleen M. Vazquez can be reached at 571-272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DENISE R KARAVIAS/Examiner, Art Unit 2857 /MICHAEL J DALBO/Primary Examiner, Art Unit 2857

Prosecution Timeline

Dec 28, 2018
Application Filed
Oct 05, 2021
Non-Final Rejection — §103, §112
Feb 14, 2022
Response Filed
Apr 14, 2022
Final Rejection — §103, §112
Oct 19, 2022
Request for Continued Examination
Oct 24, 2022
Response after Non-Final Action
Dec 19, 2022
Non-Final Rejection — §103, §112
Jun 27, 2023
Response Filed
Sep 11, 2023
Final Rejection — §103, §112
Feb 21, 2024
Request for Continued Examination
Feb 27, 2024
Response after Non-Final Action
Mar 25, 2024
Non-Final Rejection — §103, §112
Aug 02, 2024
Response Filed
Oct 02, 2024
Final Rejection — §103, §112
Apr 07, 2025
Request for Continued Examination
Apr 09, 2025
Response after Non-Final Action
Apr 23, 2025
Non-Final Rejection — §103, §112
Sep 02, 2025
Response Filed
Dec 19, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12571867
NUCLEAR MAGNETIC RESONANCE ANALYSIS SYSTEMS AND METHODS
2y 5m to grant Granted Mar 10, 2026
Patent 12535809
MODULAR, GENERAL PURPOSE, AUTOMATED, ANOMALOUS DATA SYNTHESIZERS FOR ROTARY PLANTS
2y 5m to grant Granted Jan 27, 2026
Patent 12535374
SENSOR FOR PARALLEL MEASUREMENT OF PRESSURE AND ACCELERATION AND USE OF THE SENSOR IN A VEHICLE BATTERY
2y 5m to grant Granted Jan 27, 2026
Patent 12529625
IMPROVING DATA MONITORING AND QUALITY USING AI AND MACHINE LEARNING
2y 5m to grant Granted Jan 20, 2026
Patent 12461165
METHOD FOR BALANCING BATTERY MODULES
2y 5m to grant Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
63%
Grant Probability
98%
With Interview (+34.9%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
