DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9, 10, 13, and 15-21 are rejected under 35 U.S.C. 103 as being unpatentable over
Zhanpan ZHANG et al. (hereinafter ZHANG), US 2022/0292666 A1,
in view of Marcello Tedesco et al. (hereinafter Ted), US 2021/0125428 A1,
and further in view of Arnab Chowdhury et al. (hereinafter Chow), US 2020/0380336 A1.
In regard to claim 1:
ZHANG discloses:
- receiving machine historical sensor data and their failure log and generating a failure labeling model to generate training data from a failure prediction window, a history window and a failure infected interval settings;
In [0026]:
As an overview, embodying systems and methods provide an AI (Artificial Intelligence) anomaly pattern recognition model that leverages a diagnostic expert domain knowledge base and deep learning technique to automatically detect an industrial asset (e.g., wind turbine) operational anomaly and identify root cause(s) corresponding to the detected anomaly. In some embodiments, a large set of training cases can be established based on historical diagnostic records that include multiple root causes. For each training case, several pairs of time series of sensor measurements may be configured and represented as scatter plots, where a combination of data patterns in or derived from the scatter plots indicates a specific root cause of an anomaly reflected in the sensor measurements (i.e., data).
(BRI: a time series of sensor measurements configured and represented from historical records is a “history window”)
In [0028]:
FIG. 1 is a schematic block diagram of an example system 100 that may be associated with some embodiments herein. The system includes an industrial asset 105 that may generally operate normally for substantial periods of time but occasionally experience an anomaly that results in a malfunction or other abnormal operation of the asset
(BRI: malfunction or abnormal operation is a “failure”)
In [0028]:
a set of sensors 110 S1 through SN may monitor one or more characteristics of the asset 105 (e.g., acceleration, vibration, noise, speed, energy consumed, output power, etc.). The information from the sensors may, according to some embodiments described herein, be collected and used to facilitate detection and/or prediction of abnormal operation (i.e., an anomaly) of operating asset 105 and the root cause corresponding to the detected anomaly.
In [0045]:
As such, each scatter plot captures a specific pair of time series data derived from the sensor measurements for a wind turbine (or other asset). In FIG. 5A, the high tower acceleration measurements are due to wind turbine blade misalignment and in FIG. 5B the high tower acceleration measurements captured in the scatter plot are due to an incorrect setting of a specific control parameter for the wind turbine.
(BRI: collecting asset characteristics to predict abnormal operations provides the data for setting failure intervals; the claimed settings are therefore captured by this disclosure)
In [0026]:
a large set of training cases can be established based on historical diagnostic records that include multiple root causes.
(BRI: A diagnostic record detailing an anomaly that stems from multiple underlying issues can be considered a type of failure log)
In [0037]:
The training data establishment component 320 or functionality of deep learning model system 310 may operate to establish a set of training cases based on the historical diagnostic records of the wind turbine operational data 305 that includes multiple root causes embedded within the data. The set of training cases may be used in training the deep learning model generated by component 325.
In [0038]:
deep learning model building and validation component 325 may operate to develop (i.e., generate) a deep learning classification model that builds connections (e.g., transfer functions, algorithms, etc.) between the scatter plots based on the operational data and root causes for anomalies in the operational data by processing an input of high-dimensional images including data pixels corresponding to the scatter plots to generate an output including root cause labels associated with one or more anomalies derived from data patterns in the images.
In [0064]:
The machine learning engine processes the combination of images to recognize patterns therein that correspond to one of a plurality of defined anomalies
(BRI: a machine learning engine that uses a combination of images to recognize patterns corresponding to a plurality of defined anomalies can be, and often is, an ensemble classifier)
In [0053]:
In some embodiments, at least a portion of the received historical time series sensor data may be transformed to a format, configuration, level, resolution, etc. from its raw configuration as obtained by the wind turbine (or other asset) sensors
In [0055]:
At operation 615, a root cause label is assigned to each visual image including the scatter plots representing an operational anomaly based on a reference
In [0055]:
In some aspects, a standardized ground truth label is assigned to each generated image. In some regards, abnormal sensor measurements (i.e., anomalies) may be caused by different root causes. In particular, each root cause requires a specific type of maintenance and repair practice. As such, identification of the correct root cause can provide actionable insights with respect to on-going operations, preventative maintenance, and corrective maintenance aspects of a wind turbine (and/or other assets).
In [0057]:
Continuing to operation 620, a deep learning model and more particularly a convolutional neural network (CNN) model is trained using a first subset of the labeled images and tested based on a second subset of the labeled images applied to the trained model to evaluate the performance of the trained model
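Examiner's note (illustrative only, not part of the cited disclosures): the claimed windowed labeling can be sketched as follows. The timestamps, failure log, and window lengths are hypothetical values chosen for illustration.

```python
# Illustrative sketch of windowed failure labeling: each timestamp t is
# labeled 1 if a logged failure falls inside its prediction window
# (t, t + pred_window], and timestamps inside the failure-infected
# interval after a failure are dropped. In a full pipeline, each kept
# timestamp would also carry sensor features from its history window
# [t - hist_window, t].

def label_timestamps(timestamps, failures, pred_window, infected_interval):
    labels = {}
    for t in timestamps:
        # Skip samples contaminated by a recent failure.
        if any(f <= t < f + infected_interval for f in failures):
            continue
        # Positive when a failure occurs within the prediction window.
        labels[t] = int(any(t < f <= t + pred_window for f in failures))
    return labels

labels = label_timestamps(range(20), failures=[8, 15],
                          pred_window=3, infected_interval=2)
```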
- providing the failure labeling model's output data to a failure classification model or pipeline that is generated automatically to learn failure signal behavior and also providing the failure labeling model's output to an anomaly detection model or pipeline to detect an abnormal behavior in real time;
In [0011]:
FIG. 8 is an illustrative example representation of data associated with labeling images in accordance with some embodiments;
In [0038]:
The deep learning model building and validation component 325 or functionality of deep learning model system 310 may operate to convert or transform the scatter plots (or other representations of wind turbine operational data 305) into visual representation images of the scatter plots (or other representations of the operational data). For example, deep learning model building and validation component 325 may operate to develop (i.e., generate) a deep learning classification model that builds connections (e.g., transfer functions, algorithms, etc.) between the scatter plots based on the operational data and root causes for anomalies in the operational data by processing an input of high-dimensional images including data pixels corresponding to the scatter plots to generate an output including root cause labels associated with one or more anomalies derived from data patterns in the images. The deep learning model herein is a deep learning classification model developed to build a connection between scatter plots including data representations of wind turbine anomalies and the corresponding root causes thereof. In some aspects, a convolutional neural network (CNN) model is developed to capture and process pixel data to recognize the complex data patterns in images of the scatter plots and to further classify anomaly cases in the training set as being associated with a particular root cause for the determined anomaly classification.
In [0047]:
In some aspects, there might generally be a large variation in wind turbine operation data due to a plurality or combination of sensor, turbine control, and environment factors. The combination and complexity of factors presents a challenge to accurately distinguishing between normal wind turbine operation and abnormal wind turbine operation
In [0028]:
the information from the sensors may, according to some embodiments described herein, be collected and used to facilitate detection and/or prediction of abnormal operation (i.e., an anomaly) of operating asset 105 and the root cause corresponding to the detected anomaly.
(BRI: abnormal prediction is a “failure prediction”)
ZHANG does not explicitly disclose:
- and applying an ensemble classifier to the outputs of the data failure classification model and the anomaly detection model to predict a machine failure.
However, Ted discloses:
- and applying an ensemble classifier to the outputs of the data failure classification model and the anomaly detection model to predict a machine failure.
In [0005]:
with principles of inventive concepts a vehicle monitoring system may monitor a vehicle characteristic and, from the monitoring, may determine the state of a vehicle component. The system may set an alert and may communicate that alert to a user or supervisory authority. The state of the vehicle component may relate to a vehicle tire and to the potential delamination of a tire.
In [0058]:
In example embodiments one or more classifiers may be trained using tire characteristic data from one or more sensors. If multiple classifiers are trained, they may be trained to provide an indication of the degree to which tire delamination has taken place and “live” signals from an active vehicle may be compared against the one or more trained classifiers to determine the probability of failure (for example, delamination) within a given period (the “period” may be expressed as time, or distance, for example). The probability may take into account various driving conditions, such as velocity, load, or road surface quality, for example, in addition to sensor data such as pressure or temperature data, for example,
In [0067]:
A system and method may employ machine learning to recognize a tire fault and to determine the severity of the fault. Machine learning may be used constantly or may be engaged after an initial indication of a fault (for example, a periodic signal anomaly) is detected.
In [0056]:
principles of inventive concepts may assess the possibility of the onset and/or propagation of a delamination by detecting and analyzing the variation of movement and other sensed characteristics of a tire. These sensed characteristics may be used to determine the degree of failure (for example, delamination) and the time of failure migration. In example embodiments data from triaxial accelerometers, (and/or, possibly, other sensors which may disclose the time/acceleration signature associated with an angle of delamination, for example) may be used to develop a learning process (to train a classifier, for example) to refine the process of recognizing the onset of tire failures.
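Examiner's note (illustrative only, not part of the cited disclosures): a generic ensemble over a failure classifier and an anomaly detector, as mapped to Ted's multiple trained classifiers, can be sketched as below. The base models are hypothetical stand-ins represented as callables returning a probability.

```python
# Sketch of an ensemble combining a failure-classification score and an
# anomaly-detection score into a single failure prediction by weighted
# probability averaging.

def ensemble_predict(sample, failure_clf, anomaly_det,
                     weights=(0.5, 0.5), threshold=0.5):
    p_fail = failure_clf(sample)      # learned failure-signal behavior
    p_anom = anomaly_det(sample)      # real-time abnormality score
    score = weights[0] * p_fail + weights[1] * p_anom
    return score >= threshold, score

pred, score = ensemble_predict(
    sample={"vibration": 0.9},
    failure_clf=lambda s: 0.8,
    anomaly_det=lambda s: 0.6,
)
```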
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine ZHANG and Ted.
ZHANG teaches failure labeling, windows for historical failures and detection of abnormality in operation.
Ted teaches ensemble (multiple classifiers) for failure classification and anomaly detection.
One of ordinary skill would have been motivated to combine ZHANG and Ted to improve the operational life of the tire system and avoid the costly catastrophes that may be associated with it (Ted [0054]).
ZHANG and Ted do not explicitly disclose:
- A method to maintain a machine, comprising:
However, Chow discloses:
- A method to maintain a machine, comprising:
In [0031]:
A system, method, and computer-readable medium are disclosed for a hardware component failure prediction system that can incorporate a time-series dimension as an input
In [0038]:
Data is provided to the system by a plurality of internet of things (IoT) devices 130 and 135 that are connected to information handling system 100 by network 140.
(BRI: the machine is a IoT for maintenance)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine ZHANG, Ted, and Chow.
ZHANG teaches failure labeling, windows for historical failures and detection of abnormality in operation.
Ted teaches ensemble (multiple classifiers) for failure classification and anomaly detection.
Chow teaches IoT maintenance.
One of ordinary skill would have been motivated to combine ZHANG, Ted, and Chow to provide an accuracy improvement (Chow [0061]).
In regard to claim 2:
ZHANG discloses:
- comprising automatically identifying failure instances from a historical data stream by the failure labeling model
In [0028]:
FIG. 1 is a schematic block diagram of an example system 100 that may be associated with some embodiments herein. The system includes an industrial asset 105 that may generally operate normally for substantial periods of time but occasionally experience an anomaly that results in a malfunction or other abnormal operation of the asset.
In [0028]:
the information from the sensors may, according to some embodiments described herein, be collected and used to facilitate detection and/or prediction of abnormal operation (i.e., an anomaly) of operating asset 105 and the root cause corresponding to the detected anomaly.
In [0034]:
FIG. 3 is a schematic block diagram depicting an overall system 300, in accordance with some embodiments. System 300 illustrates wind turbine operational data 305 being provided as input(s) to a deep learning model development and implementation system, device, service, or apparatus (also referred to herein simply as a “system” or “service”) 310 that outputs, at least, data 330 indicative of wind turbine anomalies detected by deep learning model system 310 and the root cause(s) corresponding to the detected anomalies.
In [0036]:
some scenarios, operational data 305 might include historical operational data associated with one or more wind turbines.
In [0039]:
output of deep learning model system 310 including an indication of the detected one or more anomalies derived from data patterns in the images and the corresponding root cause labels
In regard to claim 3:
ZHANG discloses:
- comprising using time series similarities to relabel a failure and normal signals
In [0055]:
At operation 615, a root cause label is assigned to each visual image including the scatter plots representing an operational anomaly based on a reference to and leveraging of, at least in part, a digitized knowledge domain data structure or system associated with the industrial asset(s) in combination with the data patterns in each image. In some aspects, a standardized ground truth label is assigned to each generated image. In some regards, abnormal sensor measurements (i.e., anomalies) may be caused by different root causes. In particular, each root cause requires a specific type of maintenance and repair practice. As such, identification of the correct root cause can provide actionable insights with respect to on-going operations, preventative maintenance, and corrective maintenance aspects of a wind turbine (and/or other assets).
In [0064]:
the machine learning engine processes the combination of images to recognize patterns therein that correspond to one of a plurality of defined anomalies (e.g., 8 anomalies in the example of FIG. 12). The output 1215 of the machine learning engine includes an indication of the specific root cause (e.g., anomaly 2=blade calibration and anomaly 4=incorrect ramp rate) in response to the specific inputs 1210.
(BRI: Using time series similarities to relabel failure and normal signals is a process where unlabeled or ambiguously labeled data points are assigned a definitive label (either "failure" or "normal") based on how closely their patterns or shapes match known, pre-established examples of each class)
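Examiner's note (illustrative only, not part of the cited disclosures): the relabeling-by-similarity interpretation above can be sketched as a nearest-exemplar assignment. The exemplar sequences and labels are hypothetical.

```python
# Sketch of relabeling by time-series similarity: an ambiguously labeled
# sequence receives the label of its closest known exemplar under
# Euclidean distance.

def relabel(sequence, exemplars):
    # exemplars: list of (known_sequence, label) pairs of equal length
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(exemplars, key=lambda e: dist(sequence, e[0]))[1]

label = relabel([0.9, 1.1, 1.0],
                [([0.0, 0.1, 0.0], "normal"),
                 ([1.0, 1.0, 1.0], "failure")])
```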
ZHANG, and Ted do not explicitly disclose:
- and increasing the quality of training data for the failure classification model or pipeline.
However, Chow discloses:
- and increasing the quality of training data for the failure classification model or pipeline.
In [0042]:
to allow for accurate and efficient results to be provided by the deep neural network, the data needs to be preprocessed to better enable the deep neural networks to converge rapidly to a solution that can accurately predict device failure.
(BRI: preprocessing improves the quality of the training data supplied to the neural network)
In regard to claim 4:
ZHANG and Ted do not explicitly disclose:
- comprising real-time general streaming that allows businesses to link machines and assets.
However, Chow discloses:
- comprising real-time general streaming that allows businesses to link machines and assets.
In [0043]:
Once training and validation datasets are formed that include information relevant to continuous and categorical features, that information can be used to determine a failure prediction model for the hardware device type. Modeling stage 240 utilizes the sample sets to first train the double-stacked long-short term memory deep neural network, and then validate the trained solution to perform additional tuning. Once the solution has been satisfactorily tuned, the solution can be used to help enable failure prediction for devices not included in the sample sets. This information can be provided during deployment stage 250 to business units that can utilize the information in support of customers.
In [0044]:
FIG. 3 is a simplified flow diagram illustrating a set of steps involved in data processing stage 240, in accord with embodiments of the present invention. As discussed above, information collected from a set of devices falling in an IoT device type of interest
In regard to claim 5:
ZHANG and Ted do not explicitly disclose:
- comprising providing the output of the failure labeling model to generate quality labeled training data.
However, Chow discloses:
- comprising providing the output of the failure labeling model to generate quality labeled training data.
In [0042]:
to allow for accurate and efficient results to be provided by the deep neural network, the data needs to be preprocessed to better enable the deep neural networks to converge rapidly to a solution that can accurately predict device failure.
In [0063]:
The failure prediction system discussed above is designed such that it is generic and can be used for any IoT hardware components that are connected to provide telemetry data. While the above discussion has focused on an example of hard disk drives, embodiments are not limited to HDDs, but can be applied to any IoT device.
In regard to claim 9:
ZHANG and Ted do not explicitly disclose:
- comprising representing machine sensor data as two dimensional (2D) time series data with timestamps and features.
However, Chow discloses:
- comprising representing machine sensor data as two dimensional (2D) time series data with timestamps and features.
In [0034]:
Embodiments of the present invention utilize a deep-learning based architecture for component failure prediction and address a variety of issues inherent in traditional systems. Such issues include: (1) incorporating a time-series dimension is an input; (2) incorporating a combination of multi-dimensional continuous and categorical parameters with only the continuous parameters having a time-series component; (3) addressing a class imbalance problem between devices that have failed and those that have not failed; (4) ensuring that device observation sequences are weighted based on their importance in their ability to predict a next failure; (5) predicting component failure in any day in a certain window of a future time period; and, (6) providing self-learning for the prediction model.
In [0048]:
FIG. 4 is a table 400 illustrating observation ranking for each passing HDD. A primary object of the solution model is to predict whether an IoT device will fail within the next “d” days. To this end, “a” days of observations are selected for each of the passing device samples in the passing device data frames of both the training and validation datasets based on the ranking performed in step 330 (350). As illustrated in FIG. 4, the range of the ranking is [d+1, d+a] with d+a≤x, where x is the minimum threshold of event data occurrences used in 310.
[Chow, FIG. 4: table illustrating observation ranking for each passing HDD]
(BRI: the FIG. 4 representation of observations provides a timestamp for each observation)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine ZHANG, Ted, and Chow.
ZHANG teaches failure labeling, windows for historical failures and detection of abnormality in operation.
Ted teaches ensemble (multiple classifiers) for failure classification and anomaly detection.
Chow teaches IoT maintenance.
One of ordinary skill would have been motivated to combine ZHANG, Ted, and Chow to provide an accuracy improvement (Chow [0061]).
In regard to claim 10:
ZHANG and Ted do not explicitly disclose:
- comprising representing machine sensor data as three- dimensional (3D) time series data sequences with timestamps, history window, and features to capture temporal context.
However, Chow discloses:
- comprising representing machine sensor data as three- dimensional (3D) time series data sequences with timestamps, history window, and features to capture temporal context.
In [0034]:
Embodiments of the present invention utilize a deep-learning based architecture for component failure prediction and address a variety of issues inherent in traditional systems. Such issues include: (1) incorporating a time-series dimension is an input; (2) incorporating a combination of multi-dimensional continuous and categorical parameters with only the continuous parameters having a time-series component; (3) addressing a class imbalance problem between devices that have failed and those that have not failed; (4) ensuring that device observation sequences are weighted based on their importance in their ability to predict a next failure; (5) predicting component failure in any day in a certain window of a future time period; and, (6) providing self-learning for the prediction model.
In [0048]:
FIG. 4 is a table 400 illustrating observation ranking for each passing HDD. A primary object of the solution model is to predict whether an IoT device will fail within the next “d” days. To this end, “a” days of observations are selected for each of the passing device samples in the passing device data frames of both the training and validation datasets based on the ranking performed in step 330 (350). As illustrated in FIG. 4, the range of the ranking is [d+1, d+a] with d+a≤x, where x is the minimum threshold of event data occurrences used in 310.
[Chow, FIG. 4: table illustrating observation ranking for each passing HDD]
(BRI: the FIG. 4 representation of observations provides a timestamp for each observation)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine ZHANG, Ted, and Chow.
ZHANG teaches failure labeling, windows for historical failures and detection of abnormality in operation.
Ted teaches ensemble (multiple classifiers) for failure classification and anomaly detection.
Chow teaches IoT maintenance.
One of ordinary skill would have been motivated to combine ZHANG, Ted, and Chow to provide an accuracy improvement (Chow [0061]).
In regard to claim 13:
ZHANG and Ted do not explicitly disclose:
- augmenting failure data;
- balancing the failure data;
- extracting features from the data;
- if features are extracted, selecting a 2D deep learning model and otherwise selecting a 3D deep learning model;
- and performing failure prediction.
However, Chow discloses:
- augmenting failure data;
In [0008]:
generating the oversampled set of observations from the set of records associated with failed devices in the training dataset further includes synthetically creating repetitive samples using a moving time window. In still a further aspect, synthetically creating repetitive samples using a moving time window further includes generating an oversampled set of observations “d” from “a” actual observations such that for observation “n” in the set of observations, the observation is in a date range characterized by [d+2−n, d+a+1−n].
(BRI: synthetically creating samples associated with the training set of failed devices represents augmenting the failure data)
- balancing the failure data;
In [0098]:
One of the challenges of the feeder ranking application is that of imbalanced data/scarcity of data characterizing the failure class can cause problems with generalization. Specifically, primary distribution feeders are susceptible to different kinds of failures, and one can have very few training examples for each kind of event, making it difficult to reliably extract statistical regularities or determine the features that affect reliability.
In [0099]:
In one particular embodiment, the focus is on most serious failure type, where the entire feeder is automatically taken offline by emergency substation relays, due to some type of fault being detected by sensors. The presently disclosed system for generating data sets can address the challenge of learning with rare positive examples (feeder failures). An actual feeder failure incident is instantaneous: a snapshot of the system at that moment will have only one failure example. To better balance the data, one can employ the rare event prediction setup shown in FIG. 6, labeling any example that had experienced a failure over some time window as positive
- extracting features from the data;
In [0042]:
Data processing steps can include data transformation, such as filtering, ordering, normalization, oversampling, and selecting sample sets. Feature engineering techniques can include defining continuous and categorical features, normalization of continuous features, determining those features of greatest impact to device failure, and the like.
In [0053]:
Continuous feature data is normalized (815). In one embodiment, the data is normalized using a min-max normalization, such that (a) each feature contributes approximately proportionately while predicting the target feature; and (b) gradient descent converges faster with features scaling than without features scaling. Min−max normalization is a normalization strategy that linearly transforms x to y=(x−min)/(max−min), wherein min and max are minimum and maximum values in X, where X is a set of observed values of x.
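Examiner's note (illustrative only, not part of the cited disclosures): the min-max normalization quoted from Chow [0053], y = (x − min)/(max − min), can be sketched as follows.

```python
# Min-max normalization: linearly scales each observed value into [0, 1]
# using the minimum and maximum of the observed set.

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:                  # degenerate constant feature
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

normalized = min_max_normalize([10.0, 15.0, 20.0])   # → [0.0, 0.5, 1.0]
```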
- if features are extracted, selecting a 2D deep learning model and otherwise selecting a 3D deep learning model;
In [0054]:
After processing the categorical and continuous features, the train, validation, and hold-out datasets are separated out using each dataset identifier
(BRI: a DNN-based failure prediction system that incorporates a time-series dimension can utilize a 2D deep learning model, specifically by transforming the time-series data into a 2D format)
In [0031]:
A system, method, and computer-readable medium are disclosed for a hardware component failure prediction system that can incorporate a time-series dimension as an input while also addressing issues related to a class imbalance problem associated with failure data. Embodiments provide this capability through the use of a deep learning-based artificial intelligence binary classification method. Embodiments utilize a double-stacked long short-term memory (DS-LSTM) deep neural network with a first layer of the LSTM passing hidden cell states learned from a sequence of multi-dimensional parameter time steps to a second layer of the LSTM that is configured to capture a next sequential prediction output. Output from the second layer of the LSTM is concatenated with a set of categorical variables to an input layer of a fully-connected dense neural network layer. Information generated by the dense neural network provides prediction of whether a hardware component will fail in a given future time interval. In addition, in some embodiments, a lagged feedback component from the output is added back to the input layer of the DNN and concatenated to the set of categorical parameters and next sequential higher-dimension parameter set. This enables the system to self-learn and increases robustness.
In [0053]:
Continuous feature data is normalized (815). In one embodiment, the data is normalized using a min-max normalization, such that (a) each feature contributes approximately proportionately while predicting the target feature; and (b) gradient descent converges faster with features scaling than without features scaling. Min−max normalization is a normalization strategy that linearly transforms x to y=(x−min)/(max−min), wherein min and max are minimum and maximum values in X, where X is a set of observed values of x.
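Examiner's note (illustrative only, not part of the cited disclosures): the double-stacked recurrent flow described in Chow [0031] — a first recurrent layer passing hidden states to a second, whose last state is concatenated with categorical features and fed to a dense output — can be sketched in simplified form. Plain tanh cells stand in for the LSTM gates, and all weights are fixed toy values.

```python
import math

# Simplified sketch of a double-stacked recurrent prediction flow:
# layer 1 consumes the time series, layer 2 consumes layer 1's hidden
# states, and the final state is concatenated with categorical features
# before a dense (here unit-weight) output with sigmoid activation.

def recurrent_layer(seq, w_in=0.5, w_h=0.3):
    h, states = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_h * h)   # tanh cell in place of LSTM
        states.append(h)
    return states

def predict_failure(time_series, categorical, bias=-0.5):
    h1 = recurrent_layer(time_series)       # first recurrent layer
    h2 = recurrent_layer(h1)                # second stacked layer
    features = [h2[-1]] + list(categorical) # concat with categoricals
    logit = bias + sum(features)            # dense output, unit weights
    return 1.0 / (1.0 + math.exp(-logit))   # failure probability

p = predict_failure([0.2, 0.9, 1.5], categorical=[1.0])
```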
- and performing failure prediction
In [0031]:
A system, method, and computer-readable medium are disclosed for a hardware component failure prediction system that can incorporate a time-series dimension as an input while also addressing issues related to a class imbalance problem associated with failure data. Embodiments provide this capability through the use of a deep learning-based artificial intelligence binary classification method.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine ZHANG, Ted, and Chow.
ZHANG teaches failure labeling, windows for historical failures and detection of abnormality in operation.
Ted teaches ensemble (multiple classifiers) for failure classification and anomaly detection.
Chow teaches IoT maintenance.
One of ordinary skill would have been motivated to combine ZHANG, Ted, and Chow to provide an accuracy improvement (Chow [0061]).
In regard to claim 15:
ZHANG and Ted do not explicitly disclose:
- comprising applying time series augmentation methods to artificially generate failure sequences when small number of failure events occurred in training data.
However, Chow discloses:
- comprising applying time series augmentation methods to artificially generate failure sequences when small number of failure events occurred in training data.
In [0008]:
generating the oversampled set of observations from the set of records associated with failed devices in the training dataset further includes synthetically creating repetitive samples using a moving time window.
(BRI: synthetically creating samples associated with the set of failed devices is artificially generating failure sequences, i.e., augmenting the failure data)
In [0062]:
As discussed above, embodiments introduce a unique way of handling class imbalance by synthetically creating repetitive samples of the lower proportion class using a moving time window method. The manner in which the model architecture is designed uniquely provides an initial layer of LSTM that consumes time series specific multi-dimensional input parameters to output a hidden cell state at each time step
In [0062]:
embodiments introduce a unique way of handling class imbalance by synthetically creating repetitive samples of the lower proportion class using a moving time window method.
(BRI: A small number of failure events occurring in training data is commonly referred to as an imbalanced dataset or class imbalance)
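Examiner's note (illustrative only, not part of the cited disclosures): the moving-time-window oversampling of Chow [0008], where sample "n" draws "a" days of observations from the date range [d+2−n, d+a+1−n], can be sketched as below. The day indexing and parameter values are hypothetical.

```python
# Sketch of moving-time-window oversampling: for a failed device, each
# synthetic sample n = 1..num_samples takes a window of "a" consecutive
# days shifted back one day per sample, per the range [d+2-n, d+a+1-n].

def oversample_windows(d, a, num_samples):
    windows = []
    for n in range(1, num_samples + 1):
        start, end = d + 2 - n, d + a + 1 - n
        windows.append(list(range(start, end + 1)))
    return windows

windows = oversample_windows(d=3, a=4, num_samples=2)
# n=1 spans days 4..7; n=2 spans days 3..6 (each window holds a=4 days)
```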
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine ZHANG, Ted and Chow.
ZHANG teaches failure labeling, windows for historical failures and detection of abnormality in operation.
Ted teaches ensemble (multiple classifiers) for failure classification and anomaly detection.
Chow teaches IoT maintenance.
One of ordinary skill in the art would have been motivated to combine ZHANG, Ted, and Chow because the combination can provide an accuracy improvement (Chow [0061]).
In regard to claim 16:
ZHANG discloses:
- receiving machine historical sensor data and their failure log and generating a failure labeling model to generate training data from a failure prediction window, a history window and a failure infected interval settings;
In [0026]:
As an overview, embodying systems and methods provide an AI (Artificial Intelligence) anomaly pattern recognition model that leverages a diagnostic expert domain knowledge base and deep learning technique to automatically detect an industrial asset (e.g., wind turbine) operational anomaly and identify root cause(s) corresponding to the detected anomaly. In some embodiments, a large set of training cases can be established based on historical diagnostic records that include multiple root causes. For each training case, several pairs of time series of sensor measurements may be configured and represented as scatter plots, where a combination of data patterns in or derived from the scatter plots indicates a specific root cause of an anomaly reflected in the sensor measurements (i.e., data).
(BRI: a time series of sensor measurements configured and represented from a historical record is a “history window”)
In [0028]:
FIG. 1 is a schematic block diagram of an example system 100 that may be associated with some embodiments herein. The system includes an industrial asset 105 that may generally operate normally for substantial periods of time but occasionally experience an anomaly that results in a malfunction or other abnormal operation of the asset
(BRI: malfunction or abnormal operation is a “failure”)
In [0028]:
a set of sensors 110 S1 through SN may monitor one or more characteristics of the asset 105 (e.g., acceleration, vibration, noise, speed, energy consumed, output power, etc.). The information from the sensors may, according to some embodiments described herein, be collected and used to facilitate detection and/or prediction of abnormal operation (i.e., an anomaly) of operating asset 105 and the root cause corresponding to the detected anomaly.
In [0045]:
As such, each scatter plot captures a specific pair of time series data derived from the sensor measurements for a wind turbine (or other asset). In FIG. 5A, the high tower acceleration measurements are due to wind turbine blade misalignment and in FIG. 5B the high tower acceleration measurements captured in the scatter plot are due to an incorrect setting of a specific control parameter for the wind turbine.
(BRI: collecting asset characteristics to predict abnormal operations provides data for setting failure intervals; the settings recited in this limitation are thereby captured)
In [0026]:
a large set of training cases can be established based on historical diagnostic records that include multiple root causes.
(BRI: A diagnostic record detailing an anomaly that stems from multiple underlying issues can be considered a type of failure log)
In [0037]:
The training data establishment component 320 or functionality of deep learning model system 310 may operate to establish a set of training cases based on the historical diagnostic records of the wind turbine operational data 305 that includes multiple root causes embedded within the data. The set of training cases may be used in training the deep learning model generated by component 325.
In [0038]:
deep learning model building and validation component 325 may operate to develop (i.e., generate) a deep learning classification model that builds connections (e.g., transfer functions, algorithms, etc.) between the scatter plots based on the operational data and root causes for anomalies in the operational data by processing an input of high-dimensional images including data pixels corresponding to the scatter plots to generate an output including root cause labels associated with one or more anomalies derived from data patterns in the images.
In [0064]:
The machine learning engine processes the combination of images to recognize patterns therein that correspond to one of a plurality of defined anomalies
(BRI: a machine learning engine that uses a combination of images to recognize patterns corresponding to a plurality of defined anomalies can be, and often is, an ensemble classifier)
In [0053]:
In some embodiments, at least a portion of the received historical time series sensor data may be transformed to a format, configuration, level, resolution, etc. from its raw configuration as obtained by the wind turbine (or other asset) sensors
In [0055]:
At operation 615, a root cause label is assigned to each visual image including the scatter plots representing an operational anomaly based on a reference
In [0055]:
In some aspects, a standardized ground truth label is assigned to each generated image. In some regards, abnormal sensor measurements (i.e., anomalies) may be caused by different root causes. In particular, each root cause requires a specific type of maintenance and repair practice. As such, identification of the correct root cause can provide actionable insights with respect to on-going operations, preventative maintenance, and corrective maintenance aspects of a wind turbine (and/or other assets).
In [0057]:
Continuing to operation 620, a deep learning model and more particularly a convolutional neural network (CNN) model is trained using a first subset of the labeled images and tested based on a second subset of the labeled images applied to the trained model to evaluate the performance of the trained model
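The scatter-plot representation that ZHANG feeds to the CNN (pairs of time series rendered as images, [0045], [0057]) can be sketched as a simple rasterization; the function, grid size, and sample data below are illustrative assumptions, not ZHANG's implementation:

```python
def scatter_plot_image(xs, ys, size=8):
    """Rasterize a pair of time series (e.g., wind speed vs. tower
    acceleration) into a size x size binary grid, approximating the
    scatter-plot images used as CNN input. Grid size is an assumption."""
    lo_x, hi_x = min(xs), max(xs)
    lo_y, hi_y = min(ys), max(ys)
    img = [[0] * size for _ in range(size)]
    for x, y in zip(xs, ys):
        # Map each (x, y) sample to a pixel; +1e-9 avoids division by zero.
        col = min(int((x - lo_x) / (hi_x - lo_x + 1e-9) * size), size - 1)
        row = min(int((y - lo_y) / (hi_y - lo_y + 1e-9) * size), size - 1)
        img[row][col] = 1
    return img

speed = [4.0, 5.0, 6.0, 7.0]   # hypothetical paired sensor measurements
accel = [0.1, 0.2, 0.9, 1.0]
image = scatter_plot_image(speed, accel)
```

A grid like this, one per sensor pair, is the kind of pixel-level input a CNN classifier could consume to recognize anomaly patterns.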
- providing the failure labeling model's output data to a failure classification model or pipeline that is generated automatically to learn failure signal behavior and also providing the failure labeling model's output to an anomaly detection model or pipeline to detect an abnormal behavior in real time;
In [0011]:
FIG. 8 is an illustrative example representation of data associated with labeling images in accordance with some embodiments;
In [0038]:
The deep learning model building and validation component 325 or functionality of deep learning model system 310 may operate to convert or transform the scatter plots (or other representations of wind turbine operational data 305) into visual representation images of the scatter plots (or other representations of the operational data). For example, deep learning model building and validation component 325 may operate to develop (i.e., generate) a deep learning classification model that builds connections (e.g., transfer functions, algorithms, etc.) between the scatter plots based on the operational data and root causes for anomalies in the operational data by processing an input of high-dimensional images including data pixels corresponding to the scatter plots to generate an output including root cause labels associated with one or more anomalies derived from data patterns in the images. The deep learning model herein is a deep learning classification model developed to build a connection between scatter plots including data representations of wind turbine anomalies and the corresponding root causes thereof. In some aspects, a convolutional neural network (CNN) model is developed to capture and process pixel data to recognize the complex data patterns in images of the scatter plots and to further classify anomaly cases in the training set as being associated with a particular root cause for the determined anomaly classification.
In [0047] :
In some aspects, there might generally be a large variation in wind turbine operation data due to a plurality or combination of sensor, turbine control, and environment factors. The combination and complexity of factors presents a challenge to accurately distinguishing between normal wind turbine operation and abnormal wind turbine operation
In [0028]:
the information from the sensors may, according to some embodiments described herein, be collected and used to facilitate detection and/or prediction of abnormal operation (i.e., an anomaly) of operating asset 105 and the root cause corresponding to the detected anomaly.
(BRI: prediction of abnormal operation is a “failure prediction”)
ZHANG does not explicitly disclose:
- and applying an ensemble classifier to the outputs of the data failure classification model and the anomaly detection model to predict a machine failure.
However, Ted discloses:
- and applying an ensemble classifier to the outputs of the data failure classification model and the anomaly detection model to predict a machine failure.
In [0058]:
In example embodiments one or more classifiers may be trained using tire characteristic data from one or more sensors. If multiple classifiers are trained, they may be trained to provide an indication of the degree to which tire delamination has taken place and “live” signals from an active vehicle may be compared against the one or more trained classifiers to determine the probability of failure (for example, delamination) within a given period (the “period” may be expressed as time, or distance, for example). The probability may take into account various driving conditions, such as velocity, load, or road surface quality, for example, in addition to sensor data such as pressure or temperature data, for example,
In [0005]:
with principles of inventive concepts a vehicle monitoring system may monitor a vehicle characteristic and, from the monitoring, may determine the state of a vehicle component. The system may set an alert and may communicate that alert to a user or supervisory authority. The state of the vehicle component may relate to a vehicle tire and to the potential delamination of a tire.
In [0067]:
A system and method may employ machine learning to recognize a tire fault and to determine the severity of the fault. Machine learning may be used constantly or may be engaged after an initial indication of a fault (for example, a periodic signal anomaly) is detected.
In [0056]:
principles of inventive concepts may assess the possibility of the onset and/or propagation of a delamination by detecting and analyzing the variation of movement and other sensed characteristics of a tire. These sensed characteristics may be used to determine the degree of failure (for example, delamination) and the time of failure migration. In example embodiments data from triaxial accelerometers, (and/or, possibly, other sensors which may disclose the time/acceleration signature associated with an angle of delamination, for example) may be used to develop a learning process (to train a classifier, for example) to refine the process of recognizing the onset of tire failures.
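One plausible reading of applying an ensemble to the two model outputs (as mapped to Ted's multiple trained classifiers) is a weighted vote over the failure-classification probability and the anomaly-detection score; the weights and threshold below are hypothetical illustrations, not from the references:

```python
def ensemble_predict(failure_prob, anomaly_score,
                     weights=(0.6, 0.4), threshold=0.5):
    """Combine a failure-classification probability and an anomaly-
    detection score (both assumed to lie in [0, 1]) with a weighted
    vote. Weights and threshold are illustrative assumptions."""
    w_cls, w_anom = weights
    combined = w_cls * failure_prob + w_anom * anomaly_score
    return combined >= threshold

# Both models agree a failure is likely -> ensemble predicts failure.
likely = ensemble_predict(0.9, 0.8)
# Both scores are low -> no failure predicted.
unlikely = ensemble_predict(0.1, 0.2)
```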
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine ZHANG and Ted.
ZHANG teaches failure labeling, windows for historical failures and detection of abnormality in operation.
Ted teaches ensemble (multiple classifiers) for failure classification and anomaly detection.
One of ordinary skill in the art would have been motivated to combine ZHANG and Ted because the combination can improve the operational life of the tire system and avoid costly catastrophes that may be associated with it (Ted [0054]).
ZHANG and Ted do not explicitly disclose:
- A system, comprising: at least a machine to be maintained; a maintenance server coupled to the machine using an internet of things (IoT) protocol in real time as a stream of data, the maintenance server running computer code for:
However, Chow discloses:
- A system, comprising: at least a machine to be maintained; a maintenance server coupled to the machine using an internet of things (IoT) protocol in real time as a stream of data, the maintenance server running computer code for:
In [0038]:
Data is provided to the system by a plurality of internet of things (IoT) devices 130 and 135 that are connected to information handling system 100 by network 140. IoT devices 130 are coupled to the information handling system via edge network server 142, which can act as an intermediary in gathering data from the IoT devices and providing a desired subset of the data to the information handling system 100 via network port 110.
In [0041]:
data acquisition stage 210 is an initial stage of the process in which IoT devices coupled to a network (e.g., network 140) provide information about the state of those devices to servers (e.g., information handling system 100 or edge network server 142) that can store the information in one or more databases.
In [0006]:
A system, method, and computer-readable medium are disclosed for predicting failure of a hardware device, where the system, method, and computer-readable medium can incorporate a time-series dimension as an input
In [0032]:
early component failure detection coupled with preventative replacement and automatic monitoring facilitates total productive maintenance in real time.
In [0039]:
the implementation of the predictive maintenance system on information handling system 100 provides a useful and concrete result of accurate estimation of when an IoT device is about to fail.
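The claimed arrangement (a machine streaming sensor data to a maintenance server in real time over an IoT protocol) can be sketched with a toy in-process stream; the message format, class, and buffer logic below are illustrative assumptions, not Chow's protocol:

```python
import collections

class MaintenanceServer:
    """Toy server that consumes a real-time stream of IoT sensor
    messages and keeps a sliding buffer per machine, from which
    downstream failure models could read."""
    def __init__(self, buffer_size=5):
        self.buffers = collections.defaultdict(
            lambda: collections.deque(maxlen=buffer_size))

    def ingest(self, message):
        # message is assumed to look like:
        # {"machine_id": ..., "sensor": ..., "value": ...}
        self.buffers[message["machine_id"]].append(
            (message["sensor"], message["value"]))

server = MaintenanceServer(buffer_size=3)
for v in [0.1, 0.2, 0.3, 0.4]:
    server.ingest({"machine_id": "turbine-1",
                   "sensor": "vibration", "value": v})
```

The bounded deque keeps only the most recent readings per machine, a simple stand-in for the windowed, real-time view a predictive-maintenance pipeline would operate on.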
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine ZHANG, Ted and Chow.
ZHANG teaches failure labeling, windows for historical failures and detection of abnormality in operation.
Ted teaches ensemble (multiple classifiers) for failure classification and anomaly detection.
Chow teaches IoT maintenance.
One of ordinary skill in the art would have been motivated to combine ZHANG, Ted, and Chow because the combination can improve the accuracy of failure prediction (Chow [0061]).
In regard to claim 17:
ZHANG discloses:
- comprising automatically identifying failure instances from a historical data stream by the failure labeling model
In [0028]:
FIG. 1 is a schematic block diagram of an example system 100 that may be associated with some embodiments herein. The system includes an industrial asset 105 that may generally operate normally for substantial periods of time but occasionally experience an anomaly that results in a malfunction or other abnormal operation of the asset.
In [0028]:
the information from the sensors may, according to some embodiments described herein, be collected and used to facilitate detection and/or prediction of abnormal operation (i.e., an anomaly) of operating asset 105 and the root cause corresponding to the detected anomaly.
In [0034]:
FIG. 3 is a schematic block diagram depicting an overall system 300, in accordance with some embodiments. System 300 illustrates wind turbine operational data 305 being provided as input(s) to a deep learning model development and implementation system, device, service, or apparatus (also referred to herein simply as a “system” or “service”) 310 that outputs, at least, data 330 indicative of wind turbine anomalies detected by deep learning model system 310 and the root cause(s) corresponding to the detected anomalies.
In [0036]:
some scenarios, operational data 305 might include historical operational data associated with one or more wind turbines.
In [0039]:
output of deep learning model system 310 including an indication of the detected one or more anomalies derived from data patterns in the images and the corresponding root cause labels
In regard to claim 18:
ZHANG discloses:
- comprising using time series similarities to relabel a failure and normal signals and increasing the quality of training data for the failure classification model or pipeline.
In [0055]:
At operation 615, a root cause label is assigned to each visual image including the scatter plots representing an operational anomaly based on a reference to and leveraging of, at least in part, a digitized knowledge domain data structure or system associated with the industrial asset(s) in combination with the data patterns in each image. In some aspects, a standardized ground truth label is assigned to each generated image. In some regards, abnormal sensor measurements (i.e., anomalies) may be caused by different root causes. In particular, each root cause requires a specific type of maintenance and repair practice. As such, identification of the correct root cause can provide actionable insights with respect to on-going operations, preventative maintenance, and corrective maintenance aspects of a wind turbine (and/or other assets).
In [0064]:
the machine learning engine processes the combination of images to recognize patterns therein that correspond to one of a plurality of defined anomalies (e.g., 8 anomalies in the example of FIG. 12). The output 1215 of the machine learning engine includes an indication of the specific root cause (e.g., anomaly 2=blade calibration and anomaly 4=incorrect ramp rate) in response to the specific inputs 1210.
(BRI: Using time series similarities to relabel failure and normal signals is a process where unlabeled or ambiguously labeled data points are assigned a definitive label (either "failure" or "normal") based on how closely their patterns or shapes match known, pre-established examples of each class)
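Under that interpretation, relabeling by time series similarity amounts to assigning each segment the label of its closest known example; a minimal 1-nearest-neighbor sketch using Euclidean distance follows, where the distance measure, function, and prototype data are illustrative assumptions:

```python
import math

def relabel_by_similarity(segment, prototypes):
    """Relabel a signal segment as 'failure' or 'normal' by Euclidean
    distance to the nearest labeled prototype (1-nearest-neighbor).
    prototypes: list of (label, reference_segment) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda lp: dist(segment, lp[1]))[0]

# Hypothetical prototypes: a flat 'normal' shape and an elevated
# 'failure' shape. An ambiguous segment inherits the nearer label.
prototypes = [("normal", [0.0, 0.0, 0.0]), ("failure", [1.0, 1.0, 1.0])]
label = relabel_by_similarity([0.9, 1.0, 1.1], prototypes)
```

Other similarity measures (e.g., dynamic time warping) could replace Euclidean distance without changing the relabeling scheme.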
ZHANG and Ted do not explicitly disclose:
- and increasing the quality of training data for the failure classification model