DETAILED ACTION
This Office action is responsive to the request for continued examination filed 2/5/2026. Claims 1, 3-11, and 13-22 are pending; all have been examined and are rejected.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/5/2026 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-11, and 13-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter; specifically, the claims are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
While independent claims 1, 10, and 11 are each directed to a statutory category, each recites a series of steps pertaining to analyzing received data to identify features used to predict machine failure, which appears to be directed to an abstract idea (a mental process and/or a mathematical concept).
Claims 1, 3-11, and 13-22 are rejected under 35 U.S.C. § 101 because the instant application is directed to non-patentable subject matter. Specifically, the claims are directed to at least one judicial exception without reciting additional elements that amount to significantly more than the judicial exception. The rationale for this determination is in accordance with the USPTO's guidelines, applies to all statutory categories, and is explained in detail below.
When considering subject matter eligibility under 35 U.S.C. 101, (1) it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, (2a) it must then be determined whether the claim is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea), and if so, (2b) it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. Examples of abstract ideas include certain methods of organizing human activity; mental processes; and mathematical concepts (2019 PEG).
STEP 1.
Per Step 1, the claims are determined to include a process, a manufacture, and a machine, as in independent claims 1, 10, and 11, respectively, and in the claims depending therefrom. Therefore, the claims are directed to statutory eligibility categories.
At Step 2A, prong one, the invention is directed to identifying features within received data that could indicate the probability of occurrence of a machine failure based on analyzed historic data, which is akin to a mental process (see Alice). As such, the claims include an abstract idea. When considering the limitations individually and as a whole, the limitations directed to the abstract idea are:
“generating a plurality of data features based on at least a portion of the sensor data”; “selecting, from the plurality of data features, at least one indicative data feature for a machine failure detection”; “detect machine failure indicators based on the selected at least one indicative data feature”; “determining, selected at least one indicative data feature that is associated with the new sensor data, whether at least one machine failure indicator was detected in the new sensor data”; “tagging the at least one machine failure indicator upon determination that the at least one machine failure indicator was detected, wherein upon determination that no machine failure indicators were detected, continuously searches for machine failure indicators” (mental process: observation, evaluation, and judgment); and “selecting, from the plurality of data features, at least one indicative data feature for machine failure prediction; applying to the selected at least one indicative data feature a supervised machine failure prediction process; wherein the supervised machine failure prediction process is configured to predict machine failures based on the selected at least one indicative data feature; and updating the supervised machine failure prediction process with the tagged at least one machine failure indicator, such that the supervised machine failure prediction process is continuously and automatically updated and improved” (mental process: observation, evaluation, and judgment).
The claim recites the following additional elements:
“online machine learning based method for detection and prediction of industrial machine failures”, “non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process”, “a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry” (“Using a computer as a tool to perform a mental process”, MPEP 2106.04(a)(2)(III)(C));
receiving sensor data related to at least one industrial machine (insignificant extra-solution activity, MPEP 2106.05(g));
“applying an unsupervised machine failure”, “unsupervised machine failure detection process is configured”, “applying the unsupervised machine failure detection process to the selected at least one indicative data feature”, “unsupervised machine failure detection process” (merely indicates a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims; see MPEP 2106.05(h)).
This judicial exception is not integrated into a practical application. The additional elements are recited at a high level of generality, i.e., a generic computing system performing generic functions, including generic processing of data. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea (2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”)). Thus, under Step 2A of the Mayo framework, the Examiner holds that the claims are directed to concepts identified as abstract.
STEP 2B.
Because the claims include one or more abstract ideas, the examiner now proceeds to Step 2B of the analysis, in which the examiner considers whether the claims include, individually or as an ordered combination, limitations that are "significantly more" than the abstract idea itself. This includes analysis as to whether there is an improvement to the "computer itself," to "another technology," or to the "technical field," or whether the claims recite significantly more than what is "well-understood, routine, or conventional" (WURC) in the related arts.
Claim 1 of the instant application includes additional steps beyond those deemed to be abstract idea(s).
Taken individually, these steps are:
“online machine learning based method for detection and prediction of industrial machine failures”, “non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process”, “a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry” (“Using a computer as a tool to perform a mental process”, MPEP 2106.05(f)(2));
receiving sensor data related to at least one industrial machine (well-understood, routine, and conventional activity; sending, receiving, displaying, and processing data are common and basic functions in computer technology, MPEP 2106.05(d)(II)(i));
“applying an unsupervised machine failure”, “unsupervised machine failure detection process is configured”, “applying the unsupervised machine failure detection process to the selected at least one indicative data feature”, “unsupervised machine failure detection process” (merely indicates a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims, see MPEP 2106.05(h), and constitutes mere instructions to “apply” the abstract ideas, which cannot provide an inventive concept, see MPEP 2106.05(f)).
In the instant case, Claim 1 is directed to the above-mentioned abstract idea. Technical functions such as receiving and extracting are common and basic functions in computer technology. The individual limitations are recited at a high level and do not provide any specific technology or techniques to perform the claimed functions.
In addition, when the claims are taken as a whole, as an ordered combination, the combination of steps does not add "significantly more" than the steps considered individually. The instant application, therefore, still appears only to implement the abstract idea in particular technological environments using what is well-understood, routine, and conventional in the related arts. The additional steps merely add to the abstract ideas using well-understood and conventional functions, and the claims do not show improved ways of operating, for example, unconventional, non-routine functions for analyzing model operations or updating the model that could then be pointed to as "significantly more" than the abstract ideas themselves.
Moreover, the Examiner was not able to identify any "unconventional" steps which, when considered in ordered combination with the other steps, could have transformed the nature of the previously identified abstract idea. The instant application, therefore, still appears only to implement the abstract ideas in particular technological environments using what is well-understood, routine, and conventional (WURC) in the related arts.
Further, note that the limitations in the instant claims are performed by generically recited computing devices. The limitations are merely instructions to implement the abstract idea on a computing device that is recited at a high level of abstraction, and they require no more than generic computing devices performing generic functions.
Claim 10 recites a system comprising a “non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry” configured to perform the same method as set forth in claim 1. The added element of a “non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process” does not transform the judicial exception into a practical application because it amounts to a mere instruction to apply the judicial exception on a generic computer. The additional element is also not sufficient to amount to significantly more than the judicial exception because implementing the method on a general purpose computer with a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry is a mere instruction to apply the judicial exception on a computer.
Claim 10 is therefore rejected according to the same findings and rationale as provided above.
Claim 11 recites a system comprising a “processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry” configured to perform the same method as set forth in claim 1. The added elements of “processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry” do not transform the judicial exception into a practical application because they amount to a mere instruction to apply the judicial exception on a generic computer. The additional elements are also not sufficient to amount to significantly more than the judicial exception because implementing the method on a general purpose computer with a processing circuitry and a memory containing instructions that, when executed by the processing circuitry, perform the method is a mere instruction to apply the judicial exception on a computer.
Claim 11 is therefore rejected according to the same findings and rationale as provided above.
Independent claims 10 and 11 are analogous to claim 1 and are rejected using a similar analysis.
CONCLUSION
It is therefore determined that the instant application not only is directed to an abstract idea, identified as such based on criteria defined by the Courts and on USPTO examination guidelines, but also fails to bring about "Improvements to another technology or technical field" (Alice); bring about "Improvements to the functioning of the computer itself" (Alice); "Apply the judicial exception with, or by use of, a particular machine" (Bilski); "Effect a transformation or reduction of a particular article to a different state or thing" (Diehr); "Add a specific limitation other than what is well-understood, routine and conventional in the field" (Mayo); "Add unconventional steps that confine the claim to a particular useful application" (Mayo); contain "Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment" (Alice); transform a traditionally subjective process performed by humans into a mathematically automated process executed on computers (McRO); or recite limitations directed to improvements in computer-related technology, including claims directed to software (Enfish).
The dependent claims, when considered individually and as a whole, likewise do not provide "significantly more" than the abstract idea for similar reasons as the independent claim.
Claim 2 discloses selecting features (a mental process); using a machine learning model (merely indicates a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims; see MPEP 2106.05(h)); and updating the model using data (training a system, which is a highly generic computer software process of training on data; this limitation does not amount to significantly more than the judicial exception, see MPEP 2106.05(f)). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 3 discloses “wherein the plurality of data features represents a behavior of at least a component of the at least one industrial machine,” a data description, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 4 discloses “wherein the plurality of data features is generated based on at least one statistical method,” a data description, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 5 discloses “wherein the at least one indicative data feature is selected from the plurality of data features based on a probability to detect machine failures” (a mental process and mathematical concept). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 6 discloses “wherein the at least one indicative data feature is selected from the plurality of data features based on a probability to predict machine failures” (a mental process and mathematical concept). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 7 discloses “selecting a plurality of indicative data features from the plurality of data features based on at least a distribution of the plurality of indicative data features, wherein the at least a distribution indicates at least an association between the plurality of data features towards a machine failure” (a mental process and mathematical concept). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 8 discloses “wherein at least a portion of the sensor data is previously tagged with at least one machine failure indicator.” It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 9 discloses “wherein determining whether at least one machine failure indicator were detected in the new sensor data is based on semi-supervised machine learning.” It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 20 discloses “wherein the at least one of the selected indicative data feature for machine failure prediction is selected when it is determined that a portion of the selected indicative data feature for machine failure prediction has a better probability to contribute more to predicting a machine failure with respect to others of the plurality of data features by identifying in the portion an increasing change in a distribution of the selected indicative data feature prior to a machine failure with respect to a normal state of the at least one industrial machine” (a mental process and mathematical concept). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 21 discloses “wherein the selected at least one indicative data feature for machine failure prediction comprises at least two indicative data feature for machine failure prediction and wherein the at least two indicative data features for machine failure prediction are selected such that abnormal parameters of at least two of the at least two indicative data features for machine failure prediction demonstrate an association, wherein such association is indicative of a forthcoming machine failure.” (a mental process and mathematical concept). It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
Claim 22 discloses “the selected at least one indicative data feature for machine failure detection is different from the selected at least one indicative data feature for machine failure prediction,” a data description, which is directed to generally linking the use of a judicial exception to a particular technological environment or field of use. It does not integrate the abstract idea into a practical application and does not add significantly more to the abstract idea.
The dependent claims, which impose additional limitations, also fail to claim patent eligible subject matter because the limitations cannot be considered statutory. The dependent claims have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 1. Where all claims are directed to the same abstract idea, "addressing each claim of the asserted patents [is] unnecessary." Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat'l Ass'n, 776 F.3d 1343, 1348 (Fed. Cir. 2014). If applicant believes the dependent claims are directed towards patent eligible subject matter, applicant is invited to point out the specific limitations in the claims that are directed towards patent eligible subject matter. Claims of the other statutory classes are similarly analyzed.
For at least these reasons, each of the dependent claims is directed, directly or indirectly, to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and these claims are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-8, 10-11, 13-18, 20-21 are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Bates et al. [US 2017/0083830 A1, hereinafter D1].
With regard to Claim 1,
D1 teaches an online machine learning based method for detection and prediction of industrial machine failures (¶32), comprising:
receiving sensor data related to at least one industrial machine (¶12, “processor is configured by computer code to receive sensor data relating to the unit of equipment”, ¶¶33-35, ¶38);
generating a plurality of data features based on at least a portion of the sensor data (¶36, ¶54, “Importing the sensor data leading up to and including a failure condition allows the failure signature recognition system to identify what leads up to the failure condition”, ¶68, “input would be a vector of length 24 * 10=240 for each time step, since the input would contain current data as well as prior data”);
selecting, from the plurality of data features, at least one indicative data feature for a machine failure detection (¶47, “failure identification module 330 provides a screen … that allows a user to identify failures from maintenance work order history … which work orders represent failures”, ¶54, “signature of a failure is a characteristic pattern of sensor readings, oscillations, some changing variable, etc. … Importing the sensor data leading up to and including a failure condition allows the failure signature recognition system to identify what leads up to the failure condition, not just the failure condition”, ¶40);
applying to the selected at least one indicative data feature an unsupervised machine failure detection process, wherein the unsupervised machine failure detection process is configured to detect machine failure indicators based on the selected at least one indicative data feature (¶54, ¶87, “anomaly detection component 220 utilizes a Kohonen self organizing map (SOM) to perform the analysis”, ¶89, “Kohonen Self-Organizing Map (SOM) methodology essentially clusters tag data for each time step into an output, which can be thought of as an operating state”);
receiving new sensor data related to the at least one industrial machine (¶91, “Agent feeds the new data into the trained SOM model, which classifies it into one of the known operating states”, ¶76, “failure signature recognition component 210 receives, via the plant data interface 240, current trend data from plant historians related to the plant data sources”);
determining, by applying the unsupervised machine failure detection process to the selected at least one indicative data feature that is associated with the new sensor data, whether at least one machine failure indicator was detected in the new sensor data (¶92, “Anomaly Detection works is, it compares the error E of the current classification to the maximum error detected on the Training DataSet, E′. If E exceeds E′ by a factor T, known as the Anomaly Threshold, then an Anomaly Alert is generated”, ¶95, “probability (P, returned by f(x)) is compared to the minimum baseline probability calculated from the Training DataSet (P′). If P is smaller than P′ by a factor T, known as the Anomaly Threshold … an Anomaly Alert is generated”); and
tagging the at least one machine failure indicator upon determination that the at least one machine failure indicator was detected, wherein upon determination that no machine failure indicators were detected, the unsupervised machine failure detection process continuously searches for machine failure indicators (¶¶84-85, ¶91, “anomaly agent is activated as a live profile for monitoring. The anomaly agents can monitor the new sensor data during the process 1100 in the same way that the failure agents monitor the new sensor data. The Agent feeds the new data into the trained SOM model, which classifies it into one of the known operating states”, ¶92, “Anomaly Detection works is, it compares the error E of the current classification to the maximum error detected on the Training DataSet, E′. If E exceeds E′ by a factor T, known as the Anomaly Threshold, then an Anomaly Alert is generated”, ¶95); and
selecting, from the plurality of data features, at least one indicative data feature for machine failure prediction (¶40, ¶44, “user can select from a list of tags listed in a tag data store shown in the screen 410. Each tag corresponds to a sensor associated with the pump selected with the screen 405 in this example. A sensor could be associated with an operating parameter of the pump such as pressure or temperature. For each tag in the screen”, ¶47);
applying to the selected at least one indicative data feature a supervised machine failure prediction process (¶54, “At stage 1025, the learning agent training module 340 analyzes the sensor data at times leading up to and during the identified failures ... By identifying when a failure occurs for a given asset, the sensor data leading up to the failure and during the failure can be identified”, ¶57, “training at stage 1025 involves creating a failure agent that takes in the sensor data in the training set and, using machine learning, parameters of the failure agent are adjusted such that the failure agent successfully predicts the identified failures before the failures occur”, ¶55, “Machine learning techniques such as Resilient Back Propagation (RPROP), Logistic Regression (LR), and Support Vector machines (SVM) can all be used at stage 1025”);
wherein the supervised machine failure prediction process is configured to predict machine failures based on the selected at least one indicative data feature (¶¶54-55, ¶57, “training at stage 1025 involves creating a failure agent that takes in the sensor data in the training set and, using machine learning, parameters of the failure agent are adjusted such that the failure agent successfully predicts the identified failures before the failures occur”, ¶58); and
updating the supervised machine failure prediction process with the tagged at least one machine failure indicator, such that the supervised machine failure prediction process is continuously and automatically updated and improved (Fig. 11, 1125, 1135, ¶79, ¶84, “Due to the retraining at stages 1125 and 1135, the process 1100 allows a failure agent to adapt itself over time, becoming more and more fine-tuned for the equipment it is monitoring”).
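For clarity of the record regarding the anomaly-threshold mechanism D1 describes in ¶92, the quoted logic may be sketched as follows. This sketch is illustrative only; D1 discloses no source code, and the function and variable names are assumptions introduced solely for illustration.

```python
# Illustrative sketch of the threshold test D1 describes in ¶92: an Anomaly
# Alert is generated when the current classification error E exceeds the
# maximum training-set error E' by a factor T (the "Anomaly Threshold").
# All identifiers are assumed; D1 does not disclose an implementation.

def anomaly_alert(current_error: float, max_training_error: float,
                  threshold_factor: float) -> bool:
    """Return True when the error test of D1 ¶92 would generate an alert."""
    return current_error > threshold_factor * max_training_error

# With a training-set maximum error E' = 0.2 and threshold factor T = 1.5,
# an error of 0.4 triggers an alert while an error of 0.25 does not.
print(anomaly_alert(0.4, 0.2, 1.5))   # -> True
print(anomaly_alert(0.25, 0.2, 1.5))  # -> False
```

The quoted passage does not state whether "exceeds E′ by a factor T" means E > T·E′ or E − E′ > T; the sketch adopts the multiplicative reading.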
With regard to Claim 3,
D1 discloses the method of claim 1, wherein the plurality of data features represents a behavior of at least a component of the at least one industrial machine (¶57, “failures for equipment where a false negative can be catastrophic such as an oil rig”, ¶59, ¶61).
With regard to Claim 4,
D1 discloses the method of claim 1, wherein the plurality of data features is generated based on at least one statistical method (¶93, “Gaussian algorithm fits a probability distribution to each tag (variable) in the Training DataSet, estimating the mean u and standard deviation σ from the data. With these parameters estimated, the Gaussian probability function is used for each tag”).
With regard to Claim 5,
D1 discloses the method of claim 1, wherein the at least one indicative data feature is selected from the plurality of data features based on a probability to detect machine failures (¶¶53-54, ¶93, “Gaussian algorithm fits a probability distribution to each tag (variable) in the Training DataSet, estimating the mean u and standard deviation σ from the data. With these parameters estimated, the Gaussian probability function is used for each tag”, ¶94, “For a given time step, the value for each tag Xi is fed into the Gaussian function for that tag (with the associated mean and standard deviation), and the probability is calculated”, ¶95, “After the probability is calculated for each tag for a given time step, these probabilities are multiplied together to get the overall probability (based on assumption of independence of the random variables for each tag). The probability (P, returned by f(x)) is compared to the minimum baseline probability calculated from the Training DataSet (P′). If P is smaller than P′ by a factor T, known as the Anomaly Threshold, then the new tag data is considered to be an anomaly, and an Anomaly Alert is generated”).
With regard to Claim 6,
D1 discloses the method of claim 1, wherein the at least one indicative data feature is selected from the plurality of data features based on a probability to predict machine failures (¶¶53-54, ¶93, “Gaussian algorithm fits a probability distribution to each tag (variable) in the Training DataSet, estimating the mean u and standard deviation σ from the data. With these parameters estimated, the Gaussian probability function is used for each tag”, ¶94, “For a given time step, the value for each tag Xi is fed into the Gaussian function for that tag (with the associated mean and standard deviation), and the probability is calculated”, ¶95, “After the probability is calculated for each tag for a given time step, these probabilities are multiplied together to get the overall probability (based on assumption of independence of the random variables for each tag). The probability (P, returned by f(x)) is compared to the minimum baseline probability calculated from the Training DataSet (P′). If P is smaller than P′ by a factor T, known as the Anomaly Threshold, then the new tag data is considered to be an anomaly, and an Anomaly Alert is generated”).
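For clarity of the record, the Gaussian anomaly test D1 describes in ¶¶93-95 and quoted above may be sketched as follows. This sketch is illustrative only; D1 discloses no source code, and all identifiers are assumptions introduced solely for illustration.

```python
import math

# Illustrative sketch of D1 ¶¶93-95: a Gaussian is fitted to each tag from
# the Training DataSet (mean and standard deviation), per-tag probabilities
# for a new time step are multiplied together (independence assumption), and
# the product P is compared to the baseline P' from the training data. If P
# is smaller than P' by the factor T (the "Anomaly Threshold"), an Anomaly
# Alert is generated. All identifiers are assumed, not taken from D1.

def gaussian_pdf(x: float, mean: float, std: float) -> float:
    """Gaussian probability density for one tag value (¶93)."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def joint_probability(values, params):
    """Multiply per-tag Gaussian densities for one time step (¶95)."""
    p = 1.0
    for x, (mean, std) in zip(values, params):
        p *= gaussian_pdf(x, mean, std)
    return p

def is_anomaly(p: float, baseline: float, threshold_factor: float) -> bool:
    """Alert when P is smaller than the baseline P' by the factor T (¶95)."""
    return p < baseline / threshold_factor
```

The quoted passage does not state whether "smaller than P′ by a factor T" means P < P′/T or P′ − P > T; the sketch adopts the former reading.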
With regard to Claim 7,
D1 discloses the method of claim 1, further comprising:
selecting a plurality of indicative data features from the plurality of data features based on at least a distribution of the plurality of indicative data features (¶47, ¶87, “anomaly detection component 220 analyzes sensor data at times where conditions are normal in order to determine baseline or normal operating conditions. In one aspect, the anomaly detection component 220 utilizes a Kohonen self organizing map (SOM) to perform the analysis at stage”, ¶88, “analysis at stage 1225 can comprise BIC (Bayesian Information Criteria) to determine the number of regions (e.g., the groups 910 and 920). Gausian probability can be used to determine the odds that sensor A (temperature) is one value and sensor B (pressure) is one value and this can detect the anomaly”), wherein the at least a distribution indicates at least an association between the plurality of data features towards a machine failure (¶40, “failure signature recognition component 210 uses pattern recognition techniques to learn when failures are about to occur. The failure signature recognition component identifies fault conditions in the work order histories of the CM system 110, takes the sensor data from the plant data sources and learns failure signatures based on the sensor data”, ¶42, ¶88, “An anomaly agent is trained to detect an anomaly when the current operating state of a piece of equipment is outside of the first group 910 and the second group 920. The analysis at stage 1225 can comprise BIC (Bayesian Information Criteria) to determine the number of regions (e.g., the groups 910 and 920). Gausian probability can be used to determine the odds that sensor A (temperature) is one value and sensor B (pressure) is one value and this can detect the anomaly”).
With regard to Claim 8,
D1 discloses the method of claim 1, wherein at least a portion of the sensor data is previously tagged with at least one machine failure indicator (¶49, “training data set importer module 320 retrieves a set of training data comprising sensor data corresponding to all the tags identified at stage 1005 that exhibit changes during the identified failures for the selected asset”, ¶50, ¶52, “imported data is stored with metadata to flag which intervals are failure intervals versus normal intervals”).
With regard to Claim 10,
Claim 10 is similar in scope to claim 1; therefore, it is rejected under a similar rationale. Further, D1 teaches a non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process (¶¶9-10).
With regard to Claim 11,
Claim 11 is similar in scope to claim 1; therefore, it is rejected under a similar rationale. Further, D1 teaches a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry (¶¶9-10).
With regard to Claim 13,
Claim 13 is similar in scope to claim 3; therefore, it is rejected under a similar rationale.
With regard to Claim 14,
Claim 14 is similar in scope to claim 4; therefore, it is rejected under a similar rationale.
With regard to Claim 15,
Claim 15 is similar in scope to claim 5; therefore, it is rejected under a similar rationale.
With regard to Claim 16,
Claim 16 is similar in scope to claim 6; therefore, it is rejected under a similar rationale.
With regard to Claim 17,
Claim 17 is similar in scope to claim 7; therefore, it is rejected under a similar rationale.
With regard to Claim 18,
Claim 18 is similar in scope to claim 8; therefore, it is rejected under a similar rationale.
With regard to Claim 20,
D1 discloses the method of claim 1, wherein the at least one of the selected (¶44, “user can select from a list of tags listed in a tag data store shown in the screen 410. Each tag corresponds to a sensor associated with the pump selected with the screen 405 in this example. A sensor could be associated with an operating parameter of the pump such as pressure or temperature”, ¶45, “user interface 270 renders a user interface screen 420 shown in FIG. 4E. The screen 420 is used to create a sensor template for the chosen asset (the pump)”) indicative data feature for machine failure prediction is selected when it is determined that a portion of the selected indicative data feature for machine failure prediction has a better probability to contribute more to predicting a machine failure with respect to others of the plurality of data features (¶51, “After the user selects to execute the import of the training data with the screen 440, the training data set importer module 320 displays a screen 445, as shown in FIG. 4J, that shows sensor data for normal conditions both before and after a portion 446 of training data that includes the identified failure”, ¶55, “Machine learning techniques such as Resilient Back Propagation (RPROP), Logistic Regression (LR), and Support Vector machines (SVM) can all be used at stage 1025. RPROP can be used for certain non-linear patterns, LR enables ranking of tag prediction rank, and SVM enables confidence intervals for prediction”, ¶63, “learning agent training module 340 uses different spans of time to identify the optimal time interval using Receiver Operating Characteristic methodology and Area Under Curve (AUC) methodology”) by identifying in the portion an increasing change in a distribution of the selected indicative data feature prior to a machine failure (¶54, “The signature of a failure is a characteristic pattern of sensor readings, oscillations, some changing variable, etc. By identifying when a failure occurs for a given asset, the sensor data leading up to the failure and during the failure can be identified. Importing the sensor data leading up to and including a failure condition allows the failure signature recognition system to identify what leads up to the failure condition, not just the failure condition”, ¶66, “When analyzing a memory process or non-Markov process, one looks at the past readings for a period of time to sense the signature. Historyless (memoryless) processes, in contrast, are analyzed at each time period independently and the analysis tries to learn what is different in the failure period compared to the normal periods. As described below, one can vary the memory settings to get the optimum prediction interval”, ¶67, “output of the Agent depends on previous time steps in addition to the current time step”, ¶70, “FIG. 8 shows a graph 800 including a first trace 810 and a second trace 820 from two different sensors. In this example of a failure signature with memory, the amplitude is about the same before and after the failure, but the frequency changes”) with respect to a normal state of the at least one industrial machine (¶51, “After the user selects to execute the import of the training data with the screen 440, the training data set importer module 320 displays a screen 445, as shown in FIG. 4J, that shows sensor data for normal conditions both before and after a portion 446 of training data that includes the identified failure”, ¶87, “the anomaly detection component 220 analyzes sensor data at times where conditions are normal in order to determine baseline or normal operating conditions”). Examiner notes that a full mapping has been provided in the interest of compact prosecution. However, in the limitation “machine failure prediction is selected when it is determined that a portion of the selected indicative data feature,” the “when” clause is a contingent clause that is non-limiting in scope. See MPEP 2111.04.
With regard to Claim 21,
D1 discloses the method of claim 1, wherein the selected at least one indicative data feature for machine failure prediction comprises at least two indicative data feature for machine failure prediction (¶68, “If there is data from 10 tags in the current training data set, then, with no memory, the input to the machine learning agent would be a vector of length 10 for each time step”, ¶40, “failure signature recognition component 210 uses pattern recognition techniques to learn when failures are about to occur. The failure signature recognition component identifies fault conditions in the work order histories of the CM system 110, takes the sensor data from the plant data sources and learns failure signatures based on the sensor data”) and wherein the at least two indicative data feature for machine failure prediction are selected such that abnormal parameters of at least two of the at least two indicative data feature for machine failure prediction demonstrate an association (¶54, “The signature of a failure is a characteristic pattern of sensor readings, oscillations, some changing variable, etc. By identifying when a failure occurs for a given asset, the sensor data leading up to the failure and during the failure can be identified. Importing the sensor data leading up to and including a failure condition allows the failure signature recognition system to identify what leads up to the failure condition, not just the failure condition”, ¶70, “FIG. 8 shows a graph 800 including a first trace 810 and a second trace 820 from two different sensors. In this example of a failure signature with memory, the amplitude is about the same before and after the failure, but the frequency changes”, ¶41, “the anomaly detection component 220 can look at temperature and pressure time histories and identify abnormal measurements based on trained learning agents”), wherein such association is indicative of a forthcoming machine failure (¶41, “learning agents of the anomaly detection component are trained to identify an anomaly in the sensor data before a failure occurs. If an anomaly is detected, the affected equipment can be shut down and inspected to identify what may be causing the anomaly before a catastrophic failure occurs”; i.e., the system interprets associated sensor behavior (across multiple features) as an indication of a forthcoming machine failure, ¶54, “learning agent training module 340 analyzes the sensor data at times leading up to and during the identified failures … allows the failure signature recognition system to identify what leads up to the failure condition”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Bates et al. [US 2017/0083830 A1, hereinafter D1] in view of Gundel et al. [US 2021/0373063 A1, hereinafter Gundel].
With regard to Claim 9,
D1 teaches the method of claim 1, wherein determining whether at least one machine failure indicator were detected in the new sensor data (¶54, “At stage 1025, the learning agent training module 340 analyzes the sensor data at times leading up to and during the identified failures ... By identifying when a failure occurs for a given asset, the sensor data leading up to the failure and during the failure can be identified”, ¶57, “training at stage 1025 involves creating a failure agent that takes in the sensor data in the training set and, using machine learning, parameters of the failure agent are adjusted such that the failure agent successfully predicts the identified failures before the failures occur”, ¶55, “Machine learning techniques such as Resilient Back Propagation (RPROP), Logistic Regression (LR), and Support Vector machines (SVM) can all be used at stage 1025”).
D1 does not explicitly teach that this determining is based on semi-supervised machine learning.
Gundel teaches that determining whether at least one machine failure indicator were detected in the new sensor data is based on semi-supervised machine learning (¶¶73-74, ¶77, “Example machine learning techniques that may be employed to generate models 74C can include various learning styles, such as supervised learning, unsupervised learning, and semi-supervised learning”).
D1 and Gundel are analogous art to the claimed invention because they are from a similar field of endeavor of predicting machine failure. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify D1 to incorporate the semi-supervised learning taught by Gundel, with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify D1 as described above to reduce labeling costs, which can be a time-consuming and expensive process, especially when only a limited amount of labeled data is available. This amounts to simple substitution of one known element for another to obtain predictable results; use of a known technique to improve similar devices (methods, or products) in the same way; and combining prior art elements according to known methods to yield predictable results (MPEP 2143).
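As illustration of the semi-supervised learning style cited from Gundel ¶77, one common technique is self-training: a model fit on the scarce labeled data assigns pseudo-labels to unlabeled points it classifies with high confidence, reducing manual labeling effort. The sketch below uses a hypothetical one-dimensional nearest-centroid classifier; all names and data are illustrative assumptions, not content of Gundel.

```python
def nearest_centroid(labeled):
    # Fit a trivial classifier: one centroid per class from labeled examples.
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def self_train(labeled, unlabeled, margin=1.0, rounds=5):
    # Semi-supervised self-training: repeatedly fit on the labeled pool,
    # then pseudo-label unlabeled points classified with high confidence
    # (distance margin between the two nearest centroids).
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = nearest_centroid(labeled)
        remaining = []
        for x in pool:
            d = sorted((abs(x - c), y) for y, c in cents.items())
            best, runner_up = d[0], d[1]
            if runner_up[0] - best[0] >= margin:
                labeled.append((x, best[1]))   # confident: adopt pseudo-label
            else:
                remaining.append(x)            # uncertain: keep unlabeled
        if len(remaining) == len(pool):
            break                              # no progress; stop
        pool = remaining
    return nearest_centroid(labeled)
```

Only a handful of hand-labeled examples are needed; the remaining data labels itself, which is the cost-reduction rationale stated above.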
With regard to Claim 19,
Claim 19 is similar in scope to claim 9; therefore, it is rejected under a similar rationale.
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Bates et al. [US 2017/0083830 A1, hereinafter D1] in view of “Degradation Feature Selection for Remaining Useful Life Prediction of Rolling Element Bearings” Published 2015 [hereinafter D2].
With regard to Claim 22,
D1 teaches the method of claim 1.
D1 does not explicitly teach that the selected at least one indicative data feature for machine failure detection is different from the selected at least one indicative data feature for machine failure prediction.
D2 teaches that the selected at least one indicative data feature for machine failure detection (P. 2, ¶2, “Statistical indices that are effective in fault diagnosis, such as RMS and wavelet packet node energy (WPNE), have been considered here”, P. 3, Table 1) is different from the selected at least one indicative data feature for machine failure prediction (P. 2, ¶3, “Unlike static point clustering in diagnostic feature evaluation, a sequence of consecutive realizations should be considered in prognostic feature evaluation because degradation is a continuous stochastic process”, “Features with larger interclass and smaller intraclass distances are selected in diagnostic feature evaluation, while features retained for prognostics should have better predictabilities with trend, robustness and so on”, P. 2, 2.2, “good prognostic features should be well correlated with item performance degradation progressing, monotonically increasing or decreasing, robust to outliers and common across individual item and so on. Thus correlation, monotonicity and robustness based on trend and residual are proposed here for more relevant degradation feature selection”; D2 teaches that diagnostic feature selection criteria differ from prognostic feature selection criteria, which necessarily results in different retained or selected features).
D1 and D2 are analogous art to the claimed invention because they are from a similar field of endeavor of machine prognostics and health management for industrial equipment. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify D1 to incorporate the degradation feature selection taught by D2, with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify D1 as described above to improve prognostic accuracy by selecting features associated with degradation (D2, P. 2, 2.2, “good prognostic features should be well correlated with item performance degradation progressing, monotonically increasing or decreasing, robust to outliers and common across individual item and so on. Thus correlation, monotonicity and robustness based on trend and residual are proposed here for more relevant degradation feature selection”), which amounts to simple substitution of one known element for another to obtain predictable results; use of a known technique to improve similar devices (methods, or products) in the same way; and combining prior art elements according to known methods to yield predictable results (MPEP 2143).
Response to Arguments
Examiner respectfully withdraws the 35 USC 112(b) rejection for claim 3 based on the claim amendments.
Applicant argues that updating the machine failure process prediction is an improvement to the technology.
Examiner respectfully disagrees,
First, “machine failure process prediction” is a broad term that could be performed completely manually, as the user could have a process to monitor failure and update the monitoring process based on previous experience. This falls under a mental process, as a user is able to monitor machines and predict failure (e.g., if the machine temperature increases to a specific degree and is getting close to a specific threshold, then a failure is expected; in addition, the threshold could be updated based on a previous failure that occurred at a temperature less than the threshold). Therefore, applicant's arguments are not persuasive.
Applicant argues that the Office action asserts that the computer is recited at a high level of generality and comprises only a processor and a memory, ignoring the presence of elements such as machine learning algorithms that are updated (Remarks P. 4).
Examiner respectfully disagrees,
The Office analyzed the argued limitations as part of the additional elements and concluded that they merely indicate a field of use or technological environment in which the judicial exception is performed and fail to add an inventive concept to the claims. See MPEP 2106.05(h).
Applicant argues that it is not possible for a human mind to analyze sensory inputs as called for in the claims in real time.
Examiner respectfully disagrees; a human can monitor the data that is collected using generic computing devices that are used to collect and display data. The data collection and display is insignificant extra-solution activity, and the claims do not disclose that the data collection is done using any method that is not well-understood, routine, and conventional in the related arts.
Applicant argues that the response to the arguments described the claims at a high level of abstraction that is untethered from the language of the claims, thus improperly forcing the claims to be interpreted as an abstract idea, which is forbidden by Enfish (Remarks P. 5-6).
Examiner respectfully disagrees,
The applicant is arguing the “response to the arguments” section and not the 35 USC 101 rejection itself, as the claims have been rejected in detail in the body of the office action. It would not make sense to repeat the same rejection in the response to the arguments; therefore, the examiner provided the analysis and the logic used to draft the rejection. The examiner met the Enfish standard in the detailed rejection provided for the claims. However, as mentioned previously, the arguments are directed to the “response to the arguments” and not to the 35 USC 101 rejection.
Applicant argues that the claims provide a specific architecture for training a machine learning model (Remarks P. 7).
Examiner respectfully disagrees,
The current claims do not provide a special or new way of training a machine learning model for a prediction task that could be considered not well-understood, routine, and conventional (WURC). Using supervised and unsupervised machine learning models merely indicates a field of use or technological environment in which the judicial exception is performed and fails to add an inventive concept to the claims. See MPEP 2106.05(h). Applicant is welcome to provide specific citations from the specification, reflected in limitations in the claims, that show the difference between the state of the art at the time of the invention and the current invention, and how this difference is considered an improvement to the technology. Regarding the continuous update of the machine learning models, examiner notes that training and tuning of a machine learning model is WURC activity and cannot overcome the abstract-idea 35 USC 101 rejection unless the specification, as reflected in limitations in the claims, discloses a novel or unique specific form of tuning/training/updating of the machine learning model.
Applicant argues that a person looking at a machine and trying to detect or predict its failure does none of these things even if the person had a computer. Moreover, having a computer is not considered a mental process, which is limited to things a person can do in their mind with pen and paper. Also, a person would not perform any machine learning, since the person would use human reasoning and observation.
Examiner respectfully disagrees; the computer is not part of the mental process. The computer is a tool, and this is clarified in the rejection, as the computer is not part of the abstract idea. The same applies to the machine learning training, which, as clarified in the rejection, is an additional element that is WURC activity and not part of the abstract idea.
Applicant argues that a person cannot practically process such real-time sensor data to perform the requisite machine monitoring, and that that is why, fundamentally, these systems are needed.
Examiner respectfully disagrees,
First, the claims do not require processing the data in real time, as continuous monitoring is not equivalent to real-time processing.
Second, assuming arguendo that the claims require processing real-time sensor data, it is unclear why a person cannot do that mentally. A user can observe and analyze data and identify whether the data is out of a specific range. Using a computer as a tool to display data is considered “using a computer as a tool to perform a mental process,” MPEP 2106.04(a)(2)(III)(C), and sending and receiving data is WURC activity. See at least MPEP 2106.05(d)(II)(i): sending, receiving, displaying, and processing data are common and basic functions in computer technology.
Applicant argues that the claim requires two distinct selection steps and that Bates performs one selection and uses a shared feature space; therefore, Bates does not disclose the two selection acts required (Remarks P. 9-10).
Examiner respectfully disagrees,
The argument that the cited teaching fails to disclose two separate “selecting” steps is not supported by the claim language. Independent claim 1 recites (i) “selection … data feature for machine failure detection”, and (ii) “selection … data feature for machine failure prediction”. The claim does not require these selection steps to be separate, independently executed through different feature extraction stages, performed using different feature pools, or to exclude overlapping features. Rather, the claim simply requires that at least one feature be selected for each stated purpose. D1 discloses feature selection in connection with both an anomaly detection process and a supervised failure prediction process. Because the selected features are applied to two distinct machine learning processes directed to two different objectives (detection and prediction), D1 satisfies both selecting limitations as recited. The assertion that D1 must disclose two sequential or independently executed feature selection steps improperly imports limitations that are not present in the claim. Under the broadest reasonable interpretation, selecting features for use in detection and selecting features for use in prediction, even if derived from a shared feature space or generated in a common upstream process, constitutes disclosure of the two selecting steps required by the claim. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., sequential selection steps) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant argues that tag information in Bates relates to sensors in the plant data sources 130 and not to tagging the at least one machine failure indicator upon determination that the at least one machine failure indicator was detected, as called for in the claim. As such, it would not meet the claim element of tagging the at least one machine failure indicator upon determination that the at least one machine failure indicator was detected, wherein, upon determination that no machine failure indicators were detected, the unsupervised machine failure detection process continuously searches for machine failure indicators, because such tags are not relevant to the claim element.
Examiner respectfully disagrees,
The argument improperly narrows “tagging” to mean only sensor tag identifiers or plant data source identifiers. The claim does not limit tagging to sensor identifiers. Under the broadest reasonable interpretation, tagging includes flagging, labeling, or associating metadata with a machine failure indicator upon determination of its existence.
First, D1 expressly discloses tagging/flagging based on failure determinations, not merely sensor identifier tags. See at least ¶52, “imported data is stored with metadata to flag which intervals are failure intervals versus normal intervals”. This is direct tagging of failure status.
Second, D1 expressly discloses in ¶92 that, when an anomaly is detected, “an Anomaly Alert is generated” and the resulting agent is “flagged with extra metadata about the specifics of the fault and remedy”. This is tagging of a machine failure indicator upon determination.
Third, D1 expressly discloses in ¶83 that upon indication of an alarm condition, a work request is generated identifying the related equipment and the contributing sensor tags, further associating the detected condition with contextual information.
Taken together, these disclosures demonstrate tagging of machine failure indicators upon determination, satisfying the claim element under the broadest reasonable interpretation.
Applicant argues that Bates requires sensor data and known failure information relating to equipment failures. There is no teaching or suggestion in Bates that such failure information is generated internally by the system via unsupervised learning. More specifically, Bates defines the source of failure information as human-generated maintenance records in paragraph 33, which explains: The asset failure detection system also receives notifications of equipment failures (e.g., work order histories, etc.) from the CM system 115. The failure notifications from the CM system 115 include indications of the types of failures, dates of failures, and failure codes. Thus, the failure information comes from a CM system, and the failure information consists of work order histories, failure codes, and dates, which are human-entered maintenance records with no connection to an unsupervised detection algorithm.
Examiner respectfully disagrees; the argument incorrectly asserts that the system relies solely on human CM work orders and therefore does not internally generate machine failure indicators through unsupervised learning.
While ¶33 explains that the system receives failure notifications from the CM system as labeling information, D1 separately discloses an unsupervised anomaly detection process that is trained on normal data and then applied to new sensor data during live monitoring. Paragraph 87 describes the anomaly detection component establishing baseline conditions using a self-organizing map (SOM), and paragraph 91 explains that new sensor data is fed into the trained model during live monitoring. Paragraph 92 states that when the classification error exceeds a threshold, an anomaly alert is generated.
Thus, D1 discloses internal determination of a failure indicator via unsupervised processing of new sensor data. Furthermore, paragraph 92 explains that when an anomaly is detected and determined to be a valid predictor of a fault, a supervised learning agent is created and flagged with metadata. Accordingly, the reference discloses the claimed architecture: applying an unsupervised detection process to selected features, determining whether a failure indicator is present, generating an anomaly alert (i.e., tagging the indicator), and then using that determination in a supervised prediction process. The argument that the teaching relies exclusively on external CM records ignores the explicit disclosure of internal unsupervised detection and alert generation.
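The detect-tag-predict data flow described in the preceding paragraph can be summarized schematically. The sketch below is a hypothetical illustration of that architecture, not code from D1; the function and field names are assumptions made for clarity.

```python
def monitor(new_samples, detect_anomaly, train_predictor):
    # Unsupervised detection runs continuously over incoming sensor data;
    # detected indicators are tagged with metadata, and the tagged data
    # is then used to train a supervised failure predictor.
    tagged = []
    for t, sample in enumerate(new_samples):
        if detect_anomaly(sample):
            # Tag the indicator (cf. D1 ¶52: metadata flags failure intervals).
            tagged.append({"time": t, "sample": sample, "label": "failure_indicator"})
        # else: no indicator detected; monitoring simply continues searching.
    predictor = train_predictor(tagged) if tagged else None
    return tagged, predictor
```

Here `detect_anomaly` stands in for the unsupervised stage (e.g., the Gaussian or SOM baseline comparison quoted earlier) and `train_predictor` for the supervised failure signature stage trained on the tagged intervals.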
Applicant argues that claims 10 and 11 are allowable for the same reasons presented for claim 1. Examiner respectfully disagrees; the claims are not allowable, for the reasons clarified with respect to claim 1.
Applicant argues that Bates does not appear to teach selecting data features wherein the at least a distribution indicates at least an association between the plurality of data features towards a machine failure. Rather, it appears to simply say that anomalies can be detected based on the odds that sensor A (temperature) is one value and sensor B (pressure) is one value using Gaussian probability. More specifically, applicant argues, the sensors are selected without regard for their having a distribution that indicates at least an association between the plurality of data features towards a machine failure, even though in the end it may turn out to be the case that there is an association; however, this is not the selecting called for in the claim.
Examiner respectfully disagrees; the argument improperly characterizes the reference as merely applying Gaussian probability to random sensor values without regard to association with machine failure. D1 expressly discloses identifying sensor tags that exhibit change during failure intervals and storing metadata distinguishing failure versus normal conditions (¶¶49-52). A Gaussian probability model is then fit to each tag variable using training data, and the calculated probability is compared to a baseline derived from the training set (¶¶93-95). Because the training data include failure-related data and the tags selected are those that change during failure, the probability distribution necessarily reflects the statistical behavior of those features in relation to machine failure. Selection and evaluation based on such distributions indicate at least an association between the plurality of features and machine failure. This meets the claim limitation requiring that selection be based on a distribution indicating at least an association.
As to the remaining dependent claims, applicant argues that they are allowable due to their respective direct and indirect dependencies upon one of the aforementioned independent claims. The examiner respectfully disagrees; the independent claims were not found allowable, as stated in the paragraphs above in this “Response to Arguments” section of this office action.
Conclusion
The prior art made of record and not relied upon is considered pertinent to the applicant’s disclosure.
US Patent Application Publication No. 20220301903 filed by Liao et al., which discloses the ability to use sensor historical data to predict machine failure. See at least the Abstract.
Examiner has pointed out particular references contained in the prior arts of record in the body of this action for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and Figures may apply as well. It is respectfully requested that the applicant, in preparing the response, fully consider the entire references as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior arts or disclosed by the examiner. It is noted that any citation to specific pages, columns, figures, or lines in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331-33, 216 USPQ 1038-39 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED ABOU EL SEOUD whose telephone number is (303)297-4285. The examiner can normally be reached Monday-Thursday 9:00am-6:00pm MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMED ABOU EL SEOUD/Primary Examiner, Art Unit 2148