Prosecution Insights
Last updated: April 19, 2026
Application No. 18/051,070

AUTOMATED DECISION OPTIMIZATION FOR MAINTENANCE OF PHYSICAL ASSETS

Final Rejection — §103

Filed: Oct 31, 2022
Examiner: KIM, JONATHAN J
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 2 (Final)

Grant Probability: 33% (At Risk)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 99%
Examiner Intelligence

Career Allow Rate: 33% (grants only 33% of cases; 2 granted / 6 resolved; -21.7% vs TC avg)
Interview Lift: +80.0% (resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 30
Career History: 36 total applications, across all art units

Statute-Specific Performance

§101: 36.7% (-3.3% vs TC avg)
§103: 38.6% (-1.4% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 6 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to amendments filed December 19, 2025. The status of the claims is as follows: Claims 1, 4, 11 and 16 are amended; Claims 1-20 are currently pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 9-10; 11-12, 15; and 16-17, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Serradilla et al. (“Deep learning models for predictive maintenance: a survey, comparison, challenges and prospect” [2020], hereinafter “Serradilla”) in view of Schoch et al. (US20230047304A1, hereinafter “Schoch”) and further in view of Chu et al. (US20180096261A1, hereinafter “Chu”).
Regarding Claim 1, Serradilla discloses executing an automated process to be used to maintain physical assets of a selected environment (Serradilla [Page 4 Section 2.2]; [image: media_image1.png] Serradilla [Page 8 Section 2.2.6]; “2.2.6 Mitigation. Once an anomaly is detected, diagnosed its cause and prognosticated its remaining life, there is enough information to perform maintenance actions to mitigate failures in early phases and thus prevent assets deriving into failure. This stage consists of designing and performing the steps necessary to restore assets to correct working condition before failures occur, which also reduces implementation and downtime costs. Mitigation is performed by maintenance technicians who are in charge of creating and implementing a mitigation plan as part of the maintenance management and manufacturing operation management processes. Data-driven PdM models should generate assistance information, providing domain technicians with statistics [122] and prescriptions [9].
Therefore, a more advanced mitigation is accomplished by the combination of domain knowledge and data-driven information about assets’ health and expected degradation [105].” wherein the mitigation process implemented to prevent asset failure thus reads on a process (of anomaly detection, failure diagnosis, prognosis and mitigation) automated in pre-emptive response to future asset failures (thus maintaining physical assets of a selected environment)) the automated process including: automatically selecting a maintenance solution pipeline from a plurality of maintenance solution pipelines based on obtained information from an artificial intelligence process, the artificial intelligence process being … trained machine learning process … and the obtained information including a risk estimation, the maintenance solution pipeline to be used in providing a physical asset maintenance solution for a plurality of physical assets (Serradilla [Page 4 Section 2.2]; [image: media_image1.png] Serradilla [Page 8 Section 2.2.6]; “2.2.6 Mitigation. Once an anomaly is detected, diagnosed its cause and prognosticated its remaining life, there is enough information to perform maintenance actions to mitigate failures in early phases and thus prevent assets deriving into failure. This stage consists of designing and performing the steps necessary to restore assets to correct working condition before failures occur, which also reduces implementation and downtime costs. Mitigation is performed by maintenance technicians who are in charge of creating and implementing a mitigation plan as part of the maintenance management and manufacturing operation management processes. Data-driven PdM models should generate assistance information, providing domain technicians with statistics [122] and prescriptions [9].
Therefore, a more advanced mitigation is accomplished by the combination of domain knowledge and data-driven information about assets’ health and expected degradation [105].” wherein the detection and diagnosis of an anomaly reads on obtained information; wherein the mitigation methodology determined by PdM (Predictive Maintenance) executed by technicians reads on selection of a maintenance solution pipeline from a plurality of maintenance solution pipelines (selecting a particular advanced mitigation pipeline involving data-driven information and degradation information) to provide a solution for asset maintenance. Serradilla [Page 2 Paragraph 1]; “Maintenance optimisation is a priority for industrial companies given that effective maintenance can reduce their cost up to 60% by correcting failures of machines, systems and people [42]. Concretely, PdM maximises components’ working life by taking advantage of their unexploited lifetime potential while reducing downtime and replacement costs by replacement before failures occur; thus preventing expensive breakdowns and production time loss caused by unexpected stops” wherein the maintenance is performed on, at least in part, physical assets and components. Serradilla [Page 6 Paragraph 1]; “After multi-class classification for anomaly detection: diagnosis is performed based on previous failure data knowledge of the estimated class, so the link of data to failure type is directly obtained from model [14, 21]. Once the possible failure type has been detected, semi-quantitative and qualitative approaches can be used by harnessing expert knowledge to evaluate its potential consequences, using tools such as FMEA [38] or Ishikawa diagram [137].
In addition, interpreting directly explainable models [2, 9] or using explainability on less interpretable models such as SVM [40] can also help to perform this task.” wherein the FMEA risk evaluation approach and its associated Risk Priority Number (RPN) score thus reads on the obtained information including a risk estimation) initiating code and model rendering for the maintenance solution pipeline automatically selected (Serradilla [Page 6 Table 1]; [image: media_image2.png] wherein anomaly detection of the PdM maintenance solution pipeline is performed through initiated models for different anomaly/data configurations; wherein such initiated models thus read on the information being specifically obtained from artificial intelligence models; wherein the plurality of models and algorithms being trained towards capturing interdependencies and relations in data (such as density features, distance among data-points, relation to clusters, relation to training data) thus reads on the models being trained) continuing to obtain output from the artificial intelligence process, the output including an automatically generated risk estimation relating to one or more conditions of at least one physical asset of the plurality of physical assets (Serradilla [Page 6 Paragraph 1]; “After multi-class classification for anomaly detection: diagnosis is performed based on previous failure data knowledge of the estimated class, so the link of data to failure type is directly obtained from model [14, 21]. Once the possible failure type has been detected, semi-quantitative and qualitative approaches can be used by harnessing expert knowledge to evaluate its potential consequences, using tools such as FMEA [38] or Ishikawa diagram [137].
In addition, interpreting directly explainable models [2, 9] or using explainability on less interpretable models such as SVM [40] can also help to perform this task.” wherein the estimated class output is analyzed and diagnosed including evaluation of its failure type and potential consequences, thus reading on obtained output from an artificial intelligence process including a generated risk estimation related to the physical assets being analyzed by the current PdM pipeline) re-initiating code and model rendering for the maintenance solution pipeline, based on the output from the artificial intelligence process (Serradilla [Page 5 Section 2.2.3]; “The anomaly detection methods need preprocessed and some also depend on feature engineered data to work. Once worked on features, the next step is to select, train and optimise the right model for the use-case. Following PdM stages will be influenced and constrained by the selected AD method and use-case’s data” wherein the optimization of model selection, training, and optimization comprising re-initiating of the model and its code rendering reads on regeneration of the same maintenance solution pipeline that was selected; wherein the optimization, re-training, and rendering are performed based on each model’s diagnosed performance in a given use-case).

Serradilla does not explicitly disclose the artificial intelligence process being automated. However, Schoch discloses automatically selecting a maintenance solution pipeline from a plurality of maintenance solution pipelines based on obtained information from an artificial intelligence process, the artificial intelligence process being an automated, trained machine learning process (Schoch [0108]; “The automation engineering support system 100 as described herein could be provided as one of the following: As a central unit pool provided to multiple plant operators.
The central pool comprises a collection of semantic modules corresponding to modules previously used by the plant operators and including semantic descriptions along with usage data collected from the commissioned plants. Not only local, unit-centric optimization but also global, unit-in pipeline-context optimization is thus facilitated. This solution may be cloud-based, scalable and easily accessible. As a private unit pool managed by the plant operator. The private unit pool allows the plant operator to manage and administer a collection of semantic modules corresponding to real modules owned by the plant operator. Parameters, KPIs and other collected usage data (best practices, calibrated parameter combinations, maintenance intervals, MTBF, last maintenance done, last material used, clean or not, active/available/unused, etc.) are gathered from the real modules and uploaded to the database to allow for application of the system's pipeline generation engine and optimization component. Both an on-premise implementation as well as a cloud-based solution are possible, depending on the wishes/requirements of the plant operator. As an in-house product in order to reduce the plant operator's costs, or as an offer to clients to reduce their own cost.” Schoch [0090]; “FIG. 6 illustrates the feedback processing component 110 being used to train a machine learning model to provide improved rankings of pipeline suggestions generated by the pipeline generation component 104. In step 1, the optimization component 106 generates a ranking of the generated pipeline suggestions according to one or more predetermined criteria, in this case the KPIs energy cost (in Joules) and Minimum SFM Lifetime (in days). FIG. 6 shows two generated pipeline suggestions 502, 504 for which a ranking was obtained in this way. In step 2, the users (in this example, the plant owners of three modular plants) provide user feedback 506 on the ranking. 
In the example, the users prefer the second ranked pipeline suggestion 504 and state the reasons (justifications along with dependencies or relations or relevant data) for the preference. In step 3, the pipeline suggestions 502, 504 together with the user feedback 506 are input by the feedback processing component 110 as training data to a machine learning model, to train the model to generate rankings which better satisfy the user's preferences” wherein a trained machine learning model generating rankings of maintenance solution pipelines to subsequently automatically select optimized pipelines according to one or more predetermined criteria thus reads on an automated, trained artificial intelligence process that automatically selects a maintenance pipeline (ranking to determine the model’s recommended pipelines) based on obtained information from the artificial intelligence process (user preferences and predetermined criteria)).

It would have been obvious to modify Serradilla’s maintenance solution pipeline selection through artificial intelligence processes to ensure that Serradilla’s artificial intelligence processes are automated to obtain information in a similar fashion to Schoch’s trained, automated artificial intelligence processes. One would have been motivated to do so in order to “automatically generate suggested pipelines to be provided to the plant operator (who would otherwise have to manually choose, combine and compose modules).” (Schoch [0063])

Serradilla/Schoch discloses regenerating another maintenance solution pipeline (Serradilla [Page 5 Section 2.2.3]; “The anomaly detection methods need preprocessed and some also depend on feature engineered data to work. Once worked on features, the next step is to select, train and optimise the right model for the use-case.
Following PdM stages will be influenced and constrained by the selected AD method and use-case’s data” wherein the optimization of model selection, training, and optimization comprising re-initiating of the model and its code rendering reads on regeneration of the same maintenance solution pipeline that was selected to thus produce another, optimized version of the regenerated maintenance solution pipeline).

Serradilla/Schoch fails to disclose, but Chu discloses, wherein the maintenance solution pipeline automatically selected is reused … reducing processor execution time and memory utilization (Chu [0050]; “As noted above, an anomaly detection logic 220 may be provided to access an anomaly detection model generated by an anomaly management system 215 and detect anomalies in data delivered to the management system from devices (e.g., 105b,d) within an M2M system. The anomaly detection logic 220 may additionally log the reported anomalies and may determine maintenance or reporting events based on the receipt of one or more anomalies” Chu [0052]; “In some implementations, a user may select a collection of different diversified unsupervised machine learning algorithms for use in generating an anomaly detection model for a particular set of sensors. In some cases, the ensemble manager 245 may self-identify one or more of the unsupervised machine learning algorithms 270, for instance, by identifying one or more of the sensors in the set for which the ensemble is to be created. For instance, the ensemble manager 245 may identify that a particular one of the sensors is of a particular type or model and identify, for instance, from a library or other collection of available machine learning algorithms 270, which of the algorithms would be relevant for detecting anomalies in data generated by the particular sensor.
The ensemble manager 240, in some cases, may reuse the overlapping sets of unsupervised machine learning algorithms in the development of different ensembles” wherein the ensemble comprises reused optimized selected anomaly detection models; wherein the maintenance determination pipeline comprising the re-initiated and model rendered reused anomaly detection ensemble being used in a management maintenance system thus reads on a maintenance solution pipeline automatically selected (optimized for anomaly detection) being reused; wherein recycling already optimized selected anomaly detection models thus implicitly reads on reducing processor execution time (since no future optimization is necessary) and memory utilization (no creation of additional unsupervised machine learning models during ensemble development)).

It would have been obvious to modify Serradilla/Schoch’s maintenance solution pipeline selection through artificial intelligence processes to perform Chu’s method of reusing parts of previous pipeline iterations to re-initiate code for a reused maintenance solution pipeline. One would have been motivated to do so “for use in generating anomaly detection models for different sensors or groups of sensors” (Chu [0052]), thus allowing future pipeline iterations to learn for different sensor types from prior iteration trainings.

Serradilla/Schoch discloses regenerating another maintenance solution pipeline. Chu discloses wherein the maintenance solution pipeline automatically selected is reused … reducing processor execution time and memory utilization. By using Chu’s alternative method of reusing maintenance solution pipelines over Serradilla’s original maintenance solution pipeline regeneration methodology, the combination of Serradilla/Schoch/Chu thus discloses wherein the maintenance solution pipeline automatically selected is reused instead of regenerating another maintenance solution pipeline, reducing processor execution time and memory utilization.
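The "reuse instead of regenerate" rationale mapped from Chu above can be illustrated with a short sketch. This is not code from any cited reference; the pipeline stages, function name, and cache are hypothetical, showing only how memoizing an already-optimized pipeline avoids repeated selection and training work:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def build_pipeline(sensor_type: str) -> tuple:
    # Stand-in for the expensive select/train/optimise step described in
    # the cited Serradilla passage; a real system would fit actual models here.
    return ("preprocess", f"anomaly_detector[{sensor_type}]", "diagnose", "mitigate")

first = build_pipeline("vibration")
second = build_pipeline("vibration")     # cache hit: the pipeline is reused
assert first is second                   # same object, no regeneration
print(build_pipeline.cache_info().hits)  # 1
```

The cached call returns the identical pipeline object, so no extra model objects are created and no re-optimization runs, which is the efficiency argument the rejection attributes to the combination.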
Regarding Claim 2, the combination of Serradilla/Schoch/Chu teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated). The combination further discloses wherein the physical asset maintenance solution includes a condition-based maintenance plan for the plurality of physical assets, at least a portion of the plurality of physical assets being interdependent (Serradilla [Page 8 Section 2.2.6]; “Mitigation is performed by maintenance technicians who are in charge of creating and implementing a mitigation plan as part of the maintenance management and manufacturing operation management processes. Data-driven PdM models should generate assistance information, providing domain technicians with statistics [122] and prescriptions [9]. Therefore, a more advanced mitigation is accomplished by the combination of domain knowledge and data-driven information about assets’ health and expected degradation [105].” wherein the mitigation plan for the physical assets based on the health of the assets reads on a condition-based maintenance plan. Serradilla [Page 3 Section 2.1 Paragraph 4]; “Some commonly monitored key components in PdM are but not limited to, bearings, blades, engines, valves, gears and cutting tools [200]. Moreover, the most common failure types detected by CM are imbalance cracks, fatigue, abrasive and corrosion wear, rubbing, defects and leak detection among others. The publication by Li et al. [90] classifies the types of failures that may exist in the system as: component failure, environmental impact, human mistakes and procedure handling” wherein the assets are all interdependent components of an industrial system).

Regarding Claim 3, the combination of Serradilla/Schoch/Chu teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated).
The combination further discloses wherein the physical asset maintenance solution includes a condition-based maintenance schedule for the plurality of physical assets, at least a portion of the plurality of physical assets being interdependent (Serradilla [Page 8 Section 2.2.6]; “2.2.6 Mitigation. Once an anomaly is detected, diagnosed its cause and prognosticated its remaining life, there is enough information to perform maintenance actions to mitigate failures in early phases and thus prevent assets deriving into failure. This stage consists of designing and performing the steps necessary to restore assets to correct working condition before failures occur, which also reduces implementation and downtime costs. Mitigation is performed by maintenance technicians who are in charge of creating and implementing a mitigation plan as part of the maintenance management and manufacturing operation management processes. Data-driven PdM models should generate assistance information, providing domain technicians with statistics [122] and prescriptions [9]. Therefore, a more advanced mitigation is accomplished by the combination of domain knowledge and data-driven information about assets’ health and expected degradation [105]” wherein the early-phase mitigation steps to be conducted for the physical assets based on the health of the assets read on a condition-based maintenance schedule. Serradilla [Page 3 Section 2.1 Paragraph 4]; “Some commonly monitored key components in PdM are but not limited to, bearings, blades, engines, valves, gears and cutting tools [200]. Moreover, the most common failure types detected by CM are imbalance cracks, fatigue, abrasive and corrosion wear, rubbing, defects and leak detection among others. The publication by Li et al.
[90] classifies the types of failures that may exist in the system as: component failure, environmental impact, human mistakes and procedure handling” wherein the assets are all interdependent components of an industrial system).

Regarding Claim 4, the combination of Serradilla/Schoch/Chu teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated). The combination further discloses wherein risk estimation includes one or more risk estimation scores relating to one or more conditions of one or more physical assets of the plurality of physical assets (Serradilla [Page 6 Paragraph 1]; “After multi-class classification for anomaly detection: diagnosis is performed based on previous failure data knowledge of the estimated class, so the link of data to failure type is directly obtained from model [14, 21]. Once the possible failure type has been detected, semi-quantitative and qualitative approaches can be used by harnessing expert knowledge to evaluate its potential consequences, using tools such as FMEA [38] or Ishikawa diagram [137]. In addition, interpreting directly explainable models [2, 9] or using explainability on less interpretable models such as SVM [40] can also help to perform this task.” wherein the estimated class output is analyzed and diagnosed including evaluation of its failure type and potential consequences, thus reading on obtained data including a generated risk estimation related to the physical assets being analyzed; wherein the risk estimation conducted by FMEA (Failure Mode and Effects Analysis) tools and its associated Risk Priority Number (RPN) score calculated based on severity, occurrence and detectability of failure modes thus reads on explicit risk estimation scores associated with risk estimation of the assets).

Regarding Claim 5, the combination of Serradilla/Schoch/Chu teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated).
The combination further discloses wherein the automatically selecting the maintenance solution pipeline is further based on asset interdependencies of the plurality of physical assets (Serradilla [Page 3 Section 2.1 Paragraph 4]; “Some commonly monitored key components in PdM are but not limited to, bearings, blades, engines, valves, gears and cutting tools [200]. Moreover, the most common failure types detected by CM are imbalance cracks, fatigue, abrasive and corrosion wear, rubbing, defects and leak detection among others. The publication by Li et al. [90] classifies the types of failures that may exist in the system as: component failure, environmental impact, human mistakes and procedure handling” wherein the obtained data of monitored assets for use in PdM is derived from assets which are all interdependent components of an industrial system, thus reading on automatic selection of a maintenance solution pipeline based on asset interdependencies of the plurality of physical assets).

Regarding Claim 6, the combination of Serradilla/Schoch/Chu teaches the method of Claim 5 (and thus the rejection of Claim 5 is incorporated). The combination further discloses wherein the automatically selecting the maintenance solution pipeline is further based on a problem definition and is defined for a selected time period (Serradilla [Page 2 Introduction]; “Maintenance is defined by the norm EN 13306 [168] as the combination of all technical, administrative and managerial actions during the life cycle of an item intended to retain it in, or restore it to, a state in which it can perform the required function. Moreover, it defines three types of maintenance: improvement maintenance improves machine reliability, maintainability and safety while keeping the original function; preventive maintenance is performed before failures occur either in periodical or predictive ways and corrective maintenance replaces the defective/broken parts when machine stops working.
Currently, most industrial companies rely on periodical and corrective maintenance strategies. Nowadays, we are transitioning towards the fourth revolution denominated as Industry 4.0 (I4.0), which is based on cyber physical systems and industrial internet of things. It combines software, sensors and intelligent control units to improve industrial processes and fulfill their requirements [109]. These techniques enable automatised predictive maintenance functions analysing massive amount of process and related data based on condition monitoring (CM). Predictive maintenance (PdM) is the most cost-optimal maintenance type given its potential to achieve an overall equipment effectiveness (OEE) [171] higher than 90% by anticipating maintenance requirements [37, 44] and promise a return on investment up to 1000% [81]. Maintenance optimisation is a priority for industrial companies given that effective maintenance can reduce their cost up to 60% by correcting failures of machines, systems and people [42]. Concretely, PdM maximises components’ working life by taking advantage of their unexploited lifetime potential while reducing downtime and replacement costs by replacement before failures occur; thus preventing expensive breakdowns and production time loss caused by unexpected stops” wherein the PdM conducted for the problem of optimization of cost-efficiency in an industrial environment reads on selecting the maintenance solution pipeline based on a problem definition; wherein the problem definition is defined for the selected time period of the relevant components’ working life and asset life prognosis).

Regarding Claim 9, the combination of Serradilla/Schoch/Chu teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated).
The combination further discloses regenerating an existing maintenance solution pipeline based on one or more updated constraints (Serradilla [Page 5 Section 2.2.4]; “Once an anomaly has been detected, the next stage consists of diagnosing whether this anomaly belongs to a faulty working condition and can evolve into a future failure or, in contrary, there is no risk of failure. The last case indicates that the anomaly detection model has not worked properly and therefore it may need to be reevaluated or retrained. The diagnosis is usually based on root cause analysis (RCA) techniques, which aim to identify the true cause of a problem. The diagnosis algorithm has to be suitable for the problem being addressed. There are several approaches to tackle this step, which depend on the implemented AD method and training data characteristics: multi-class classification, binary classification, one-class classification and clustering. Concretely these are chosen if the dataset has multiple failure types, failure and non failure observations, only observations of one class or unsupervised, respectively. There is another technique that commonly complements RCA: anomaly deviation quantification by health index (HI). It aims to measure assets’ damage by comparing current working data with historical data in a supervised or unsupervised way. 
It can either indicate a percentage of deviation with regard to normal working data, or show degradation level in a numerical scale, where the higher the value the more damaged the component is, where minimum value means no damage, maximum is fully damaged or failure and intermediate values indicate different degrees of degradation [119]” wherein the retrained anomaly detection model step of the PdM maintenance solution pipeline based on updated AD method and training data characteristics deemed to be more suitable for the problem being addressed reads on regeneration of an existing maintenance solution pipeline based on updated constraints).

Regarding Claim 10, the combination of Serradilla/Schoch/Chu teaches the method of Claim 9 (and thus the rejection of Claim 9 is incorporated). The combination further discloses wherein the regenerating is further based on one or more updated objectives (Serradilla [Page 5 Section 2.2.4]; “Once an anomaly has been detected, the next stage consists of diagnosing whether this anomaly belongs to a faulty working condition and can evolve into a future failure or, in contrary, there is no risk of failure. The last case indicates that the anomaly detection model has not worked properly and therefore it may need to be reevaluated or retrained. The diagnosis is usually based on root cause analysis (RCA) techniques, which aim to identify the true cause of a problem. The diagnosis algorithm has to be suitable for the problem being addressed. There are several approaches to tackle this step, which depend on the implemented AD method and training data characteristics: multi-class classification, binary classification, one-class classification and clustering. Concretely these are chosen if the dataset has multiple failure types, failure and non failure observations, only observations of one class or unsupervised, respectively. There is another technique that commonly complements RCA: anomaly deviation quantification by health index (HI).
It aims to measure assets’ damage by comparing current working data with historical data in a supervised or unsupervised way. It can either indicate a percentage of deviation with regard to normal working data, or show degradation level in a numerical scale, where the higher the value the more damaged the component is, where minimum value means no damage, maximum is fully damaged or failure and intermediate values indicate different degrees of degradation [119]” wherein the regeneration of the maintenance solution pipeline by retraining the anomaly detection portion of the pipeline is based on updated objectives regarding RCA and anomaly deviation quantification by health index (regeneration performed based on RCA indicating the true cause is not anomalous and HI indicating assets are not degraded, thus demonstrating the anomaly detection malfunctioned and necessitates regeneration)).

Claims 11, 12 and 15 recite a system to perform the method of Claims 1, 5 and 9. Thus, Claims 11, 12 and 15 are rejected for reasons set forth in the rejection of Claims 1, 5 and 9.

Claims 16, 17 and 20 recite a computer program product comprising a computer readable storage medium and stored program instructions to perform the method of Claims 1, 5 and 9. Thus, Claims 16, 17 and 20 are rejected for reasons set forth in the rejection of Claims 1, 5 and 9.

Claims 7-8; 13-14; and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Serradilla et al. (“Deep learning models for predictive maintenance: a survey, comparison, challenges and prospect” [2020], hereinafter “Serradilla”) in view of Schoch et al. (US20230047304A1, hereinafter “Schoch”), further in view of Chu et al. (US20180096261A1, hereinafter “Chu”), and further in view of Faller et al. (“Combining Condition Monitoring and Predictive Modeling to Improve Equipment Uptime on Drilling Rigs” [2008], hereinafter “Faller”).
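The health index (HI) quoted from Serradilla in the Claims 9-10 mappings (deviation from normal working data on a scale from no damage to fully damaged) can be sketched as follows. This is an illustrative toy implementation, not the survey's; the readings, scaling, and function name are invented:

```python
def health_index(current: list[float], baseline: list[float]) -> float:
    """Toy HI: mean relative deviation of current sensor readings from a
    healthy baseline, clipped to [0, 1] (0 = no damage, 1 = fully damaged)."""
    deviations = [abs(c - b) / abs(b) for c, b in zip(current, baseline)]
    return min(1.0, sum(deviations) / len(deviations))

baseline = [100.0, 50.0, 10.0]                      # healthy readings
print(health_index([100.0, 50.0, 10.0], baseline))  # 0.0 (no deviation)
print(health_index([120.0, 60.0, 12.0], baseline))  # ~0.2 (20% deviation)
```

In the cited passage, a pipeline whose anomaly detector flags a fault while the HI stays near zero is the signal that the detector malfunctioned and should be retrained, which is the "updated objectives" mapping above.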
Regarding Claim 7, the combination of Serradilla/Schoch/Chu teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated). The combination fails to explicitly disclose, but Faller discloses, automatically selecting the maintenance solution pipeline comprises traversing a tree structure to select the maintenance solution pipeline (Faller [Page 3 Section “Decision Tree Analysis”]; “A decision tree is a logical model represented as a binary (two-way split) tree that shows how the value of a target variable can be predicted by using the values of a set of predictor variables. A condition based monitoring sensor network generates a lot of data from multiple technologies or channels. Each component or equipment creates large amounts of relevant data (predictor values), in similar time frames. Managing high data rates and recognizing patterns of interest in multi-channel data is a challenging problem. Decision tree analysis organizes the large volume of predictor values from the CBM system. Decision trees assess the data and finds patterns based on conditions of interest. By various means, the process "learns" how to model (predict) the value of the target variable based on the predictor variables. It leverages machine-learning technology for detecting patterns in multi-channel time-series data. It determines interrelationships among the patterns and target values, which are then used to build Predictive models. By finding patterns, groupings or other ways to characterize the data, the expert software builds predictive models about equipment health. Decision trees are used to make inferences that help understand the purpose and results of the model.
A CBM/PdM system based on decision tree analysis builds profiles of normalcy under varying operating conditions enabling detection of potential failures based on patterns created by multiple ‘conditions of interest’ target values.” Faller [Page 4 Section “Predictive Modeling”]; “Predictive modeling draws from statistics, machine learning, database techniques, pattern recognition, and optimization techniques. Predictive modeling based on decision tree analysis incorporates the process of extracting accurate and previously unknown information from large volumes of data. Decision-tree-based modeling techniques produce models with interpretable structures, making them highly amenable to explanation and human inspection. This characteristic allows end users and analysts to understand the implications of the models and to take actions based on these implications. Predictive Maintenance relies on the development of an asset strategy that determines the level of downtime necessary to maintain an asset, along with the resource structure required for organizing and controlling the work. Predictive maintenance analyzes and compares sampled data to reference models to assess the potential for failure.” wherein a sensor network collects data that a decision tree analyzes to determine, through predictive modeling, the potential for failure across varying operating-condition patterns and profiles of normalcy, so that end users can take appropriate actions based on such implications; this reads on automated selection of a maintenance solution pipeline (appropriate actions based on the decision tree risk analysis) comprising traversal of a tree structure)

It would have been obvious to modify Serradilla/Schoch/Chu’s maintenance solution pipeline selection through artificial intelligence processes to use specifically decision trees as the artificial intelligence processes for generating risk estimates of physical assets.
One would have been motivated to do so because “A great advantage of decision trees over classical regression and neural networks is they are easy to interpret into actionable items with a clear understanding of how and why a downtime incident is avoided.” (Faller [Abstract Column 2 Line 29]).

Regarding Claim 8, the combination of Serradilla/Schoch/Chu/Faller teaches the method of Claim 7 (and thus the rejection of Claim 7 is incorporated). The combination already discloses wherein the tree structure comprises a directed acyclic graph (Faller [Page 3 Section “Decision Tree Analysis”]; “A decision tree is a logical model represented as a binary (two-way split) tree that shows how the value of a target variable can be predicted by using the values of a set of predictor variables. A condition based monitoring sensor network generates a lot of data from multiple technologies or channels. Each component or equipment creates large amounts of relevant data (predictor values), in similar time frames. Managing high data rates and recognizing patterns of interest in multi-channel data is a challenging problem. Decision tree analysis organizes the large volume of predictor values from the CBM system. Decision trees assess the data and finds patterns based on conditions of interest. By various means, the process "learns" how to model (predict) the value of the target variable based on the predictor variables. It leverages machine-learning technology for detecting patterns in multi-channel time-series data. It determines interrelationships among the patterns and target values, which are then used to build Predictive models. By finding patterns, groupings or other ways to characterize the data, the expert software builds predictive models about equipment health.
Decision trees are used to make inferences that help understand the purpose and results of the model” wherein a decision tree structure inherently comprises a directed acyclic graph)

Claims 13 and 14 recite a system to perform the method of Claims 7-8. Thus, Claims 13 and 14 are rejected for the reasons set forth in the rejection of Claims 7-8.

Claims 18 and 19 recite a computer program product comprising a computer readable storage media and stored program instructions to perform the method of Claims 7-8. Thus, Claims 18 and 19 are rejected for the reasons set forth in the rejection of Claims 7-8.

Response to Arguments

The Examiner acknowledges the Applicant’s amendments to Claims 1, 4, 11 and 16. Applicant’s arguments filed December 19, 2025, traversing the rejection of claims 1-20 under 35 U.S.C. § 101 have been fully considered and are persuasive. Applicant’s arguments filed December 19, 2025, traversing the rejection of claims 1-20 under 35 U.S.C. § 103 have been fully considered, but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN J KIM whose telephone number is (571) 272-0523. The examiner can normally be reached 8-6. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Ell, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN J KIM/
Examiner, Art Unit 2141

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141
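As a further editorial illustration appended to the excerpt above (again, not part of the record), the decision-tree analysis Faller relies on, learning how predictor sensor values split into conditions of interest for a target variable, can be shown in miniature with a hand-rolled one-level tree (a decision stump); all data, names, and thresholds below are hypothetical:

```python
def majority(labels):
    """Most frequent label in a list (None for an empty list)."""
    return max(set(labels), key=labels.count) if labels else None

def fit_stump(samples, labels):
    """Learn the (feature index, threshold) split that best separates the
    target labels: a one-level decision tree over predictor variables."""
    best = None  # (errors, feature, threshold)
    for f in range(len(samples[0])):
        for t in sorted({s[f] for s in samples}):
            left = [l for s, l in zip(samples, labels) if s[f] <= t]
            right = [l for s, l in zip(samples, labels) if s[f] > t]
            # count errors if each branch predicts its majority label
            err = sum(l != majority(left) for l in left)
            err += sum(l != majority(right) for l in right)
            if best is None or err < best[0]:
                best = (err, f, t)
    return best[1], best[2]

# Hypothetical CBM readings: (vibration mm/s, temperature C) -> condition.
samples = [(0.2, 40), (0.3, 45), (0.9, 80), (1.1, 85)]
labels = ["normal", "normal", "alarm", "alarm"]
print(fit_stump(samples, labels))  # (0, 0.3): split on vibration at 0.3
```

The interpretability Faller cites as the motivation to combine is visible even here: the learned split ("vibration above 0.3 mm/s implies alarm") is directly readable as an actionable rule, unlike the weights of a neural network.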

Prosecution Timeline

Oct 31, 2022: Application Filed
Sep 22, 2025: Non-Final Rejection (§103)
Dec 18, 2025: Applicant Interview (Telephonic)
Dec 18, 2025: Examiner Interview Summary
Dec 19, 2025: Response Filed
Mar 23, 2026: Final Rejection, §103 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 33%
With Interview: 99% (+80.0% lift)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
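The derivation the note above describes can be sketched; the Tech Center average below is an assumed figure back-solved from the displayed "-21.7% vs TC avg" delta, since the tool's actual methodology is not published:

```python
# Hypothetical sketch of how the headline examiner stats could be derived
# from the career data shown above (2 granted of 6 resolved cases).
granted, resolved = 2, 6
allow_rate = granted / resolved   # career allow rate: 2/6, shown as 33%
tc_avg = 0.55                     # assumed Tech Center 2100 average
delta = allow_rate - tc_avg       # roughly -21.7 percentage points
print(f"{allow_rate:.0%} ({delta:+.1%} vs TC avg)")  # 33% (-21.7% vs TC avg)
```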
