Prosecution Insights
Last updated: April 19, 2026
Application No. 18/118,124

MACHINE LEARNING-BASED APPLICATION MANAGEMENT FOR ENTERPRISE SYSTEMS

Status: Non-Final OA (§101, §103)
Filed: Mar 06, 2023
Examiner: LEE, WILLIAM MICHAEL
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Accenture Global Solution Limited
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift for resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 8 across all art units (8 currently pending)

Statute-Specific Performance

§101: 23.3% (-16.7% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§112: 26.7% (-13.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§101 §103
DETAILED ACTION

The action is in response to the original filing on March 6, 2023. Claims 1-20 are pending and have been considered below. Claims 1, 14, and 19 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: “cloud-based system 142” in paragraph 21, page 10. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: character “422” in Figure 4. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended.
Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities:
- In paragraph 30, page 15, “reduces and overall training time” should read “reduces overall training time”
- In paragraph 35, “text used to rain the failure engine” should read “text used to train the failure engine”
- In paragraph 36, page 19, “based a comparison” should read “based on a comparison”
- In paragraph 64, “FIGs. 1-5)” should read “FIGs. 1-5”

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1 – Claim 1 is directed to a method: A method for machine learning-based application management…

Step 2A, Prong 1 – A judicial exception is recited in this claim as it recites mathematical concepts (see MPEP 2106.04(a)(2)(I)):

decomposing… log data associated with a plurality of applications into time-series data representing values of one or more key performance indicators (KPIs) over a time period associated with the log data… To “decompose” log data is to use mathematical calculations such as data parsing or aggregation to convert unstructured event-based data into structured time-series data.
Hence “decomposing… log data associated with a plurality of applications into time-series data representing values of one or more key performance indicators (KPIs) over a time period associated with the log data” is a mathematical concept.

performing… clustering operations based on one or more temporal components derived from the time-series data to assign each of the plurality of applications to at least one of multiple training groups… To perform “clustering operations” to assign applications into groups is to aggregate data into sets, which is a mathematical calculation. Hence “performing… clustering operations based on one or more temporal components derived from the time-series data to assign each of the plurality of applications to at least one of multiple training groups” is a mathematical concept.

determining… a training sequence for the plurality of applications based on the multiple training groups… To “determine” a training sequence for the plurality of applications is to aggregate and append data into an ordered list, which is a mathematical calculation. Hence “determining… a training sequence for the plurality of applications based on the multiple training groups” is a mathematical concept.

to detect occurrence of an anomaly by a corresponding application based on received application data… To “detect” occurrence of an anomaly based on received application data, as understood in the present application’s specification, is “to check sparsity within the time-series data and use the central tendency for one of multiple different intervals for thresholding and prioritized detection of anomalies” (¶44). Using the “central tendency… for thresholding” is comparing data to a threshold value, which is a mathematical calculation. Hence, to “detect occurrence of an anomaly by a corresponding application based on received application data” is a mathematical concept.
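The four claim-1 operations the rejection characterizes as mathematical concepts can be illustrated with a minimal sketch. Everything here is hypothetical (toy log records, an arbitrary group boundary, invented helper names); it stands in for the claimed decomposition, clustering, sequencing, and central-tendency thresholding, not the applicant's actual implementation:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical log records: (app_id, time_bucket, latency_ms as the KPI).
logs = [
    ("app_a", 0, 100), ("app_a", 1, 110), ("app_a", 2, 400),
    ("app_b", 0, 10),  ("app_b", 1, 12),  ("app_b", 2, 11),
    ("app_c", 0, 95),  ("app_c", 1, 105), ("app_c", 2, 98),
]

# "Decomposing" log data into per-application KPI time series.
series = defaultdict(list)
for app, bucket, latency in logs:
    series[app].append(latency)

# "Clustering" on a simple temporal component (the series' mean level)
# to assign each application to one of two training groups.
def assign_group(ts, boundary=50.0):
    return "high_load" if mean(ts) >= boundary else "low_load"

groups = {app: assign_group(ts) for app, ts in series.items()}

# "Determining a training sequence": order by group, then by app name.
sequence = sorted(series, key=lambda app: (groups[app], app))

# "Detecting an anomaly" by thresholding against a central tendency
# (mean plus a fixed offset stands in for the spec's thresholding).
def is_anomalous(ts, offset=100.0):
    return max(ts) > mean(ts) + offset

anomalies = {app: is_anomalous(ts) for app, ts in series.items()}
```

Each step reduces to the kind of aggregation, ordering, or comparison-to-a-threshold calculation the Office Action points to.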
Step 2A, Prong 2 – The following limitations are additional elements without significantly more than the abstract idea:

decomposing, by one or more processors… performing, by the one or more processors… determining, by the one or more processors… One or more processors used as mere tools to apply an exception are generic elements for performing or applying the abstract idea using a generic computing environment (see MPEP 2106.05(f)).

initiating, by the one or more processors, training of a plurality of anomaly detection models that correspond to the plurality of applications according to the training sequence… Training of a plurality of models according to the training sequence is an attempt to use the plurality of models by merely applying the abstract idea (i.e., using the training sequence determined with math) without placing any limits on how the training is performed. Further, the limitation omits any details as to how “training of a plurality of anomaly detection models” solves a technical problem and instead recites only the idea of a solution or outcome (see MPEP 2106.05(f)). Thus, the limitation represents no more than mere instructions to implement the abstract idea, which is equivalent to adding the words “apply it” to the recited judicial exception.

wherein each anomaly detection model of the plurality of anomaly detection models comprises a machine learning (ML) model configured to detect occurrence of an anomaly… A machine learning model used as a mere tool to apply an exception is a generic element for performing or applying the abstract idea using a generic computing environment (see MPEP 2106.05(f)).
Step 2B – As discussed above, the additional elements decomposing, by one or more processors… performing, by the one or more processors… determining, by the one or more processors… and wherein each anomaly detection model of the plurality of anomaly detection models comprises a machine learning (ML) model configured to detect occurrence of an anomaly… amount to insignificant extra-solution activity as mere instructions to apply the judicial exception using a generic computing environment and are not indicative of significantly more. The additional element initiating, by the one or more processors, training of a plurality of anomaly detection models that correspond to the plurality of applications according to the training sequence… is recited at such a high level of generality that it fails to integrate the abstract idea into a practical application, since it provides nothing more than mere instructions to implement an abstract idea on a generic computer without significantly more (see MPEP 2106.05(f)). These limitations, taken either alone or in combination, fail to provide an inventive concept. Thus, the claim is not patent eligible.

Claims 2-13 recite limitations which further narrow the abstract idea of claim 1 by specifying more details of the mathematical concepts that occur:

Regarding claim 2, specifying wherein the one or more temporal components comprise trend components, seasonal components, cyclic components, or a combination thereof in this manner does not overcome the rejection of claim 1, as modifying the temporal components does not make “performing… clustering operations” any less a mathematical concept.

Regarding claim 3, the claim further limits the abstract idea of claim 1 to be based on a mental process: determining, by the one or more processors, training frequencies for the plurality of anomaly detection models based on the time-series data.
For example, given a small enough plurality of anomaly detection models and simple enough time-series data, a human can reasonably perform determining training frequencies for the plurality of anomaly detection models (see MPEP 2106.04(a)(2)(III)). Additionally, the limitation generating, by the one or more processors, a training schedule for the plurality of anomaly detection models based on the training frequencies and the training sequence, the training schedule including the training sequence and one or more future training sequences amounts to necessary data outputting and is still insignificant extra-solution activity (see MPEP 2106.05(g)).

Regarding claim 4, specifying wherein the training of the plurality of anomaly detection models according to the training sequence includes concurrently training one or more anomaly detection models of a first training group of the multiple training groups and one or more anomaly detection models of a second training group of the multiple training groups in this manner does not overcome the rejection of claim 1, as modifying the training of the plurality of anomaly detection models does not make “to detect occurrence of an anomaly” any less a mathematical concept.

Regarding claim 5, specifying wherein training an anomaly detection model of the first training group comprises performing one or more same preprocessing operations, one or more same post-processing operations, or a combination thereof, than training an anomaly detection model of the second training group in this manner does not overcome the rejection of claim 4, as modifying the training of an anomaly detection model does not make “to detect occurrence of an anomaly” any less a mathematical concept.
Regarding claim 6, specifying wherein the training of the plurality of anomaly detection models according to the training sequence includes training a first anomaly detection model of a first training group of the multiple training groups and a second anomaly detection model of the first training group in series in this manner does not overcome the rejection of claim 1, as modifying the training of the plurality of anomaly detection models does not make “to detect occurrence of an anomaly” any less a mathematical concept.

Regarding claim 7, specifying wherein training the first anomaly detection model comprises performing one or more different preprocessing operations, one or more different post-processing operations, or a combination thereof, as training the second anomaly detection model in this manner does not overcome the rejection of claim 6, as modifying the training of an anomaly detection model does not make “to detect occurrence of an anomaly” any less a mathematical concept.

Regarding claim 8, the claim further limits the abstract idea of claim 1 to be based on a mental process: to identify one or more additional applications that are predicted to fail based on one or more detected anomalies output by the plurality of anomaly detection models. For example, given a small enough number of applications and detected anomalies, a human can reasonably perform identifying one or more additional applications that are predicted to fail (see MPEP 2106.04(a)(2)(III)).
Additionally, generating, by the one or more processors, an application dependency graph based on the time-series data, the log data, or a combination thereof and initiating, by the one or more processors, training of a failure engine based on the application dependency graph to output indicators of applications that are predicted to fail amount to necessary data gathering and outputting and are still insignificant extra-solution activity (see MPEP 2106.05(g)), and wherein the failure engine executes a ML model configured to identify one or more additional applications… amounts to mere instructions to apply the judicial exception using a generic computing environment and is not indicative of significantly more (see MPEP 2106.05(f)).

Regarding claim 9, describing wherein the failure engine is further trained based on the application dependency graph to configure the failure engine to output failure scores corresponding to reasons for failure associated with the applications that are predicted to fail amounts to necessary data gathering and outputting and is still insignificant extra-solution activity (see MPEP 2106.05(g)).

Regarding claim 10, describing initiating, by the one or more processors, training of an application recovery model based on historical recovery action data, the log data, and the application dependency graph and wherein the application recovery model comprises an ML model configured to output recovery actions based on input indicators of applications that are predicted to fail amount to necessary data gathering and outputting and are still insignificant extra-solution activity (see MPEP 2106.05(g)).
Regarding claim 11, describing providing, by the one or more processors, current log data as input data to the plurality of anomaly detection models to generate one or more detected anomalies associated with one or more applications of the plurality of applications… providing, by the one or more processors, the one or more detected anomalies as input data to the failure engine to generate one or more indicators of applications that are predicted to fail and one or more failure scores corresponding to reasons for failure associated with the applications that are predicted to fail… providing, by the one or more processors, the one or more indicators of the applications that are predicted to fail as input data to the application recovery model to generate one or more recovery action recommendations… and displaying, by the one or more processors, a dashboard that indicates the applications that are predicted to fail, the one or more failure scores, the reasons for failure, the one or more recovery action recommendations, or a combination thereof amounts to necessary data gathering and outputting and is still insignificant extra-solution activity (see MPEP 2106.05(g)).

Regarding claim 12, describing initiating, by the one or more processors, automatic performance of an action indicated by the one or more recovery action recommendations amounts to mere automation of manual processes using a generic computer and is not sufficient to show an improvement in computer functionality (see MPEP 2106.05(a)).

Regarding claim 13, specifying wherein the action comprises re-executing one or more of the applications that are predicted to fail, terminating one or more of the applications that are predicted to fail, or a combination thereof in this manner does not overcome the rejection of claim 12, as modifying the action does not make “to detect occurrence of an anomaly” any less a mathematical concept.
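The data flow the examiner walks through in claims 8-13 (detected anomalies fed to a failure engine, whose predictions feed a recovery model, followed by automatic performance of the recommended action) can be sketched end to end. The rule-based stand-ins below are invented for illustration; the claims recite trained ML models, and the anomaly counts and threshold here are arbitrary:

```python
# Hypothetical anomaly counts output by per-application detection models.
detected_anomalies = {"billing": 3, "search": 0, "auth": 5}

def failure_engine(anomalies, threshold=2):
    # Stand-in for the claimed failure engine: flags apps predicted to
    # fail and assigns each a failure score from its anomaly count.
    predicted = [app for app, n in anomalies.items() if n > threshold]
    scores = {app: anomalies[app] / 10.0 for app in predicted}
    return predicted, scores

def recovery_model(predicted):
    # Stand-in for the claimed application recovery model: maps each
    # predicted failure to a recovery action recommendation.
    return {app: "re-execute" for app in predicted}

predicted, scores = failure_engine(detected_anomalies)
actions = recovery_model(predicted)

# Claim 12/13-style automatic performance of the recommended action
# (here just collected; a real system would re-execute or terminate).
performed = sorted(actions.items())
```

The sketch makes the examiner's characterization concrete: each stage is data gathering, a threshold comparison, or data outputting.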
Claims 14-18 recite a system that parallels method claims 1 and 8-11, respectively. Therefore, the analysis discussed above with respect to claims 1 and 8-11 also applies to claims 14-18, respectively. Accordingly, claims 14-18 are rejected based on substantially the same rationale as set forth above with respect to claims 1 and 8-11, respectively.

Claims 19-20 recite a non-transitory computer-readable storage medium that parallels method claims 1 and 3, respectively. Therefore, the analysis discussed above with respect to claims 1 and 3 also applies to claims 19-20, respectively. Accordingly, claims 19-20 are rejected based on substantially the same rationale as set forth above with respect to claims 1 and 3, respectively.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-6, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Krishnan et al.
(US 20180113773 A1, hereinafter Krishnan) in view of Priydarshi et al. (WO 2020171622 A1, hereinafter Priydarshi) and further in view of Liu et al. (CN 115220899 A, hereinafter Liu).

Regarding claim 1, Krishnan teaches a method for machine learning-based application management, the method comprising: decomposing, by one or more processors, log data associated with a plurality of applications (Fig. 1 – 100, ¶18 “FIG. 1 illustrates an example environment that employs an application failure prediction system (AFPS) 100 which uses a model to analyze logs of an application to predict the probability of application failure,” ¶41 “The AFPS 100 is enabled to proactively monitor and correct errors that occur during the course of application execution thereby ensuring the smooth running of the various applications,” wherein log data associated with a plurality of applications is implicit) into time-series data representing values of one or more key performance indicators (KPIs) over a time period associated with the log data (Fig. 1 – 102, 122, 162, Fig. 4 – 400-410, ¶36 “FIG. 4 is a flowchart 400 that details an example method of detecting potential application failures or malfunctions.
The method of detecting potential application failures as detailed herein can be carried out by the processor 102… Real-time data 162 is received at block 402 during the course of execution of the application 122… If at block 404 it is determined that the real-time data 162 comprises unstructured data, it can be converted to structured data at block 406… The predictive data model 120 can then be applied to the real-time data 162 at block 408 and the anomalies are detected at 410… The anomalies may be detected, for example, by their characteristic temporal error patterns or other attributes,” wherein converting unstructured “real-time data,” or log data, into “structured data” encompasses the method comprising: decomposing, by one or more processors, log data… into time-series data and “characteristic temporal error patterns or other attributes” encompasses values of one or more key performance indicators (KPIs) over a time period associated with the log data).

Regarding the limitation performing, by the one or more processors, clustering operations based on one or more temporal components derived from the time-series data to assign each of the plurality of applications to at least one of multiple training groups, Krishnan teaches one or more temporal components derived from the time-series data (Fig. 4 – 410, ¶36 “characteristic temporal error patterns,” Fig. 5 – 504, ¶38 “FIG. 5 is a flowchart 500 that details one example of a method of estimating an anomaly score or the probability of application failure… Patterns of error codes which represent a temporal sequences of errors are therefore recognized at block 504”). However, Krishnan fails to teach performing, by the one or more processors, clustering operations based on one or more temporal components derived from the time-series data to assign each of the plurality of applications to at least one of multiple training groups.
Priydarshi, in the same field of endeavor, teaches performing, by the one or more processors (Fig. 2a – 100, 107, 213, ¶38 “the one or more modules 213 may be communicatively coupled to the processor 107 for performing one or more functions of the electronic device 100. The said modules 213 when configured with the functionality defined in the present disclosure will result in a novel hardware”), clustering operations based on application usage to assign each of the plurality of applications to at least one of multiple training groups (Fig. 2 – 109, 219, Fig. 5 – 505, ¶58 “the one or more applications are clustered into one or more groups by the clustering module 219 by using the learning model 109. The learning model 109 is trained dynamically based on the application usage pattern for clustering”).

Regarding the limitation determining, by the one or more processors, a training sequence for the plurality of applications based on the multiple training groups, Priydarshi teaches the plurality of applications based on the multiple training groups (Fig. 2c, ¶43 “the learning model 109 cluster the plurality of applications based on temporal and application usage pattern”). However, Priydarshi fails to teach the full limitation determining, by the one or more processors, a training sequence for the plurality of applications based on the multiple training groups.

Liu, in the same field of endeavor, teaches determining, by the one or more processors (Machine Translation, Fig. 5 – 910-912, ¶91 “the electronic device 910 includes a processor 911… The memory 912 is used to store computer-executable instructions… which, when run by the processor 911, can perform one or more steps of the scheduling method for the model training task”), a training sequence for a plurality of model training tasks based on a training group (Machine Translation, Fig. 2 – 201-202, ¶42 “a target task group can be obtained, which includes multiple model training tasks to be processed.
These model training tasks can be training tasks of models involving various deep learning methods,” ¶44 “multiple model training tasks in the target task group are scheduled to multiple model training resources of different types,” wherein “multiple model training tasks… in the task group are scheduled” encompasses determining… a training sequence).

Regarding the limitation and initiating, by the one or more processors, training of a plurality of anomaly detection models that correspond to the plurality of applications according to the training sequence, Krishnan further teaches initiating, by the one or more processors, training of a plurality of anomaly detection models that correspond to the plurality of applications (Fig. 1 – 120, 124, 164, ¶20 “the features discussed herein are equally applicable when… executing the plurality of respective predictive data models corresponding to the plurality of applications,” ¶21 “The predictive data model 120 thus generated can be initially trained”). However, Krishnan fails to teach according to the training sequence. Liu teaches training models according to the training sequence (Machine Translation, ¶47 “the model training process can be divided into multiple training stages, with each training stage scheduling each model training task once”).

Krishnan further teaches wherein each anomaly detection model of the plurality of anomaly detection models comprises a machine learning (ML) model configured to detect occurrence of an anomaly by a corresponding application based on received application data (Fig. 1 – 120, 124, 164, ¶21 “training data 124 which may comprise a subset of the application logs 164,” ¶23 “Anomalies may include a combination of error codes which the predictive data model 120 is trained to identify as leading to a high probability of application failure”).

Krishnan, Priydarshi, and Liu are analogous art to the claimed invention as all are from the same field of endeavor of machine learning.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the training groups of Priydarshi and the training sequence of Liu with the methodology of Krishnan. The motivation to do so is to design a method that “optimizes user experience while using the application” (Priydarshi, ¶24) while “avoiding competition for model training resources between different model training tasks, improving the utilization rate of model training resources, and enhancing the efficiency of model training” (Liu, Machine Translation, ¶16).

Regarding claim 4, Krishnan in view of Priydarshi and further in view of Liu teaches the method of claim 1 (and thus the rejection of claim 1 is incorporated). Krishnan teaches the training of the plurality of anomaly detection models and one or more anomaly detection models (Fig. 1 – 120, ¶20 “the features discussed herein are equally applicable when… executing the plurality of respective predictive data models corresponding to the plurality of applications,” ¶21 “The predictive data model 120 thus generated can be initially trained”). However, Krishnan fails to teach wherein the training of the plurality of anomaly detection models according to the training sequence includes concurrently training one or more anomaly detection models of a first training group of the multiple training groups and one or more anomaly detection models of a second training group of the multiple training groups.

Priydarshi teaches a first training group of the multiple training groups and a second training group of the multiple training groups (Fig. 2a – 219, Fig. 2c, ¶42 “The clustering module 219 may clusters the plurality of applications into one or more groups,” Figure 2c depicts tables with multiple Group IDs, among which are Group IDs 1 and 2, or a first training group of the multiple training groups and a second training group of the multiple training groups).
However, Priydarshi fails to teach wherein the training of the plurality of anomaly detection models according to the training sequence includes concurrently training.

Liu teaches wherein training a plurality of models according to the training sequence includes concurrently training one model of a first training group and one model of the same training group (Fig. 3C, ¶42 “a target task group can be obtained, which includes multiple model training tasks to be processed. These model training tasks can be training tasks of models involving various deep learning methods,” ¶58 “after entering the (n-1)th training phase, task A is scheduled to resource 1… Task B is scheduled to resource 3… Task C is scheduled to resource 2… After tasks A, B, and C are completed, the nth training phase begins,” Figure 3C depicts multiple model training tasks being performed simultaneously in each training phase, or concurrently training).

Krishnan, Priydarshi, and Liu are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the first and second training groups of Priydarshi and the concurrent training of Liu with the anomaly detection models of Krishnan. The motivation to do so is to design a method that “optimizes user experience while using the application” (Priydarshi, ¶24) while “avoiding competition for model training resources between different model training tasks, improving the utilization rate of model training resources, and enhancing the efficiency of model training” (Liu, Machine Translation, ¶16).

Regarding claim 5, Krishnan in view of Priydarshi and further in view of Liu teaches the method of claim 4 (and thus the rejection of claim 4 is incorporated).
Krishnan teaches training an anomaly detection model (¶21 “The predictive data model 120 thus generated can be initially trained”). However, Krishnan fails to teach wherein training an anomaly detection model of the first training group comprises performing one or more same preprocessing operations, one or more same post-processing operations, or a combination thereof, than training an anomaly detection model of the second training group.

Priydarshi teaches the first training group and the second training group (Fig. 2a – 219, Fig. 2c, ¶42 “The clustering module 219 may clusters the plurality of applications into one or more groups,” Figure 2c depicts the first training group and the second training group). However, Priydarshi fails to teach wherein training an anomaly detection model of the first training group comprises performing one or more same preprocessing operations, one or more same post-processing operations, or a combination thereof, than training an anomaly detection model of the second training group.

Liu teaches wherein training a model of the first training group comprises performing one or more same preprocessing operations (¶32 “The model training process requires the use of various resources. For example, in one iteration of model training, the following stages need to be completed in sequence… preprocessing data and simulation operations in reinforcement learning (using CPU resources)”), one or more same post-processing operations, or a combination thereof, than training a model of the same training group (¶58 “it can be seen that the model training resources are utilized more efficiently under the scheduling mode shown in Figure 3C,” Figure 3C depicts each model training task sharing the same resources 1, 2, and 3, which encompasses performing one or more same preprocessing operations).

Krishnan, Priydarshi, and Liu are analogous art to the claimed invention as all are from the same field of endeavor of machine learning.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the first and second training groups of Priydarshi and the same preprocessing operations of Liu with the anomaly detection models of Krishnan. The motivation to do so is to design a method that “optimizes user experience while using the application” (Priydarshi, ¶24) while “avoiding competition for model training resources between different model training tasks, improving the utilization rate of model training resources, and enhancing the efficiency of model training” (Liu, Machine Translation, ¶16).

Regarding claim 6, Krishnan in view of Priydarshi and further in view of Liu teaches the method of claim 1 (and thus the rejection of claim 1 is incorporated).

Regarding the limitation wherein the training of the plurality of anomaly detection models according to the training sequence includes training a first anomaly detection model of a first training group of the multiple training groups and a second anomaly detection model of the first training group in series, Krishnan teaches the training of the plurality of anomaly detection models, training a first anomaly detection model, and training a second anomaly detection model (Fig. 1 – 120, ¶20 “the features discussed herein are equally applicable when… executing the plurality of respective predictive data models corresponding to the plurality of applications,” ¶21 “The predictive data model 120 thus generated can be initially trained,” wherein a “plurality of respective predictive data models” implies at least two models, or a first anomaly detection model and a second anomaly detection model).
However, Krishnan fails to teach wherein the training of the plurality of anomaly detection models according to the training sequence includes training a first anomaly detection model of a first training group of the multiple training groups and a second anomaly detection model of the first training group in series. Priydarshi teaches a first training group of the multiple training groups (Fig. 2a –219, Fig. 2c, ¶42 “The clustering module 219 may clusters the plurality of applications into one or more groups,” Figure 2c depicts a first training group of the multiple training groups). However, Priydarshi fails to teach wherein the training of the plurality of anomaly detection models according to the training sequence includes training a first anomaly detection model of a first training group of the multiple training groups and a second anomaly detection model of the first training group in series. Liu teaches wherein the training of a plurality of models according to the training sequence includes training a first model of a first training group and a second model of the first training group (¶42 “a target task group can be obtained, which includes multiple model training tasks to be processed. These model training tasks can be training tasks of models involving various deep learning methods”) in series (Fig. 3B, ¶57 “in one scheduling mode, after entering the (n-1)th training phase, task A is scheduled to resource 1, and the duration of task A using resource 1 is (t2-t1)… After tasks A, B, and C are completed, the nth training phase begins… Task B is scheduled to resource 3, and the duration of task B's use of resource 3 is (t3-t2)… The subsequent process follows the same pattern,” Figure 3B depicts each of the model training tasks taking up most of the training time during each respective stage of training, hence the training tasks are executed in series). 
Krishnan, Priydarshi, and Liu are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the first training group of Priydarshi and the training in series of Liu with the plurality of anomaly detection models of Krishnan. The motivation to do so is to design a method that “optimizes user experience while using the application” (Priydarshi, ¶24) while “avoiding competition for model training resources between different model training tasks, improving the utilization rate of model training resources, and enhancing the efficiency of model training” (Liu, Machine Translation, ¶16).

Regarding claim 14, Krishnan teaches a system for machine learning-based application management (¶13 “An application failure prediction system (AFPS) disclosed herein is configured for accessing the real-time data from an application executing on a computing apparatus, predicting anomalies which may be indicative of potential application failures and implementing corrective actions to mitigate the occurrences of the anomalies”), the system comprising: a memory (Fig. 9 – 906, ¶50 “The computer-readable storage medium 906 may be any suitable medium which participates in providing instructions”). Krishnan further teaches and one or more processors communicatively coupled to the memory, the one or more processors configured to (Fig. 1 – 100, Fig. 9 – 902, 906, ¶50 “The computer-readable storage medium 906… participates in providing instructions to the processor(s) 902 for execution… The instructions… stored on the computer readable medium 906 may include machine readable instructions… executed by the processor(s) 902 to perform the methods and functions for the AFPS 100 described herein”): decompose log data associated with a plurality of applications (Fig. 
1 – 100, ¶18, ¶41, as explained above with respect to claim 1) into time-series data representing values of one or more key performance indicators (KPIs) over a time period associated with the log data (Fig. 1 – 102, 122, 162, Fig. 4 – 400-410, ¶36, as explained above with respect to claim 1). Regarding the limitation perform clustering operations based on one or more temporal components derived from the time-series data to assign each of the plurality of applications to at least one of multiple training groups, Krishnan further teaches one or more temporal components derived from the time-series data (Fig. 4 – 410, Fig. 5 – 504, ¶36, ¶38). However, Krishnan fails to teach the full limitation. Priydarshi, in the same field of endeavor, teaches perform clustering operations based on application usage to assign each of the plurality of applications to at least one of multiple training groups (Fig. 2 – 109, 219, Fig. 5 – 505, ¶58). Regarding the limitation determine a training sequence for the plurality of applications based on the multiple training groups, Priydarshi teaches the plurality of applications based on the multiple training groups (Fig. 2c, ¶43). However, Priydarshi fails to teach determine a training sequence. Liu, in the same field of endeavor, teaches determine a training sequence for a plurality of model training tasks based on a training group (Machine Translation, Fig. 2 – 201-202, ¶42, as explained above with respect to claim 1). Regarding the limitation and train a plurality of anomaly detection models that correspond to the plurality of applications according to the training sequence, Krishnan further teaches and train a plurality of anomaly detection models that correspond to the plurality of applications (Fig. 1 – 120, 124, 164, ¶20-21). However, Krishnan fails to teach according to the training sequence. Liu teaches according to the training sequence (Machine Translation, ¶47). 
Krishnan further teaches wherein each anomaly detection model of the plurality of anomaly detection models comprises a machine learning (ML) model configured to detect occurrence of an anomaly by a corresponding application based on received application data (Fig. 1 – 120, 124, 164, ¶21, ¶23). Krishnan, Priydarshi, and Liu are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the training groups of Priydarshi and the training sequence of Liu with the system of Krishnan. The motivation to do so is to design a method that “optimizes user experience while using the application” (Priydarshi, ¶24) while “avoiding competition for model training resources between different model training tasks, improving the utilization rate of model training resources, and enhancing the efficiency of model training” (Liu, Machine Translation, ¶16).

Regarding claim 19, Krishnan teaches a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations for machine learning-based application management (Fig. 1 – 100, Fig. 9 – 902, 906, ¶50 “The computer-readable storage medium 906 may be any suitable medium which participates in providing instructions to the processor(s) 902 for execution… the computer readable medium 906 may be non-transitory… The instructions… stored on the computer readable medium 906 may include machine readable instructions… executed by the processor(s) 902 to perform the methods and functions for the (AFPS) 100 described herein”), the operations comprising: decomposing log data associated with a plurality of applications (Fig. 
1 – 100, ¶18, ¶41, as explained above with respect to claim 1) into time-series data representing values of one or more key performance indicators (KPIs) over a time period associated with the log data (Fig. 1 – 102, 122, 162, Fig. 4 – 400-410, ¶36, as explained above with respect to claim 1). Regarding the limitation performing clustering operations based on one or more temporal components derived from the time-series data to assign each of the plurality of applications to at least one of multiple training groups, Krishnan further teaches one or more temporal components derived from the time-series data (Fig. 4 – 410, Fig. 5 – 504, ¶36, ¶38). However, Krishnan fails to teach the full limitation. Priydarshi, in the same field of endeavor, teaches performing clustering operations based on application usage to assign each of the plurality of applications to at least one of multiple training groups (Fig. 2 – 109, 219, Fig. 5 – 505, ¶58). Regarding the limitation determining a training sequence for the plurality of applications based on the multiple training groups, Priydarshi teaches the plurality of applications based on the multiple training groups (Fig. 2c, ¶43). However, Priydarshi fails to teach determining a training sequence. Liu, in the same field of endeavor, teaches determining a training sequence for a plurality of model training tasks based on a training group (Machine Translation, Fig. 2 – 201-202, ¶42, as explained above with respect to claim 1). Regarding the limitation and initiating, by the one or more processors, training of a plurality of anomaly detection models that correspond to the plurality of applications according to the training sequence, Krishnan further teaches and initiating, by the one or more processors, training of a plurality of anomaly detection models that correspond to the plurality of applications (Fig. 1 – 120, 124, 164, ¶20-21). However, Krishnan fails to teach according to the training sequence. 
Liu teaches training models according to the training sequence (Machine Translation, ¶47). Krishnan further teaches wherein each anomaly detection model of the plurality of anomaly detection models comprises a machine learning (ML) model configured to detect occurrence of an anomaly by a corresponding application based on received application data (Fig. 1 – 120, 124, 164, ¶21, ¶23). Krishnan, Priydarshi, and Liu are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the training groups of Priydarshi and the training sequence of Liu with the non-transitory computer-readable storage medium of Krishnan. The motivation to do so is to design a method that “optimizes user experience while using the application” (Priydarshi, ¶24) while “avoiding competition for model training resources between different model training tasks, improving the utilization rate of model training resources, and enhancing the efficiency of model training” (Liu, Machine Translation, ¶16).

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Krishnan in view of Priydarshi and further in view of Liu, and further in view of Gennetten et al. (US 20230153191 A1, hereinafter Gennetten).

Regarding claim 2, Krishnan in view of Priydarshi and further in view of Liu teaches the method of claim 1 (and thus the rejection of claim 1 is incorporated). Krishnan teaches the one or more temporal components (¶36 “characteristic temporal error patterns”). However, Krishnan fails to teach wherein the one or more temporal components comprise trend components, seasonal components, cyclic components, or a combination thereof. 
Gennetten, in the same field of endeavor, teaches this limitation (¶34 “historical data critical to the execution of batch processes in the past may be identified and collected… various examples of which are set forth in Appendix A… critical data may be identified via feedback, e.g. from the underlying event based process automation system(s), such as log data,” APPENDIX A on pages 16-17 depicts multiple rows describing various features such as “How many times was the job modified,” “The year/month/day/day of the week/hour/minute that the job is going to run,” and “How many times does the job run in the dataset,” which encompasses temporal components comprising trend components, seasonal components, cyclic components, or a combination thereof). Krishnan and Gennetten are analogous art to the claimed invention as both are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the temporal components of Gennetten with the temporal components of Krishnan. The motivation to do so, as stated by Gennetten, is to “improve success rates (e.g., greater chance of successful completion, etc.) and/or efficiency (e.g., time to successful completion, etc.) in terms of both the batch processing and any recovery processes” (Gennetten, ¶64).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Krishnan in view of Priydarshi and further in view of Liu, and further in view of Jain et al. (US 20220198949 A1, hereinafter Jain).

Regarding claim 3, Krishnan in view of Priydarshi and further in view of Liu teaches the method of claim 1 (and thus the rejection of claim 1 is incorporated). Krishnan teaches the plurality of anomaly detection models (¶20 “the features discussed herein are equally applicable when… executing the plurality of respective predictive data models corresponding to the plurality of applications”) and the time series data (Fig. 1 – 162, Fig. 
4 – 404-406, ¶36 “If at block 404 it is determined that the real-time data 162 comprises unstructured data, it can be converted to structured data at block 406”). However, Krishnan fails to teach determining, by the one or more processors, training frequencies for the plurality of anomaly detection models based on the time-series data. Jain, in the same field of endeavor, teaches determining, by the one or more processors (Fig. 4 – 220, 224, ¶69 “The processor 220 includes… a training module 224… The processor 220 is configured to execute the processor-executable routines”), training frequencies for an AI model (Fig. 4 – 224, ¶64 “training module 224 is configured to train the AI model continuously in a dynamic manner. In such embodiments, the training data may be presented to the training module 224 continuously,” wherein training “the AI model continuously in a dynamic manner” encompasses retraining a model, or determining… training frequencies) based on additional suitable data (Fig. 4 – 224, ¶63 “The training module 224 may be further configured to train the AI model based on… additional suitable data, not described herein”). Regarding the limitation and generating, by the one or more processors, a training schedule for the plurality of anomaly detection models based on the training frequencies and the training sequence, the training schedule including the training sequence and one or more future training sequences, Krishnan teaches the plurality of anomaly detection models (¶20). However, Krishnan fails to teach and generating, by the one or more processors, a training schedule for the plurality of anomaly detection models based on the training frequencies and the training sequence, the training schedule including the training sequence and one or more future training sequences. 
Liu teaches the training sequence (Machine Translation, ¶47 “the model training process can be divided into multiple training stages, with each training stage scheduling each model training task once”). However, Liu fails to teach and generating, by the one or more processors, a training schedule for the plurality of anomaly detection models based on the training frequencies and the training sequence, the training schedule including the training sequence and one or more future training sequences. Jain teaches and generating, by the one or more processors, a training schedule for the AI model based on the training frequencies and a training sequence (Fig. 4 – 224, ¶64 “the training module 224 is configured to train the AI model at defined intervals… the training data may be presented to the training module 224 at a frequency determined by a training schedule… training module 224 is configured to train the AI model continuously in a dynamic manner. In such embodiments, the training data may be presented to the training module 224 continuously,” wherein training the model at “defined intervals” at a “frequency” determined by a “training schedule” encompasses a training schedule… based on the training frequencies and… training sequence), the training schedule including the training sequence and one or more future training sequences (Fig. 4 – 224, ¶64 “the training module 224 is configured to train the AI model at defined intervals, e.g., weekly, bi-weekly, fortnightly, monthly etc.,” wherein “defined intervals” encompasses training schedule including the training sequence and one or more future training sequences). Krishnan, Liu, and Jain are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the training frequencies and training schedule of Jain and the training sequence of Liu with the plurality of anomaly detection models of Krishnan. The motivation to do so is “avoiding competition for model training resources between different model training tasks, improving the utilization rate of model training resources, and enhancing the efficiency of model training” (Liu, Machine Translation, ¶16) while designing an interactive and online “platform that is… faster and potentially better… with the flexibility of being in any geographic location” (Jain, ¶3).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Krishnan in view of Priydarshi and further in view of Liu, and further in view of Dar et al. (US 20200134467 A1, hereinafter Dar).

Regarding claim 7, Krishnan in view of Priydarshi and further in view of Liu teaches the method of claim 6 (and thus the rejection of claim 6 is incorporated). Regarding the limitation wherein training the first anomaly detection model comprises performing one or more different preprocessing operations, one or more different post-processing operations, or a combination thereof, as training the second anomaly detection model, Krishnan teaches training the first anomaly detection model and training the second anomaly detection model (Fig. 1 – 120, ¶20-21 as explained above with respect to claim 6). However, Krishnan fails to teach wherein training the first anomaly detection model comprises performing one or more different preprocessing operations, one or more different post-processing operations, or a combination thereof, as training the second anomaly detection model. 
Dar, in the same field of endeavor, teaches wherein training a first model comprises performing one or more different preprocessing operations, one or more different post-processing operations, or a combination thereof, as training another model (Fig. 2B, ¶52 “In the example of FIG. 2B, preprocessing batch A comprises… a non-sharable portion… The term “non-sharable portion” means a portion of the preprocessed data that is usable (e.g., sharable) as input to only a single NN computation task, for training of a single NN,” wherein the “non-sharable portion” of a “preprocessing batch” for “training of a single NN” is implied to comprise performing one or more different preprocessing operations when compared to training another model). Krishnan and Dar are analogous art to the claimed invention as both are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the different preprocessing operations of Dar with the training of the anomaly detection models of Krishnan. The motivation to do so, as stated by Dar, is to “increase an availability of deep-learning-based products, such as artificial intelligence products which are based on learning methods, and improve hardware utilization” (Dar, ¶37).

Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Krishnan in view of Priydarshi and further in view of Liu, and further in view of Mandal et al. (US 20230105304 A1, hereinafter Mandal).

Regarding claim 8, Krishnan in view of Priydarshi and further in view of Liu teaches the method of claim 1 (and thus the rejection of claim 1 is incorporated). Regarding the limitation generating, by the one or more processors, an application dependency graph based on the time-series data, the log data, or a combination thereof, Krishnan teaches the time-series data, the log data, or a combination thereof (Fig. 1 – 122, 162, Fig. 
4 – 402-406, ¶36 “Real-time data 162 is received at block 402 during the course of execution of the application 122… If at block 404 it is determined that the real-time data 162 comprises unstructured data, it can be converted to structured data at block 406”). However, Krishnan fails to teach generating, by the one or more processors, an application dependency graph based on the time-series data, the log data, or a combination thereof. Mandal, in the same field of endeavor, teaches generating, by the one or more processors (Fig. 7 – 710, ¶143 “CPU 710 may execute instructions stored in RAM 720 to provide several features of the present disclosure”), an application (Fig. 1 – 130, 160, ¶38 “Computing infrastructure 130 is a collection of nodes (160)… which are engineered to together host software applications,” ¶46 “software applications containing one or more components are deployed in nodes 160 of computing infrastructure 130… The components may include software/code modules of a software application,” wherein “component” encompasses application) dependency graph based on log data (Fig. 1 – 135, 150, Fig. 2 – 210, ¶59 “PRT 150 forms a causal dependency graph representing the usage dependencies among various components deployed in computing environment 135 during processing of prior user requests,” wherein “causal dependency graph” encompasses application dependency graph, ¶61 “PRT 150 receives real-time data such as… logs… during the processing of (current) user requests”). Mandal further teaches and initiating, by the one or more processors, training of a failure engine based on the application dependency graph to output indicators of applications that are predicted to fail (Fig. 1 – 135, Fig. 2 – 220-260, Fig. 4 – 450A, Fig. 
5A – 500, ¶95 “Incident predictor 450A takes as inputs… the causal dependency graph (500)… and generates as outputs a probabilistic model, predicted future incidents/imminent performance issues… and severity of the predicted performance issues… the probabilistic model is a Markov network that corelates incidents to outliers occurring in the components deployed in computing environment 135,” ¶96 “the Markov network is trained to corelate the occurrences of the outliers to the eventual occurrences of the incidents, with the strength of correlation based on the strength of the relationships as indicated by causal dependency graph 500,” wherein the “incident predictor” encompasses a failure engine, ¶65 “performance issues may include… failure of the components,” wherein “performance issues” including “failure of the components” encompasses indicators of applications that are predicted to fail). Regarding the limitation wherein the failure engine executes a ML model configured to identify one or more additional applications that are predicted to fail based on one or more detected anomalies output by the plurality of anomaly detection models, Krishnan teaches one or more detected anomalies output by the plurality of anomaly detection models (Fig. 1 – 120, ¶20 “the features discussed herein are equally applicable when… executing the plurality of respective predictive data models,” ¶23 “Anomalies in the real-time data 162 which can lead to application failures are identified by the predictive data model 120”). However, Krishnan fails to teach wherein the failure engine executes a ML model configured to identify one or more additional applications that are predicted to fail based on one or more detected anomalies output by the plurality of anomaly detection models. Mandal teaches wherein the failure engine executes a ML model configured to identify one or more additional applications that are predicted to fail based on anomaly data (Fig. 1 – 135, Fig. 
4 – 420, 450A, 460, Fig. 6E – 648, ¶65 as explained above, ¶95 “Incident predictor 450A takes as inputs… log events (stored in operational data 420) and generates as outputs a probabilistic model, predicted future incidents/imminent performance issues… and severity of the predicted performance issues… the probabilistic model is a Markov network that corelates incidents to outliers occurring in the components,” ¶109 “anomaly/outlier data (maintained as part of operational data 420),” ¶131 “Column 648 specifies a severity score indicating the severity of the incident, with a high value indicating a possible failure of multiple components of the software application or the application as a whole,” wherein an “incident predictor” that “generates” a “Markov network that correlates incidents to outliers occurring in the components” encompasses the failure engine executes a ML model configured to identify one or more additional applications that are predicted to fail). Krishnan and Mandal are analogous art to the claimed invention as both are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the application dependency graph and failure engine of Mandal with the log data and anomaly detection models of Krishnan. The motivation to do so, as stated by Mandal, is to reduce “overall cost required to resolve failure by simply preventing failure from occurring. In addition, the overall productivity of the computing environment (135) increases by minimizing chances of interruption due to failure” (Mandal, Fig. 1 – 135, ¶54).

Regarding claim 9, Krishnan in view of Priydarshi and further in view of Liu and further in view of Mandal teaches the method of claim 8 (and thus the rejection of claim 8 is incorporated). 
Regarding the limitation wherein the failure engine is further trained based on the application dependency graph to configure the failure engine to output failure scores corresponding to reasons for failure associated with the applications that are predicted to fail, Krishnan teaches failure scores corresponding to reasons for failure associated with the applications that are predicted to fail (Fig. 1 – 120, 162, Fig. 4 – 412, ¶36 “an anomaly is a potential application failure that the data model 120 is configured to detect… The anomaly score for an anomaly is calculated at block 412. For example, the anomaly score for a particular anomaly is calculated based on the occurrences of the various features corresponding to the anomaly in the predictive data model 120 within the real-time data 162… all the anomalies detected in the real-time data 162 can be simultaneously processed to obtain their anomaly scores,” wherein “anomaly scores” for an anomaly that are “calculated based on occurrences of the various features corresponding to the anomaly” encompasses failure scores corresponding to reasons for failure associated with the applications that are predicted to fail). However, Krishnan fails to teach wherein the failure engine is further trained based on the application dependency graph to configure the failure engine to output failure scores. Mandal teaches wherein the failure engine is further trained based on the application dependency graph to configure the failure engine to output severity of predicted performance issues (Fig. 1 – 135, Fig. 4 – 450A, Fig. 5A – 500, Fig. 
6E – 648, ¶95 “Incident predictor 450A takes as inputs… the causal dependency graph (500)… and generates as outputs a probabilistic model… and severity of the predicted performance issues… the probabilistic model is a Markov network,” ¶96 “the Markov network is trained to corelate the occurrences of the outliers to the eventual occurrences of the incidents, with the strength of correlation based on the strength of the relationships as indicated by causal dependency graph 500,” ¶131 “Column 648 specifies a severity score indicating the severity of the incident”). Krishnan and Mandal are analogous art to the claimed invention as both are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the application dependency graph and failure engine of Mandal with the failure scores corresponding to reasons for failure and applications predicted to fail of Krishnan. The motivation to do so, as stated by Mandal, is to reduce “overall cost required to resolve failure by simply preventing failure from occurring. In addition, the overall productivity of the computing environment (135) increases by minimizing chances of interruption due to failure” (Mandal, Fig. 1 – 135, ¶54).

Claims 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Krishnan in view of Priydarshi and further in view of Liu, and further in view of Mandal, and further in view of Balasubramanian et al. (US 20200409715 A1, hereinafter Balasubramanian).

Regarding claim 10, Krishnan in view of Priydarshi and further in view of Liu and further in view of Mandal teaches the method of claim 9 (and thus the rejection of claim 9 is incorporated). 
Regarding the limitation initiating, by the one or more processors, training of an application recovery model based on historical recovery action data, the log data, and the application dependency graph, Krishnan teaches historical recovery action data (Fig. 6 – 602-606, ¶39 “When an anomaly is detected, the action(s) to be implemented can be identified via recognizing similar features or patterns of error codes from the application logs 164 and retrieving the action or series of actions that were taken to address the anomaly. The model applicator 114 can be trained via, for example, un-supervised learning to identify similar anomalies and respective corrective actions that were earlier implemented. In an example, surveys may be collected from personnel who implement corrective actions”) and the log data (Fig. 1 – 122, 162, Fig. 4 – 402, ¶36 “Real-time data 162 is received at block 402 during the course of execution of the application 122”). However, Krishnan fails to teach training of an application recovery model based on historical recovery action data, the log data, and the application dependency graph. Mandal teaches the application dependency graph (Fig. 1 – 135, ¶59 “a causal dependency graph representing the usage dependencies among various components deployed in computing environment 135”). However, Mandal fails to teach training of an application recovery model. Balasubramanian, in the same field of endeavor, teaches initiating, by the one or more processors (Fig. 1 – 101, 111, ¶46 “computing device 101 may include a processor 111… adapted to perform computations associated with machine learning”), training of an application recovery model based on past corrective actions (Fig. 12 – 1249, ¶142 “the monitoring device may utilize machine learning techniques to determine patterns of performance based on system state information associated with performance events. 
System state information for an event may be collected and used to train a machine learning model based on determining correlations between attributes of dependencies and the monitored application entering an unhealthy state. During later, similar events, the machine learning model may be used to generate a recommended action based on past corrective actions,” ¶153 “machine learning processes 1249 may also learn from the corrective actions associated with the event records and generate a recommendation that similar corrective action be taken when similar conditions arise at a later time”), event records or other system information (Fig. 12 – 1247-1249, ¶148 “The event records and other system information stored in smart database 1247 may be used by machine learning process 1249 to train a machine learning model and determine potential patterns of performance for the monitored application”), and application dependencies (¶112 “a machine learning model trained to identify correlations between the first dependency and the first application having an unhealthy operating status”). Regarding the limitation wherein the application recovery model comprises an ML model configured to output recovery actions based on input indicators of applications that are predicted to fail, Krishnan teaches recovery actions based on input indicators of applications that are predicted to fail (Fig. 1 – 164, 170, Fig. 6 – 602-604, ¶20 “anomaly or a potential application failure,” ¶39 “The method begins at block 602 wherein the application logs 164 are accessed in order to identify solutions or corrective actions 170 to address the anomalies. When an anomaly is detected, the action(s) to be implemented can be identified via recognizing similar features or patterns of error codes from the application logs 164 and retrieving the action or series of actions that were taken to address the anomaly”). 
However, Krishnan fails to teach wherein the application recovery model comprises an ML model configured to output recovery actions based on input indicators of applications that are predicted to fail. Balasubramanian teaches wherein the application recovery model comprises an ML model configured to output recovery actions based on an application entering an unhealthy state (¶142 “System state information for an event may be collected and used to train a machine learning model based on determining correlations between attributes of dependencies and the monitored application entering an unhealthy state. During later, similar events, the machine learning model may be used to generate a recommended action based on past corrective actions”).

Krishnan, Mandal, and Balasubramanian are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the application dependency graph of Mandal and the recovery model of Balasubramanian with the log data, historical recovery action data, and recovery actions of Krishnan. The motivation to do so is to reduce “overall cost required to resolve failure by simply preventing failure from occurring. In addition, the overall productivity of the computing environment (135) increases by minimizing chances of interruption due to failure” (Mandal, Fig. 1 – 135, ¶54) while “allowing a more complete picture of the health and status of the target application and system” (Balasubramanian, ¶81).

Regarding claim 11, Krishnan in view of Priydarshi and further in view of Liu and further in view of Mandal and further in view of Balasubramanian teaches the method of claim 10 (and thus the rejection of claim 10 is incorporated).
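As context for the limitation at issue, the retrieval-style recovery model Krishnan's ¶39 describes (match a new anomaly's error-code pattern against historical anomalies, then return the corrective action recorded for the closest match) can be sketched minimally in Python. Everything below, including the function names, the `ERR`-prefixed error-code convention, and the cosine-similarity matching, is an illustrative assumption, not taken from the cited references or the claims.

```python
from collections import Counter

def error_code_vector(log_lines):
    """Represent an anomaly by the counts of error codes seen in its logs.
    The ERR prefix is an assumed log convention, purely for illustration."""
    return Counter(tok for line in log_lines
                   for tok in line.split() if tok.startswith("ERR"))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class RecoveryModel:
    """Maps a predicted failure's log features to the corrective action
    recorded for the most similar historical anomaly."""
    def __init__(self):
        self.history = []  # list of (feature_vector, action) pairs

    def train(self, historical_records):
        for log_lines, action in historical_records:
            self.history.append((error_code_vector(log_lines), action))

    def recommend(self, log_lines):
        vec = error_code_vector(log_lines)
        return max(self.history, key=lambda h: cosine(vec, h[0]))[1]

model = RecoveryModel()
model.train([
    (["ERR_DB_TIMEOUT on query", "ERR_DB_TIMEOUT retry failed"], "restart database pool"),
    (["ERR_OOM in worker", "ERR_OOM heap exhausted"], "scale out workers"),
])
print(model.recommend(["ERR_DB_TIMEOUT on healthcheck"]))  # -> restart database pool
```

A production system would replace the nearest-neighbor lookup with a trained model, but the retrieval structure (features in, previously successful action out) is the same shape the quoted passages describe.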
Krishnan teaches providing, by the one or more processors, current log data as input data to the plurality of anomaly detection models to generate one or more detected anomalies associated with one or more applications of the plurality of applications (Fig. 1 – 120, 122, 162, ¶20 “the features discussed herein are equally applicable when… executing the plurality of respective predictive data models corresponding to the plurality of applications,” ¶22 “real-time data 162 may be obtained… from the application 122 even as it is being generated,” ¶23 “Anomalies in the real-time data 162 which can lead to application failures are identified by the predictive data model 120”).

Regarding the limitation providing, by the one or more processors, the one or more detected anomalies as input data to the failure engine to generate one or more indicators of applications that are predicted to fail and one or more failure scores corresponding to reasons for failure associated with the applications that are predicted to fail, Krishnan teaches the one or more detected anomalies (Fig. 1 – 162, ¶23 “Anomalies in the real-time data 162 which can lead to application failures are identified”) and one or more indicators of applications that are predicted to fail (Fig. 1 – 164, ¶39 “an anomaly is detected… via recognizing similar features or patterns of error codes from the application logs 164”) and one or more failure scores corresponding to reasons for failure associated with the applications that are predicted to fail (Fig. 1 – 120, 162, Fig. 4 – 412, ¶36 “the anomaly score for a particular anomaly is calculated based on the occurrences of the various features corresponding to the anomaly in the predictive data model 120 within the real-time data 162… all the anomalies detected in the real-time data 162 can be simultaneously processed to obtain their anomaly scores”).
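The anomaly-score passage quoted from ¶36 (a score based on how often an anomaly's characteristic features occur in the real-time data) admits a very simple illustration. The occurrence-ratio weighting and the feature names below are assumptions made for the sketch, not the reference's actual formula.

```python
def anomaly_score(anomaly_features, realtime_events):
    """Occurrence-based anomaly score: count how often the anomaly's
    characteristic features appear in the live event stream, normalized
    by the stream length. Both the features and the normalization are
    illustrative assumptions."""
    hits = sum(realtime_events.count(f) for f in anomaly_features)
    return hits / max(len(realtime_events), 1)

events = ["cpu_spike", "gc_pause", "cpu_spike", "ok", "ok"]
print(anomaly_score(["cpu_spike", "gc_pause"], events))  # 3 hits / 5 events = 0.6
```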
However, Krishnan fails to teach providing, by the one or more processors, the one or more detected anomalies as input data to the failure engine to generate one or more indicators of applications that are predicted to fail and one or more failure scores corresponding to reasons for failure associated with the applications that are predicted to fail. Mandal teaches providing, by the one or more processors, anomaly data as input data to the failure engine to generate predicted performance issues and severity of the predicted performance issues (Fig. 4 – 420, 450A, Fig. 6E – 648, ¶95 “Incident predictor 450A takes as inputs… log events (stored in operational data 420) and generates… predicted future incidents/imminent performance issues… and severity of the predicted performance issues,” ¶109 “anomaly/outlier data (maintained as part of operational data 420),” ¶131 “Column 648 specifies a severity score indicating the severity of the incident”).

Regarding the limitation providing, by the one or more processors, the one or more indicators of the applications that are predicted to fail as input data to the application recovery model to generate one or more recovery action recommendations, Krishnan teaches the one or more indicators of the applications that are predicted to fail (Fig. 1 – 164, ¶39) and one or more recovery action recommendations (Fig. 1 – 164, Fig. 6 – 608, 616, ¶40 “If it is determined at 608 that the action is not an automatic action, the procedure jumps to block 616 to transmit a message to the personnel. In an example, the message may include information regarding any solutions or corrective actions that were identified from the application logs 164,” wherein the “message” including “information regarding… corrective actions” encompasses recovery action recommendations).
However, Krishnan fails to teach providing, by the one or more processors, the one or more indicators of the applications that are predicted to fail as input data to the application recovery model to generate one or more recovery action recommendations. Balasubramanian teaches providing, by the one or more processors (Fig. 1 – 111, ¶46 “a processor 111… adapted to perform… machine learning”), current operating status as input data to the application recovery model to generate recovery action recommendations (Fig. 12 – 1249, ¶155 “Once the model is trained on these patterns of performance, a current operating status of the monitored application and system may be used by the machine learning processes to generate predictions regarding the likelihood that the system will enter an unhealthy state. If the system is in an unhealthy state, or if conditions seem ripe for the system to enter an unhealthy state, the machine learning processes 1249 may generate a recommended action to restore the system to a healthy state”).

Krishnan further teaches and displaying, by the one or more processors, a dashboard that indicates the applications that are predicted to fail, the one or more failure scores, the reasons for failure, the one or more recovery action recommendations, or a combination thereof (Fig. 1 – 118, Fig. 8, ¶45 “FIG. 8 illustrates an example of the GUI 118… that allows a human user to monitor the real-time data 162… The predictors or features 802 for estimating the probability of application failure are shown on the right hand side of the GUI 118. The probability of each of the features indicating application failure can be indicated on the plot 804… A total anomaly score for the real-time data set that is currently being analyzed on the GUI 118 can be indicated via a torus 808. The color of the torus 808 indicates the status alert of the application 122 based on the information from the real-time data 162 currently being displayed”).
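The data flow recited in this claim (detected anomalies into a failure engine, failure indicators into a recovery model, and the results onto a dashboard) can be wired together in a minimal sketch. Every function, the playbook lookup standing in for a trained ML model, and the 0.5 threshold are hypothetical choices for illustration, not elements of the cited references.

```python
FAILURE_THRESHOLD = 0.5  # assumed cutoff for "predicted to fail"

def failure_engine(anomalies):
    """Keep, per application, the highest-scoring anomaly and its reason,
    then retain only applications whose score crosses the threshold."""
    scores = {}
    for app, score, reason in anomalies:
        if score > scores.get(app, (0.0, ""))[0]:
            scores[app] = (score, reason)
    return {app: sr for app, sr in scores.items() if sr[0] >= FAILURE_THRESHOLD}

def recovery_model(app, reason):
    """Placeholder lookup standing in for a trained recovery model."""
    playbook = {"db_timeout": "restart connection pool", "oom": "scale out"}
    return playbook.get(reason, "notify on-call personnel")

def dashboard_rows(anomalies):
    """Summarize predicted failures, scores, reasons, and recommendations."""
    return [
        {"app": app, "failure_score": score, "reason": reason,
         "recommendation": recovery_model(app, reason)}
        for app, (score, reason) in failure_engine(anomalies).items()
    ]

rows = dashboard_rows([("billing", 0.82, "db_timeout"), ("search", 0.2, "oom")])
print(rows)  # only "billing" crosses the threshold
```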
Krishnan, Mandal, and Balasubramanian are analogous art to the claimed invention as all are from the same field of endeavor of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the failure engine of Mandal and the recovery model of Balasubramanian with the log data, anomaly detection models, applications predicted to fail, and dashboard of Krishnan. The motivation to do so is to reduce “overall cost required to resolve failure by simply preventing failure from occurring. In addition, the overall productivity of the computing environment (135) increases by minimizing chances of interruption due to failure” (Mandal, Fig. 1 – 135, ¶54) while “allowing a more complete picture of the health and status of the target application and system” (Balasubramanian, ¶81).

Regarding claim 12, Krishnan in view of Priydarshi and further in view of Liu and further in view of Mandal and further in view of Balasubramanian teaches the method of claim 11 (and thus the rejection of claim 11 is incorporated).

Regarding the limitation initiating, by the one or more processors, automatic performance of an action indicated by the one or more recovery action recommendations, Krishnan teaches initiating, by the one or more processors, automatic performance of an action (Fig. 6 – 608-610, ¶41 “If the retrieved actions can be automatically executed… then such actions are automatically executed at block 610”). However, Krishnan fails to teach an action indicated by the one or more recovery action recommendations. Balasubramanian teaches an action indicated by the one or more recovery action recommendations (¶155 “a recommended action to restore the system to a healthy state”). Krishnan and Balasubramanian are analogous art to the claimed invention as both are from the same field of endeavor of machine learning.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the action indicated by the one or more recovery action recommendations of Balasubramanian with the automatic performance of Krishnan. The motivation to do so, as stated by Balasubramanian, is “allowing a more complete picture of the health and status of the target application and system” (Balasubramanian, ¶81).

Regarding claim 13, Krishnan in view of Priydarshi and further in view of Liu and further in view of Mandal and further in view of Balasubramanian teaches the method of claim 12 (and thus the rejection of claim 12 is incorporated). Krishnan teaches wherein the action comprises re-executing one or more of the applications that are predicted to fail, terminating one or more of the applications that are predicted to fail, or a combination thereof (¶48 “different actions may be implemented. In an example, an action may be implemented on the application server, such as when the correction of the error requires a restart”).

Claims 15-18 recite a system that parallels method claims 8-11, respectively. Therefore, the analysis discussed above with respect to claims 8-11 also applies to claims 15-18, and claims 15-18 are rejected on substantially the same rationale as set forth above with respect to claims 8-11, respectively.

Claim 20 recites a non-transitory computer-readable storage medium that parallels method claim 3. Therefore, the analysis discussed above with respect to claim 3 also applies to claim 20, and claim 20 is rejected on substantially the same rationale as set forth above with respect to claim 3.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM MICHAEL LEE whose telephone number is (571)272-4761.
The examiner can normally be reached Monday-Thursday 8am-5pm, and every other Friday 8am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula, can be reached at (571)272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/W.M.L./
Examiner, Art Unit 2145

/CESAR B PAULA/
Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Mar 06, 2023 — Application Filed
Feb 13, 2026 — Non-Final Rejection — §101, §103 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
