Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 and 4-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite mental processes – concepts performed in the human mind.
Subject Matter Eligibility Analysis
Step 1: Do the Claims Specify a Statutory Category?
Claims 1 and 4-7 recite a system, claim 8 recites an apparatus, claim 9 recites a method, and claim 10 recites a non-transitory computer-readable recording medium, thereby satisfying Step 1 of the analysis.
Step 2 Analysis
Regarding claim 1,
Step 2A – Prong 1: Is a Judicial Exception Recited?
For step 2A eligibility prong one (does the claim recite a judicial exception?), the claim recites “predicting an abnormal accident in the plurality of clusters,” (this is a mental process of observation, evaluation, judgment, and opinion [MPEP 2106.04(a)(2) III. “mental processes”]), “calculate a root score through a combination of a feature score for the abnormal accident and a log score obtained in response to the predicted abnormal accident, in order to search a root cause of the abnormal accident;” (this is a mental process of observation, evaluation, judgment, and opinion [MPEP 2106.04(a)(2) III. “mental processes”]; under the broadest reasonable interpretation, the process of “calculate a root score” is also considered a mathematical calculation [MPEP 2106.04(a)(2) I. “Mathematical concepts”]. The claim does not specify any particular mathematical formula or algorithm and therefore broadly encompasses any mathematical combination of such scores.), and “trigger a recovery action based on the root cause” (this is a mental process of observation, evaluation, judgment, and opinion [MPEP 2106.04(a)(2) III. “mental processes”]). As claimed, this process can practically be performed either in the human mind or using a computer as a tool.
Even if the limitations require a computer, they may still recite a mental process [see MPEP 2106.04(a)(2) III. C. "A Claim That Requires a Computer May Still Recite a Mental Process"]. Predicting an abnormal accident in the clusters by using a prediction model, calculating a root cause score according to feature scores and log scores, and triggering a recovery process based on the root cause are directed to mental processes of observation, evaluation, judgment, and opinion [MPEP 2106.04(a)(2) III. “mental processes”], because the steps are recited at a high level of generality and merely use computers as a tool to perform the processes.
Step 2A – Prong 2: Is the Judicial Exception Integrated into a Practical Application?
For step 2A eligibility prong two (does the claim recite additional elements that integrate the judicial exception into a practical application?), this judicial exception is not integrated into a practical application because the additional limitations of “search data source endpoints, and to obtain address and port information of the data source endpoints … in … a plurality of clusters”, “register the address and port information of the data source endpoints”, “request monitoring of the data source endpoints”, “receive metric information regarding the data source endpoints”, “transmit federate-endpoint-api information for the data source endpoints”, “inputting metric streams into a machine learning-based prediction model”, “a remediation module … interfacing with the infra controller”, “a data source management registering the address and port information of the data source endpoints according to the control of the infra controller”, and “a data collector receiving the metric information regarding the data source endpoints from the data source management, and transmitting the federate-endpoint-api information to the data source management” are insignificant extra-solution activities of data gathering, data sending, and presentation [see MPEP 2106.05(g), “Whether the limitation amounts to necessary data gathering and outputting. This is considered in Step 2A Prong Two and Step 2B.”].
The additional computer parts (cloud environment, infra controller, data source endpoints, monitoring agent, plurality of clusters, monitoring module, a prediction and localization module, machine learning-based prediction model, a remediation module, data source management, data collector) are generic components recited at a high level of generality [see MPEP 2106.05(b) “If applicant amends a claim to add a generic computer or generic computer components and asserts that the claim recites significantly more because the generic computer is 'specially programmed' (as in Alappat, now considered superseded) or is a 'particular machine' (as in Bilski), the examiner should look at whether the added elements integrate the exception into a practical application or provide significantly more than the judicial exception. Merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223-24, 110 USPQ2d 1976, 1983-84 (2014). See In re Alappat, 33 F.3d 1526, 1545, 31 USPQ2d 1545, 1558 (Fed. Cir. 1994); In re Bilski, 545 F.3d 943, 88 USPQ2d 1385 (Fed. Cir. 2008)”]. As a whole, the claims are directed to several abstract mental processes implemented on a generic computer, but are not integrated into a practical application [see MPEP 2106.05(f) “implementing an abstract idea on a generic computer, does not integrate the abstract idea into a practical application in Step 2A Prong Two”].
The claim’s cloud environment, infra controller, data source endpoint, monitoring agent, plurality of clusters, monitoring module, prediction and localization module, machine learning-based prediction model, remediation module, data source management, and data collector do not integrate the judicial exception into a practical application. These limitations are specified at a high level of generality and do not meaningfully limit the claim by going beyond generally linking the use of the judicial exception to a particular technological environment. The claims could work with any system with data endpoints and only generally link the abstract idea to the field of cloud environments. The same process, apart from the descriptors, would also work for managing services in a cloud environment, managing components in a car, managing patient health during surgery, managing building security systems, or managing a single computer with multiple components. [See MPEP 2106.04(d)(1) “Evaluating Improvements in the Functioning of a Computer, or an Improvement to Any Other Technology or Technical Field in Step 2A Prong Two” and MPEP 2106.05(h) “Field of Use and Technological Environment”]
Step 2B: Do the Claims Provide an Inventive Concept?
For step 2B eligibility (whether a claim amounts to significantly more), the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements either gather/store data (“search data source endpoints, and to obtain address and port information of the data source endpoints … in … a plurality of clusters”, “register the address and port information of the data source endpoints”, “request monitoring of the data source endpoints”, “receive metric information regarding the data source endpoints”, “transmit federate-endpoint-api information for the data source endpoints”, “inputting metric streams into a machine learning-based prediction model”, “a remediation module … interfacing with the infra controller”, “a data source management registering the address and port information of the data source endpoints according to the control of the infra controller”, “a data collector receiving the metric information regarding the data source endpoints from the data source management, and transmitting the federate-endpoint-api information to the data source management”), or are additional computer parts that are well-known components recited at a high level of generality (cloud environment, infra controller, data source endpoints, monitoring agent, plurality of clusters, monitoring module, a prediction and localization module, machine learning-based prediction model, a remediation module, data source management, data collector).
These data gathering/storing/presenting limitations are insignificant extra-solution activity because they amount to necessary data gathering and outputting (i.e., all uses of the recited judicial exception require such data gathering or data output) [see MPEP 2106.05(g) “(1) Whether the extra-solution limitation is well known.”, “(2) Whether the limitation is significant (i.e. it imposes meaningful limits on the claim such that it is not nominally or tangentially related to the invention).”, “(3) Whether the limitation amounts to necessary data gathering and outputting (i.e., all uses of the recited judicial exception require such data gathering or data output).”].
These data gathering/storing/presenting limitations are also well-understood, routine, and conventional computer functions, recited at a high level of generality, as recognized by the court decisions listed in MPEP § 2106.05(d). Reference US 20240036963 A1 (Azeez) describes this well (“Anomaly detection refers to identifying data values that deviate from an observed norm. Oftentimes anomaly detection may indicate an issue that requires attention. For example, in the context of network traffic, anomaly detection may include identifying traffic loads that deviate from historical norms, which may indicate a service outage or a network intrusion event. Anomaly detection may also be used to identify the source of the issue so that mitigative action can be performed.”). The process of gathering data and identifying values that deviate from normal, although time-consuming and manual, is a well-understood, routine, and conventional process. Automating a mental process and adding well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, does not qualify as “significantly more” [see MPEP 2106.05 “Limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception include: … ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));”].
The claim’s cloud environment, infra controller, data source endpoint, monitoring agent, plurality of clusters, monitoring module, prediction and localization module, machine learning-based prediction model, remediation module, data source management, and data collector do not amount to significantly more than the judicial exception. These limitations are specified at a high level of generality and do not meaningfully limit the claim by going beyond generally linking the use of the judicial exception to a particular technological environment. The claims could work with any system with data endpoints and only generally link the abstract idea to the field of cloud environments. The same process, apart from the descriptors, would also work for managing services in a cloud environment, managing components in a car, managing patient health during surgery, managing building security systems, or managing a single computer with multiple components. [See MPEP 2106.05(h) “Field of Use and Technological Environment”]
Combined and considered as a whole, the claim describes a system that finds endpoints, collects data, feeds the data into a machine learning model, finds a root cause based on the analysis done by the machine learning model, and executes the recovery actions tied to the root cause. The claim as a whole takes the judicial exception and only adds data gathering/storing/presenting steps [MPEP 2106.05(g)], which are conventional [MPEP 2106.05(d)] and generic [MPEP 2106.05(h)], and which do not amount to significantly more than the judicial exception itself [see MPEP 2106.05].
Conclusion: In light of the above, the limitations in claim 1 recite and are directed to an abstract idea and recite no additional elements that would amount to significantly more than the identified abstract idea. Claim 1 is therefore not patent eligible.
As for the limitations recited in claims 4-7, when considering each of the claims as a whole, these additional elements do not integrate the exception into a practical application under the considerations laid out by the Supreme Court and the Federal Circuit. The additional elements do not reflect an improvement in the functioning of a computer, or an improvement to other technology or technical field. The additional elements do not implement a judicial exception with, or use a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim. The additional elements do not apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
Step 2 Analysis
Regarding claim 8,
Step 2A – Prong 1: Is a Judicial Exception Recited?
For step 2A eligibility prong one (does the claim recite a judicial exception?), the claim recites “predict an abnormal accident in the plurality of clusters by … a machine learning-based prediction model,” (this is a mental process of observation, evaluation, judgment, and opinion [MPEP 2106.04(a)(2) III. “mental processes”]), “calculate a root score through a combination of a feature score for the abnormal accident and a log score obtained in response to the predicted abnormal accident for searching a root cause of the abnormal accident.” (this is a mental process of observation, evaluation, judgment, and opinion [MPEP 2106.04(a)(2) III. “mental processes”]; under the broadest reasonable interpretation, the process of “calculate a root score” is also considered a mathematical calculation [MPEP 2106.04(a)(2) I. “Mathematical concepts”]), and “trigger a recovery action based on the root cause by interfacing with an infra controller.” (this is a mental process of observation, evaluation, judgment, and opinion [MPEP 2106.04(a)(2) III. “mental processes”]). As claimed, this process can practically be performed either in the human mind or using a computer as a tool.
Even if the limitations require a computer, they may still recite a mental process [see MPEP 2106.04(a)(2) III. C. "A Claim That Requires a Computer May Still Recite a Mental Process"]. Predicting an abnormal accident in the clusters by using a prediction model, searching for a root cause according to metric and log scores, and performing a recovery process according to the identified root cause are directed to mental processes of observation, evaluation, judgment, and opinion [MPEP 2106.04(a)(2) III. “mental processes”], because the steps are recited at a high level of generality and merely use computers as a tool to perform the processes.
Step 2A – Prong 2: Is the Judicial Exception Integrated into a Practical Application?
For step 2A eligibility prong two (does the claim recite additional elements that integrate the judicial exception into a practical application?), this judicial exception is not integrated into a practical application because the additional limitations of “search data source endpoints and obtain address and port information of the data source endpoints from monitoring agents”, “register the address and port information of the data source endpoints”, and “inputting metric streams into a machine learning-based prediction model” are insignificant extra-solution activities of data gathering, data sending, and presentation [see MPEP 2106.05(g), “Whether the limitation amounts to necessary data gathering and outputting. This is considered in Step 2A Prong Two and Step 2B.”].
The additional computer parts (cloud environment, a processor, a memory connected to the processor, “wherein the memory stores program instructions executed by the processor to”, data source endpoint, monitoring agent, plurality of clusters, a machine learning-based prediction model, infra controller) are generic components recited at a high level of generality [see MPEP 2106.05(b) “If applicant amends a claim to add a generic computer or generic computer components and asserts that the claim recites significantly more because the generic computer is 'specially programmed' (as in Alappat, now considered superseded) or is a 'particular machine' (as in Bilski), the examiner should look at whether the added elements integrate the exception into a practical application or provide significantly more than the judicial exception. Merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223-24, 110 USPQ2d 1976, 1983-84 (2014). See In re Alappat, 33 F.3d 1526, 1545, 31 USPQ2d 1545, 1558 (Fed. Cir. 1994); In re Bilski, 545 F.3d 943, 88 USPQ2d 1385 (Fed. Cir. 2008)”]. As a whole, the claims are directed to several abstract mental processes implemented on a generic computer, but are not integrated into a practical application [see MPEP 2106.05(f) “implementing an abstract idea on a generic computer, does not integrate the abstract idea into a practical application in Step 2A Prong Two”].
The claim’s cloud environment, processor, memory connected to the processor, “wherein the memory stores program instructions executed by the processor to”, data source endpoint, monitoring agent, plurality of clusters, machine learning-based prediction model, and infra controller do not integrate the judicial exception into a practical application. These limitations are specified at a high level of generality and do not meaningfully limit the claim by going beyond generally linking the use of the judicial exception to a particular technological environment. The claims could work with any system with data endpoints and only generally link the abstract idea to the field of cloud environments. The same process, apart from the descriptors, would also work for managing services in a cloud environment, managing components in a car, managing patient health during surgery, managing building security systems, or managing a single computer with multiple components. [See MPEP 2106.04(d)(1) “Evaluating Improvements in the Functioning of a Computer, or an Improvement to Any Other Technology or Technical Field in Step 2A Prong Two” and MPEP 2106.05(h) “Field of Use and Technological Environment”]
Step 2B: Do the Claims Provide an Inventive Concept?
For step 2B eligibility (whether a claim amounts to significantly more), the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements either gather/store data (“search data source endpoints and obtain address and port information of the data source endpoints from monitoring agents”, “register the address and port information of the data source endpoints”, “inputting metric streams into a machine learning-based prediction model”), or are additional computer parts that are well-known components recited at a high level of generality (cloud environment, a processor, a memory connected to the processor, “wherein the memory stores program instructions executed by the processor to”, data source endpoint, monitoring agent, plurality of clusters, a machine learning-based prediction model, infra controller).
These data gathering/storing/presenting limitations are insignificant extra-solution activity because they amount to necessary data gathering and outputting (i.e., all uses of the recited judicial exception require such data gathering or data output) [see MPEP 2106.05(g) “(1) Whether the extra-solution limitation is well known.”, “(2) Whether the limitation is significant (i.e. it imposes meaningful limits on the claim such that it is not nominally or tangentially related to the invention).”, “(3) Whether the limitation amounts to necessary data gathering and outputting (i.e., all uses of the recited judicial exception require such data gathering or data output).”].
These data gathering/storing/presenting limitations are also well-understood, routine, and conventional computer functions, recited at a high level of generality, as recognized by the court decisions listed in MPEP § 2106.05(d). Reference US 20240036963 A1 (Azeez) describes this well (“Anomaly detection refers to identifying data values that deviate from an observed norm. Oftentimes anomaly detection may indicate an issue that requires attention. For example, in the context of network traffic, anomaly detection may include identifying traffic loads that deviate from historical norms, which may indicate a service outage or a network intrusion event. Anomaly detection may also be used to identify the source of the issue so that mitigative action can be performed.”). The process of gathering data and identifying values that deviate from normal, although time-consuming and manual, is a well-understood, routine, and conventional process. Automating a mental process and adding well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, does not qualify as “significantly more” [see MPEP 2106.05 “Limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception include: … ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));”].
The claim’s cloud environment, processor, memory connected to the processor, “wherein the memory stores program instructions executed by the processor to”, data source endpoint, monitoring agent, plurality of clusters, machine learning-based prediction model, and infra controller do not amount to significantly more than the judicial exception. These limitations are specified at a high level of generality and do not meaningfully limit the claim by going beyond generally linking the use of the judicial exception to a particular technological environment. The claims could work with any system with data endpoints and only generally link the abstract idea to the field of cloud environments. The same process, apart from the descriptors, would also work for managing services in a cloud environment, managing components in a car, managing patient health during surgery, managing building security systems, or managing a single computer with multiple components. [See MPEP 2106.05(h) “Field of Use and Technological Environment”]
Combined and considered as a whole, the claim describes a system that finds endpoints, collects data, feeds the data into a machine learning model, finds a root cause based on the analysis done by the machine learning model, and triggers a recovery action tied to the root cause. The claim as a whole takes the judicial exception and only adds data gathering/storing/presenting steps [MPEP 2106.05(g)], which are conventional [MPEP 2106.05(d)] and generic [MPEP 2106.05(h)], and which do not amount to significantly more than the judicial exception itself [see MPEP 2106.05].
Conclusion: In light of the above, the limitations in claim 8 recite and are directed to an abstract idea and recite no additional elements that would amount to significantly more than the identified abstract idea. Claim 8 is therefore not patent eligible.
Regarding claim 9, it recites the method implemented by the apparatus of claim 8 and is rejected for the same reasons.
Regarding claim 10, it recites the computer-readable recording medium storing instructions that implement the method performed by the apparatus of claim 8 and is rejected for the same reasons.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 4-10 are rejected under 35 U.S.C. 103 as being unpatentable over US 20210026723 A1 (Nadger) in view of US 20240036963 A1 (Azeez).
Regarding claim 1, Nadger teaches,
A root cause analysis system in a cloud environment (par 4 “Some embodiments of the invention provide methods for performing root cause analysis for non-deterministic anomalies in a datacenter.”), comprising:
an infra controller configured to search data source endpoints, and obtain address and port information of the data source endpoints (par 41 “The network path isolation of the managed components identifies the portion of the constructed graph by using flow identifying techniques to identify one or more network traffic paths between one or more pairs of endpoints (e.g., VMs, containers, computers, etc.) in the datacenter. Examples of such flow identifying techniques that are used in some embodiments include NetFlow, sFlow, and deep packet inspection (DPI). Such flow identifying techniques can be used to extract the network traffic path for any given source and destination endpoints in the datacenter. In some embodiments, the network traffic path is overlaid on the constructed topological graph to identify a portion of the graph to analyze.”; par 29 “As shown, the process 100 initially identifies (at 105) components of a datacenter for monitoring, collects operational data regarding the identified components, and generates additional data from the collected data. The identified set of components in some embodiments include compute components (e.g., virtual machines, containers, computers, etc.), network components (e.g., switches, routers, ports, etc.), and/or service components (e.g., middlebox components, such as firewalls, load balancers, etc.).”) by monitoring agents installed in respective clusters of a plurality of clusters; (par 40 “In some embodiments, the discovery process that is used to construct the graph associates the managed components (e.g., forwarding L2/L3 components, service components, etc.) with tags that identify the tenants that use the managed components. The shared managed components of a specific tenant in some embodiments are identified using the L2 and L3 networking constructs. At the end of the discovery, a topology can be defined and displayed to represent the tenant instance and its relation with the physical/logical managed components.”; par 71 “As shown, each host in some embodiments executes one or more …, performance monitoring (PM) agents 816, performance monitoring VMs 818, ….”; par 72 “This data in some embodiments is collected by PM agents 816 and/or PMVMs 818 executing on the host computers 802-806.”)
a monitoring module configured, under a control of the infra controller (par 71 “As shown, each host in some embodiments executes one or more …, performance monitoring (PM) agents 816, performance monitoring VMs 818, ….”; par 72 “This data in some embodiments is collected by PM agents 816 and/or PMVMs 818 executing on the host computers 802-806.”; fig 8: 810; par 72 “Through this network 850, one or more performance monitoring servers/appliances 810 communicate with the hosts 802-806 and the managers/controllers 815 to collect performance monitoring data.”), to:
register the address and port information of the data source endpoints, (par 29 “The identified set of components in some embodiments include compute components (e.g., virtual machines, containers, computers, etc.), network components (e.g., switches, routers, ports, etc.), and/or service components (e.g., middlebox components, such as firewalls, load balancers, etc.).”; par 41 “The network path isolation of the managed components identifies the portion of the constructed graph by using flow identifying techniques to identify one or more network traffic paths between one or more pairs of endpoints (e.g., VMs, containers, computers, etc.) in the datacenter. Examples of such flow identifying techniques that are used in some embodiments include NetFlow, sFlow, and deep packet inspection (DPI). Such flow identifying techniques can be used to extract the network traffic path for any given source and destination endpoints in the datacenter. In some embodiments, the network traffic path is overlaid on the constructed topological graph to identify a portion of the graph to analyze.”; although the data source endpoint address and port information is not specifically mentioned in Nadger, it would be included in the data source endpoint network traffic path information, and in the additional data collected from the components.)
request monitoring of the data source endpoints, (par 29 “As shown, the process 100 initially identifies (at 105) components of a datacenter for monitoring, collects operational data regarding the identified components, and generates additional data from the collected data.”)
receive metric information regarding the data source endpoints, (par 29 “As shown, the process 100 initially identifies (at 105) components of a datacenter for monitoring, collects operational data regarding the identified components, and generates additional data from the collected data.”; fig 8; par 72 “FIG. 8 also illustrates a set of managers and controllers 815 for managing and controlling the service VMs, service engines, GVMs, and SFEs. These managers/controllers communicate with the hosts through the network 850, which is a local area network in some embodiments, while in other embodiments is a wide area network or a network of networks (such as the Internet). Through this network 850, one or more performance monitoring servers/appliances 810 communicate with the hosts 802-806 and the managers/controllers 815 to collect performance monitoring data. This data in some embodiments is collected by PM agents 816 and/or PMVMs 818 executing on the host computers 802-806. Conjunctively, or alternatively, the performance monitoring data in some embodiments is collected from other modules (e.g., SFEs, service engines, SVMs) executing on the host computers, and/or from the managers/controllers 815.”) and
transmit federate-endpoint-api information for the data source endpoints; (par 31 “The performance monitoring system in some embodiments iteratively (e.g., continuously or periodically) updates the graph that it defines for the datacenter.”; fig 7; par 63 “The discovery engine 715 identifies components in the network and relationships between these components, and stores this information in the component database 721. In some embodiments, the discovery engine 715 uses known techniques to gather this information. Data regarding the operation and performance of these components are gathered by the data collector 717 and the event processor 719, which store their collected information in the component data store 721 or a related data store 723.”)
a prediction and localization module configured to: predict an abnormal accident in the plurality of clusters(par 50 “After reducing the number of data tuples to analyze, the process 100 then analyzes (at 125) the remaining data tuples to determine whether it detects an anomaly in the remaining data tuples that might be due to a potential performance degradation of one or more components.”) by inputting metric streams into a machine learning-based prediction model(par 51 “To detect anomaly on the reduced data tuples for the component nodes of the remaining portion of the graph, the process 100 uses different data analysis processes in different embodiments. Examples of such data analysis processes include (1) clustering-based processes, such as DBSCAN, …, (2) nearest neighbor based processes, such as K-Nearest Neighbor (KNN), …, (3) statistics-based processes, such as Histogram Based Outlier Score (HBOS), … and ( 4) forecasting/prediction based processes, such as ARIMA, ….”), and
calculate a root score through a feature score for the abnormal accident, obtained in response to the predicted abnormal accident in order to search a root cause of the abnormal accident;(par 53 “Upon identifying ( at 125) a time instance for which the associated, analyzed data tuples (that remain after the filtering at 120) include at least one anomaly, the process 100 generates (at 130) a digital signature to represent the associated, analyzed data tuples, and compares (at 135) this signature with each of several pre-tabulated signatures in a codebook. Each codebook signature is associated with a root cause problem (e.g., a reason) for the performance degradation of one or more components in the datacenter.”; par 31 “In some embodiments, these data tuples (that are associated with the nodes) include the symptom data tuples, the metric data tuples and the KPI data tuples.”) and
a remediation module configured to present reports based on the root cause by interfacing with the infra controller.(par 73 “the performance monitoring system 700 provides a user interface for the administrators to query performance data and/or to view reports regarding the performance data.”)
wherein the monitoring module comprises:
a data source management registering the address and port information of the data source endpoints according to the control of the infra controller; (par 29 “The identified set of components in some embodiments include compute components ( e.g., virtual machines, containers, computers, etc.), network components (e.g., switches, routers, ports, etc.), and/or service components (e.g., middlebox components, such as firewalls, load balancers, etc.).”) and
a data collector receiving the metric information regarding the data source endpoints from the data source management, and transmitting the federate-endpoint-api information to the data source management. (par 31 “The performance monitoring system in some embodiments iteratively ( e.g., continuously or periodically) updates the graph that it defines for the datacenter.”; fig 7, par 63 “The discovery engine 715 identifies components in the network and relationships between these components, and stores this information in the component database 721. In some embodiments, the discovery engine 715 uses known techniques to gather this information. Data regarding the operation and performance of these components are gathered by the data collector 717 and the event processor 719, which store their collected information in the component data store 721 or a related data store 723.”)
However, Nadger does not specifically teach using a log score or triggering a recovery action.
On the other hand, Azeez teaches,
A root cause analysis system in a cloud environment(par 5 “The system may generate an aggregate anomaly score based on the anomaly scores from the machine learning models, thereby detecting anomalies based on different behavioral patterns of the same metric. In this way, the system may determine whether a data value of a metric is an anomaly based on multiple learned behaviors of the metric.”), comprising:
a monitoring module configured, under a control of the infra controller, to: receive metric information regarding the data source endpoints;(par 25 “The computer system 110 may access the metrics 101-105 from various sources, depending on the context of these metrics. For example, metrics 101-105 may relate to a computer network domain, as will be described in other examples throughout this disclosure. In the computer network domain, the computer system 110 may obtain a metric 101-105 from one or more network devices of a monitored system (not shown). In another example, for application level contexts, the computer system 110 may obtain a metric 101-105 from one or more applications or services executing on the monitored system.”)
a prediction and localization module configured to: predict an abnormal accident in the plurality of clusters by inputting metric streams into a machine learning-based prediction model,(fig 7:704; par 151 “At 704, the method 700 may include providing the data value to a plurality of machine learning models trained to detect anomalies based on behaviors of historical data values of the metric.”) and calculate a root score through a combination of a feature score for the abnormal accident and a log score (par 155 “To test whether early warning anomalies were detectable, three metrics for each host was captured. …. These metrics were obtained at periodic intervals from various log sources.”; fig 3:304 “Metric ID: 5; Metric Name: Log error count”; table 8, table 9) obtained in response to the predicted abnormal accident, in order to search a root cause of the abnormal accident;(fig 7:706; par 152 “At 706, the method 700 may include generating, based on execution of the plurality of models, a plurality of anomaly scores comprising at least a first anomaly score generated by a first model trained to detect anomalies based on a first behavior of the historical data values of the metric and at least a second anomaly score generated by a second model trained to detect anomalies based on a second behavior of the historical data values of the metric.”; fig 7:708; par 153 “At 708, the method 700 may include generating an aggregate anomaly score based on the plurality of anomaly scores, the aggregate anomaly score representing an aggregate prediction that the data value is anomalous. At 710, the method 700 may include identifying a mitigative action to take based on the aggregate anomaly score.”) and
a remediation module configured to trigger a recovery action based on the root cause by interfacing with the infra controller, (fig 7:708; par 153 “At 708, the method 700 may include generating an aggregate anomaly score based on the plurality of anomaly scores, the aggregate anomaly score representing an aggregate prediction that the data value is anomalous. At 710, the method 700 may include identifying a mitigative action to take based on the aggregate anomaly score.”; par 161 “Using the mitigative actions illustrated in Table 3, this anomaly would have been flagged to be escalated, providing an early warning for mitigation to potentially prevent the outage.”; Claim 16 “identifying, by the computer system, a mitigative action to take based on the aggregate anomaly score.”; par 1 “Anomaly detection may also be used to identify the source of the issue so that mitigative action can be performed.”)
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Nadger to incorporate the log data analysis and preliminary recovery process of Azeez. One of ordinary skill in the art would have been motivated to remedy a shortcoming of Nadger, namely the need to identify the source of issues so that mitigative action can be performed (Azeez par 1 “Anomaly detection may also be used to identify the source of the issue so that mitigative action can be performed. One problem that arises in anomaly detection is early detection.”), with Azeez providing a known method of solving a similar problem: “The system may generate an aggregate anomaly score based on the anomaly scores from the machine learning models, thereby detecting anomalies based on different behavioral patterns of the same metric. In this way, the system may determine whether a data value of a metric is an anomaly based on multiple learned behaviors of the metric.” (Azeez par 5)
Regarding claim 4, Nadger and Azeez teach,
The root cause analysis system of claim 1,
Azeez further teaches,
wherein the prediction and localization module comprise:
a data processor continuously querying data to the data collector;(par 25 “The computer system 110 may access the metrics 101-105 from various sources, depending on the context of these metrics. For example, metrics 101-105 may relate to a computer network domain, as will be described in other examples throughout this disclosure. In the computer network domain, the computer system 110 may obtain a metric 101-105 from one or more network devices of a monitored system (not shown). In another example, for application level contexts, the computer system 110 may obtain a metric 101-105 from one or more applications or services executing on the monitored system.”)
a predictor predicting the abnormal accident by inputting a metric stream according to the query into the machine learning-based prediction model provided from the data processor;(fig 6:606,608; par 145 “At 608, the method 600 may include generating an aggregate anomaly score (such as aggregate anomaly score 131) based on the plurality of anomaly scores, the aggregate anomaly score representing an aggregate prediction that the data value is anomalous.”; fig 7:704; par 151 “At 704, the method 700 may include providing the data value to a plurality of machine learning models trained to detect anomalies based on behaviors of historical data values of the metric.”) and
a root cause analyzer calculating a root score through a combination of the feature score acquired from the predictor for the abnormal accident and the log score acquired from the data processor.(fig 6:610; par 146 “At 610, the method 600 may include identifying a mitigative action based on the aggregate anomaly score. For example, the mitigative actions may be mapped to aggregate anomaly scores.”; par 1 “Anomaly detection may also be used to identify the source of the issue so that mitigative action can be performed. ”)
Regarding claim 5, Nadger and Azeez teach,
The root cause analysis system of claim 4,
Nadger further teaches,
wherein the root cause analyzer acquires the feature score for the abnormal accident through a response from the predictor, and acquires the feature score,(par 51 “To detect anomaly on the reduced data tuples for the component nodes of the remaining portion of the graph, the process 100 uses different data analysis processes in different embodiments. Examples of such data analysis processes include (1) clustering-based processes, such as DBSCAN, …, (2) nearest neighbor based processes, such as K-Nearest Neighbor (KNN), …, (3) statistics-based processes, such as Histogram Based Outlier Score (HBOS), … and ( 4) forecasting/prediction based processes, such as ARIMA, ….”)
However, Nadger does not specifically teach analyzing logs.
Azeez further teaches,
wherein the root cause analyzer acquires the feature score for the abnormal accident through a response from the predictor, and acquires the feature score(fig 6:606; par 144 “At 606, the method 600 may include generating, via the pluggable plurality of models, a plurality of anomaly scores (such as anomaly scores 121A-N). The anomaly scores may include at least a first anomaly score ( such as any one of the anomaly scores 121A-N) generated by the first model (such as any one of the ML models 120A-N) based on the first behavior of the historical data values of the metric”), and then requests the log score to the data processor(fig 6:606; par 144 “and at least a second anomaly score generated by the second model (such as any other one of the ML models 120A-N) based on the second behavior of the historical data values of the metric.”), and
the data processor transmits a log query to the data collector when receiving the log score request, and calculates the log score by receiving a response thereto. (fig 6:606; par 144 “and at least a second anomaly score generated by the second model (such as any other one of the ML models 120A-N) based on the second behavior of the historical data values of the metric.”)
Regarding claim 6, Nadger and Azeez teach,
The root cause analysis system of claim 4,
Nadger further teaches,
wherein the root cause analyzer searches a potential root cause of the abnormal accident through the root score. (par 53 “Upon identifying ( at 125) a time instance for which the associated, analyzed data tuples (that remain after the filtering at 120) include at least one anomaly, the process 100 generates (at 130) a digital signature to represent the associated, analyzed data tuples, and compares (at 135) this signature with each of several pre-tabulated signatures in a codebook. Each codebook signature is associated with a root cause problem (e.g., a reason) for the performance degradation of one or more components in the datacenter.”; par 31 “In some embodiments, these data tuples (that are associated with the nodes) include the symptom data tuples, the metric data tuples and the KPI data tuples.”)
Regarding claim 7, Nadger and Azeez teach,
The root cause analysis system of claim 6,
Nadger further teaches,
wherein the remediation module comprises:
a trigger receiving the potential root cause from the root cause analyzer;(par 70 “The signature comparator 739 identifies the codebook signature that is closest ( e.g., has the smallest hamming distance) to the generated signature, and selects the root cause problem of the identified codebook signature as the root cause of the detected anomaly. The identified root cause in some embodiments specifies one possible reason for the identified degradation of performance of one or more components.”)
wherein the trigger transmits a PUSH notification to a notifier in order to notify a system state together with information on an accident occurrence time, an accident location, and the potential root cause.(par 73 “Also, in some embodiments, the performance monitoring system 700 provides a user interface for the administrators to query performance data and/or to view reports regarding the performance data.”; par 31 “The performance monitoring system in some embodiments iteratively ( e.g., continuously or periodically) updates the graph that it defines for the datacenter.”)
However, Nadger does not specifically teach the details of the message repository and the action repository.
On the other hand, Azeez teaches,
wherein the remediation module comprises:
a trigger receiving the potential root cause from the root cause analyzer;(par 1 “Oftentimes anomaly detection may indicate an issue that requires attention. For example, in the context of network traffic, anomaly detection may include identifying traffic loads that deviate from historical norms, which may indicate a service outage or a network intrusion event. Anomaly detection may also be used to identify the source of the issue so that mitigative action can be performed.”; fig 6:608; par 145 “At 608, the method 600 may include generating an aggregate anomaly score (such as aggregate anomaly score 131) based on the plurality of anomaly scores, the aggregate anomaly score representing an aggregate prediction that the data value is anomalous.”)
a message repository receiving a message query from the trigger, and transmitting a response thereto;(fig 6:612; par 147 “At 612, the method 600 may include performing a lookup of a stored association of a metric identifier and label identifier pair based on the metric identifier to identify a source of the data value using the label identifier. For example, the lookup may be performed based on a query or other data recall action against one or more of the data structures 302, 304, 306, 402, and 502.”) and
an action repository receiving an action query from the trigger, and transmitting a response thereto,(fig 6:610; par 146 “At 610, the method 600 may include identifying a mitigative action based on the aggregate anomaly score. For example, the mitigative actions may be mapped to aggregate anomaly scores.”)
wherein the trigger transmits a PUSH notification to a notifier in order to notify a system state together with information on an accident occurrence time, an accident location, and the potential root cause.(fig 6:614; par 148 “At 614, the method 600 may include generating for display an indication of the mitigative action and the identified source of the data value based on the stored association. For example, the UI subsystem 140 may generate data, for display via a user interface of a client device 160, an indication of the mitigative action and identified source.”)
Regarding claim 8, Nadger teaches,
A root cause analysis apparatus in a cloud environment(par 4 “Some embodiments of the invention provide methods for performing root cause analysis for non-deterministic anomalies in a datacenter.”), comprising:
a processor; and a memory connected to the processor, wherein the memory stores program instructions executed by the processor (par 78 “From these various memory units, the processing unit(s) 910 retrieve instructions to execute and data to process in order to execute the processes of the invention.”) to
search a data source endpoint, and obtain address and port information of the data source endpoint(par 41 “The network path isolation of the managed components identifies the portion of the constructed graph by using flow identifying techniques to identify one or more network traffic paths between one or more pairs of endpoints (e.g., VMs, containers, computers, etc.) in the datacenter. Examples of such flow identifying techniques that are used in some embodiments include NetFlow, sFlow, and deep packet inspection (DPI). Such flow identifying techniques can be used to extract the network traffic path for any given source and destination endpoints in the datacenter. In some embodiments, the network traffic path is overlaid on the constructed topological graph to identify a portion of the graph to analyze.”; par 29 “As shown, the process 100 initially identifies (at 105) components of a datacenter for monitoring, collects operational data regarding the identified components, and generates additional data from the collected data. The identified set of components in some embodiments include compute components (e.g., virtual machines, containers, computers, etc.), network components (e.g., switches, routers, ports, etc.), and/or service components (e.g., middlebox components, such as firewalls, load balancers, etc.).”) from monitoring agents that are installed in respective clusters of a plurality of clusters, (par 40 “In some embodiments, the discovery process that is used to construct the graph associates the managed components (e.g., forwarding L2/L3 components, service components, etc.) with tags that identify the tenants that use the managed components. The shared managed components of a specific tenant in some embodiments are identified using the L2 and L3 networking constructs. At the end of the discovery, a topology can be defined and displayed to represent the tenant instance and its relation with the physical/logical managed components.”; par 71 “As shown, each host in some embodiments executes one or more …, performance monitoring (PM) agents 816, performance monitoring VMs 818, ….”; par 72 “This data in some embodiments is collected by PM agents 816 and/or PMVMs 818 executing on the host computers 802-806.”)
register the address and port information of the data source endpoints, (par 29 “The identified set of components in some embodiments include compute components ( e.g., virtual machines, containers, computers, etc.), network components (e.g., switches, routers, ports, etc.), and/or service components (e.g., middlebox components, such as firewalls, load balancers, etc.).”)
request monitoring of the data source endpoints, (par 29 “As shown, the process 100 initially identifies (at 105) components of a datacenter for monitoring, collects operational data regarding the identified components, and generates additional data from the collected data.”)
receive metric information regarding the data source endpoints, (par 29 “As shown, the process 100 initially identifies (at 105) components of a datacenter for monitoring, collects operational data regarding the identified components, and generates additional data from the collected data.”; fig 8; par 72 “FIG. 8 also illustrates a set of managers and controllers 815 for managing and controlling the service VMs, service engines, GVMs, and SFEs. These managers/controllers communicate with the hosts through the network 850, which is a local area network in some embodiments, while in other embodiments is a wide area network or a network of networks (such as the Internet). Through this network 850, one or more performance monitoring servers/appliances 810 communicate with the hosts 802-806 and the managers/controllers 815 to collect performance monitoring data. This data in some embodiments is collected by PM agents 816 and/or PMVMs 818 executing on the host computers 802-806. Conjunctively, or alternatively, the performance monitoring data in some embodiments is collected from other modules ( e.g., SFEs, service engines, SVMs) executing on the host computers, and/or from the managers/ controllers 815.”)
transmit federate-endpoint-api information for the data source endpoints, (par 31 “The performance monitoring system in some embodiments iteratively ( e.g., continuously or periodically) updates the graph that it defines for the datacenter.”; fig 7, par 63 “The discovery engine 715 identifies components in the network and relationships between these components, and stores this information in the component database 721. In some embodiments, the discovery engine 715 uses known techniques to gather this information. Data regarding the operation and performance of these components are gathered by the data collector 717 and the event processor 719, which store their collected information in the component data store 721 or a related data store 723.”)
predict an abnormal accident in the plurality of clusters by inputting metric streams (par 50 “After reducing the number of data tuples to analyze, the process 100 then analyzes (at 125) the remaining data tuples to determine whether it detects an anomaly in the remaining data tuples that might be due to a potential performance degradation of one or more components.”) into a machine learning-based prediction model, (par 51 “To detect anomaly on the reduced data tuples for the component nodes of the remaining portion of the graph, the process 100 uses different data analysis processes in different embodiments. Examples of such data analysis processes include (1) clustering-based processes, such as DBSCAN, …, (2) nearest neighbor based processes, such as K-Nearest Neighbor (KNN), …, (3) statistics-based processes, such as Histogram Based Outlier Score (HBOS), … and ( 4) forecasting/prediction based processes, such as ARIMA, ….”)
calculate a root score through a feature score for the abnormal accident, obtained in response to the predicted abnormal accident, for searching a root cause of the abnormal accident (par 53 “Upon identifying ( at 125) a time instance for which the associated, analyzed data tuples (that remain after the filtering at 120) include at least one anomaly, the process 100 generates (at 130) a digital signature to represent the associated, analyzed data tuples, and compares (at 135) this signature with each of several pre-tabulated signatures in a codebook. Each codebook signature is associated with a root cause problem (e.g., a reason) for the performance degradation of one or more components in the datacenter.”; par 31 “In some embodiments, these data tuples (that are associated with the nodes) include the symptom data tuples, the metric data tuples and the KPI data tuples.”) and
present reports based on the root cause by interfacing with an infra controller.(par 73 “the performance monitoring system 700 provides a user interface for the administrators to query performance data and/or to view reports regarding the performance data.”)
However, Nadger does not specifically teach using a log score or triggering a recovery action.
On the other hand, Azeez teaches,
A root cause analysis apparatus in a cloud environment(par 5 “The system may generate an aggregate anomaly score based on the anomaly scores from the machine learning models, thereby detecting anomalies based on different behavioral patterns of the same metric. In this way, the system may determine whether a data value of a metric is an anomaly based on multiple learned behaviors of the metric.”), comprising:
a processor;(par 27 “As shown in FIG. 1, processor 112 is programmed to execute one or more computer program components…”) and a memory connected to the processor, wherein the memory stores program instructions executed by the processor(par 172 “The electronic storage may store software algorithms, information determined by the processors, information obtained from servers, information obtained from client devices, or other information that enables the functionalities described herein.”) to
receive metric information regarding the data source endpoints, (par 25 “The computer system 110 may access the metrics 101-105 from various sources, depending on the context of these metrics. For example, metrics 101-105 may relate to a computer network domain, as will be described in other examples throughout this disclosure. In the computer network domain, the computer system 110 may obtain a metric 101-105 from one or more network devices of a monitored system (not shown). In another example, for application level contexts, the computer system 110 may obtain a metric 101-105 from one or more applications or services executing on the monitored system.”)
predict an abnormal accident in the plurality of clusters by inputting metric streams into a machine learning-based prediction model,(fig 7:704; par 151 “At 704, the method 700 may include providing the data value to a plurality of machine learning models trained to detect anomalies based on behaviors of historical data values of the metric.”)
calculate a root score through a combination of a feature score for the abnormal accident(fig 7:706; par 152 “At 706, the method 700 may include generating, based on execution of the plurality of models, a plurality of anomaly scores comprising at least a first anomaly score generated by a first model trained to detect anomalies based on a first behavior of the historical data values of the metric and at least a second anomaly score generated by a second model trained to detect anomalies based on a second behavior of the historical data values of the metric.”; fig 7:708; par 153 “At 708, the method 700 may include generating an aggregate anomaly score based on the plurality of anomaly scores, the aggregate anomaly score representing an aggregate prediction that the data value is anomalous. At 710, the method 700 may include identifying a mitigative action to take based on the aggregate anomaly score.”) and a log score obtained in response to the predicted abnormal accident, (par 155 “To test whether early warning anomalies were detectable, three metrics for each host was captured. …. These metrics were obtained at periodic intervals from various log sources.”; fig 3:304 “Metric ID: 5; Metric Name: Log error count”; table 8, table 9) for searching a root cause of the abnormal accident;(fig 7:706, par 152; fig 7:708, par 153, quoted above) and
trigger a recovery action based on the root cause by interfacing with an infra controller. (fig 7:708; par 153 “At 708, the method 700 may include generating an aggregate anomaly score based on the plurality of anomaly scores, the aggregate anomaly score representing an aggregate prediction that the data value is anomalous. At 710, the method 700 may include identifying a mitigative action to take based on the aggregate anomaly score.”; par 161 “Using the mitigative actions illustrated in Table 3, this anomaly would have been flagged to be escalated, providing an early warning for mitigation to potentially prevent the outage.”; Claim 16 “identifying, by the computer system, a mitigative action to take based on the aggregate anomaly score.”; par 1 “Anomaly detection may also be used to identify the source of the issue so that mitigative action can be performed.”)
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Nadger to incorporate the log data analysis and preliminary recovery process of Azeez. One of ordinary skill in the art would have been motivated to remedy a shortcoming of Nadger, namely the need to identify the source of issues so that mitigative action can be performed (Azeez par 1 “Anomaly detection may also be used to identify the source of the issue so that mitigative action can be performed. One problem that arises in anomaly detection is early detection.”), with Azeez providing a known method of solving a similar problem: “The system may generate an aggregate anomaly score based on the anomaly scores from the machine learning models, thereby detecting anomalies based on different behavioral patterns of the same metric. In this way, the system may determine whether a data value of a metric is an anomaly based on multiple learned behaviors of the metric.” (Azeez par 5)
Regarding claim 9, it recites the method implemented by the apparatus of claim 8 and is rejected for the same reasons.
Regarding claim 10, it recites a non-transitory computer-readable recording medium storing instructions that implement the method performed by the apparatus of claim 8 and is rejected for the same reasons.
Response to Arguments
Applicant’s arguments, see Remarks pg. 6, filed 01/12/2026, with respect to the objections to claims 1 and 2 have been fully considered and are persuasive. The objection of 10/10/2025 has been withdrawn.
Applicant's arguments, see Remarks pg 6-9, filed 01/12/2026, regarding the rejections under 35 U.S.C. 101 have been fully considered but they are not persuasive.
With respect to the independent claims, the applicant has argued that claim 1 is eligible at Step 2A prong one because the limitations of claim 1 cannot be performed solely by human thought and require a technical configuration premised on interactions among multiple computer systems. The examiner respectfully disagrees. The abstract ideas cited are “predicting an abnormal accident in the plurality of clusters”, “calculate a root score through a combination of a feature score for the abnormal accident and a log score obtained in response to the predicted abnormal accident, in order to search a root cause of the abnormal accident;”, and “trigger a recovery action based on the root cause”. As currently claimed, these predicting, calculating, and triggering processes can practically be performed either in the human mind or using a computer as a tool. Even if the limitations require a computer, they may still recite a mental process [see MPEP 2106.04(a)(2) III. C. "A Claim That Requires a Computer May Still Recite a Mental Process"]. Predicting an abnormal accident in the clusters by using a prediction model, calculating a root cause score according to feature scores and log scores, and triggering a recovery process based on the root cause are directed to the mental processes of observation, evaluation, judgment, and opinion [MPEP 2106.04(a)(2) III. “mental processes”], because the steps are recited at a high level of generality and merely use computers as a tool to perform the processes.
With respect to the independent claims, the applicant has argued that claim 1 is eligible at Step 2A prong two because the claim as a whole provides improvements in a root cause analysis system within a cloud environment. Applicant further explains that the claim “does not merely collect or analyze data or present information, but instead directly controls the operational state of a cloud system by performing recovery actions in conjunction with an infrastructure controller based on the analysis results”. The examiner respectfully disagrees. The claim limitation “a remediation module configured to trigger a recovery action based on the root cause” merely places the abstract idea of triggering a recovery action in the cloud computing environment. These limitations do not meaningfully limit the claim, as they go no further than generally linking the use of the judicial exception to a particular technological environment [see MPEP 2106.04(d)(1)].
With respect to the independent claims, the applicant has argued that claim 1 is eligible at Step 2A prong two because the additional elements confer a technological improvement addressing a technical problem. The claim’s cloud environment, infra controller, data source endpoint, monitoring agent, plurality of clusters, monitoring module, prediction and localization module, machine learning-based prediction model, remediation module, data source management, and data collector do not integrate the judicial exception into a practical application. These limitations are recited at a high level of generality and do not meaningfully limit the claim beyond generally linking the use of the judicial exception to a particular technological environment. The claims could work with any system having data endpoints and only generally link the abstract idea to the field of cloud environments. The same process, apart from the descriptors, would work equally well for managing services in a cloud environment, managing components in a car, managing patient health during surgery, managing building security systems, or managing a single computer with multiple components. [See MPEP 2106.04(d)(1) “Evaluating Improvements in the Functioning of a Computer, or an Improvement to Any Other Technology or Technical Field in Step 2A Prong Two” and also MPEP 2106.05(h) “Field of Use and Technological Environment”]
Applicant's arguments, see Remarks pg 10-12, filed 01/12/2026, regarding the rejections under 35 U.S.C. 103 as being unpatentable over US 20210026723 A1 (Nadger) in view of US 20240036963 A1 (Azeez) have been fully considered but they are not persuasive.
With respect to the independent claims, the applicant has argued that Nadger does not teach the limitations “an infra controller configured to search data source endpoints, and to obtain address and port information of the data source endpoints by monitoring agents installed in respective clusters of a plurality of clusters;”. The examiner respectfully disagrees. Nadger teaches in the cited paragraphs (par 41 “The network path isolation of the managed components identifies the portion of the constructed graph by using flow identifying techniques to identify one or more network traffic paths between one or more pairs of endpoints (e.g., VMs, containers, computers, etc.) in the datacenter. Examples of such flow identifying techniques that are used in some embodiments include NetFlow, sFlow, and deep packet inspection (DPI). Such flow identifying techniques can be used to extract the network traffic path for any given source and destination endpoints in the datacenter. In some embodiments, the network traffic path is overlaid on the constructed topological graph to identify a portion of the graph to analyze.”; par 29 “As shown, the process 100 initially identifies (at 105) components of a datacenter for monitoring, collects operational data regarding the identified components, and generates additional data from the collected data. The identified set of components in some embodiments include compute components ( e.g., virtual machines, containers, computers, etc.), network components (e.g., switches, routers, ports, etc.), and/or service components (e.g., middlebox components, such as firewalls, load balancers, etc.).”). The examiner interprets this as teaching the limitations “an infra controller configured to search data source endpoints, and to obtain address and port information of the data source endpoints by monitoring agents”.
Nadger further teaches in the cited paragraphs (par 71 “As shown, each host in some embodiments executes one or more …, performance monitoring (PM) agents 816, performance monitoring VMs 818, ….”; par 72 “This data in some embodiments is collected by PM agents 816 and/or PMVMs 818 executing on the host computers 802-806.”). The examiner interprets this as teaching the limitation “by monitoring agents installed in respective clusters of a plurality of clusters;”.
With respect to the independent claims, the applicant has argued that Nadger does not teach the limitation “transmit federate-endpoint-api information for the data source endpoints;”, explaining that the amended claims further include an operational control structure in which data collection is federatively managed via an API. The examiner respectfully disagrees. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., an operational control structure in which data collection is federatively managed via an API) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See [MPEP 2111.01 “it is improper to import claim limitations from the specification”]. Nadger teaches in the cited paragraphs (par 31 “The performance monitoring system in some embodiments iteratively ( e.g., continuously or periodically) updates the graph that it defines for the datacenter.”; fig 7, par 63 “The discovery engine 715 identifies components in the network and relationships between these components, and stores this information in the component database 721. In some embodiments, the discovery engine 715 uses known techniques to gather this information. Data regarding the operation and performance of these components are gathered by the data collector 717 and the event processor 719, which store their collected information in the component data store 721 or a related data store 723.”). The examiner interprets this as teaching the limitation “transmit federate-endpoint-api information for the data source endpoints;”.
With respect to the independent claims, the applicant has argued that Azeez does not teach the limitation “a remediation module configured to trigger a recovery action based on the root cause by interfacing with the infra controller”, explaining that Azeez stops at the anomaly detection stage in paragraph 4. The examiner respectfully disagrees. Azeez teaches in the cited portions (Claim 16 “identifying, by the computer system, a mitigative action to take based on the aggregate anomaly score.”; par 1 “Anomaly detection may also be used to identify the source of the issue so that mitigative action can be performed.”). The examiner interprets this as teaching the limitation “a remediation module configured to trigger a recovery action based on the root cause by interfacing with the infra controller”.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20170123957 A1 - Gupta - Automated root cause analysis of a layered architecture; focuses on global timing analysis.
US 20220206886 A1 - Srivastava - Root cause analysis of logs.
US 20220334904 A1 - Chesneau - Automated incident detection and root cause analysis.
US 20230229545 A1 - Yadav - Log analysis and retention system.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL XU whose telephone number is (571)272-5688. The examiner can normally be reached Monday-Friday 8:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bryce Bonzo can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.X./Examiner, Art Unit 2113
/MARC DUNCAN/Primary Examiner, Art Unit 2113