NON-FINAL ACTION
Response to Amendment
Applicant's arguments filed 1/02/2026 have been fully considered but they are not persuasive.
Regarding 35 U.S.C. 101, the elements amended into independent claims 1 and 11 still do not integrate the abstract ideas into a practical application. The data is obtained, and the data already has associations between nodes determined. The recited improvement is the result of inputting gathered data to a GNN, which performs a mental process of making a determination. Initiating a remedial action is only deciding that a particular action should be done, and an alert is mere data output. The claims still do not actually perform the remedial action that is connected to the gathered data and the GNN's analysis of that data. Actually performing the determined remedial action would bring the claimed improvement into the scope of the invention, since such an action would not be routine or conventional and cannot reasonably be performed in the mind.
Regarding 35 U.S.C. 103, upon further review it is agreed that the provisional application documentation does lack support specifically for the vector autoregression cited in ¶26 of the published application. A different element in the provisional that teaches generating relationship data is provided below. Regarding the logical connection between nodes, it is unclear why a causal relationship is not an example of a logical connection between nodes. If a relationship exists between nodes, there must be some connection between them and logic that affects the relationship, such as a cause and effect. A more specific type of logical connection may overcome the prior art. Regarding the use of feature vectors and relationship data to generate an embedding for each node, while the word "vector" is not used, it is reasonable to interpret the input features described in ¶37 and also on page 4 of the provisional specification as vectors, since they are being fed into a GNN. The GNN uses data input to obtain the embeddings for the nodes, including data from neighboring nodes. This is described in more detail in ¶83.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4-9, 11, and 14-18 are rejected under 35 U.S.C. 101 because they are directed to judicial exceptions without significantly more. The claims recite mental processes and mathematical concepts, but lack integration into a practical application or anything amounting to significantly more.
Step 1-
Claims 1 and 4-8 recite a method, claim 9 recites a non-transitory medium, and claims 11 and 14-18 recite an RCA agent, which is interpreted as a computer system; all of these are eligible statutory categories of invention.
Step 2A Prong 1-
Claim 1 recites root cause analysis and determining whether to indicate nodes, which are mental processes. It recites generating vectors and relationship data, which are mathematical concepts. It also recites an adjacency matrix, which is mathematical, and a determining step, which is mental. Generating an embedding is a mathematical concept.
Claim 4 recites using the embedding to classify nodes, which is a mental process.
Claim 5 recites determining a pair of nodes, which is mental. Generating embedding and calculating are mathematical.
Claim 6 recites concatenating vectors which is mathematical.
Claim 8 recites using data to make a determination, which is mental.
Claim 9 recites the method of claim 1.
Claims 11-18 recite the same mental processes and mathematical concepts as claims 1-8 respectively.
Step 2A Prong 2-
Claim 1 recites additional limitations: obtaining data (KPI data, GNN information) and inputting it to a graph neural network, which are insignificant extra-solution activity. Root cause analysis in a network and the nodes are mere instructions to apply the judicial exception with generic computer hardware. The addition of raising an alarm is only presenting information to a user. The initiation of remedial actions does not actually execute them, so it is not significantly more than making the decision to perform something. The amended elements recite limitations on the data contained in the matrix, but nothing that specifically integrates the exception into a practical application, and using the GNN to generate the embedding, which are mere instructions to apply the exception. The graph neural network is recited with a high level of generality and no specific limitations on how it makes the determination.
Claim 4 recites the GNN being used to make a determination, which is also a mere instruction to apply the exception.
Claim 5 recites the GNN determining a pair of nodes, also mere instructions to apply.
Claim 6 recites creating input and getting output, which are extra-solution activity. The data is put into a GNN, but this is also recited with a high level of generality and amounts to mere instructions to apply the exception.
Claim 8 recites limitations on the data itself, but they are only applied to nodes which are generic computer hardware.
Claim 9 recites a non-transitory computer readable storage medium which contains instructions to perform the method of claim 1, but these are also mere instructions to apply the exceptions of claim 1 with generic computer hardware.
Claim 11 recites a data storage system and processing circuitry, which are also mere instructions to apply the exception with generic computer hardware.
Claims 12-18 recite the same additional limitations as claims 2-8, respectively.
Step 2B-
Claim 1 recites a predicted root cause and victim node, but the only action performed with them is indicating or not; merely providing information is not a sufficient improvement to a computer or any field (see MPEP 2106.05(a)). As discussed in Step 2A Prong 2 above, the obtaining, inputting, and generating of data are extra-solution activity, and amount to receiving or transmitting data over a network, which is well-understood, routine, and conventional; see MPEP 2106.05(d), subsection II.
Claim 4 recites the GNN being used to make a determination; using the data transmitted and sending the determination as output are well-understood, routine, and conventional.
Claim 5 recites the GNN generating embedding data and performing calculations; transmitting the outputs of these is well-understood, routine, and conventional.
Claim 6 recites creating input and getting output, which are extra-solution activity. The data is put into a GNN, and transmitting data to a neural network is well-understood, routine, and conventional.
Claim 8 recites limitations on the data itself, but they are only applied to nodes, which are generic computer hardware. No detail is given about the nodes that amounts to significantly more than mere instructions to apply the exception.
Claim 9 recites a non-transitory computer readable storage medium which contains instructions to perform the method of claim 1, but these are also mere instructions to apply the exceptions of claim 1 with generic computer hardware, with no limitations amounting to significantly more.
Claim 11 recites a data storage system and processing circuitry, which are also mere instructions to apply the exception with generic computer hardware.
Claims 14-18 recite the same additional limitations as claims 4-8, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-6, 8, 9, 11, 14-16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20230069074 and provisional application no. 63/235,205) in view of Gao (US 20210006481) and Li (US 10540573).
Regarding claim 1, Chen teaches a method for root cause analysis in a network comprising a set of nodes Ni for i=1 to N, where N > 2, the method comprising: obtaining N sets of key performance indicator (KPI) data, each one of the N sets of KPI data being for one of the N nodes (“The method includes using a time series generated by each of a plurality of nodes to train a graph neural network to generate a causal graph, and identifying interdependent causal networks that depict hierarchical causal links from low-level nodes to high-level nodes to the system key performance indicator (KPI).” ¶8); generating relationship data using the feature vectors (“Besides original input features, aggregated embeddings from between-level learning are added to form the augmented input features. The resulting causal graph in the higher level is more robust and more consistent by taking the lower-level influence into consideration” Provisional page 4); the generated relationship data indicating relationships between the nodes within the set of N nodes (“In order to achieve the goal, when a system failure happens, it firstly conducts topological cause learning by extracting causal relations from entity metrics data” Provisional specification page 3); wherein the relationship data comprises an NxN adjacency matrix, and each value within the matrix is associated with a different pair of nodes and indicates whether the pair of nodes is determined to be logically connected to each other (“In each layer, embeddings are aggregated according to the adjacent matrix, and then fed to the next layer. With different adjacent matrix for each layer and directed acyclic graph (DAG) constraint enforcing a stronger sparsity, the GNN can capture the causal relations between entities more effectively while enable a faster learning process.” ¶38). A pair can be any two nodes which are connected in some way; inputting to a graph neural network (GNN) the generated relationship data and the feature vectors (“During hierarchical causal graph learning, GNNs are treated as building blocks to construct the hierarchical structure. A GNN with L layers takes the input features or augmented input features from the previous adjacent level” ¶37); comprising the NxN adjacency matrix and the feature vectors, wherein the GNN is configured to use the feature vectors and the relationship data to generate an embedding for each one of the N nodes (“The method further includes simulating causal relations between entities by aggregating embeddings from neighbors in each layer, and generating output embeddings for entity metrics prediction and between-level aggregation” ¶8; “Specifically, the L layers of GNN can be applied to the time-lagged data {x.sub.t−1, . . . , x.sub.t−p}∈R.sup.d×p to obtain its embedding. In the l-th layer, the embedding z.sup.(l) is obtained by aggregating the nodes' embedding and their neighbors' information at the l−1 layer. Then, the embedding at the last layer z.sup.(L) is used to predict the metric value at the time step t by a MLP layer.” ¶83).
Obtaining from the GNN information indicating that at least node Nj is a candidate root cause node and at least node Nk is a candidate victim node, where k ≠ j (“This process outputs the topological causal score of each system component. Finally, we integrate the individual and topological causal score and rank all system components based on the integrated one. The top K components are considered to be the most possible root causes of system faults.” ¶40); using the relationship data to i) determine whether to indicate the candidate root cause node Nj as a predicted root cause node and/or ii) determine whether to indicate the candidate victim node Nk as a predicted victim node (“This process outputs the topological causal score 420 of each low-level node 222, 232 and each high-level node 210, 220, 230, indicating which nodes are likely to be root causes and which nodes are affected more significantly by the failure/fault events.” ¶64). Chen does not teach for each one of the N nodes, using the set of KPI data associated with the node to generate feature vectors for the node.
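For illustration only (this sketch is not part of the record, and all dimensions, weights, and the adjacency matrix below are hypothetical placeholders), the per-layer neighbor aggregation over an NxN adjacency matrix that Chen describes in ¶38 and ¶83 can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4  # number of nodes
F = 3  # feature-vector length per node
L = 2  # number of GNN layers

# Hypothetical feature vectors (one row per node) and an N x N
# adjacency matrix whose entries indicate whether a pair of nodes
# is logically connected (1) or not (0).
X = rng.standard_normal((N, F))
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Placeholder per-layer weight matrices standing in for learned parameters.
W = [rng.standard_normal((F, F)) for _ in range(L)]

def gnn_embeddings(X, A, W):
    """At each layer, aggregate each node's embedding with its
    neighbors' embeddings according to the adjacency matrix."""
    A_hat = A + np.eye(len(A))       # include the node's own embedding
    Z = X
    for Wl in W:
        Z = np.tanh(A_hat @ Z @ Wl)  # neighbor aggregation + nonlinearity
    return Z                         # one embedding row per node

Z = gnn_embeddings(X, A, W)
print(Z.shape)  # (4, 3): an embedding for each of the N nodes
```

The last-layer embeddings Z correspond to the z.sup.(L) that ¶83 describes as the input to a downstream prediction layer.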
Gao teaches for each one of the N nodes, using the set of KPI data associated with the node to generate feature vectors for the node (“a plurality of pieces of target key performance indicator (KPI) data of the network device within preset duration;… to generate an element corresponding to each piece of feature information; and forming, by the warning analysis device, the feature vector by using generated elements corresponding to the plurality of pieces of feature information,” ¶8). It would have been obvious to one of ordinary skill in the art prior to the filing of the claimed invention to combine the generation of feature vectors taught by Gao with the GNN-based root cause analysis methods taught by Chen. Chen teaches the GNN taking input features or augmented input features (¶37), so it would be obvious to also have the ability to generate the feature vectors which are needed for the GNN.
Chen and Gao do not teach and after i) determining whether to indicate the candidate root cause node Nj as a predicted root cause node and/or ii) determining whether to indicate the candidate victim node Nk as a predicted victim node, raising an alarm and/or initiating a remedial action.
Li teaches and after i) determining whether to indicate the candidate root cause node Nj as a predicted root cause node and/or ii) determining whether to indicate the candidate victim node Nk as a predicted victim node, raising an alarm and/or initiating a remedial action (“Finally, in a third phase of the method 200 (i.e., steps 240-242 of FIG. 2C), modules of the server computing device 106 conduct a remediation process to identify root causes or deficiencies in the data associated with the current stories that may be associated with the cycle time prediction and generate an alert message to be displayed on the client computing device” column 7, lines 2-8). It would have been obvious to one of ordinary skill in the art prior to the filing of the claimed invention to combine the GNN and KPI vector RCA system taught by Chen and Gao with the alerting of an identified cause as taught by Li. This would help the issue be understood and fixed (column 7, lines 9-10).
Regarding claim 4, Chen teaches the method of claim 1, wherein, for each one of the N nodes, the GNN is configured to use a node's embedding to classify the node as either a candidate root cause node (RCN) or a candidate victim node (VN) (“During between-level learning, the output embeddings of GNNs from the previous level are aggregated according to the causal relations between related entities or between entities” ¶46; “This process outputs the topological causal score 420 of each low-level node 222, 232 and each high-level node 210, 220, 230, indicating which nodes are likely to be root causes and which nodes are affected more significantly by the failure/fault events.” ¶64).
Regarding claim 5, Chen teaches wherein the GNN is configured to generate an embedding for a given one of the N nodes, Nx, by performing a process that includes: determining a pair of nodes Ny, Nz where each node of the pair is indicated as being logically connected to node Nx (“Given a g×g main network G, where g is the number of nodes in a 2-dimensional arrangement, a set of domain specific networks A={A.sub.1, . . . , A.sub.g}, and a one-to-one mapping function θ that maps each node, g, in G to a domain specific network, a NoN can be defined as a triplet R=<G, A, θ>. The node set in G, which can be referred to as high-level nodes, can be denoted as VG, and the node set in A, which can be referred to as low-level nodes” ¶62).
Regarding claim 6, Chen teaches The method of claim 5, wherein the method further comprises: creating an input vector by concatenating a feature vector for node Nx and the aggregated embedding and feeding the input vector into a neural network to produce an embedding for node Nx (“So, the initial embedding of high-level nodes…is the concatenation of their time-lagged data…and aggregated low-level embeddings, which can be formulated as follows:” ¶87), the embeddings are the concatenated data (features), which are used by the GNN as input.
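Purely as an illustrative sketch (not part of the record; the mean aggregation, dimensions, and the single dense layer are assumptions), the concatenation-then-feed step that Chen's ¶87 describes for a node Nx can be shown as:

```python
import numpy as np

rng = np.random.default_rng(1)

feature_vec = rng.standard_normal(3)         # feature vector for node Nx
neighbor_embs = rng.standard_normal((2, 5))  # embeddings of Nx's neighbors

# Aggregate the neighbors' embeddings (mean is an assumed aggregator)
# and concatenate the result with Nx's own feature vector.
aggregated = neighbor_embs.mean(axis=0)
input_vec = np.concatenate([feature_vec, aggregated])  # length 3 + 5 = 8

# A single placeholder dense layer standing in for the neural network
# that produces Nx's embedding from the concatenated input.
W = rng.standard_normal((8, 4))
embedding = np.tanh(input_vec @ W)
print(embedding.shape)  # (4,)
```

This mirrors the mapping above: the input vector is the concatenation of Nx's feature vector and the aggregated embedding, and the network's output is Nx's embedding.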
Regarding claim 8, Chen teaches wherein using the relationship data to determine whether to indicate the candidate victim node as a predicted victim node comprises determining whether the relationship data indicates that the candidate victim node is logically connected to the candidate root cause node either directly or indirectly via one or more other candidate victim nodes (“Applying causal graph learning and network propagation can be used to analyze how different components of a system are affected by a root cause through interactions within the system for identifying a topological cause” ¶36).
Regarding claim 9, Chen teaches a non-transitory computer readable storage medium storing a computer program comprising instructions (“Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.” ¶134), and Chen and Gao teach the method of claim 1.
Regarding claim 11, Chen teaches a root cause analysis (RCA) agent, the RCA agent comprising: a data storage system; and processing circuitry (“A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus.” ¶136). The claim recites the same additional limitations as claim 1.
Regarding claims 12-16 and 18, they recite the same additional limitations as claims 2-6 and 8 respectively.
Claim(s) 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chen and Gao in view of Kondo (US 20190251203).
Regarding claim 7, Chen and Gao teach the method of claim 1; they do not teach wherein each set of KPI data comprises M KPI vectors; and each feature vector is of length K, where K < M.
Kondo teaches wherein each set of KPI data comprises M KPI vectors; and each feature vector is of length K, where K < M (“At the time of converting each feature vector 210, which is included in the feature vector set 200, into a compressed code; firstly, as illustrated in FIG. 4, a single feature vector 210 is retrieved from the feature vector set 200 and is divided into M number of sub-vectors…In this way, the D-dimensional feature vector 210 gets converted into the compressed code 250 having the length M” ¶29). If the feature vector is divided into M sub-vectors, M must be less than the size of the total set it represents, and the feature vector is compressed to a code of length M, so that length must be less than the total number of vectors.
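As an illustrative sketch only (not part of the record; the codebook, its size, and the dimensions are all hypothetical), the sub-vector quantization Kondo's ¶29 describes, in which a D-dimensional feature vector becomes a compressed code of length M, can be shown as:

```python
import numpy as np

rng = np.random.default_rng(2)

D, M = 12, 4  # feature dimension and number of sub-vectors
feature_vec = rng.standard_normal(D)

# Hypothetical codebook: for each of the M sub-vector positions, a small
# set of centroids. Quantizing replaces each sub-vector with the index of
# its nearest centroid, so the compressed code has length M (< D).
codebooks = rng.standard_normal((M, 8, D // M))  # 8 centroids per position

sub_vecs = feature_vec.reshape(M, D // M)
code = np.array([
    np.argmin(np.linalg.norm(codebooks[m] - sub_vecs[m], axis=1))
    for m in range(M)
])
print(len(code))  # 4: compressed code length M, shorter than D = 12
```

The resulting code of length M is what the mapping above relies on: the compressed representation is shorter than the original feature vector.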
Regarding claim 17, Chen and Gao teach the RCA agent of claim 11, and Kondo teaches the same additional limitations as claim 7.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN KEVIN MCNAMARA whose telephone number is (703)756-1884. The examiner can normally be reached Monday-Friday 7:30-5:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bryce Bonzo can be reached at 571-272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEAN KEVIN MCNAMARA/Examiner, Art Unit 2113
/PHILIP GUYTON/Primary Examiner, Art Unit 2113