DETAILED ACTION
In response to the communication filed on 30 January 2026, claims 1, 13, and 18-20 are amended and claim 8 is canceled. Claims 1-7 and 9-21 are pending.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see “Objections to the Claims” filed 30 January 2026, have been carefully considered. Based on the persuasive arguments provided, the claim objections are withdrawn.
Applicant’s arguments, see “Rejections Under 35 U.S.C. § 101” filed 30 January 2026, have been carefully considered. Based on the arguments and amendments, the rejections are withdrawn.
Applicant’s arguments, see “Rejections Under 35 U.S.C. § 103” filed 30 January 2026, have been carefully considered. The arguments are related to newly added limitations and are addressed in the rejection below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9-13 and 15-21 are rejected under 35 U.S.C. 103 as being unpatentable over Martinez Canedo et al. (US 11,853,903 B2, hereinafter “Martinez”) in view of Fountoulakis et al. (“Local Hyper-Flow Diffusion”, hereinafter “Fountoulakis”) further in view of Subbian et al. (US 2019/0114373 A1, hereinafter “Subbian”).
Regarding claim 1, Martinez teaches
A method for (see Martinez, [col 3 lines 49-51] “methods, systems, and apparatuses related to a Structured Graph Convolutional Neural Network (SGCNN)”) obtaining local node embeddings for heterogeneous graphs, comprising: (see Martinez, [col 11 lines 44-47] “the embedding of each node is learned with respect to a plurality of contexts and each context corresponds to a neighborhood of nodes connected to the node in the knowledge graph”; [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”).
obtaining, (see Martinez, [col 5 lines 32-33] “an input graph 205 is received or retrieved by a computer system”) by a computing system comprising one or more processors, (see Martinez, [col 15 lines 30-54] “the present disclosure may be implemented with any combination of hardware and software. For example, aside from parallel processing architecture presented in FIG. 12, standard computing platforms… the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system”) a heterogeneous graph comprising a plurality of nodes representing entities associated with a computing network, wherein the heterogeneous graph comprises a plurality of subgraphs; (see Martinez, [col 11 lines 34-38] “a knowledge graph is generated that comprises a plurality of nodes representing a system… the knowledge graph is a DTG with a plurality of subgraphs and each subgraph represents a digital twin of a subsystem of the system”; [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”).
determining, by the computing system, a plurality of weight values respectively associated with the plurality of subgraphs, (see Martinez, [col 10 lines 28-50] “The task of the subgraph convolution kernel layer 260 shown in FIG. 2 is to extract feature vectors from graphs or subgraphs… the SGCNN can be easily expanded to a weighted graph by applying a weighted adjacency matrix… define a graph convolution kernel to be a k by k weight matrix”; [col 5 lines 33-36] “The computer system receives, retrieves, or otherwise determines labels for a plurality of node features 210 from the graph and applies a context generator”; [col 15 lines 30-54] “the present disclosure may be implemented with any combination of hardware and software. For example, aside from parallel processing architecture presented in FIG. 12, standard computing platforms… the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system”) a respective subgraph of the plurality of subgraphs defining a respective set of nodes from at least two different classifications… (see Martinez, [col 11 lines 25-62] “we consider the extracted feature vectors… the neighbor node feature vectors may be derived for each subgraph by generating a neighbor feature matrix comprising neighbor paths in the subgraph, and then applying a 2D convolution operation with a trainable matrix and a bias variable to extract the neighbor node feature vectors from the neighbor feature matrix. The neighbor feature matrix may be generated for each subgraph by first determining all neighboring paths for the subgraph. This may be accomplished by identifying, for each vertex in the subgraph”; [col 4 lines 57-65] “a path-based neighbor nodes aggregation method that aggregates vertex domain neighbor nodes information onto subgraphs… where 'V is the set of vertices and 'E is the set of edges. The graph edges can be weighted and directed”; [col 4 lines 33-44] “to classify structures, we utilize labelled examples of structures within the graph. We can then build up the neighborhood around such substructures and sample from these neighborhoods to provide context for the structure. This gives us two levels of context, context within a structure and context around the structure… to provide features to the SGCNN which are generic in the sense that they do not encode overly specific information about the various structures in which a node appears, this is the job of the SGCNN, but which are specific with respect to our proposed SGCNN architecture”) and comprises a corresponding subset of the respective set of nodes and is characterized by a corresponding weight value and… (see Martinez, [col 11 lines 25-62] “we consider the extracted feature vectors… These contexts may be generated for each node by first identifying a set of nodes with a predetermined radius from the node in the knowledge graph, and then generating the contexts around each node by randomly sampling the set of nodes a predetermined number of times… the neighbor node feature vectors may be derived for each subgraph by generating a neighbor feature matrix comprising neighbor paths in the subgraph, and then applying a 2D convolution operation with a trainable matrix and a bias variable to extract the neighbor node feature vectors from the neighbor feature matrix. The neighbor feature matrix may be generated for each subgraph by first determining all neighboring paths for the subgraph. This may be accomplished by identifying, for each vertex in the subgraph”; [col 4 lines 57-65] “a path-based neighbor nodes aggregation method that aggregates vertex domain neighbor nodes information onto subgraphs… where 'V is the set of vertices and 'E is the set of edges. The graph edges can be weighted and directed”).
selecting, by the computing system, at least one node from among the plurality of nodes; (see Martinez, [col 8 lines 46-48] “a node in the knowledge graph is selected. Next, all the nodes within a certain radius r of that node are identified”; [col 5 lines 33-36] “The computer system receives, retrieves, or otherwise determines labels for a plurality of node features 210 from the graph and applies a context generator”).
learning, by the computing system and (see Martinez, [col 11 lines 31-33] “a computer-implemented method 900 for learning structural relationships between nodes of a graph”) using an embedding objective computed (see Martinez, [col 11 lines 41-43] “the SGCNN learns subgraphs representing embeddings of the nodes of the knowledge graph into a vector space… the embedding of each node is learned with respect to a plurality of contexts and each context corresponds to a neighborhood of nodes connected to the node in the knowledge graph”) based on the plurality of weight values,… (see Martinez, [col 9 lines 16-42] “3 step process for labelling structures from heterogeneous graph data: node-level embedding… the network rapidly finds weights which do a good job of labelling outputs without throwing out much information contained in the inputs… The hybrid embedding and SGCNN model described herein achieves better performance using shallower, easier to interpret networks by providing pre-compressed features via the embedding”) an embedding for the at least one node selected from among the plurality of nodes, wherein the embedding is based on… (see Martinez, [col 11 lines 41-43] “the SGCNN learns subgraphs representing embeddings of the nodes of the knowledge graph into a vector space… the embedding of each node is learned with respect to a plurality of contexts and each context corresponds to a neighborhood of nodes connected to the node in the knowledge graph”; [col 8 lines 46-48] “a node in the knowledge graph is selected”) the at least one node selected from among the plurality of nodes; and (see Martinez, [col 8 lines 46-48] “a node in the knowledge graph is selected. Next, all the nodes within a certain radius r of that node are identified”).
processing, by the computing system, the heterogeneous graph based on the embedding… (see Martinez, [col 8 lines 39-41] “context generation is an essential step in the embedding process performed by graph-based learning layer 225”; [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”; [col 15 lines 30-54] “the present disclosure may be implemented with any combination of hardware and software. For example, aside from parallel processing architecture presented in FIG. 12, standard computing platforms… the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system”).
Martinez does not explicitly teach and a respective set of hyperedges, wherein each hyperedge is connectable to more than two nodes among the plurality of nodes; a corresponding cut-cost function, wherein the corresponding cut-cost function is configured to receive an input descriptive of a subset of nodes of the respective hyperedge among a plurality of different subsets of nodes of the respective hyperedge, the subset subdivided from the respective hyperedge by a cut of the respective hyperedge, and output a cost value, the cost value corresponding to a cut-cost for partitioning the respective hyperedge into the subset of nodes; based on a teleportation parameter associated with the plurality of nodes which is associated with a probability of teleporting from a first node to a second node among the plurality of nodes, a diffusion of an initial value distribution assigned to the at least one node selected from among the plurality of nodes; to perform one or more computing tasks associated with the at least one node.
However, Fountoulakis discloses hypergraphs and teaches
and a respective set of hyperedges, wherein each hyperedge is connectable to more than two nodes among the plurality of nodes… (see Fountoulakis, [page 1 – 1 Introduction] “Hypergraphs [8] generalize graphs by allowing a hyperedge to consist of multiple nodes that capture higher-order relationships in complex systems and datasets”; [page 11 – 6.2] “A set of nodes are connected by a hyperedge”) a corresponding cut-cost function, (see Fountoulakis, [page 4 – Submodular hypergraph] “A hypergraph H = (V, E) is defined by a set of nodes V and a set of hyperedges… each hyperedge… is a subset of V… The weight we(S) indicates the cut-cost of splitting the hyperedge e into two subsets, S and e \ S”) wherein the corresponding cut-cost function is configured to receive an input descriptive of a subset of nodes of the respective hyperedge among a plurality of different subsets of nodes of the respective hyperedge, (see Fountoulakis, [page 1 – Introduction] “a hyperedge and the associated cut-cost function is given in Figure 1… we obtain a unit cut-cost hypergraph. In a slightly more general setting, the cut-costs are determined solely by the number of nodes in either side of the hyperedge cut… we obtain a cardinality-based hypergraph”) the subset subdivided from the respective hyperedge by a cut of the respective hyperedge, and (see Fountoulakis, [page 6 – 4 Local Hypergraph Clustering] “We consider a subset of nodes having low conductance as a good cluster, i.e., these nodes are well-connected internally and well-separated from the rest of the hypergraph. Following prior work on local hypergraph clustering… We prove that applying sweep-cut to an optimal solution”) output a cost value for the cut, the cost value corresponding to a cut-cost for partitioning the respective hyperedge into the subset of nodes; (see Fountoulakis, [page 2 – 1 Introduction] “cluster around a given set of seed nodes, where nodes in the cluster are densely connected to each other while relatively isolated to the rest of the graph”; [page 3 – 3 Diffusion as an Optimization Problem] “the task of spreading mass from a small set of seed nodes to a larger set of nodes”; [page 6 – 3 Diffusion as an Optimization Problem] “the goal is to separate/cut the nodes with source mass from the rest of the hypergraph. Observe that the linear term in the dual objective function encourages raising x higher on the seed nodes and setting it lower on others. The cost… captures the discrepancy in node heights over a hyperedge e and encourages smooth height transition over adjacent nodes”).
a diffusion of an initial value distribution assigned to each node (see Fountoulakis, [page 4 – 3 Diffusion as an Optimization Problem] “we assign each node some initial mass specified by a source function… node v… holds… amount of mass at the start of the diffusion. In order to encourage the spread of mass in the hypergraph, the initial mass on the seed nodes is larger than their capacity. This forces the seed nodes to diffuse mass to neighbor nodes to remove their excess mass”).
… to perform one or more computing tasks associated with the at least one node (see Fountoulakis [page 6 – 3 Diffusion as an Optimization Problem] “embeds nodes into the nonnegative real line, and this embedding is what we actually use for local clustering and node ranking”; [page 41 - Additional dataset, node ranking using general submodular cut-cost] “Table C.7 shows the top-2 ranking results. In this example, we use Iran as the seed node and we rank other countries according to the ordering of dual variables returned by HFD”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of hyperedge connecting plurality of nodes, cut-cost function, subset subdivided from hyperedge by a cut, output cost value for cut, diffusion of an initial value, hypergraph, submodular hypergraph, submodular function, higher-order semantic relationships, submodular cut-cost, different cut-costs, cardinality-based cut-cost determination, number of nodes in each subset, score, smooth score transition, embed nodes into a nonnegative real line and ranking of nodes as being disclosed and taught by Fountoulakis, in the system taught by Martinez to yield the predictable results of improving accuracy significantly for hypergraphs (see Fountoulakis, [page 3 – 1.1 Our main contributions] “We show that our method improves accuracy significantly for hypergraphs with unit, cardinality-based and general submodular cut-costs for local clustering”).
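For expository context only (not part of the claim mapping): the cardinality-based cut-cost that Fountoulakis describes, where "the cut-costs are determined solely by the number of nodes in either side of the hyperedge cut," can be sketched as a small function. The function name and example hyperedge below are illustrative assumptions, not Fountoulakis's code.

```python
# Illustrative sketch of a cardinality-based cut-cost for one hyperedge:
# the cost of a cut depends only on how many of the hyperedge's nodes
# fall on each side of the partition. Here we use min(|S|, |e \ S|),
# a common cardinality-based choice (an assumption for exposition).

def cardinality_cut_cost(hyperedge, subset):
    """Return min(|S|, |e \\ S|): 0 when the hyperedge is uncut,
    positive when its nodes are split across the partition."""
    e = set(hyperedge)
    s = len(set(subset) & e)
    return min(s, len(e) - s)

e = {1, 2, 3, 4}
print(cardinality_cut_cost(e, {1, 2}))   # nodes split 2/2 -> cost 2
print(cardinality_cut_cost(e, set()))    # hyperedge uncut -> cost 0
```

A unit cut-cost hypergraph is the special case where any split of a hyperedge costs 1 regardless of the node counts.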
The proposed combination of Martinez and Fountoulakis does not explicitly teach based on a teleportation parameter associated with the plurality of nodes which is associated with a probability of teleporting from a first node to a second node among the plurality of nodes.
However, Subbian discloses nodes according to the link structure of the graph and teaches
based on a teleportation parameter associated with the plurality of nodes which is associated with a probability of teleporting from a first node to a second node among the plurality of nodes, (see Subbian, [0018] “as restarting at the particular user's node with a probability given by the teleport probability (instead of restarting at a random node). That is, for a teleport probability “c”, at each user node, the random walk may, with a probability c, restart at the initial user node or, with a probability (1−c), select one of the user node's outgoing edges at random and follow the randomly-selected edge to another user node”; [0039] “the PRV vector of each of the four non-excluded user nodes u (Nodes 1, 2, 3, 4, 5, and 6) is initialized to {u: 0.15}, where 0.15 is the teleport probability c”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of teleportation parameters as being disclosed and taught by Subbian, in the system taught by the proposed combination of Martinez and Fountoulakis, to yield the predictable results of efficiently traversing large-scale graphs (see Subbian, [0017] “random walks implemented as graph traversals may be difficult to scale to large graphs, such as social graphs in online social networks. The Personalized Rank algorithm may be used to implement random walks by introducing a "teleport" probability, which is a threshold probability that the random walk jumps to a random node instead of following an edge to an adjacent node. Personalized Rank may be implemented as an iterative process using a power iteration technique that converges relatively quickly. Using power iteration, vectors of counts may be computed iteratively for each user node that is adjacent to the initial user node. Each element of a vector may be a count associated with a particular user node. Personalized Rank may generate an approximation of a random walk, and the approximation may become more accurate as the number of iterations increases”).
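For expository context only (not part of the claim mapping): the teleport-probability random walk quoted from Subbian at [0017]-[0018] can be sketched as a power iteration in which, at each node, the walk restarts at the seed node with probability c and otherwise follows a random outgoing edge. The function name and toy graph below are illustrative assumptions, not Subbian's implementation.

```python
# Illustrative power-iteration sketch of a personalized rank with a
# teleport probability c (an assumption for exposition). Assumes every
# node has at least one outgoing edge.

def personalized_rank(adj, seed, c=0.15, iters=50):
    """adj: {node: [out-neighbors]}. Returns approximate visit
    probabilities of a random walk that teleports back to `seed`
    with probability c at each step."""
    nodes = list(adj)
    p = {u: (1.0 if u == seed else 0.0) for u in nodes}
    for _ in range(iters):
        # Teleport mass c always lands on the seed node.
        nxt = {u: (c if u == seed else 0.0) for u in nodes}
        for u in nodes:
            # Remaining (1 - c) mass is split evenly over outgoing edges.
            share = (1.0 - c) * p[u] / len(adj[u])
            for v in adj[u]:
                nxt[v] += share
        p = nxt
    return p

graph = {0: [1, 2], 1: [0], 2: [0]}
ranks = personalized_rank(graph, seed=0)
print(max(ranks, key=ranks.get))  # -> 0 (the seed retains the most mass)
```

The total probability mass is conserved at each iteration (c + (1 - c) = 1), which is why the iteration converges to a proper distribution.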
Claims 19 and 20 incorporate substantively all the limitations of claim 1 in a system (see Martinez, [col 3 lines 49-51] “methods, systems, and apparatuses related to a Structured Graph Convolutional Neural Network (SGCNN)”; [col 11 lines 44-47] “the embedding of each node is learned with respect to a plurality of contexts and each context corresponds to a neighborhood of nodes connected to the node in the knowledge graph”; [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”; [col 15 lines 50-62] “comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input… These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters”) and computer-readable medium form (see Martinez, [col 3 lines 49-51] “methods, systems, and apparatuses related to a Structured Graph Convolutional Neural Network (SGCNN)”; [col 15 lines 50-62] “comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input… These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters”) and are rejected under the same rationale.
Regarding claim 2, the proposed combination of Martinez, Fountoulakis and Subbian teaches
further comprising: creating, (see Fountoulakis, [page 1 – 1 Introduction] “Hypergraphs [8] generalize graphs by allowing a hyperedge to consist of multiple nodes that capture higher-order relationships in complex systems and datasets”) based on the heterogeneous graph, (see Martinez, [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”) a hypergraph, (see Fountoulakis, [page 1 – 1 Introduction] “Hypergraphs [8] generalize graphs by allowing a hyperedge to consist of multiple nodes that capture higher-order relationships in complex systems and datasets”) wherein each subgraph from among the plurality of subgraphs comprises (see Martinez, [col 11 lines 35-37] “the knowledge graph is a DTG with a plurality of subgraphs and each subgraph represents a digital twin of a subsystem of”) a hyperedge of the hypergraph (see Fountoulakis, [page 1 – 1 Introduction] “Hypergraphs [8] generalize graphs by allowing a hyperedge to consist of multiple nodes that capture higher-order relationships in complex systems and datasets”). The motivation for the proposed combination is maintained.
Regarding claim 3, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the hypergraph is a submodular hypergraph, wherein each hyperedge of the hypergraph is associated with a submodular function (see Fountoulakis, [page 4 - 2 Preliminaries and Notations] “A hypergraph H = (V, E) is defined by a set of nodes V and a set of hyperedges… A hypergraph is termed submodular… associated with a submodular function… A submodular hypergraph is written… the definition reduces to unit cut-cost hypergraphs”). The motivation for the proposed combination is maintained.
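For expository context only (not part of the claim mapping): a cut-cost function f is submodular when f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all subsets A, B of the hyperedge. The brute-force check below, over the cardinality-based cut-cost min(|S|, |e| − |S|), is an illustrative assumption for exposition, not code from the references.

```python
# Illustrative brute-force check of the submodularity inequality
#   f(A) + f(B) >= f(A | B) + f(A & B)
# for the cardinality-based cut-cost f(S) = min(|S|, |e| - |S|)
# on a single four-node hyperedge (an assumption for exposition).
from itertools import chain, combinations

def cut_cost(e, s):
    s = set(s) & set(e)
    return min(len(s), len(e) - len(s))

def powerset(e):
    e = list(e)
    return chain.from_iterable(combinations(e, r) for r in range(len(e) + 1))

e = {1, 2, 3, 4}
is_submodular = all(
    cut_cost(e, set(a) | set(b)) + cut_cost(e, set(a) & set(b))
    <= cut_cost(e, a) + cut_cost(e, b)
    for a in powerset(e) for b in powerset(e)
)
print(is_submodular)  # True
```

This cut-cost depends only on |S| through a concave function, which is a standard sufficient condition for submodularity of cardinality-based costs.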
Regarding claim 4, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the plurality of subgraphs describe (see Martinez, [col 11 lines 35-38] “the knowledge graph is a DTG with a plurality of subgraphs”; [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”) higher-order semantic relationships across the plurality of nodes of the heterogeneous graph (see Fountoulakis, [page 4 - 2 Preliminaries and Notations] “This general form allows us to describe the potentially complex higher-order relation among multiple nodes”). The motivation for the proposed combination is maintained.
Regarding claim 5, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the higher-order semantic relationships comprise (see Fountoulakis, [page 4 - 2 Preliminaries and Notations] “This general form allows us to describe the potentially complex higher-order relation among multiple nodes”) determining a connection between a first node of the heterogeneous graph and a second node of the heterogeneous graph (see Martinez, [col 12 lines 9-11] “generates an adjacency matrix defining connections between the nodes of the knowledge graph”; [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”). The motivation for the proposed combination is maintained.
Regarding claim 9, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the cut-cost function comprises a submodular cut-cost function (see Fountoulakis, [page 3 – 1.1 Our main contributions] “a generic local diffusion model that applies to hypergraphs characterized by a rich class of cut-cost functions that covers many previously studied special cases, e.g., unit, cardinality-based and submodular cut-costs”). The motivation for the proposed combination is maintained.
Regarding claim 10, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein each weight value (see Martinez, [col 10 lines 45-46] “a weighted graph by applying a weighted adjacency matrix”) from among the plurality of weight values comprises (see Martinez, [col 10 lines 28-50] “The task of the subgraph convolution kernel layer 260 shown in FIG. 2 is to extract feature vectors from graphs or subgraphs… the SGCNN can be easily expanded to a weighted graph by applying a weighted adjacency matrix… define a graph convolution kernel to be a k by k weight matrix”; [col 5 lines 33-36] “The computer system receives, retrieves, or otherwise determines labels for a plurality of node features 210 from the graph and applies a context generator”) the cut-cost of (see Fountoulakis, [page 4 – Submodular hypergraph] “The weight we(S) indicates the cut-cost of splitting the hyperedge e into two subsets, S and e \ S”) the subgraph respectively associated with the weight value (see Martinez, [col 10 lines 28-50] “The task of the subgraph convolution kernel layer 260 shown in FIG. 2 is to extract feature vectors from graphs or subgraphs… the SGCNN can be easily expanded to a weighted graph by applying a weighted adjacency matrix… define a graph convolution kernel to be a k by k weight matrix”; [col 5 lines 33-36] “The computer system receives, retrieves, or otherwise determines labels for a plurality of node features 210 from the graph and applies a context generator”). The motivation for the proposed combination is maintained.
Regarding claim 11, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the cut-cost comprises (see Fountoulakis, [page 4 – Submodular hypergraph] “The weight we(S) indicates the cut-cost of splitting the hyperedge e into two subsets, S and e \ S”) one or more of a submodular cut-cost, (see Fountoulakis, [page 3 – 1.2 Previous work] “Different methods require different assumptions about the hyperedge cut-cost, which can be roughly categorized into unit cut-cost, cardinality-based (and submodular) cut-cost and general submodular cut-cost”; [page 12 - Experiments for general submodular cut-cost] “we show another application of submodular cut-cost for node-ranking”). The motivation for the proposed combination is maintained.
Regarding claim 12, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the cut-cost is a submodular cut-cost (see Fountoulakis, [page 3 – 1.1 Our main contributions] “a generic local diffusion model that applies to hypergraphs characterized by a rich class of cut-cost functions that covers many previously studied special cases, e.g., unit, cardinality-based and submodular cut-costs”) which assigns different costs to different cuts (see Fountoulakis, [page 12 – Experiments for general submodular cut-cost] “compares HFD using different cut-costs”) of the respective hyperedge (see Fountoulakis, [page 6 – 4 Local Hypergraph Clustering] “We consider a generic hypergraph H = (V, E, W) with submodular hyperedge weights W”). The motivation for the proposed combination is maintained.
Claim 21 incorporates substantively all the limitations of claim 12 in a computer-readable medium form and is rejected under the same rationale.
Regarding claim 13, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the one or more computing tasks include ranking the plurality of nodes, (see Fountoulakis [page 6 – 3 Diffusion as an Optimization Problem] “embeds nodes into the nonnegative real line, and this embedding is what we actually use for local clustering and node ranking”; [page 41 - Additional dataset, node ranking using general submodular cut-cost] “Table C.7 shows the top-2 ranking results. In this example, we use Iran as the seed node and we rank other countries according to the ordering of dual variables returned by HFD”). The motivation for the proposed combination is maintained.
Regarding claim 15, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the embedding for the at least one node comprises (see Martinez, [col 11 lines 41-43] “the SGCNN learns subgraphs representing embeddings of the nodes of the knowledge graph into a vector space… the embedding of each node is learned with respect to a plurality of contexts and each context corresponds to a neighborhood of nodes connected to the node in the knowledge graph”) a score, (see Fountoulakis, [page 11 - 6.2 Experiments using real-world data] “improving F1 scores by up to 20% for local clustering and providing the only meaningful result for node ranking”) and wherein the embedding objective is configured to (see Martinez, [col 11 lines 41-43] “the SGCNN learns subgraphs representing embeddings of the nodes of the knowledge graph into a vector space… the embedding of each node is learned with respect to a plurality of contexts and each context corresponds to a neighborhood of nodes connected to the node in the knowledge graph”) encourage smooth score transition over adjacent nodes from among the plurality of nodes (see Fountoulakis, [page 6 – 3 Diffusion as an Optimization Problem] “encourages smooth height transition over adjacent nodes”). The motivation for the proposed combination is maintained.
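For expository context only (not part of the claim mapping): the "smooth height transition over adjacent nodes" quoted from Fountoulakis can be illustrated with a simple quadratic penalty on score differences across edges. The function and edge list below are illustrative assumptions, not the claimed embedding objective or Fountoulakis's dual objective.

```python
# Illustrative sketch (an assumption for exposition): a quadratic penalty
# that is small when adjacent nodes have similar scores, i.e., when the
# score transition over the graph is smooth.

def smoothness_penalty(edges, x):
    """Sum of squared score differences across edges; lower = smoother."""
    return sum((x[u] - x[v]) ** 2 for u, v in edges)

edges = [(0, 1), (1, 2)]
print(smoothness_penalty(edges, {0: 1.0, 1: 1.0, 2: 1.0}))  # 0.0 (perfectly smooth)
print(smoothness_penalty(edges, {0: 1.0, 1: 0.0, 2: 1.0}))  # 2.0 (abrupt transition)
```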
Regarding claim 16, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein scores for node ranking (see Fountoulakis [page 11 - 6.2 Experiments using real-world data] “improving F1 scores by up to 20% for local clustering and providing the only meaningful result for node ranking”) for the plurality of nodes correspond to weight values of edges (see Martinez, [col 4 lines 57-65] “a path-based neighbor nodes aggregation method that aggregates vertex domain neighbor nodes information onto subgraphs… where 'V is the set of vertices and 'E is the set of edges. The graph edges can be weighted and directed”) between one or more local nodes and the plurality of nodes (see Martinez, [col 11 lines 48-49] “each node by first identifying a set of nodes with a predetermined radius from the node in the knowledge graph”). The motivation for the proposed combination is maintained.
Regarding claim 17, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the embedding comprises (see Martinez, [col 11 lines 44-47] “the embedding of each node is learned with respect to a plurality of contexts and each context corresponds to a neighborhood of nodes connected to the node in the knowledge graph”) a vector of a length equal to (see Martinez, [col 10 lines 5-] “to find all length d paths… Thus N is an n by d matrix with each element being a feature vector of a neighbor node”) a number of nodes that embeds nodes into a nonnegative real line (see Fountoulakis [page 6 – 3 Diffusion as an Optimization Problem] “embeds nodes into the nonnegative real line, and this embedding is what we actually use for local clustering and node ranking”). The motivation for the proposed combination is maintained.
Regarding claim 18, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the one or more computing tasks include ranking of the plurality of nodes, (see Fountoulakis [page 6 – 3 Diffusion as an Optimization Problem] “embeds nodes into the nonnegative real line, and this embedding is what we actually use for local clustering and node ranking”; [page 41 - Additional dataset, node ranking using general submodular cut-cost] “Table C.7 shows the top-2 ranking results. In this example, we use Iran as the seed node and we rank other countries according to the ordering of dual variables returned by HFD”). The motivation for the proposed combination is maintained.
Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Martinez, Fountoulakis and Subbian in view of Baek (US 2023/0084278 A1, hereinafter “Baek”).
Regarding claim 6, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the higher-order semantic relationships comprise… (see Fountoulakis, [page 4 - 2 Preliminaries and Notations] “This general form allows us to describe the potentially complex higher-order relation among multiple nodes”) the heterogeneous graph… (see Martinez, [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”) the heterogeneous graph (see Martinez, [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”).
The proposed combination of Martinez, Fountoulakis and Subbian does not explicitly teach determining a connection between a first node of the heterogeneous graph and two or more nodes of the heterogeneous graph.
However, Baek discloses a knowledge graph generation unit and teaches
determining a connection between a first node of a knowledge graph and two or more nodes of the knowledge graph (see Baek, [0081] “the incidence matrix is constructed based on only nodes (node 1, node 2, and node 3) belonging to a knowledge graph of 'agent a'”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of determining connections between nodes, as disclosed and taught by Baek, in the system taught by the proposed combination of Martinez, Fountoulakis and Subbian, to yield the predictable result of effectively generating hypergraphs and performing analysis to determine data similarity (see Baek, [0011] “The knowledge graph concatenation unit may include a hypergraph-based random sample module configured to generate a hypergraph… through the analysis of the similarity between the agents using the agent embedding vector and an analysis of a similarity between data of a specific agent and data collected by an agent adjacent to the specific agent using the agent embedding vector”).
Regarding claim 7, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein the higher-order semantic relationships comprise… (see Fountoulakis, [page 4 - 2 Preliminaries and Notations] “This general form allows us to describe the potentially complex higher-order relation among multiple nodes”) the heterogeneous graph… (see Martinez, [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”) the heterogeneous graph (see Martinez, [col 8 lines 35-37] “GCNN can learn more complicated structures, relationships, and relationships between structures present in the heterogeneous knowledge graph”).
The proposed combination of Martinez, Fountoulakis and Subbian does not explicitly teach determining a connection between two or more nodes of the heterogeneous graph and another node of the heterogeneous graph.
However, Baek discloses a knowledge graph generation unit and teaches
determining a connection between two or more nodes of a knowledge graph and another node of the knowledge graph (see Baek, [0081] “the incidence matrix is constructed based on only nodes (node 1, node 2, and node 3) belonging to a knowledge graph of 'agent a'”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of determining connections between nodes, as disclosed and taught by Baek, in the system taught by the proposed combination of Martinez, Fountoulakis and Subbian, to yield the predictable result of effectively generating hypergraphs and performing analysis to determine data similarity (see Baek, [0011] “The knowledge graph concatenation unit may include a hypergraph-based random sample module configured to generate a hypergraph… through the analysis of the similarity between the agents using the agent embedding vector and an analysis of a similarity between data of a specific agent and data collected by an agent adjacent to the specific agent using the agent embedding vector”).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Martinez, Fountoulakis and Subbian in view of Velipasaoglu (US 2020/0379892 A1, hereinafter “Velipasaoglu”).
Regarding claim 14, the proposed combination of Martinez, Fountoulakis and Subbian teaches
wherein using the embedding objective comprises… (see Martinez, [col 11 lines 41-43] “the SGCNN learns subgraphs representing embeddings of the nodes of the knowledge graph into a vector space… the embedding of each node is learned with respect to a plurality of contexts and each context corresponds to a neighborhood of nodes connected to the node in the knowledge graph”).
The proposed combination of Martinez, Fountoulakis and Subbian does not explicitly teach iteratively computing a proxy objective.
However, Velipasaoglu discloses performance evaluation criteria and teaches
iteratively computing a proxy objective (see Velipasaoglu, [0056] “If during the iterations, a measured objective value larger than the proxy is encountered, the proxy value can be updated dynamically”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of iteratively computing a proxy objective, as disclosed and taught by Velipasaoglu, in the system taught by the proposed combination of Martinez, Fountoulakis and Subbian, to yield the predictable result of utilizing a proxy value to effectively continue the search process in the absence of a measured objective value (see Velipasaoglu, [0054] “Test planning, configuration and execution engine 152 can utilize a back off solution in which the application is reconfigured back to a known responsive state, to address unresponsiveness, but a proxy value is used as the objective value instead of one measured from the application. In this use, a proxy value, which can be determined dynamically in some implementations, allows the search process to continue in the absence of a measured objective value”).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISHALI SHAH whose telephone number is (571)272-8532. The examiner can normally be reached Monday - Friday (7:30 AM to 4:00 PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AJAY BHATIA can be reached at (571)272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VAISHALI SHAH/Primary Examiner, Art Unit 2156