Prosecution Insights
Last updated: April 19, 2026
Application No. 18/138,969

NEURAL NETWORK FOR GENERATING BOTH NODE EMBEDDINGS AND EDGE EMBEDDINGS FOR GRAPHS

Non-Final OA: §101, §103, §112, §DP
Filed: Apr 25, 2023
Examiner: JABLON, ASHER H.
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: Salesforce Inc.
OA Round: 1 (Non-Final)
Grant Probability: 44% (Moderate)
OA Rounds: 1-2
To Grant: 4y 6m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 44% (40 granted / 90 resolved; -10.6% vs TC avg)
Interview Lift: +43.9% (strong; allowance rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 4y 6m (25 currently pending)
Career History: 115 total applications across all art units

Statute-Specific Performance

§101: 26.3% (-13.7% vs TC avg)
§103: 37.0% (-3.0% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 26.9% (-13.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 90 resolved cases.

Office Action

§101 §103 §112 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4). The reference characters "240" and "242" have been used to designate the node classifier throughout specification paragraphs [0032]-[0033]. The reference character "242" has been used to designate the node classifier in [0032] and the edge classifier in [0033]. Based on Fig. 2 of the drawings, every instance of "node classifier 242" in [0032] should recite "node classifier 240". The term "node classifier 242" in [0033], line 1 appears to constitute a minor informality and should recite "edge classifier 242".

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claims 1-2, 5-7, 10-11, 14-16, and 19-20 are objected to because of the following informalities:

In claim 1, lines 11-12, the limitations "(k-1)th set of node embeddings and (k-1)th set of edge embeddings" should recite "a (k-1)th set of node embeddings and a (k-1)th set of edge embeddings". In lines 14-15, the limitations "(k-1)th set of edge embeddings are output from (k-1)th layer" should recite "the (k-1)th set of edge embeddings are output from a (k-1)th layer".
In claim 2, lines 5-6 and 9-10, the limitations "(k-1)th layer" should recite "the (k-1)th layer". In claim 5, lines 6 and 8, the limitations "(k-1)th layer" should recite "the (k-1)th layer". In claim 6, the final line should end with a period. In claim 7, line 2, the limitation "Kth layer" should recite "the Kth layer". Claims 10 and 19 each recites the same minor informalities as claim 1. Claims 11 and 20 each recites the same minor informalities as claim 2. Claim 14 recites the same minor informalities as claim 5. Claim 15 recites the same minor informalities as claim 6. Claim 16 recites the same minor informalities as claim 7. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-3, 11-12, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The term "neighborhood" in claim 2, lines 4 and 8 is a relative term which renders the claim indefinite. The term "neighborhood" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
The term "neighborhood" is a subjective term because the specification does not supply an objective standard for measuring its scope, as discussed in MPEP 2173.05(b), subsection IV. It is unclear whether a neighborhood is a measure of distance, and it is unclear what constitutes a neighborhood of a target node. Examiner treats a "neighborhood of a target node" as a predefined distance from a target node. Claim 3 is rendered indefinite for failing to cure the deficiencies of claim 2.

In claim 11, the term "neighborhood" in lines 4 and 8 renders the claim indefinite for the same reasons given for claim 2.

In claim 11, the terms "node features" in lines 5-6 and "edge features" in lines 9-10 render the claim indefinite. Based on claim 10, node features and edge features are input to a first layer of the neural network to generate node embeddings and edge embeddings. Each layer thereafter receives node and edge embeddings to generate new node and edge embeddings (as depicted in Fig. 3A). It is unclear if the terms "node features" and "edge features" in claim 11 are supposed to recite "node embeddings" and "edge embeddings", respectively. It is unclear if there is any difference between a feature and an embedding. In claim 11, Examiner treats the term "node features" as "node embeddings" and the term "edge features" as "edge embeddings". Claim 12 is rejected for failing to cure the deficiencies of claim 11.

Claim 20 recites the term "neighborhood" in lines 4 and 8, the term "node features" in lines 5-6, and the term "edge features" in lines 9-10. Claim 20 is rendered indefinite for the same reasons given for claims 2 and 11.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4-7, 10, 13-16, and 19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1 and 4-7 each recites a method, claims 10 and 13-16 each recites a non-transitory computer-readable medium (a product), and claim 19 recites a computer system comprising a processor (a system). Each of a method, a product, and a system falls within at least one of the four statutory categories of patent-eligible subject matter.

Claim 1

Step 2A, Prong 1: Determining a set of node features for each of the plurality of nodes based on information associated with the node is an observation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. Specification paragraph [0032], lines 6-8 discloses that when each node represents a merchant, a node feature may include the merchant's industry. A person can reasonably identify this information in the mind.

Determining a set of edge features for each of the plurality of edges based on information associated with the edge is an observation mental process which can reasonably be performed in the human mind with the aid of pencil and paper. Specification paragraph [0032], lines 10-12 discloses that when each edge represents a transaction, an edge feature may include a time of transaction. A person can reasonably identify this information in the mind. The claim recites an abstract idea.

Step 2A, Prong 2: Accessing a neural network having K layers, where K is a natural number, K > 1, amounts to insignificant pre-solution activity under MPEP 2106.05(g). Accessing a graph comprising a plurality of nodes and a plurality of edges linking the plurality of nodes amounts to insignificant pre-solution activity under MPEP 2106.05(g).
Applying a first layer of the neural network to the node features and the edge features to output a first set of node embeddings and a first set of edge embeddings amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f).

Applying a kth layer of the neural network to the (k-1)th set of node embeddings and (k-1)th set of edge embeddings to output a kth set of node embeddings and a kth set of edge embeddings, wherein the (k-1)th set of node embeddings and (k-1)th set of edge embeddings are output from the (k-1)th layer of the neural network, k is a natural number, and K ≥ k > 1, amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f).

The additional elements as disclosed above, alone or in combination, do not integrate the abstract ideas into a practical application, as they are insignificant pre-solution activities in combination with generic computer functions that are implemented to perform the abstract ideas disclosed above. The claim is directed to an abstract idea.

Step 2B: Accessing a neural network having K layers, where K is a natural number, K > 1, is analogous to retrieving information from memory, which the courts have recognized as a well-understood, routine, conventional activity under MPEP 2106.05(d). Accessing a graph comprising a plurality of nodes and a plurality of edges linking the plurality of nodes is analogous to retrieving information from memory, which the courts have recognized as a well-understood, routine, conventional activity under MPEP 2106.05(d).

Applying a first layer of the neural network to the node features and the edge features to output a first set of node embeddings and a first set of edge embeddings amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f).
Applying a kth layer of the neural network to the (k-1)th set of node embeddings and (k-1)th set of edge embeddings to output a kth set of node embeddings and a kth set of edge embeddings, wherein the (k-1)th set of node embeddings and (k-1)th set of edge embeddings are output from the (k-1)th layer of the neural network, k is a natural number, and K ≥ k > 1, amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f).

The additional elements as disclosed above, in combination with the abstract ideas, are not sufficient to amount to significantly more than the abstract ideas, as they are well-understood, routine, and conventional activities in combination with generic computer functions that are implemented to perform the abstract ideas disclosed above. The claim is not patent eligible.

Claim 2 incorporates the rejection of claim 1.

Step 2A, Prong 1: The abstract ideas of claim 1 are incorporated. Identifying a subset of nodes that are in a neighborhood of the target node is an observation mental process which can reasonably be performed in the human mind. Obtaining node embeddings of the subset of nodes output from the (k-1)th layer is a mathematical calculation as disclosed by specification paragraphs [0040]-[0041]. Aggregating the node embeddings of the subset of nodes output from the (k-1)th layer into an aggregated node vector is a mathematical calculation as disclosed by specification paragraphs [0044]-[0045]. Identifying a subset of edges that are linking the subset of nodes in the neighborhood is an observation mental process which can reasonably be performed in the human mind. Obtaining edge embeddings associated with the subset of edges in the (k-1)th layer is a mathematical calculation as disclosed by specification paragraphs [0042]-[0043].
Aggregating the edge embeddings of the subset of edges in the (k-1)th layer into an aggregated edge vector is a mathematical calculation as disclosed by specification paragraphs [0046]-[0047]. Determining a set of node embeddings in the kth layer based in part on the aggregated node vector and the aggregated edge vector is a mathematical calculation as disclosed by specification paragraphs [0048]-[0049] and [0056], which explain that the node embedding generator may apply Equations (3)-(5) to generate the kth-layer node embeddings.

Step 2A, Prong 2 and Step 2B: Applying the kth layer in the plurality of layers of the neural network to output a kth set of node embeddings for a target node amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 3 incorporates the rejection of claim 2.

Step 2A, Prong 1: The abstract ideas of claim 2 are incorporated. Determining the set of node embeddings in the kth layer based in part on the aggregated node vector and the aggregated edge vector comprises: concatenating the aggregated node vector and the aggregated edge vector to generate a concatenated vector is a mathematical calculation as disclosed by specification paragraphs [0048]-[0049] and [0059], which explain that the node embedding generator may apply Equations (3)-(5) to generate the kth-layer node embeddings.

Step 2A, Prong 2 and Step 2B: Passing the concatenated vector through the kth layer of the neural network with an activation function to generate a set of node embeddings amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 4 incorporates the rejection of claim 1.

Step 2A, Prong 1: The abstract ideas of claim 1 are incorporated. Determining a set of node embeddings for each of the plurality of nodes is a mathematical calculation as disclosed by specification paragraphs [0040]-[0041].
Normalizing each set of node embeddings based on all sets of node embeddings in a same layer of the neural network is a mathematical calculation as disclosed by specification paragraphs [0052]-[0053].

Step 2A, Prong 2 and Step 2B: The claim does not recite any additional elements which, alone or in combination, would integrate the abstract ideas into a practical application or which, in combination with the abstract ideas, would be sufficient to amount to significantly more than the abstract ideas. The claim is not patent eligible.

Claim 5 incorporates the rejection of claim 1.

Step 2A, Prong 1: The abstract ideas of claim 1 are incorporated. Outputting a first set of node embeddings for the node u is a mathematical calculation as disclosed by specification paragraphs [0040]-[0041]. The node u belongs to the set V of nodes. Outputting a second set of node embeddings for the node v is a mathematical calculation as disclosed by specification paragraphs [0040]-[0041]. The node v belongs to the set V of nodes. Obtaining a set of edge embeddings for the edge (u, v) output from the (k-1)th layer is a mathematical calculation as disclosed by specification paragraphs [0042]-[0043]. Outputting a set of edge embeddings for the edge (u, v) based in part on the set of edge embeddings for the edge output from the (k-1)th layer, the first set of node embeddings, and the second set of node embeddings is a mathematical calculation as disclosed by specification paragraphs [0050]-[0051] and [0056]-[0057].

Step 2A, Prong 2 and Step 2B: Applying the kth layer in the plurality of layers of the neural network to output a set of edge embeddings for an edge (u, v) linking a node u and a node v amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 6 incorporates the rejection of claim 1.

Step 2A, Prong 1: The abstract ideas of claim 1 are incorporated.
Determining a set of edge embeddings for each of the plurality of nodes is a mathematical calculation as disclosed by specification paragraphs [0042]-[0043]. Normalizing each set of edge embeddings based on all sets of edge embeddings in a same layer of the neural network is a mathematical calculation based on specification paragraphs [0054]-[0055].

Step 2A, Prong 2 and Step 2B: The claim does not recite any additional elements which, alone or in combination, would integrate the abstract ideas into a practical application or which, in combination with the abstract ideas, would be sufficient to amount to significantly more than the abstract ideas. The claim is not patent eligible.

Claim 7 incorporates the rejection of claim 1.

Step 2A, Prong 1: The abstract ideas of claim 1 are incorporated.

Step 2A, Prong 2 and Step 2B: The limitation that the neural network includes a total of K layers, and the node embeddings and edge embeddings for the Kth layer are the final set of node embeddings and the final set of edge embeddings, amounts to mere instructions to apply the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 10 recites a product which implements the same features as the method of claim 1 and is therefore rejected for at least the same reasons. In Step 2A, Prong 2 and Step 2B, a non-transitory computer-readable medium, stored thereon computer-executable instructions, that when executed by a processor of a computer system, cause the computer system to perform operations, amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claims 11-16 each recites a product which implements the same features as the methods of claims 2-7, respectively, and are therefore rejected for at least the same reasons.

Claim 19 recites a system which implements the same features as the method of claim 1 and is therefore rejected for at least the same reasons.
In Step 2A, Prong 2 and Step 2B, a computer system comprising: a processor; and a non-transitory computer-readable storage medium, stored thereon computer-executable instructions, that when executed by the processor, cause the processor to perform operations, amounts to a generic computer component for applying the abstract ideas on a generic computer under MPEP 2106.05(f). The claim is not patent eligible.

Claim 20 recites a system which implements the same features as the method of claim 2 and is therefore rejected for at least the same reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 5, 7, 10-12, 14, 16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lo et al. ("E-GraphSAGE: A Graph Neural Network based Intrusion Detection System for IoT") and Gui et al. (US 20210064959 A1).

Regarding claim 1, Lo teaches:

A computer-implemented method, the method comprising: accessing a neural network having K layers, where K is a natural number, K > 1; (Page 3, col. 2, from the third-to-last line to Equation 2, Fig. 2 and its caption, and Algorithm 1, "input: depth K" and Line 2, disclose accessing a neural network having 2 layers.)

accessing a graph comprising a plurality of nodes and a plurality of edges linking the plurality of nodes; (On page 4, Algorithm 1 discloses an input is a graph G(V, E). On page 3, col. 2, § B, lines 7-8 disclose V is the set of nodes and E is the set of edges.)

determining a set of node features for each of the plurality of nodes based on information associated with the node; (On page 4, Algorithm 1 discloses an input is a set of node features x_v for each node, which is further explained on page 4, col. 2, § A, lines 17-21. An identifier of a node "v" is information associated with the node.)
determining a set of edge features for each of the plurality of edges based on information associated with the edge; (On page 4, Algorithm 1 discloses an input is a set of edge features e_uv for each edge, which is further explained on page 4, col. 2, § A, lines 13-17. An identifier of an edge "uv" is information associated with the edge.)

applying a first layer of the neural network to the node features and the edge features to output a first set of node embeddings and a first set of edge embeddings; (Page 4, col. 1, line 4 to Equation 2; page 4, Algorithm 1, Lines 1-5; and page 4, col. 2, § A, line 22 to page 5, col. 1, lines 1-10 disclose calculating a first set of node embeddings h_v^1 by applying a first layer of a neural network. Based on Lines 4 and 5 of the algorithm, the "CONCAT" function applies both h_v^0 (node features) and e_uv^0 (edge features).)

applying a kth layer of the neural network to the (k-1)th set of node embeddings and (k-1)th set of edge embeddings to output a kth set of node embeddings and a kth set of edge embeddings, wherein the (k-1)th set of node embeddings and (k-1)th set of edge embeddings are output from the (k-1)th layer of the neural network, k is a natural number, and K ≥ k > 1. (Examiner treats "a kth layer" as a final layer K. Page 4, col. 1, line 4 to Equation 2; page 4, Algorithm 1, Lines 1-8; and page 4, col. 2, § A, line 22 to page 5, col. 1, lines 1-17 disclose applying (in Algorithm 1, Line 5) a Kth layer of the neural network to the (K-1)th set of node and edge embeddings to output a Kth set of node embeddings. Algorithm 1, Lines 6-8 discloses outputting a Kth set of edge embeddings based on the node embeddings output from the Kth layer.)

Lo discloses that an output h_v^k of each layer k is a node-v embedding.
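To make the claimed data flow concrete, the following is a minimal numpy sketch of a claim-1-style pipeline: a first layer consumes node features and edge features, and each kth layer consumes the (k-1)th node and edge embeddings, with K ≥ k > 1. This is not the applicant's model or Lo's E-GraphSAGE; the tanh activation, mean aggregation, weight shapes, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding dimension for the toy example

def apply_layer(node_emb, edge_emb, W_node, W_edge):
    """One illustrative layer: output new node and edge embeddings.

    Each node mixes its own embedding with the mean of its incident
    edge embeddings; each edge mixes its own embedding with its two
    endpoint node embeddings.
    """
    new_nodes = {}
    for v, h_v in node_emb.items():
        incident = [e for (a, b), e in edge_emb.items() if v in (a, b)]
        agg = np.mean(incident, axis=0) if incident else np.zeros(d)
        new_nodes[v] = np.tanh(W_node @ np.concatenate([h_v, agg]))
    new_edges = {}
    for (a, b), e_ab in edge_emb.items():
        new_edges[(a, b)] = np.tanh(
            W_edge @ np.concatenate([e_ab, node_emb[a], node_emb[b]]))
    return new_nodes, new_edges

# Toy graph: 3 nodes, 2 edges, random d-dimensional features.
node_feat = {v: rng.normal(size=d) for v in "abc"}
edge_feat = {("a", "b"): rng.normal(size=d), ("b", "c"): rng.normal(size=d)}

K = 3  # K > 1 layers, each with its own weight matrices
W_n = [rng.normal(size=(d, 2 * d)) for _ in range(K)]
W_e = [rng.normal(size=(d, 3 * d)) for _ in range(K)]

# The first layer consumes the raw features; each subsequent kth layer
# consumes the (k-1)th node and edge embeddings and outputs the kth sets.
h, e = node_feat, edge_feat
for k in range(K):
    h, e = apply_layer(h, e, W_n[k], W_e[k])
```

After the loop, h and e hold the Kth (final) sets of node and edge embeddings, which is the structure the claim recites regardless of the particular per-layer arithmetic.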
Thus, Lo does not explicitly teach: applying a first layer of the neural network to the node features and the edge features to output a first set of node embeddings and a first set of edge embeddings; wherein the (k-1)th set of node embeddings and (k-1)th set of edge embeddings are output from the (k-1)th layer of the neural network,

But Gui teaches: applying a first layer of the neural network to the node features and the edge features to output a first set of node embeddings and a first set of edge embeddings; ([0004], lines 1-3, [0025], and [0094]-[0096] disclose applying an (l+1)th hidden layer to node features h_v^l to output a first set of node embeddings h_v^(l+1). A first layer as claimed corresponds to Gui's (l+1)th hidden layer, and thus the node features h_v^l are initial node features. [0095], lines 6-9 and [0097]-[0098] disclose applying an (l+1)th hidden layer to edge features representing the aligned embedding of the edge E_vv' at layer l to output edge embeddings. Since a first layer as claimed is Gui's (l+1)th hidden layer, the edge features are initial edge features.)

wherein the (k-1)th set of node embeddings and (k-1)th set of edge embeddings are output from the (k-1)th layer of the neural network, ([0025] and [0094]-[0098] disclose applying an (l+1)th hidden layer to node features h_v^l to output node embeddings h_v^(l+1) and applying the (l+1)th hidden layer to the edge features to output edge embeddings. A (k-1)th layer as claimed corresponds to Gui's layer l, and a kth layer as claimed corresponds to Gui's layer l+1.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have applied Lo's neural network to generate both node embeddings and edge embeddings as taught by Gui. A motivation for the combination is that node embedding and edge embedding can be enhanced by each other, where node embedding and edge embedding can be jointly modeled.
(Gui, [0018], final 4 lines)

Regarding claim 2, the combination of Lo and Gui teaches: The computer-implemented method of claim 1,

Lo teaches: wherein applying the kth layer in the plurality of layers of the neural network to output a kth set of node embeddings for a target node comprises: (Examiner treats "a kth layer" as a final layer K. Page 4, col. 1, line 4 to Equation 2; page 4, Algorithm 1, Lines 1-8; and page 4, col. 2, § A, line 22 to page 5, col. 1, lines 1-17 disclose applying (in Algorithm 1, Line 5) a Kth layer of the neural network to the (K-1)th set of node embeddings to output a Kth set of node embeddings. Algorithm 1, Lines 6-8 disclose outputting a Kth set of edge embeddings based on the node embeddings output from the Kth layer.)

identifying a subset of nodes that are in a neighborhood of the target node; (Page 3, col. 2, § B, first sentence of the paragraph starting with "At each iteration", where identifying corresponds to sampling.)

obtaining node embeddings of the subset of nodes output from the (k-1)th layer; (Page 5, col. 1, lines 4-7 and Algorithm 1, Line 5 disclose applying h_v^(k-1).)

aggregating the node embeddings of the subset of nodes output from the (k-1)th layer into an aggregated node vector; (Page 3, col. 2, § B, the entire paragraph starting with "At each iteration" and the entire paragraph below Equation 1.)

identifying a subset of edges that are linking the subset of nodes in the neighborhood; (Page 4, col. 2, § A, entire paragraph starting with "In Line 4", where identifying corresponds to sampling.)

obtaining edge embeddings associated with the subset of edges in the (k-1)th layer; (Page 4, col. 2, § A, entire paragraph starting with "In Line 4".)

aggregating the edge embeddings of the subset of edges in the (k-1)th layer into an aggregated edge vector; and (Page 4, col. 2, § A, entire paragraph starting with "In Line 4" and page 5, col.
1, lines 1-4.)

determining a set of node embeddings in the kth layer based in part on the aggregated node vector and the aggregated edge vector. (Page 5, col. 1, lines 4-7 and Algorithm 1, Line 5 disclose determining a set of node embeddings h_v^k based in part on the node vector h_v^(k-1) and the aggregated edge vector.)

Lo at page 4, § IV, first two paragraphs teaches that the GraphSAGE algorithm aggregates embeddings of all nodes u in the neighborhood of v at a kth layer to predict a classification of a target node, and the E-GraphSAGE algorithm replaces the aggregated node vector with an aggregated edge vector in order to predict a classification of a target edge. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have applied both the aggregated node vector and the aggregated edge vector to determine Lo's node embeddings in the kth layer in Algorithm 1, Line 5. A motivation for the combination is that by incorporating additional information about the target node's neighborhood into the node embeddings, the model can predict classifications for both nodes and edges in a graph.

Regarding claim 3, the combination of Lo and Gui teaches: The computer-implemented method of claim 2,

Lo teaches: wherein determining the set of node embeddings in the kth layer based in part on the aggregated node vector and the aggregated edge vector comprises: concatenating the aggregated node vector and the aggregated edge vector to generate a concatenated vector; and (Page 4, Algorithm 1, Line 5, where the "CONCAT" function concatenates the node vector h_v^(k-1) and the aggregated edge vector.)

passing the concatenated vector through the kth layer of the neural network with an activation function to generate a set of node embeddings. (Page 4, Algorithm 1, Line 5, where the output of the "CONCAT" function is the kth-layer input, and σ represents an activation function to generate a set of node embeddings h_v^k.)
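The aggregate-concatenate-activate pattern recited in claims 2-3 (and mapped above to Algorithm 1, Line 5) can be sketched for a single target node as follows. This is an illustrative reading only: "neighborhood" is taken, per the Examiner's §112 treatment, as a predefined set of nearby nodes, and the tanh activation, mean aggregator, and all names are assumptions rather than the applicant's or Lo's actual choices.

```python
import numpy as np

def update_target_node(h_prev, e_prev, neighbors, v, W):
    """Claims 2-3 style kth-layer update for one target node v.

    h_prev:    (k-1)th node embeddings, dict node -> vector
    e_prev:    (k-1)th edge embeddings, dict (u, w) -> vector
    neighbors: dict node -> set of nodes in its 'neighborhood'
    """
    # Identify the subset of nodes in the neighborhood of the target node.
    nbrs = neighbors[v]
    # Obtain and aggregate their (k-1)th node embeddings.
    agg_node = np.mean([h_prev[u] for u in nbrs], axis=0)
    # Identify edges linking the neighborhood and aggregate
    # their (k-1)th edge embeddings.
    hood = nbrs | {v}
    agg_edge = np.mean(
        [e for (a, b), e in e_prev.items() if a in hood and b in hood],
        axis=0)
    # Concatenate, then pass through the kth layer with an activation.
    z = np.concatenate([h_prev[v], agg_node, agg_edge])
    return np.tanh(W @ z)

# Toy inputs: 4-dim embeddings, target node "b" with neighbors {a, c}.
rng = np.random.default_rng(1)
d = 4
h_prev = {v: rng.normal(size=d) for v in "abc"}
e_prev = {("a", "b"): rng.normal(size=d), ("b", "c"): rng.normal(size=d)}
neighbors = {"b": {"a", "c"}}
W = rng.normal(size=(d, 3 * d))
h_b_k = update_target_node(h_prev, e_prev, neighbors, "b", W)
```

The concatenation step is why the weight matrix here takes a 3d-wide input: the target node's own prior embedding, the aggregated node vector, and the aggregated edge vector are joined before the layer is applied.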
Lo at page 4, § IV, first two paragraphs teaches that the GraphSAGE algorithm aggregates embeddings of all nodes u in the neighborhood of v at a kth layer to predict a classification of a target node, and the E-GraphSAGE algorithm replaces the aggregated node vector with an aggregated edge vector in order to predict a classification of a target edge. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have applied both the aggregated node vector and the aggregated edge vector to determine Lo's node embeddings in the kth layer in Algorithm 1, Line 5. A motivation for the combination is that by incorporating additional information about the target node's neighborhood into the node embeddings, the model can predict classifications for both nodes and edges in a graph.

Regarding claim 5, the combination of Lo and Gui teaches: The computer-implemented method of claim 1,

Lo teaches: wherein applying the kth layer in the plurality of layers of the neural network to output a set of edge embeddings for an edge (u, v) linking a node u and a node v comprises: (Page 5, col. 1, lines 11-17.)

outputting a first set of node embeddings for the node u; (Page 5, col. 1, lines 11-17 disclose outputting z_u^k.)

outputting a second set of node embeddings for the node v; (Page 5, col. 1, lines 11-17 disclose outputting z_v^k.)

…

outputting a set of edge embeddings for the edge (u, v) based in part on the set of edge embeddings for the edge output from the (k-1)th layer, the first set of node embeddings, and the second set of node embeddings. (Page 5, col.
1, lines 11-17 disclose outputting z_uv^K.)

However, Lo does not explicitly teach: obtaining a set of edge embeddings for the edge (u, v) output from the (k-1)th layer; and outputting a set of edge embeddings for the edge (u, v) based in part on the set of edge embeddings for the edge output from the (k-1)th layer,

But Gui teaches: obtaining a set of edge embeddings for the edge (u, v) output from the (k-1)th layer; and ([0095], lines 6-9 and [0097]-[0098] disclose applying an (l+1)th hidden layer to edge features representing the aligned embedding of the edge E_vv' at layer l to output edge embeddings. A (k-1)th layer as claimed corresponds to Gui's layer l. Applying edge embeddings to the (l+1)th hidden layer requires obtaining the edge embeddings output from the lth hidden layer.)

outputting a set of edge embeddings for the edge (u, v) based in part on the set of edge embeddings for the edge output from the (k-1)th layer, ([0095], lines 6-9 and [0097]-[0098] disclose applying an (l+1)th hidden layer to edge features representing the aligned embedding of the edge E_vv' at layer l to output edge embeddings.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated Gui's generation of edge embeddings based on edge embeddings output by a preceding layer into the combination of Lo and Gui. A motivation for the combination is that node embedding and edge embedding can be enhanced by each other, where node embedding and edge embedding can be jointly modeled. (Gui, [0018], final 4 lines)

Regarding claim 7, the combination of Lo and Gui teaches: The computer-implemented method of claim 1,

Lo teaches: wherein the neural network includes a total of K layers, and the node embeddings and edge embeddings for the Kth layer are the final set of node embeddings and the final set of edge embeddings. (Page 5, col.
1, lines 11-17) Claim 10 recites a product which implements the same features as the method of claim 1 and is therefore rejected for at least the same reasons. However, Lo does not explicitly teach: A non-transitory computer-readable medium, stored thereon computer-executable instructions, that when executed by a processor of a computer system, cause the computer system to perform operations. But Gui teaches: A non-transitory computer-readable medium, stored thereon computer-executable instructions, that when executed by a processor of a computer system, cause the computer system to perform operations. ([0106], lines 1-2 (processor) and Gui’s claim 8, lines 1-5 on page 9) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated Gui’s processor and computer readable storage medium into the combination of Lo and Gui. A motivation for the combination is to execute Gui and Lo’s method on a real-world computer. Claim s 1 1-1 2 , 14, and 16 each recites a product which implements the same features as the method of claims 2-3, 5, and 7 respectively, and are therefore rejected for at least the same reasons. Claim 1 9 recites a system which implements the same features as the method of claim 1 and is therefore rejected for at least the same reasons. However, Lo does not explicitly teach: A computer system comprising: a processor; and a non-transitory computer-readable storage medium, stored thereon computer- executable instructions, that when executed by the processor, cause the processor to perform operations. But Gui teaches: A computer system comprising: a processor; ([0106], lines 1-2) and a non-transitory computer-readable storage medium, stored thereon computer- executable instructions, that when executed by the processor, cause the processor to perform operations. 
(Gui’s claim 8, lines 1-5 on page 9) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated Gui’s processor and computer-readable storage medium into the combination of Lo and Gui. A motivation for the combination is to execute Gui and Lo’s method on a real-world computer. Claim 20 recites a product which implements the same features as the method of claim 2 and is therefore rejected for at least the same reasons.

Claims 4, 6, 13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lo et al. (“E-GraphSAGE: A Graph Neural Network based Intrusion Detection System for IoT”), Gui et al. (US 20210064959 A1), and Hamilton et al. (“Inductive Representation Learning on Large Graphs”).

Regarding claim 4, the combination of Lo and Gui teaches: The computer-implemented method of claim 1, However, Lo and Gui do not explicitly teach: wherein determining a set of node embeddings for each of the plurality of nodes further comprises normalizing each set of node embeddings based on all sets of node embeddings in a same layer of the neural network. But Hamilton teaches: wherein determining a set of node embeddings for each of the plurality of nodes further comprises normalizing each set of node embeddings based on all sets of node embeddings in a same layer of the neural network. (On page 4, Algorithm 1, Line 7 discloses replacing a set of node embeddings with a normalized set. The notation used by Hamilton is the same as Gui’s. This is taught by the “Inputs” of Algorithm 1 and the entire paragraph starting on page 4, line 4.) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have normalized Lo and Gui’s set of node embeddings using Hamilton’s technique. A motivation for the combination is to reduce variance in node embeddings for each node at each layer.
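The normalization the examiner maps to claims 4 and 6 (Hamilton, Algorithm 1, Line 7: replace each embedding with its L2-normalized version) can be sketched as follows. This is an illustrative sketch only, not code from any cited reference; the function name and array shapes are assumptions.

```python
import numpy as np

def normalize_layer_embeddings(h_k):
    """L2-normalize each row of a layer's embedding matrix, in the
    spirit of Hamilton et al., Algorithm 1, Line 7:
        h_v^k <- h_v^k / ||h_v^k||_2
    h_k: (num_items, dim) array of node (or edge) embeddings output
    by one layer of the network."""
    norms = np.linalg.norm(h_k, axis=1, keepdims=True)
    # Guard against an all-zero embedding to avoid division by zero.
    return h_k / np.clip(norms, 1e-12, None)

# Example: three 2-dimensional embeddings at one layer.
h_k = np.array([[3.0, 4.0],
                [0.0, 2.0],
                [1.0, 0.0]])
print(normalize_layer_embeddings(h_k))  # every row now has unit L2 norm
```

The same per-row operation applies unchanged to edge embeddings, which is how the examiner extends Hamilton’s node-embedding normalization to the edge embeddings of claim 6.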
Regarding claim 6, the combination of Lo and Gui teaches: The computer-implemented method of claim 1, Lo teaches: wherein determining a set of edge embeddings for each of the plurality of nodes further comprises [calculating] normalizing each set of edge embeddings based on all sets of edge embeddings in a same layer of the neural network. (Page 5, col. 1, lines 11-17) However, Lo and Gui do not explicitly teach: normalizing each set of edge embeddings based on all sets of edge embeddings in a same layer of the neural network. But Hamilton teaches: normalizing each set of [node] edge embeddings based on all sets of [node] edge embeddings in a same layer of the neural network. (On page 4, Algorithm 1, Line 7 discloses replacing a set of node embeddings with a normalized set. The notation used by Hamilton is the same as Gui’s. This is taught by the “Inputs” of Algorithm 1 and the entire paragraph starting on page 4, line 4.) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have normalized Lo and Gui’s set of edge embeddings in layer K using Hamilton’s technique for normalizing node embeddings. A motivation for the combination is to reduce variance in edge embeddings for each edge at each layer.

Claims 13 and 15 each recite a product which implements the same features as the method of claims 4 and 6, respectively, and are therefore rejected for at least the same reasons.

Claims 8-9 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Lo et al. (“E-GraphSAGE: A Graph Neural Network based Intrusion Detection System for IoT”), Gui et al. (US 20210064959 A1), and Titov et al. (US 20210319323 A1).

Regarding claim 8, the combination of Lo and Gui teaches: The computer-implemented method of claim 7, Lo teaches: wherein the neural network further includes a classifier head, a linear layer, and a softmax layer, and the classifier head is configured to: (Page 5, col. 1, § B, lines 1-8 and page 6, col. 1, lines 11-28 disclose a “classifier head” comprising a softmax layer.) receive the final set of [edge] node embeddings as inputs; and (Page 6, col. 1, lines 11-19) pass the final set of [edge] node embeddings through the linear layer and the softmax layer to output a classification score. (Page 6, col. 1, lines 11-28 discloses passing the output edge embedding through a softmax layer to compute a classification.) Gui teaches node-classification in the Abstract, lines 10-14. However, Lo and Gui do not explicitly teach: wherein the neural network further includes a classifier head, a linear layer, and a softmax layer, and the classifier head is configured to: receive the final set of node embeddings as inputs; and pass the final set of node embeddings through the linear layer and the softmax layer to output a classification score. But Titov teaches: wherein the neural network further includes a classifier head, a linear layer, and a softmax layer, and ([0021] from line 1 to col. 2, line 5 discloses a multi-layer perceptron, and [0026], lines 4-5 clarify that layer 304 is a “fully connected layer”. A classifier head is the entire MLP, a linear layer is first fully-connected layer 304, and a softmax layer is softmax engine 310.) the classifier head is configured to: receive the final set of node embeddings as inputs; and ([0021], lines 8-11) pass the final set of node embeddings through the linear layer and the softmax layer to output a classification score. ([0021], line 8 to col. 2, line 5. The node embedding vectors are passed through both the first fully-connected layer 304 and softmax engine 310.) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have applied Lo and Gui’s final node embeddings to Titov’s multi-layer perceptron comprising both a fully-connected layer and a softmax layer for node classification.
A motivation for the combination is that a fully-connected layer plus softmax layer learns non-linear combinations of the node embedding features, which may improve node classification accuracy.

Regarding claim 9, the combination of Lo and Gui teaches: The computer-implemented method of claim 7, Lo teaches: wherein the neural network further includes a classifier head, a linear layer, and a softmax layer, and the classifier head is configured to: (Page 5, col. 1, § B, lines 1-8 and page 6, col. 1, lines 11-28 disclose a “classifier head” comprising a softmax layer.) receive the final set of edge embeddings as inputs; and (Page 6, col. 1, lines 11-19) pass the final set of edge embeddings through the linear layer and the softmax layer to output a classification score. (Page 6, col. 1, lines 11-28 discloses passing the output edge embedding through a softmax layer to compute a classification.) However, Lo and Gui do not explicitly teach: wherein the neural network further includes a classifier head, a linear layer, and a softmax layer, and the classifier head is configured to: pass the final set of edge embeddings through the linear layer and the softmax layer. But Titov teaches: wherein the neural network further includes a classifier head, a linear layer, and a softmax layer, and ([0021] from line 1 to col. 2, line 5 discloses a multi-layer perceptron, and [0026], lines 4-5 clarify that layer 304 is a “fully connected layer”. A classifier head is the entire MLP, a linear layer is first fully-connected layer 304, and a softmax layer is softmax engine 310.) the classifier head is configured to: pass the final set of [node] edge embeddings through the linear layer and the softmax layer. ([0021], line 8 to col. 2, line 5. The node embedding vectors are passed through both the first fully-connected layer 304 and softmax engine 310.)
Lo teaches passing a final set of edge embeddings through a softmax activation function, but not passing it through both a linear layer and a softmax layer. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have input Lo and Gui’s final edge embeddings to Titov’s multi-layer perceptron comprising a fully-connected layer and a softmax layer for node classification. A motivation for the combination is that a fully-connected layer plus softmax layer learns non-linear combinations of the edge embedding features, which may improve edge classification accuracy.

Claims 17-18 each recite a product which implements the same features as the method of claims 8-9, respectively, and are therefore rejected for at least the same reasons.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 7 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 9 of copending Application No. 18/138,962 (hereinafter the reference claims) in view of Lo et al. (“E-GraphSAGE: A Graph Neural Network based Intrusion Detection System for IoT”) and Gui et al. (US 20210064959 A1). Reference claim 9 incorporates the limitations of reference claim 1.

Regarding instant claim 1, the limitation “A computer-implemented method, the method comprising: accessing a neural network …” is taught by reference claim 9 limitation “The computer-implemented method of claim 1, wherein the method further comprising: accessing a neural network model trained over data associated with the graph”. The limitation “accessing a graph comprising a plurality of nodes and a plurality of edges linking the plurality of nodes” is taught by reference claim 1 limitation “accessing a graph comprising a plurality of nodes and a plurality of edges linking the plurality of nodes”. The limitation “determining a set of node features for each of the plurality of nodes based on information associated with the node” is taught by reference claim 1 limitation “extracting a set of node features for each of the plurality of nodes”. Extracting would be based at least on an identity of a node, which is information associated with the node. The limitation “determining a set of edge features for each of the plurality of edges based on information associated with the edge” is taught by reference claim 1 limitation “extracting a set of edge features for each of the plurality of edges”. Extracting would be based at least on an identity of an edge, which is information associated with the edge.
The limitation “applying a first layer of the neural network to the node features and the edge features to output a first set of node embeddings and a first set of edge embeddings” is taught by reference claim 9 limitation “wherein the neural network is trained to: receive the set of node features and the set of edge features to generate the first set of node embeddings for the first node, … and the edge embeddings for the edge”.

However, reference claim 9 does not explicitly teach: accessing a neural network having K layers, where K is a natural number, K > 1; applying a kth layer of the neural network to (k-1)th set of node embeddings and (k-1)th set of edge embeddings to output a kth set of node embeddings and a kth set of edge embeddings, where k is a natural number, wherein the (k-1)th set of node embeddings and (k-1)th set of edge embeddings are output from (k-1)th layer of the neural network, k is a natural number, and K ≥ k > 1.

But Lo teaches: accessing a neural network having K layers, where K is a natural number, K > 1; (Page 3, col. 2, from the third-to-last line to equation 2, Fig. 2 and its caption, and Algorithm 1, “input: depth K” and Line 2 discloses accessing a neural network having 2 layers.) applying a kth layer of the neural network to (k-1)th set of node embeddings and (k-1)th set of edge embeddings to output a kth set of node embeddings and a kth set of edge embeddings, where k is a natural number, wherein the (k-1)th set of node embeddings and (k-1)th set of edge embeddings are output from (k-1)th layer of the neural network, k is a natural number, and K ≥ k > 1. (Examiner treats “a kth layer” as a final layer K. Page 4, col. 1, line 4 to equation 2; Page 4, Algorithm 1, lines 1-8 and from page 4, col. 2, § A, line 22 to page 5, col. 1, lines 1-17 discloses applying (in Alg. 1, Line 5) a Kth layer of the neural network to the (K-1)th set of node and edge embeddings to output a Kth set of node embeddings. Alg. 1, Lines 6-8 discloses outputting a Kth set of edge embeddings based on the node embeddings output from the Kth layer.) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have applied Lo’s algorithm to reference cl

Prosecution Timeline

Apr 25, 2023: Application Filed
Apr 01, 2026: Non-Final Rejection under §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572794: SYSTEM AND METHOD FOR AUTOMATED OPTIMAZATION OF A NEURAL NETWORK MODEL (granted Mar 10, 2026; 2y 5m to grant)
Patent 12456047: Distilling from Ensembles to Improve Reproducibility of Neural Networks (granted Oct 28, 2025; 2y 5m to grant)
Patent 12450493: DIMENSION REDUCTION IN THE CONTEXT OF UNSUPERVISED LEARNING (granted Oct 21, 2025; 2y 5m to grant)
Patent 12437198: CLASSIFICATION OF A NON-MONETARY DONATION BASED ON MACHINE LEARNING (granted Oct 07, 2025; 2y 5m to grant)
Patent 12437215: DEVICE, METHOD, AND COMPUTER PROGRAM PRODUCT FOR EXECUTING INFERENCE USING INPUT SIGNAL (granted Oct 07, 2025; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 44%
With Interview (+43.9%): 88%
Median Time to Grant: 4y 6m
PTA Risk: Low
Based on 90 resolved cases by this examiner. Grant probability derived from career allow rate.
