Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 28 is objected to because of the following informalities: “wherein s number of times of the querying...” should read “wherein a number of times of the querying…”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-39 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 21,
Claim 21 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 21 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“determining vulnerable features of target nodes in a graph …wherein the graph includes nodes including the target nodes and edges, each of the edges connecting two of the nodes”
“grouping the target nodes into a plurality of clusters according to the vulnerable features of the target nodes”
“obtaining the adversarial examples based on the plurality of clusters.”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitation:
“based on querying the GNN model”
As drafted, this limitation is an additional element that amounts to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 22,
Claim 22 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 22 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
“for each cluster of the plurality of clusters, obtaining a feature of a corresponding one of the adversarial examples by averaging the vulnerable features of the target nodes in the cluster”
As drafted, this limitation, under its broadest reasonable interpretation, covers a mental process, i.e., a concept performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitation corresponds to a mental process, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: See corresponding analysis of claim 21.
Step 2B Analysis: See corresponding analysis of claim 21.
Regarding Claim 23,
Claim 23 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 23 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“for each cluster of the plurality of clusters, obtaining an initial feature of a corresponding one of the adversarial examples based on the vulnerable features of the target nodes in the cluster”
“modifying the graph by connecting each of the adversarial examples having the initial features to the target nodes in a corresponding one of the plurality of clusters”
“updating the features of the adversarial examples”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitation:
“based on querying the GNN model with the modified graph”
As drafted, this limitation is an additional element that amounts to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 24,
Claim 24 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 24 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 21.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitation:
“wherein the querying of the GNN model includes querying the GNN model with modified graphs which are obtained by adding a fake node to the graph”
As drafted, this limitation is an additional element that amounts to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 25,
Claim 25 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 25 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“for each target node of the target nodes in the graph: obtaining a modified graph by connecting one fake node to the target node in the graph”
“determining the vulnerable feature of the target node”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitation:
“based on querying the GNN model with the modified graph”
As drafted, this limitation is an additional element that amounts to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 26,
Claim 26 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 26 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“for each of a plurality of feature components of the fake node: modifying the feature component of the fake node”
“updating the feature component of the fake node …wherein the feature of the fake node includes the updated feature components being taken as the vulnerable feature of the target node”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitations:
“querying the GNN model with the modified graph having the modified feature component of the fake node”
“based on a result of the querying”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 27,
Claim 27 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 27 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“changing the feature component of the fake node to the modified feature component when the modified feature component leads to a smaller loss value according to a loss function”
“maintaining the feature component of the fake node when the modified feature component does not lead to a smaller loss value according to the loss function”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: See corresponding analysis of claim 26.
Step 2B Analysis: See corresponding analysis of claim 26.
Regarding Claim 28,
Claim 28 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 28 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 26.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that do not apply the exception in a meaningful way (see MPEP 2106.05(e)).
The limitation:
“wherein s number of times of the querying for the plurality of feature components of the fake node equals to a smaller one of a predefined value and a feature dimension of a node in the graph”
As drafted, this limitation is an additional element that does not apply the exception in a meaningful way. See MPEP 2106.05(e).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, none of the additional elements applies the exception in a meaningful way. The claim is not patent eligible.
Regarding Claim 29,
Claim 29 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 29 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
“grouping the target nodes into the plurality of clusters according to similarity of vulnerable features of target nodes in each of the clusters”
As drafted, this limitation, under its broadest reasonable interpretation, covers a mental process, i.e., a concept performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitation corresponds to a mental process, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: See corresponding analysis of claim 21.
Step 2B Analysis: See corresponding analysis of claim 21.
Regarding Claim 30,
Claim 30 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 30 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
“grouping the target nodes into the plurality of clusters by solving a minimization of an object function of clustering for the vulnerable features of target nodes”
As drafted, this limitation, under its broadest reasonable interpretation, covers a mental process, i.e., a concept performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitation corresponds to a mental process, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: See corresponding analysis of claim 29.
Step 2B Analysis: See corresponding analysis of claim 29.
Regarding Claim 31,
Claim 31 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 31 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“for each of the plurality of clusters, obtaining an initial feature of a corresponding one of a plurality of fake nodes based on the vulnerable features of the target nodes in the cluster”
“modifying the graph by connecting each of the plurality of fake nodes having the initial features to the target nodes in a corresponding one of the plurality of clusters”
“updating the feature of each of the plurality of fake nodes”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitation:
“based on querying the GNN model with the modified graph”
As drafted, this limitation is an additional element that amounts to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 32,
Claim 32 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 32 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“for each of a plurality of feature components of the fake node: modifying the feature component of the fake node”
“updating the feature component of the fake node …wherein the fake nodes with the feature including the updated feature components being taken as the obtained adversarial examples”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitations:
“querying the GNN model with the modified graph having the modified feature component of the fake node”
“based on result of the querying”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 33,
Claim 33 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 33 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“changing the feature component of the fake node to the modified feature component when the modified feature component leads to a smaller loss value according to a loss function”
“maintaining the feature component of the fake node when the modified feature component does not lead to a smaller loss value according to the loss function”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: See corresponding analysis of claim 32.
Step 2B Analysis: See corresponding analysis of claim 32.
Regarding Claim 34,
Claim 34 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 34 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
“setting a label for each of the adversarial examples”
As drafted, this limitation, under its broadest reasonable interpretation, covers a mental process, i.e., a concept performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitation corresponds to a mental process, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: See corresponding analysis of claim 21.
Step 2B Analysis: See corresponding analysis of claim 21.
Regarding Claim 35,
Claim 35 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 35 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 21.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitation:
“wherein the GNN model is a Graph Convolutional Network (GCN) model”
As drafted, this limitation is an additional element that amounts to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 36,
Claim 36 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 36 is directed to a method for generating adversarial examples for a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: See corresponding analysis of claim 21.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that do not apply the exception in a meaningful way (see MPEP 2106.05(e)).
The limitation:
“wherein the graph represents a social network or a citation network or a financial network”
As drafted, this limitation is an additional element that does not apply the exception in a meaningful way. See MPEP 2106.05(e).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, none of the additional elements applies the exception in a meaningful way. The claim is not patent eligible.
Regarding Claim 37,
Claim 37 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 37 is directed to a method for training a Graph Neural Network (GNN) model, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“obtaining adversarial examples for the GNN model”
“determining vulnerable features of target nodes in a graph …wherein the graph includes nodes including the target nodes and edges, each of the edges connecting two of the nodes”
“grouping the target nodes into a plurality of clusters according to the vulnerable features of the target nodes”
“obtaining the adversarial examples based on the plurality of clusters”
“setting a label for each of the adversarial examples”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitations:
“based on querying the GNN model”
“training the GNN model by using the adversarial examples with the labels”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 38,
Claim 38 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 38 is directed to a computer system, comprising: one or more processors; and one or more storage devices storing computer-executable instructions, which is directed to a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“determining vulnerable features of target nodes in a graph …wherein the graph includes nodes including the target nodes and edges, each of the edges connecting two of the nodes”
“grouping the target nodes into a plurality of clusters according to the vulnerable features of the target nodes”
“obtaining the adversarial examples based on the plurality of clusters”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitations:
“A computer system, comprising: one or more processors; and one or more storage devices storing computer-executable instructions for generating adversarial examples for a Graph Neural Network (GNN) model, the instructions, when executed by the one or more processors cause the one or more processors to perform the following steps”
“based on querying the GNN model”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 39,
Claim 39 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 39 is directed to one or more non-transitory computer readable storage media on which are stored computer-executable instructions, which is directed to a manufacture, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“determining vulnerable features of target nodes in a graph …wherein the graph includes nodes including the target nodes and edges, each of the edges connecting two of the nodes”
“grouping the target nodes into a plurality of clusters according to the vulnerable features of the target nodes”
“obtaining the adversarial examples based on the plurality of clusters”
As drafted, these limitations, under their broadest reasonable interpretation, cover mental processes, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion). In the context of this claim, the above limitations correspond to mental processes, e.g., evaluation and judgment performed with the assistance of pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply an exception (see MPEP 2106.05(f)).
The limitations:
“One or more non-transitory computer readable storage media on which are stored computer-executable instructions executable instructions for generating adversarial examples for a Graph Neural Network (GNN) model, the instructions, when executed by one or more processors cause the one or more processors to perform the following steps”
“based on querying the GNN model”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f).
Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract ideas into a practical application, all of the additional elements are “mere instructions to apply”. Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 21-26, 28-32, 35-36, and 38-39 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pham et al. (Graph Adversarial Attacks and Defense: An Empirical Study on Citation Graph) (“Pham”).
Regarding claim 21, Pham teaches a method for generating adversarial examples for a Graph Neural Network (GNN) model, comprising the following steps: determining vulnerable features of target nodes in a graph based on querying the GNN model (Pham Section IV Attacker “After finding an effective attack strategy on the graph’s structure, we keep it unchanged and adopt gradient based techniques for feature attacks. By doing this, we do not have to keep track of the gradients of the adjacency matrix, and we can attack many target nodes at once.” Section V Defender “The main ideas for model selection are that we explore several standard model types for Graph Neural Networks using the given dataset. …Also, we utilize our attacking data generated as described in Section IV to check how vulnerable our models are to attacks. Since we do not have labels for the target nodes of the attacks (assumed to be the last 50,000 nodes in the given dataset), we measure the vulnerability by the percentage of nodes predicted differently with and without having the attacking nodes in the graph.” Pham provides determining vulnerable features of target nodes in a graph neural network by performing feature attacks on target nodes, corresponding to determining vulnerable features of target nodes in a graph based on querying the GNN model.), wherein the graph includes nodes including the target nodes and edges, each of the edges connecting two of the nodes (Pham Section II Datasets and Attacking/Defending Tasks “This section defines our research problem and its requirements. Figure 1 summarizes the dataset and tasks for a typical graph adversarial attack and defense case. The left panel depicts the dataset with embedded features for the nodes and the adjacency matrix for the edges”; Section V Defender “The main ideas for model selection are that we explore several standard model types for Graph Neural Networks using the given dataset. …Also, we utilize our attacking data generated as described in Section IV to check how vulnerable our models are to attacks. Since we do not have labels for the target nodes of the attacks (assumed to be the last 50,000 nodes in the given dataset), we measure the vulnerability by the percentage of nodes predicted differently with and without having the attacking nodes in the graph.” Pham provides a graph neural network including target nodes and edges, which connect two nodes, as shown in Figure 1, corresponding to the graph includes nodes including the target nodes and edges, each of the edges connecting two of the nodes.); grouping the target nodes into a plurality of clusters according to the vulnerable features of the target nodes (Pham Section IV Attacker “The most straightforward way for a structure attack is to randomly choose 100 target nodes for each fake node. The more advanced one would be clustering the 50,000 target nodes (e.g., K-means on feature vectors and/or labels to cluster the target nodes) so that each fake node will connect to 100 similar target nodes. The aims of doing these are that the attack can be more efficient since clustered nodes are similar and can be attacked in batch.”; Section V Defender “Furthermore, as discussed earlier, to evaluate the vulnerability, we used our generated model to predict the labels for the target nodes (the last 50,000) with and without having the fake nodes in the graph.” Pham provides clustering target nodes to determine vulnerability including the use of K-means clustering on node feature vectors, corresponding to grouping the target nodes into a plurality of clusters according to the vulnerable features of the target nodes.) and obtaining the adversarial examples based on the plurality of clusters (Pham Section IV Attacker “For each structure attack, we apply a feature attack with a loss function to measure the result. To calculate the accuracy drop from the data provided, we create a test set in which the ground-truth can be accessed. It is worth noting that the attack will perform on robust defenders, which already have defense mechanisms. With that in mind, the choice of final attack strategy is not only based on the performance on the test set but also consider how likely the defender will detect it (i.e., using conservative attack strategies for unnoticeable perturbations)… Regarding feature attack, FGSM is likely to be detected by defenders because the features tend to lie at the extreme values (i.e., -1.73 or 1.62) as these seem to have higher attacking impacts. The defenders can easily ignore fake features by looking at the distribution and reshape these values (or using any other techniques to deal with outlying kind of values)... The features of fake nodes are updated on their corresponding gradients”; Section VII Conclusion “We aim to reduce the model prediction performance significantly and constraint the fake features to have a similar node value distribution to the origin data” Pham provides performing feature attacks based on the structure attack (clustering of target nodes), which includes the use of perturbations, fake features and ground truth data to measure model performance against an adversarial attack, which is designed to trick the network using the fake features and fake nodes, corresponding to obtaining the adversarial examples based on the plurality of clusters, wherein the generated “fake features” including the “fake nodes” correspond to the “adversarial examples”.).
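For clarity of record, the clustering step mapped above can be illustrated with a short sketch. This is an illustrative reconstruction only, not code from Pham; the variable names, the placeholder feature data, and the choice of 500 clusters (Pham's fake-node budget) are assumptions made solely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# One vulnerable feature vector per target node, as determined by querying
# the GNN model (placeholder data here; Pham's features lie roughly in (-2, 2)).
num_targets, feature_dim = 50000, 100
vulnerable_features = np.random.uniform(-1.8, 1.8, size=(num_targets, feature_dim))

# Group the target nodes into clusters of similar vulnerable features
# (Pham: "K-means on feature vectors ... so that each fake node will
# connect to 100 similar target nodes").
num_fake_nodes = 500  # Pham's budget: no more than 500 added nodes
clusters = KMeans(n_clusters=num_fake_nodes, n_init=10).fit_predict(vulnerable_features)
# clusters[i] is the cluster (and hence the fake node) assigned to target node i.
```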
Regarding claim 22, Pham teaches the method of claim 21, wherein the obtaining of the adversarial examples based on the plurality of clusters includes: for each cluster of the plurality of clusters, obtaining a feature of a corresponding one of the adversarial examples by averaging the vulnerable features of the target nodes in the cluster (Pham Section IV Attacker “The most straightforward way for a structure attack is to randomly choose 100 target nodes for each fake node. The more advanced one would be clustering the 50,000 target nodes (e.g., K-means on feature vectors and/or labels to cluster the target nodes) so that each fake node will connect to 100 similar target nodes. The aims of doing these are that the attack can be more efficient since clustered nodes are similar and can be attacked in batch.”; Section V Defender “For instance, Figure 6 shows the histogram describing the distributions of value counts over 100 bins for a sample feature (the first one of the 100 features). We further investigated the statistical descriptions of the values in all the features and found that they have min, max, mean values as approximately -1.735, 1.622, and -0.013, respectively. These feature descriptions help devise a strategy to increase model robustness by giving a constraint to the feature data as roughly in the range of -1.8 to 1.8.” Pham provides clustering target nodes including vulnerable features and calculating mean feature values, corresponding to for each cluster of the plurality of clusters, obtaining a feature of a corresponding one of the adversarial examples by averaging the vulnerable features of the target nodes in the cluster.).
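In equation form (the notation is supplied here for illustration and does not appear in Pham): with C_k denoting the set of target nodes in cluster k and x_v the vulnerable feature of target node v, the averaging recited in claim 22 is

```latex
x_{\mathrm{adv}}^{(k)} = \frac{1}{|C_k|} \sum_{v \in C_k} x_v
```

which is also the centroid that K-means computes for cluster k.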
Regarding claim 23, Pham teaches the method of claim 21, wherein the obtaining of the adversarial examples based on the plurality of clusters comprising: for each cluster of the plurality of clusters, obtaining an initial feature of a corresponding one of the adversarial examples based on the vulnerable features of the target nodes in the cluster (Pham Section IV Attacker “For each structure attack, we apply a feature attack with a loss function to measure the result. To calculate the accuracy drop from the data provided, we create a test set in which the ground-truth can be accessed. It is worth noting that the attack will perform on robust defenders, which already have defense mechanisms. With that in mind, the choice of final attack strategy is not only based on the performance on the test set but also consider how likely the defender will detect it (i.e., using conservative attack strategies for unnoticeable perturbations)… The fake nodes’ features are first initialized randomly by uniform distribution in the range of [-0.1, 0.1].” Pham provides obtaining fake features from perturbations and target node clustering through the structure attack, including obtaining initial features for the fake nodes, corresponding to for each cluster of the plurality of clusters, obtaining an initial feature of a corresponding one of the adversarial examples based on the vulnerable features of the target nodes in the cluster.); modifying the graph by connecting each of the adversarial examples having the initial features to the target nodes in a corresponding one of the plurality of clusters (Pham Section IV Attacker “After filtering some target nodes to attack, we can further cluster them to connect similar target nodes to the same fake node.” Pham provides further clustering the graph to connect similar target nodes to the same fake node, corresponding to modifying the graph by connecting each of the adversarial examples having the initial features to the target nodes in a corresponding one of the plurality of clusters); and updating the features of the adversarial examples based on querying the GNN model with the modified graph (Pham Section IV Attacker “Thus, in “targeted attack” loss, these minor classes are chosen as the targets. We expect to see an increase in the predictions of minor labels, and that it can help to reduce the performance of the model further. It’s worth noting that these labels’ actual meanings are unknown to the analysts in this case. The features of fake nodes are updated on their corresponding gradients.” Pham provides updating the features of the fake nodes based on calculating a targeted attack loss after modifying the graph, corresponding to updating the features of the adversarial examples based on querying the GNN model with the modified graph.).
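The three steps mapped above admit a compact sketch, shown below. This is a hedged illustration under assumed names: `clusters`, `query_gradient`, and the clipping range are placeholders introduced here, not Pham's API; only the uniform [-0.1, 0.1] initialization, the 30 epochs, and the gradient-based update are taken from Pham.

```python
import numpy as np

rng = np.random.default_rng(0)
num_real_nodes, num_fake_nodes, feature_dim = 50000, 500, 100
clusters = rng.integers(0, num_fake_nodes, size=num_real_nodes)  # placeholder labels

def query_gradient(edges, feats):
    # Placeholder for the gradient information obtained by querying the GNN
    # with the modified graph; Pham updates fake-node features "on their
    # corresponding gradients".
    return rng.normal(scale=0.01, size=feats.shape)

# (1) Initial feature per adversarial example (Pham: uniform in [-0.1, 0.1]).
fake_features = rng.uniform(-0.1, 0.1, size=(num_fake_nodes, feature_dim))

# (2) Modify the graph: connect fake node k to the target nodes of cluster k.
edges = [(num_real_nodes + k, t) for t, k in enumerate(clusters)]

# (3) Update the adversarial features from repeated queries (30 epochs in Pham).
for _ in range(30):
    fake_features -= query_gradient(edges, fake_features)
    np.clip(fake_features, -1.8, 1.8, out=fake_features)  # keep features in-distribution
```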
Regarding claim 24, Pham teaches the method of claim 21, wherein the querying of the GNN model includes querying the GNN model with modified graphs which are obtained by adding a fake node to the graph (Pham Section II Datasets and Attacking/Defending Tasks “For the attacking task, attackers need to perturb the graph by adding no more than 500 nodes and up to 100 edges per fake node”; Section IV Attacker “The most straightforward way for a structure attack is to randomly choose 100 target nodes for each fake node. The more advanced one would be clustering the 50,000 target nodes (e.g., K-means on feature vectors and/or labels to cluster the target nodes) so that each fake node will connect to 100 similar target nodes.” Pham provides modifying graphs by adding fake nodes.).
Regarding claim 25, Pham teaches the method of claim 24, wherein the determining of the vulnerable features of target nodes in the graph based on querying of the GNN model includes: for each target node of the target nodes in the graph: obtaining a modified graph by connecting one fake node to the target node in the graph (Pham Section IV Attacker “The most straightforward way for a structure attack is to randomly choose 100 target nodes for each fake node. The more advanced one would be clustering the 50,000 target nodes (e.g., K-means on feature vectors and/or labels to cluster the target nodes) so that each fake node will connect to 100 similar target nodes” Pham provides connecting targeting nodes to fake nodes in the graph.), and determining the vulnerable feature of the target node based on querying the GNN model with the modified graph (Pham Section IV Attacker “The most straightforward way for a structure attack is to randomly choose 100 target nodes for each fake node. The more advanced one would be clustering the 50,000 target nodes (e.g., K-means on feature vectors and/or labels to cluster the target nodes) so that each fake node will connect to 100 similar target nodes… The features of fake nodes are updated on their corresponding gradients”; Section V Defender “The main ideas for model selection are that we explore several standard model types for Graph Neural Networks using the given dataset. …Also, we utilize our attacking data generated as described in Section IV to check how vulnerable our models are to attacks. Since we do not have labels for the target nodes of the attacks (assumed to be the last 50,000 nodes in the given dataset), we measure the vulnerability by the percentage of nodes predicted differently with and without having the attacking nodes in the graph.” Pham provides determining vulnerable features in a graph neural network by adding fake nodes to the graph and updating their corresponding features based on gradients, corresponding to determining the vulnerable feature of the target node based on querying the GNN model with the modified graph.).
Regarding claim 26, Pham teaches the method of claim 25, wherein the determining of the vulnerable feature of the target node based on the querying of the GNN model with the modified graph includes: for each of a plurality of feature components of the fake node: modifying the feature component of the fake node (Pham Section IV Attacker “The fake nodes’ features are first initialized randomly by uniform distribution in the range of [-0.1, 0.1]. After 30 epochs, we got a good result, which can reduce the accuracy from 60% to 30% on the surrogate 2-layer GCN model generated for our attack strategies’ testing purposes. More importantly, the generated features also have a more similar distribution to the origin data.” Pham provides initializing features of a fake node corresponding to modifying the feature component of the fake node.), querying the GNN model with the modified graph having the modified feature component of the fake node (Pham Section IV Attacker “Via experiments, we notice that other sophisticated structure attack approaches are not any better than the random method. Regarding the loss function, the predictions’ distribution fluctuated when using “targeted attack” loss. Furthermore, this loss function could not reduce the model’s performance much compared to the simple cross-entropy loss used in the final attack strategy. Thus, we first fix the structure attack by only getting 100 target nodes for each fake node. Regarding feature attack, FGSM is likely to be detected by defenders because the features tend to lie at the extreme values (i.e., -1.73 or 1.62) as these seem to have higher attacking impacts. The defenders can easily ignore fake features by looking at the distribution and reshape these values (or using any other techniques to deal with outlying kind of values).” Pham provides calculating a “targeted attack” loss based on performing the structure attack using the fake nodes for the Graph Neural Network, corresponding to querying the GNN model with the modified graph having the modified feature component of the fake node.), and updating the feature component of the fake node based on a result of the querying, wherein the feature of the fake node includes the updated feature components being taken as the vulnerable feature of the target node (Pham Section IV Attacker “Thus, in “targeted attack” loss, these minor classes are chosen as the targets. We expect to see an increase in the predictions of minor labels, and that it can help to reduce the performance of the model further. It’s worth noting that these labels’ actual meanings are unknown to the analysts in this case. The features of fake nodes are updated on their corresponding gradients.”; Section V Defender “Also, we utilize our attacking data generated as described in Section IV to check how vulnerable our models are to attacks. Since we do not have labels for the target nodes of the attacks (assumed to be the last 50,000 nodes in the given dataset), we measure the vulnerability by the percentage of nodes predicted differently with and without having the attacking nodes in the graph.” Pham provides determining vulnerability by the percentage of nodes predicted differently with and without having the attacking nodes in the graph neural network, and updating features of fake nodes based on corresponding gradients from the structure attack, corresponding to updating the feature component of the fake node based on a result of the querying, wherein the feature of the fake node includes the updated feature components being taken as the vulnerable feature of the target node.).
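The per-component query-and-update loop recited in claim 26, together with the accept/maintain rule recited in claims 27 and 33, can be sketched as a greedy coordinate search. This is an illustration under assumptions: `query_loss` stands in for one query to the GNN with the modified graph, and the perturbation size is arbitrary; Pham's own update is gradient based rather than this generic black-box variant.

```python
import numpy as np

rng = np.random.default_rng(1)
feature_dim = 100

def query_loss(feature):
    # Placeholder for one query to the GNN with the modified graph carrying
    # this fake-node feature; returns the attacker's scalar loss.
    return float(np.sum((feature - 0.5) ** 2))

fake = rng.uniform(-0.1, 0.1, size=feature_dim)
best_loss = query_loss(fake)

for j in range(feature_dim):           # for each feature component of the fake node
    trial = fake.copy()
    trial[j] += rng.normal(scale=0.1)  # modify the feature component
    loss = query_loss(trial)           # query the GNN with the modified component
    if loss < best_loss:               # smaller loss: change the component (claim 27)
        fake, best_loss = trial, loss
    # otherwise: maintain the feature component (claim 27)
```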
Regarding claim 28, Pham teaches the method of claim 26, wherein a number of times of the querying for the plurality of feature components of the fake node equals to a smaller one of a predefined value and a feature dimension of a node in the graph (Pham Section II Datasets and Attacking/Defending Tasks “Also, each node was embedded by a 100-dimensional feature vector consisting of float numbers between (−2, 2), so the data was ready for training and evaluating deep graph neural networks without having to go through the initial graph embedding processes… Besides, there are also memory and prediction time constraints. Specifically, the size of the defending model should not exceed a maximum of 1GB. The model should take less than 10 seconds on a K80 GPU and 60 GB memory server to perform node classification on the whole graph.” Pham provides predefined memory and time constraints corresponding to a “predefined value” and a feature dimension of 100 for each node in the graph, wherein the number of queries of feature components of the fake node will equal the node feature dimension, unless the “predefined” memory or time constraints are reached prior, which will result in a smaller number of queries than that for the entire feature dimension, corresponding to the number of times of the querying for the plurality of feature components of the fake node equals to a smaller one of a predefined value and a feature dimension of a node in the graph.).
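Stated as a formula (the symbols are supplied for illustration; Pham does not state a query budget expressly, and Q is derived here from the memory and prediction-time constraints): with Q the predefined value and d = 100 the node feature dimension, the mapping reads the recited number of queries s as

```latex
s = \min(Q,\, d)
```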
Regarding claim 29, Pham teaches the method of claim 21, wherein the grouping of the target nodes into the plurality of clusters includes: grouping the target nodes into the plurality of clusters according to similarity of vulnerable features of target nodes in each of the clusters (Pham Section IV Attacker “The most straightforward way for a structure attack is to randomly choose 100 target nodes for each fake node. The more advanced one would be clustering the 50,000 target nodes (e.g., K-means on feature vectors and/or labels to cluster the target nodes) so that each fake node will connect to 100 similar target nodes. The aims of doing these are that the attack can be more efficient since clustered nodes are similar and can be attacked in batch.”; Section V Defender “Furthermore, as discussed earlier, to evaluate the vulnerability, we used our generated model to predict the labels for the target nodes (the last 50,000) with and without having the fake nodes in the graph.” Pham provides clustering target nodes to determine vulnerability including the use of K-means clustering on node feature vectors, corresponding to grouping the target nodes into the plurality of clusters according to similarity of vulnerable features of target nodes in each of the clusters.).
Regarding claim 30, Pham teaches the method of claim 29, wherein the grouping of the target nodes into the plurality of clusters includes: grouping the target nodes into the plurality of clusters by solving a minimization of an object function of clustering for the vulnerable features of target nodes (Pham Section IV Attacker “The most straightforward way for a structure attack is to randomly choose 100 target nodes for each fake node. The more advanced one would be clustering the 50,000 target nodes (e.g., K-means on feature vectors and/or labels to cluster the target nodes) so that each fake node will connect to 100 similar target nodes. The aims of doing these are that the attack can be more efficient since clustered nodes are similar and can be attacked in batch… For each batch, the loss function is cross-entropy calculated on target nodes only. Besides this, we also have an alternative for the loss function called “targeted attack” loss. The only difference between these two functions is that instead of just maximizing the loss of the correct class, the latter maximizes the correct class’s loss while also minimizing the loss of the target class. From the distribution of labels (Figure 8), it is obvious that some classes such as 12 and 14 have just a few observations. We assume the distribution is similar in the test set. In other words, it is also true that nodes that belong to these classes in the 50,000 target nodes are rare.” Pham provides minimizing the loss of the target class for the clustering of the 50,000 target nodes, corresponding to grouping the target nodes into the plurality of clusters by solving a minimization of an object function of clustering for the vulnerable features of target nodes.).
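For reference, the K-means clustering that Pham names is itself the minimization of a clustering objective; in standard notation (the notation is supplied here for illustration and does not appear in Pham):

```latex
\min_{C_1,\dots,C_K} \sum_{k=1}^{K} \sum_{v \in C_k} \lVert x_v - \mu_k \rVert^2,
\qquad \mu_k = \frac{1}{|C_k|} \sum_{v \in C_k} x_v
```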
Regarding claim 31, Pham teaches the method of claim 25, wherein the obtaining of the adversarial examples based on the plurality of clusters includes: for each of the plurality of clusters, obtaining an initial feature of a corresponding one of a plurality of fake nodes based on the vulnerable features of the target nodes in the cluster (Pham Section IV Attacker “The most straightforward way for a structure attack is to randomly choose 100 target nodes for each fake node. The more advanced one would be clustering the 50,000 target nodes (e.g., K-means on feature vectors and/or labels to cluster the target nodes) so that each fake node will connect to 100 similar target nodes. The aims of doing these are that the attack can be more efficient since clustered nodes are similar and can be attacked in batch… The fake nodes’ features are first initialized randomly by uniform distribution in the range of [-0.1, 0.1]. After 30 epochs, we got a good result, which can reduce the accuracy from 60% to 30% on the surrogate 2-layer GCN model generated for our attack strategies’ testing purposes. More importantly, the generated features also have a more similar distribution to the origin data.” Pham provides obtaining an initial feature of a corresponding one of a plurality of fake nodes based on the vulnerable features of the target nodes in the cluster.); modifying the graph by connecting each of the plurality of fake nodes having the initial features to the target nodes in a corresponding one of the plurality of clusters (Pham Section IV Attacker “After filtering some target nodes to attack, we can further cluster them to connect similar target nodes to the same fake node.” Pham provides connecting each of the plurality of fake nodes having the initial features to the target nodes in a corresponding one of the plurality of clusters.); and updating the feature of each of the plurality of fake nodes based on querying the GNN model with the modified graph (Pham Section IV Attacker “Thus, in “targeted attack” loss, these minor classes are chosen as the targets. We expect to see an increase in the predictions of minor labels, and that it can help to reduce the performance of the model further. It’s worth noting that these labels’ actual meanings are unknown to the analysts in this case. The features of fake nodes are updated on their corresponding gradients.” Pham provides updating features of the fake nodes based on the targeted attack corresponding to updating the feature of each of the plurality of fake nodes based on querying the GNN model with the modified graph.).
Regarding claim 32, Pham teaches the method of claim 31, wherein the updating of the feature of each of the plurality of fake nodes based on querying the GNN model with the modified graph includes: for each of a plurality of feature components of the fake node: modifying the feature component of the fake node (Pham Section IV Attacker “The fake nodes’ features are first initialized randomly by uniform distribution in the range of [-0.1, 0.1]. After 30 epochs, we got a good result, which can reduce the accuracy from 60% to 30% on the surrogate 2-layer GCN model generated for our attack strategies’ testing purposes. More importantly, the generated features also have a more similar distribution to the origin data.” Pham provides initializing features of a fake node corresponding to modifying the feature component of the fake node.), querying the GNN model with the modified graph having the modified feature component of the fake node (Pham Section IV Attacker “Via experiments, we notice that other sophisticated structure attack approaches are not any better than the random method. Regarding the loss function, the predictions’ distribution fluctuated when using “targeted attack” loss. Furthermore, this loss function could not reduce the model’s performance much compared to the simple cross-entropy loss used in the final attack strategy. Thus, we first fix the structure attack by only getting 100 target nodes for each fake node. Regarding feature attack, FGSM is likely to be detected by defenders because the features tend to lie at the extreme values (i.e., -1.73 or 1.62) as these seem to have higher attacking impacts. The defenders can easily ignore fake features by looking at the distribution and reshape these values (or using any other techniques to deal with outlying kind of values).” Pham provides calculating a “targeted attack” loss based on performing the structure attack using the fake nodes for the Graph Neural Network, corresponding to querying the GNN model with the modified graph having the modified feature component of the fake node.), and updating the feature component of the fake node based on result of the querying, wherein the fake nodes with the feature including the updated feature components being taken as the obtained adversarial examples (Pham Section IV Attacker “Thus, in “targeted attack” loss, these minor classes are chosen as the targets. We expect to see an increase in the predictions of minor labels, and that it can help to reduce the performance of the model further. It’s worth noting that these labels’ actual meanings are unknown to the analysts in this case. The features of fake nodes are updated on their corresponding gradients.”; Section V Defender “Also, we utilize our attacking data generated as described in Section IV to check how vulnerable our models are to attacks. 
Since we do not have labels for the target nodes of the attacks (assumed to be the last 50,000 nodes in the given dataset), we measure the vulnerability by the percentage of nodes predicted differently with and without having the attacking nodes in the graph.” Pham provides determining vulnerability by the percentage of nodes predicted differently with and without having the attacking nodes in the graph neural network, and updating features of fake nodes based on corresponding gradients from the structure attack, corresponding to updating the feature component of the fake node based on result of the querying, wherein the fake nodes with the feature including the updated feature components being taken as the obtained adversarial examples.).
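As a non-authoritative Python sketch of Pham's statement that "the features of fake nodes are updated on their corresponding gradients," the loop below queries a surrogate model with the modified graph and steps only the fake nodes' feature rows; the model signature, loss, learning rate, and epoch count are assumptions of this sketch:

import torch
import torch.nn.functional as F

def update_fake_features(model, adj, feats, fake_idx, labels, lr=0.01, epochs=30):
    # Query the surrogate GNN with the modified graph, then step the fake
    # nodes' features along their loss gradients (all other rows untouched).
    feats = feats.clone().detach().requires_grad_(True)
    for _ in range(epochs):
        logits = model(feats, adj)              # query the GNN model
        loss = F.cross_entropy(logits, labels)  # assumed attack objective
        grad = torch.autograd.grad(loss, feats)[0]
        with torch.no_grad():
            feats[fake_idx] += lr * grad[fake_idx]  # ascend the loss
    return feats.detach()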
Regarding claim 35, Pham teaches the method of claim 21, wherein the GNN model is a Graph Convolutional Network (GCN) model (Pham Section I Introduction “We also contrive and experiment techniques to build a robust Graph Convolutional Network model leveraging the knowledge gained through data exploration and characteristics of the poisoned data generated by our attacking strategies, with a prediction time constraint” Pham provides a Graph Convolutional Network (GCN) model.).
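A minimal 2-layer GCN of the kind Pham uses as a surrogate may be sketched as below; the dense, pre-normalized adjacency matrix and the layer sizes are assumptions of this sketch, not Pham's implementation:

import torch
import torch.nn as nn

class TwoLayerGCN(nn.Module):
    # Minimal surrogate of the 2-layer GCN Pham mentions; 'adj' is assumed
    # to be a dense, pre-normalized adjacency matrix.
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        h = torch.relu(adj @ self.w1(x))  # first graph convolution
        return adj @ self.w2(h)           # second graph convolution -> logits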
Regarding claim 36, Pham teaches the method of claim 21, wherein the graph represents a social network or a citation network or a financial network (Pham Section I Introduction “We devise strategies to attack the citation graphs with heuristics to avoid real-world data constraints such as large graph structure, discrete graph structure values, and conservative/unnoticeable perturbations” Pham provides a citation graph corresponding to the graph represents a citation network.).
Regarding claim 38, it is the computer system embodiment of claim 21, with similar limitations to claim 21, and is rejected using the same reasoning found above in the rejection of claim 21. Further, Pham teaches a computer system, comprising: one or more processors; and one or more storage devices storing computer-executable instructions for generating adversarial examples for a Graph Neural Network (GNN) model (Pham Section II Datasets and Attacking/Defending Tasks “Besides, there are also memory and prediction time constraints. Specifically, the size of the defending model should not exceed a maximum of 1GB. The model should take less than 10 seconds on a K80 GPU and 60 GB memory server to perform node classification on the whole graph.” Pham provides memory and GPU devices for performing the disclosed embodiments, corresponding to a computer system, comprising: one or more processors; and one or more storage devices storing computer-executable instructions for generating adversarial examples for a Graph Neural Network (GNN) model.).
Regarding claim 39, it is the one or more non-transitory computer readable storage media embodiment of claim 21, with similar limitations to claim 21, and is rejected using the same reasoning found above in the rejection of claim 21. Further, Pham teaches one or more non-transitory computer readable storage media on which are stored computer-executable instructions for generating adversarial examples for a Graph Neural Network (GNN) model (Pham Section II Datasets and Attacking/Defending Tasks “Besides, there are also memory and prediction time constraints. Specifically, the size of the defending model should not exceed a maximum of 1GB. The model should take less than 10 seconds on a K80 GPU and 60 GB memory server to perform node classification on the whole graph.” Pham provides memory and GPU devices for performing the disclosed embodiments, corresponding to one or more non-transitory computer readable storage media on which are stored computer-executable instructions for generating adversarial examples for a Graph Neural Network (GNN) model.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 27, 33-34, and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Pham et al. (Graph Adversarial Attacks and Defense: An Empirical Study on Citation Graph) (“Pham”) in view of Wang et al. (Attack Graph Convolutional Networks by Adding Fake Nodes) (“Wang”).
Regarding claim 27, Pham teaches the method of claim 26 as discussed above in the rejection of claim 26, but fails to explicitly teach wherein the updating of the feature component of the fake node based on the result of the querying includes: changing the feature component of the fake node to the modified feature component when the modified feature component leads to a smaller loss value according to a loss function; maintaining the feature component of the fake node when the modified feature component does not lead to a smaller loss value according to the loss function.
However, Wang teaches wherein the updating of the feature component of the fake node based on the result of the querying includes: changing the feature component of the fake node to the modified feature component when the modified feature component leads to a smaller loss value according to a loss function (Wang Algorithm 2; Section 4.1.2 Greedy-GAN Attack “We want to generate fake nodes with features similar to the real ones to fool the discriminator. Since the output of discriminator is binary, we use binary cross entropy loss defined by L(p, y) = −(y log(p)+(1−y) log(1−p)), where y is binary indicator of the ground-truth (real or fake images), and p is the predicted probability by our discriminator. Then we solve the following optimization problem: (6) where Y is the ground-truth (real/fake) indicator for nodes and c is the parameter determine with the weight of discriminator and the GCN performance. For example, if c is set with a very large value, the objective function is dominated by the discriminator, so the node features generated will be very close to real but with lower attack successful rate... The Greedy-GAN algorithm is given as Algorithm 2. Greedy-GAN supports both adding and dropping links and features” Wang provides a Greedy-GAN Attack, as shown by Algorithm 2, which provides a cross entropy loss to generate features for fake nodes by adding or dropping fake node features according to the conditional “if-statements” in Algorithm 2, which add or drop features for fake nodes when doing so leads to a smaller loss value according to a loss function, as indicated by the “>” sign in the conditional statements of the algorithm to conform the fake features as much as possible to real ones, corresponding to changing the feature component of the fake node to the modified feature component when the modified feature component leads to a smaller loss value according to a loss function.); maintaining the feature component of the fake node when the modified feature component does not lead to a smaller loss value according to the loss function (Wang Algorithm 2; Section 4.1.2 Greedy-GAN Attack “Next we will present the attack based on the idea of Generative Adversarial Network (GAN). The main idea is to add a discriminator to generate fake features that are similar to the original ones… We want to generate fake nodes with features similar to the real ones to fool the discriminator. Since the output of discriminator is binary, we use binary cross entropy loss defined by L(p, y) = −(y log(p)+(1−y) log(1−p)), where y is binary indicator of the ground-truth (real or fake images), and p is the predicted probability by our discriminator…. We adopt the GAN-like framework to train both features/adjacency matrices and discriminator parameters iteratively… In the algorithm, we add or drop elements according to the absolute gradient of elements, and the one with larger absolute value will be chosen” Wang provides generating fake features that are similar to real ones by iteratively calculating loss values according to a cross entropy loss function and choosing the fake features with the larger absolute value, wherein the fake features that are most similar to the real ones have the smallest loss values and are the chosen features, corresponding to maintaining the feature component of the fake node when the modified feature component does not lead to a smaller loss value according to the loss function.).
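The conditional update that the examiner reads onto Wang's Algorithm 2 (modify one feature component, re-query, keep the change only when the loss decreases) may be sketched in Python as follows. This is an illustration of the claimed accept/reject rule under stated assumptions, not a transcription of Wang's Algorithm 2; the perturbation size and call signatures are assumed:

import torch

def conditional_component_update(model, adj, feats, node, labels, loss_fn, delta=0.1):
    # Probe one feature component at a time: keep a perturbation only if it
    # yields a smaller loss value; otherwise restore the original component.
    with torch.no_grad():
        best_loss = loss_fn(model(feats, adj), labels)
        for j in range(feats.shape[1]):
            old = feats[node, j].item()
            feats[node, j] = old + delta                     # modify component
            trial_loss = loss_fn(model(feats, adj), labels)  # re-query model
            if trial_loss < best_loss:
                best_loss = trial_loss                       # keep the change
            else:
                feats[node, j] = old                         # maintain old value
    return feats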
Pham and Wang are both considered to be analogous art to the claimed invention because they are in the same field of artificial intelligence and, more specifically, are both applied to fake nodes in Graph Convolutional Networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pham with the above teachings of Wang. Doing so would allow for generating fake features that are most similar to real features (Wang Section 4.1.2 Greedy-GAN Attack “Next we will present the attack based on the idea of Generative Adversarial Network (GAN). The main idea is to add a discriminator to generate fake features that are similar to the original ones.”).
Regarding claim 33, Pham teaches the method of claim 32, as discussed above in the rejection of claim 32, but fails to teach wherein the updating of the feature component of the fake node based on result of the querying includes: changing the feature component of the fake node to the modified feature component when the modified feature component leads to a smaller loss value according to a loss function; maintaining the feature component of the fake node when the modified feature component does not lead to a smaller loss value according to the loss function.
However, Wang teaches wherein the updating of the feature component of the fake node based on result of the querying includes: changing the feature component of the fake node to the modified feature component when the modified feature component leads to a smaller loss value according to a loss function (Wang Algorithm 2; Section 4.1.2 Greedy-GAN Attack “We want to generate fake nodes with features similar to the real ones to fool the discriminator. Since the output of discriminator is binary, we use binary cross entropy loss defined by L(p, y) = −(y log(p)+(1−y) log(1−p)), where y is binary indicator of the ground-truth (real or fake images), and p is the predicted probability by our discriminator. Then we solve the following optimization problem: (6) where Y is the ground-truth (real/fake) indicator for nodes and c is the parameter determine with the weight of discriminator and the GCN performance. For example, if c is set with a very large value, the objective function is dominated by the discriminator, so the node features generated will be very close to real but with lower attack successful rate... The Greedy-GAN algorithm is given as Algorithm 2. Greedy-GAN supports both adding and dropping links and features” Wang provides a Greedy-GAN Attack, as shown by Algorithm 2, which provides a cross entropy loss to generate features for fake nodes by adding or dropping fake node features according to the conditional “if-statements” in Algorithm 2, which add or drop features for fake nodes when doing so leads to a smaller loss value according to a loss function, as indicated by the “>” sign in the conditional statements of the algorithm to conform the fake features as much as possible to real ones, corresponding to changing the feature component of the fake node to the modified feature component when the modified feature component leads to a smaller loss value according to a loss function.); maintaining the feature component of the fake node when the modified feature component does not lead to a smaller loss value according to the loss function (Wang Algorithm 2; Section 4.1.2 Greedy-GAN Attack “Next we will present the attack based on the idea of Generative Adversarial Network (GAN). The main idea is to add a discriminator to generate fake features that are similar to the original ones… We want to generate fake nodes with features similar to the real ones to fool the discriminator. Since the output of discriminator is binary, we use binary cross entropy loss defined by L(p, y) = −(y log(p)+(1−y) log(1−p)), where y is binary indicator of the ground-truth (real or fake images), and p is the predicted probability by our discriminator…. We adopt the GAN-like framework to train both features/adjacency matrices and discriminator parameters iteratively… In the algorithm, we add or drop elements according to the absolute gradient of elements, and the one with larger absolute value will be chosen” Wang provides generating fake features that are similar to real ones by iteratively calculating loss values according to a cross entropy loss function and choosing the fake features with the larger absolute value, wherein the fake features that are most similar to the real ones have the smallest loss values and are the chosen features, corresponding to maintaining the feature component of the fake node when the modified feature component does not lead to a smaller loss value according to the loss function.).
Pham and Wang are both considered to be analogous art to the claimed invention because they are in the same field of artificial intelligence and, more specifically, are both applied to fake nodes in Graph Convolutional Networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pham with the above teachings of Wang. Doing so would allow for generating fake features that are most similar to real features (Wang Section 4.1.2 Greedy-GAN Attack “Next we will present the attack based on the idea of Generative Adversarial Network (GAN). The main idea is to add a discriminator to generate fake features that are similar to the original ones.”).
Regarding claim 34, Pham teaches the method of claim 21, as discussed above in the rejection of claim 21, but fails to explicitly teach further comprising setting a label for each of the adversarial examples.
However, Wang teaches setting a label for each of the adversarial examples (Wang Section 4.2 Targeted Attack “In our method, when attacking the whole dataset, the fake nodes labels are given by a uniform distribution, which is the same as in the non-targeted attack setting.” Wang provides labels for fake nodes, wherein “fake nodes” are “adversarial examples”, corresponding to setting a label for each of the adversarial examples.).
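Wang's uniform labeling of fake nodes is simple enough to sketch directly; the helper name and the NumPy generator are assumptions of this sketch:

import numpy as np

def label_fake_nodes(n_fake, n_classes, rng=None):
    # Draw one label per fake node from a uniform distribution over classes,
    # matching Wang's non-targeted labeling of fake nodes.
    rng = np.random.default_rng() if rng is None else rng
    return rng.integers(0, n_classes, size=n_fake)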
Pham and Wang are both considered to be analogous art to the claimed invention because they are in the same field of artificial intelligence and, more specifically, are both applied to fake nodes in Graph Convolutional Networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pham with the above teachings of Wang. Doing so would allow for generating fake features that are most similar to real features (Wang Section 4.1.2 Greedy-GAN Attack “Next we will present the attack based on the idea of Generative Adversarial Network (GAN). The main idea is to add a discriminator to generate fake features that are similar to the original ones.”).
Regarding claim 37, Pham teaches a method for training a Graph Neural Network (GNN) model, comprising: obtaining adversarial examples for the GNN model by: determining vulnerable features of target nodes in a graph based on querying the GNN model (Pham Section IV Attacker “After finding an effective attack strategy on the graph’s structure, we keep it unchanged and adopt gradient based techniques for feature attacks. By doing this, we do not have to keep track of the gradients of the adjacency matrix, and we can attack many target nodes at once.” Section V Defender “The main ideas for model selection are that we explore several standard model types for Graph Neural Networks using the given dataset. …Also, we utilize our attacking data generated as described in Section IV to check how vulnerable our models are to attacks. Since we do not have labels for the target nodes of the attacks (assumed to be the last 50,000 nodes in the given dataset), we measure the vulnerability by the percentage of nodes predicted differently with and without having the attacking nodes in the graph.” Pham provides determining vulnerable features of target nodes in a graph neural network by performing feature attacks on target nodes, corresponding to determining vulnerable features of target nodes in a graph based on querying the GNN model.), wherein the graph includes nodes including the target nodes and edges, each of the edges connecting two of the nodes (Pham Section II Datasets and Attacking/Defending Tasks “This section defines our research problem and its requirements. Figure 1 summarizes the dataset and tasks for a typical graph adversarial attack and defense case. The left panel depicts the dataset with embedded features for the nodes and the adjacency matrix for the edges”; Section V Defender “The main ideas for model selection are that we explore several standard model types for Graph Neural Networks using the given dataset. …Also, we utilize our attacking data generated as described in Section IV to check how vulnerable our models are to attacks. Since we do not have labels for the target nodes of the attacks (assumed to be the last 50,000 nodes in the given dataset), we measure the vulnerability by the percentage of nodes predicted differently with and without having the attacking nodes in the graph.” Pham provides a graph neural network including target nodes and edges, which connect two nodes, as shown in Figure 1.), grouping the target nodes into a plurality of clusters according to the vulnerable features of the target nodes (Pham Section IV Attacker “The most straightforward way for a structure attack is to randomly choose 100 target nodes for each fake node. The more advanced one would be clustering the 50,000 target nodes (e.g., K-means on feature vectors and/or labels to cluster the target nodes) so that each fake node will connect to 100 similar target nodes. 
The aims of doing these are that the attack can be more efficient since clustered nodes are similar and can be attacked in batch.”; Section V Defender “Furthermore, as discussed earlier, to evaluate the vulnerability, we used our generated model to predict the labels for the target nodes (the last 50,000) with and without having the fake nodes in the graph.” Pham provides clustering target nodes to determine vulnerability including the use of K-means clustering on node feature vectors, corresponding to grouping the target nodes into a plurality of clusters according to the vulnerable features of the target nodes.), and obtaining the adversarial examples based on the plurality of clusters (Pham Section IV Attacker “For each structure attack, we apply a feature attack with a loss function to measure the result. To calculate the accuracy drop from the data provided, we create a test set in which the ground-truth can be accessed. It is worth noting that the attack will perform on robust defenders, which already have defense mechanisms. With that in mind, the choice of final attack strategy is not only based on the performance on the test set but also consider how likely the defender will detect it (i.e., using conservative attack strategies for unnoticeable perturbations)… Regarding feature attack, FGSM is likely to be detected by defenders because the features tend to lie at the extreme values (i.e., -1.73 or 1.62) as these seem to have higher attacking impacts. The defenders can easily ignore fake features by looking at the distribution and reshape these values (or using any other techniques to deal with outlying kind of values)... The features of fake nodes are updated on their corresponding gradients”; Section VII Conclusion “We aim to reduce the model prediction performance significantly and constraint the fake features to have a similar node value distribution to the origin data” Pham provides performing feature attacks based on the structure attack (clustering of target nodes), which includes the use of perturbations, fake features and ground truth data to measure model performance against an adversarial attack, which is designed to trick the network using the fake features and fake nodes, corresponding to obtaining the adversarial examples based on the plurality of clusters, wherein the generated “fake features” including the “fake nodes” correspond to the “adversarial examples”.).
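The clustering step on which both the claim and Pham's structure attack rely reduces, in sketch form, to running K-means over the target nodes' feature vectors; Pham names K-means as an example method, while the scikit-learn call below is an assumed tooling choice:

from sklearn.cluster import KMeans

def cluster_targets(vuln_feats, n_clusters):
    # Group target nodes by their feature vectors with K-means, the example
    # clustering method Pham names for the structure attack.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(vuln_feats)  # one cluster index per target node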
Pham fails to explicitly teach setting a label for each of the adversarial examples; and training the GNN model by using the adversarial examples with the labels.
However, Wang teaches setting a label for each of the adversarial examples (Wang Section 4.2 Targeted Attack “In our method, when attacking the whole dataset, the fake nodes labels are given by a uniform distribution, which is the same as in the non-targeted attack setting.” Wang provides labels for fake nodes, wherein “fake nodes” are “adversarial examples”, corresponding to setting a label for each of the adversarial examples.); and training the GNN model by using the adversarial examples with the labels (Wang Section 4.1.2 Greedy-GAN Attack “We adopt the GAN-like framework to train both features/adjacency matrices and discriminator parameters iteratively. In experiments, at each epoch we conduct 10, 000 greedy updates for B, C, Xfake and then 10 iterations of D updates. The Greedy-GAN algorithm is given as Algorithm 2. Greedy-GAN supports both adding and dropping links and features. In the algorithm, we add or drop elements according to the absolute gradient of elements, and the one with larger absolute value will be chosen.”; Section 5.3 Parameter Sensitivity “In this experiment, a fixed amount of fake data are added per iteration, i.e. more iterations means more fake data. Figure 2 shows that when training a ”clean” GCN, with no attack, there is only a slight difference in performance, and row-wise normalized GCN is better than the symmetric normalized one.” Wang provides training a Graph Convolutional Network (GCN), including the use of fake nodes, as shown in Algorithm 2, wherein the “fake nodes” are “adversarial examples” designed to trick the network, and include set labels, corresponding to training the GNN model by using the adversarial examples with the labels.).
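To make the combined teaching concrete, adversarial training on the augmented graph may be sketched as below: the graph already contains the labeled fake nodes, and the GNN is trained on real and fake labels together. The optimizer, schedule, and mask convention are assumptions of this sketch:

import torch
import torch.nn.functional as F

def train_with_adversarial_examples(model, adj_aug, feats_aug, labels_aug, train_mask, epochs=200, lr=0.01):
    # Train the GNN on the graph augmented with labeled fake nodes; the mask
    # is assumed to cover both real training nodes and the fake nodes.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(feats_aug, adj_aug)
        loss = F.cross_entropy(logits[train_mask], labels_aug[train_mask])
        loss.backward()
        opt.step()
    return model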
Pham and Wang are both considered to be analogous art to the claimed invention because they are in the same field of artificial intelligence and, more specifically, are both applied to fake nodes in Graph Convolutional Networks. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pham with the above teachings of Wang. Doing so would allow for generating fake features that are most similar to real features (Wang Section 4.1.2 Greedy-GAN Attack “Next we will present the attack based on the idea of Generative Adversarial Network (GAN). The main idea is to add a discriminator to generate fake features that are similar to the original ones.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KURT NICHOLAS PRESSLY whose telephone number is (703)756-4639. The examiner can normally be reached M-F 8-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KURT NICHOLAS PRESSLY/Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125