Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite mental processes of observation, evaluation, and judgment that can be performed with the aid of pen and paper, as well as mathematical functions. This judicial exception is not integrated into a practical application and does not amount to significantly more than the judicial exception because the additional elements are mere extra-solution activity in combination with generic computer hardware performing generic functions to execute the abstract idea. See the analysis below for further details.
Claim 1
Step 1: The claim recites a method; therefore, it falls within the statutory category of a process.
Step 2A Prong 1: The claim recites, inter alia:
Generating a bipartite graph based, at least in part, on the historical interaction data, the bipartite graph representing a computer-based graph representation of the first set of entities as first nodes and the second set of entities as second nodes and interactions between the first nodes and the second nodes as edges; (This is a mental process of a user generating a graph representing the interactions between entities; it can be done with the aid of pen and paper.)
determining final node representations of the first nodes and the second nodes, the final node representations determined by executing a plurality of operations for each node in a graph traversal manner, the plurality of operations comprising:
sampling direct neighbor nodes and skip neighbor nodes associated with a node based, at least in part, on a neighborhood sampling method;
executing direct and skip neighborhood aggregation methods to obtain direct neighborhood embedding and skip neighborhood embedding associated with the node, respectively; and
optimizing, by the server system, a combination of the direct and skip neighborhood embeddings for obtaining a final node representation associated with the node based, at least in part, on a neural network model; (The above steps of determining the final node representation, node traversal, sampling direct and skip neighbor nodes, direct and skip neighborhood aggregation, and optimizing the combination of the direct and skip neighborhood embeddings are all mathematical functions; see instant specification para. 105, 109-111, 117-121.)
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Aside from the limitations above, the claim recites:
accessing historical interaction data comprising a plurality of interactions from a database, each interaction associated with an entity of a first set of entities and at least one entity of a second set of entities; (This amounts to the extra-solution activity of data gathering; see MPEP 2106.05(g).)
executing at least one of a plurality of graph context prediction tasks based, at least in part, on the final node representations of the first nodes and the second nodes. (This amounts to adding the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f).)
using a server system and a bipartite graph neural network (BipGNN) model. (This amounts to using generic computer hardware to execute the abstract idea; see MPEP 2106.05(f).)
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity in combination with generic computer functions implemented to perform the abstract idea disclosed above.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element of "accessing historical interaction data comprising a plurality of interactions from a database, each interaction associated with an entity of a first set of entities and at least one entity of a second set of entities" amounts to data collection or accessing data, which is well-understood, routine, and conventional and does not amount to significantly more. See MPEP 2106.05(d)(II), which states: "The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. … iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93." The limitations of "executing at least one of a plurality of graph context prediction tasks based, at least in part, on the final node representations of the first nodes and the second nodes" and "using a server system and a bipartite graph neural network (BipGNN) model" amount to using machine learning as a tool to apply an abstract idea and using generic hardware to execute the abstract idea; see MPEP 2106.05(f).
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity in combination with generic computer functions implemented to perform the abstract idea disclosed above.
Claim 2
Step 2A Prong 1: The claim recites, inter alia:
wherein the neighborhood sampling method defines a probability function that is proportional to a strength of a weighted edge between a neighboring node and the node. (The above neighborhood sampling method is a mathematical function; see instant specification para. 105, 109-111, 117-121. Also see equation 1 in para. 106, which teaches the probability function.)
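For illustration, such a probability function can be written generically, for a node $u$, a neighboring node $v$, and an edge weight $w_{uv}$, as

$$P(v \mid u) = \frac{w_{uv}}{\sum_{v' \in N(u)} w_{uv'}},$$

where $N(u)$ is the set of neighbors of $u$. This is a generic sketch only and is not represented to be the exact form of equation 1 of the instant specification.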
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Aside from the limitations above, the claim recites:
There are no additional limitations.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. There are no additional limitations.
Claim 3
Step 2A Prong 1: The claim recites, inter alia:
combining the direct and skip neighborhood embeddings of the node to generate a comprehensive node embedding associated with the node based, at least in part, on an attention mechanism. (The combining/fusing of the embeddings of the node is a mathematical function; see instant specification para. 115 and equation 5.)
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Aside from the limitations above, the claim recites:
The additional limitation of using a server system amounts to using generic computer hardware to execute the abstract idea; see MPEP 2106.05(f).
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere generic computer hardware performing generic functions implemented to perform the abstract idea disclosed above.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitation of using a server system amounts to using generic computer hardware to execute the abstract idea; see MPEP 2106.05(f).
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere generic computer hardware performing generic functions implemented to perform the abstract idea disclosed above.
Claim 4
Step 2A Prong 1: The claim recites, inter alia:
wherein the attention mechanism defines attention weights of the direct and skip neighborhood embeddings of the node based, at least in part, on corresponding correlation values of the direct and skip neighborhood embeddings with self-node features of the node. (This is a mathematical function; see instant specification para. 115-116 and equation 5.)
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Aside from the limitations above, the claim recites:
There are no additional limitations.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. There are no additional limitations.
Claim 5
Step 2A Prong 1: The claim recites, inter alia:
Claim 5 inherits the abstract idea of claims 1 and 3-4.
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Aside from the limitations above, the claim recites:
the neural network model represents a decoder model that is configured to maximize mutual information between the comprehensive node embedding and the self-node features of the node. (This limitation is cited at a high level of generality and results in using the neural network model as a tool to implement the abstract idea; see MPEP 2106.05(f).)
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere generic computer hardware performing generic functions implemented to perform the abstract idea disclosed above.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitation of "the neural network model represents a decoder model that is configured to maximize mutual information between the comprehensive node embedding and the self-node features of the node" is cited at a high level of generality and results in using the neural network model as a tool to implement the abstract idea; see MPEP 2106.05(f).
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer hardware performing generic functions implemented to perform the abstract idea disclosed above.
Claim 6
Step 2A Prong 1: The claim recites, inter alia:
Claim 6 inherits the abstract idea of claims 1 and 3-4.
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Aside from the limitations above, the claim recites:
wherein the BipGNN model comprises a plurality of graph neural networks and the decoder model, and wherein the BipGNN model is trained based, at least in part, on a combination of a first loss value and a second loss value. (This limitation is cited at a high level of generality and results in using the neural network model as a tool to implement the abstract idea; see MPEP 2106.05(f). Also, the training of the model is cited at a high level of generality and results in adding the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f).)
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere generic computer hardware performing generic functions implemented to perform the abstract idea disclosed above.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitation of "wherein the BipGNN model comprises a plurality of graph neural networks and the decoder model, and wherein the BipGNN model is trained based, at least in part, on a combination of a first loss value and a second loss value" is cited at a high level of generality and results in using the neural network model as a tool to implement the abstract idea; see MPEP 2106.05(f). Also, the training of the model is cited at a high level of generality and results in adding the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f).
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are generic computer hardware performing generic functions implemented to perform the abstract idea disclosed above.
Claim 7
Step 2A Prong 1: The claim recites, inter alia:
wherein the first loss value preserves mutual information between the comprehensive node embedding and the self-node features of the node, and wherein the second loss value preserves the graph structure of the bipartite graph. (This is a mathematical function; see instant specification para. 121-122, equation 8, and the equation for o_final in para. 122.)
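For illustration, a combined training objective of the kind recited can be sketched generically as

$$\mathcal{L}_{final} = \mathcal{L}_{MI} + \lambda\,\mathcal{L}_{struct},$$

where $\mathcal{L}_{MI}$ is the first loss value (preserving mutual information), $\mathcal{L}_{struct}$ is the second loss value (preserving graph structure), and $\lambda$ is a weighting coefficient. The additive form and the coefficient $\lambda$ are assumptions made for illustration; the actual formulation is given by the specification's equation 8 and the equation for o_final in para. 122.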
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Aside from the limitations above, the claim recites:
There are no additional limitations.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. There are no additional limitations.
Claim 8
Step 2A Prong 1:
Claim 8 inherits the abstract idea of claim 1.
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Aside from the limitations above, the claim recites:
wherein the first set of entities represents a plurality of merchants and the second set of entities represents a plurality of cardholders who have performed at least one payment transaction with at least one of the plurality of merchants. (This amounts to linking the abstract idea to a field of use; see MPEP 2106.05(h).)
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they merely link the abstract idea to a particular field of use.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitation is: "wherein the first set of entities represents a plurality of merchants and the second set of entities represents a plurality of cardholders who have performed at least one payment transaction with at least one of the plurality of merchants," which amounts to linking the abstract idea to a field of use; see MPEP 2106.05(h). It does not amount to significantly more, as it merely limits the abstract idea to using merchant/customer transaction data.
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they merely link the abstract idea to a field of use.
Claim 9
Step 2A Prong 1:
Claim 9 inherits the abstract idea of claim 1.
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Aside from the limitations above, the claim recites:
wherein the plurality of graph context prediction tasks comprises at least one of: (a) predict fraudulent or non-fraudulent payment transactions, (b) calculate an account intelligence score, and (c) calculate a carbon footprint score. (This amounts to linking the abstract idea to a field of use; see MPEP 2106.05(h).)
The additional elements disclosed above, alone or in combination, do not integrate the judicial exception into a practical application, as they merely link the abstract idea to a particular field of use.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitation is: "wherein the plurality of graph context prediction tasks comprises at least one of: (a) predict fraudulent or non-fraudulent payment transactions, (b) calculate an account intelligence score, and (c) calculate a carbon footprint score," which amounts to linking the abstract idea to a field of use; see MPEP 2106.05(h). It does not amount to significantly more, as it merely limits the abstract idea to particular prediction tasks performed on transaction data.
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they merely link the abstract idea to a field of use.
Claim 10
Step 2A Prong 1:
Claim 10 inherits the abstract idea of claim 1.
Step 2A Prong 2:
This judicial exception is not integrated into a practical application. Aside from the limitations above, the claim recites:
a server system configured to perform the computer-implemented method as claimed in claim 1. (This amounts to using generic computer hardware to execute the abstract idea of claim 1; see MPEP 2106.05(f).)
The additional elements as disclosed above alone or in combination do not integrate the judicial exception into a practical application as they are mere generic computer hardware used to execute the abstract idea.
Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitation of "a server system configured to perform the computer-implemented method as claimed in claim 1" amounts to using generic computer hardware to execute the abstract idea; see MPEP 2106.05(f).
The additional elements disclosed above, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are mere generic computer hardware performing generic functions used to implement the abstract idea.
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because it is a system claim that does not claim any physical elements. Looking to the specification, para. [0072] states that the server is embodied as a cloud-based and/or SaaS-based (software as a service) architecture. Under the broadest reasonable interpretation, the server system is therefore software, and the claim is rejected as being directed to software per se. Thus, the claim recites only software per se (descriptive material covered in MPEP 2106.01), which constitutes non-statutory subject matter.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because it repeats information given in the title. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over He et al. ("Bipartite Graph Neural Networks for Efficient Node Representation Learning" – hereinafter He) in view of Xue et al. ("Multi-hop Hierarchical Graph Neural Networks" – hereinafter Xue).
In regards to claim 1, He discloses a computer-implemented method comprising:
accessing, by a server system, historical interaction data comprising a plurality of interactions from a database, each interaction associated with an entity of a first set of entities and at least one entity of a second set of entities and generating, by the server system, a bipartite graph based, at least in part, on the historical interaction data, the bipartite graph representing a computer-based graph representation of the first set of entities as first nodes and the second set of entities as second nodes and interactions between the first nodes and the second nodes as edges; (He, page 1, Introduction section, cites "A bipartite graph (Fig. 1) is a graph whose vertices are divided into two independent partitions such that every edge connects nodes from one partition to the other. For example, in an e-commerce recommendation system, the two distinct partitions are represented by users and products, and an edge from a member from one partition to a member of the other represents the user purchasing the product [13].", which teaches a bipartite graph being created based on a set of users (first set of entities), items or products (second set of entities), and historical purchase data (historical interaction data) used as edges between the two sets. He, page 7, section 4.2, last paragraph, teaches experiments conducted on a GPU server. Also see figure 1 on page 2.)
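For illustration only, the following minimal Python sketch (not drawn from He; the record format (user, item, weight) and all names are assumptions made for illustration) shows how a bipartite graph of the kind He describes could be built from historical interaction records:

from collections import defaultdict

def build_bipartite_graph(interactions):
    """Build adjacency maps for a bipartite graph G(U, V, E).

    interactions: iterable of (user, item, weight) records, e.g. purchases.
    Returns one adjacency map per partition; every edge connects a node in
    U (users) to a node in V (items), as in He's Fig. 1.
    """
    u_adj = defaultdict(dict)  # user -> {item: accumulated edge weight}
    v_adj = defaultdict(dict)  # item -> {user: accumulated edge weight}
    for user, item, weight in interactions:
        u_adj[user][item] = u_adj[user].get(item, 0.0) + weight
        v_adj[item][user] = v_adj[item].get(user, 0.0) + weight
    return u_adj, v_adj

# Example: three purchase records yield two user nodes, two item nodes,
# and three weighted edges between the partitions.
u_adj, v_adj = build_bipartite_graph(
    [("u1", "laptop", 1.0), ("u2", "laptop", 1.0), ("u1", "phone", 2.0)])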
determining, by the server system, final node representations of the first nodes and the second nodes based, at least in part, on a bipartite graph neural network (BipGNN) model, the final node representations determined by executing a plurality of operations for each node in a graph traversal manner, the plurality of operations comprising: (He, page 1, abstract, cites "Thus, we propose Cascade Bipartite Graph Neural Networks, Cascade-BGNN, a novel node representation learning for bipartite graphs that is domain-consistent, self-supervised, and efficient." and "Moreover, we formulate a multi-layer BGNN in a cascaded training manner to enable multi-hop relationship modeling while improving training efficiency." This teaches a bipartite graph neural network that learns node representations by graph traversal (multi-hop) operations. Also see algorithm 1 on page 5, which teaches that the input is a graph G(U, V, E) (wherein U is users, V is items, and E is the edges between the two; see fig. 1) and the output is the node representations Zu and Zv, which are the final node representations found by graph traversals (hops).)
sampling, by the server system, direct neighbor nodes and skip neighbor nodes associated with a node based, at least in part, on a neighborhood sampling method; (He, page 4, left column, second paragraph, cites "As we can see from Eq.7, there are two important characteristics that differ IDMP from conventional GCNs: 1. IDMP only performs aggregation on each node's neighbor nodes without involving the node itself, while conventional GCN methods usually consider the self-loop computation; 2. The propagation is only one-hop-neighbor aware.", and page 5, algorithm 1, teaches doing one hop from U to V, i.e., a depth of 1: users (U in figure 1) and items (V in figure 1) are direct-neighbor or one-hop connections. He, page 1, abstract, cites "Moreover, we formulate a multi-layer BGNN in a cascaded training manner to enable multi-hop relationship modeling while improving training efficiency.", and page 5, second paragraph, cites "We argue that cascaded training can embed information from multi-hops in bipartite graphs…"; in combination with algorithm 1 on page 5 and figure 1, this teaches skip neighbor nodes. Fig. 1 shows a skip neighbor node: the red connection between the two users is the result of a two-hop connection, wherein the connection goes from the middle U user to the laptop in V and from V to the bottom U user, thus connecting the middle U to the bottom U using two hops, i.e., skip neighbor nodes. He, page 5, section 3.4, teaches the sampling method used for the bipartite graph neural network.)
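For illustration only, a minimal Python sketch of collecting direct (1-hop) and skip (2-hop) neighbors of a node in such a bipartite graph, reusing the adjacency maps sketched above; uniform sampling is an assumption made purely for illustration (He, page 5, section 3.4, describes its own sampling scheme):

import random

def sample_neighbors(u_adj, v_adj, node, k):
    """Sample up to k direct and k skip neighbors of a U-partition node.

    In a bipartite graph, 1-hop (direct) neighbors lie in the opposite
    partition V, and 2-hop (skip) neighbors lie back in U (e.g., middle
    user -> laptop -> bottom user in He's Fig. 1).
    """
    direct = list(u_adj[node])  # 1-hop neighbors (items)
    skip = {u for v in direct for u in v_adj[v] if u != node}  # 2-hop (users)
    direct_sample = random.sample(direct, min(k, len(direct)))
    skip_sample = random.sample(sorted(skip), min(k, len(skip)))
    return direct_sample, skip_sample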
executing, by the server system, at least one of a plurality of graph context prediction tasks based, at least in part, on the final node representations of the first nodes and the second nodes. (He, page 5, section 3.4, cites "The final representation can be extracted in the last layer 𝐾, which can then be used for downstream tasks.", which teaches extracting a final node representation and using it for a downstream task. Then on page 7, section 4.3, it cites "We evaluated our BGNN results on a classification downstream task with 𝐹1 score [6] which is a popular metric for classification. For binary classification on the Tencent dataset, we report 𝐹1 scores. For other multi-classification tasks, we use both micro- and macro-averaged 𝐹1 scores." This is a prediction task for which the final node representations are used.)
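For illustration only, a minimal Python sketch of a downstream classification task over final node representations of the kind He evaluates with F1 scores; the use of scikit-learn and logistic regression is an assumption made for illustration:

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def evaluate_downstream(Z_train, y_train, Z_test, y_test):
    """Train a classifier on final node representations Z and report a
    micro-averaged F1 score, mirroring the kind of downstream evaluation
    He describes in section 4.3."""
    clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
    return f1_score(y_test, clf.predict(Z_test), average="micro")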
However, He does not explicitly disclose executing, by the server system, direct and skip neighborhood aggregation methods to obtain direct neighborhood embedding and skip neighborhood embedding associated with the node, respectively; and optimizing, by the server system, a combination of the direct and skip neighborhood embeddings for obtaining a final node representation associated with the node based, at least in part, on a neural network model.
Xue discloses executing direct and skip neighborhood aggregation methods to obtain direct neighborhood embedding and skip neighborhood embedding associated with the node, respectively; (Xue, figure 2 and page 85, left column, cites "We define $h_i^k$ as $k$-hop features. Following above observation, $h_i^k$ can be obtained as:

$$h_i^k = \mathrm{AGGREGATION}(h_i^{k-1}) \quad (5)$$
where AGGREGATION is any aggregation functions. For preserving features independently, all hop-level features apply concatenation operation to connect features in series. Thus, multi-hop features
$$h_i = [\,h_i^0 \,\|\, h_i^1 \,\|\, \cdots \,\|\, h_i^K\,] = \big\Vert_{k=1}^{K} \mathrm{AGGREGATION}(h_i^{k-1}) \quad (6)$$

where K is the maximum hop set manually and || represents concatenation operation.
Multi-hop hierarchical structure makes full use of neighbors features and concatenates all hop-level features in one layer successfully and effectively.”
These two excerpts together teach direct neighbor aggregation and skip neighbor aggregation associated with the node. In fig. 2, the yellow node is the current node, blue nodes are 1-hop neighbors, and green nodes are 2-hop neighbors, giving a hierarchical aggregation from hop to hop. The citation above, in equation 5, teaches aggregating nodes of the same level: 1-hop nodes (direct neighbors, $h_i^1$) are aggregated together, 2-hop nodes (skip neighbor nodes, $h_i^2$) are aggregated together, and so on. The citation above goes on to teach that, for preserving features (embeddings) independently, all hop-level features apply a concatenation operation to connect features in series. This is shown in equation 6, where all the levels are aggregated together but preserved independently. Equation 6 teaches $h_i^0$ (0-hop, the node itself), $h_i^1$ (the aggregation of 1-hop nodes, i.e., direct neighbors), $h_i^2$ (the aggregation of 2-hop nodes, i.e., skip neighbor nodes), and so on, to obtain $h_i$, which is the combination of both the direct and skip neighborhood aggregations.)
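For illustration only, a minimal Python sketch of the hop-level aggregation and concatenation of equations 5 and 6 above; mean aggregation is assumed as the AGGREGATION function, and flat per-node feature vectors are assumed:

import numpy as np

def multi_hop_features(h0, hop_neighbor_feats, K):
    """Compute the multi-hop feature h_i of equations 5 and 6.

    h0: the node's own (0-hop) feature vector.
    hop_neighbor_feats[k]: list of feature vectors of the node's
    (k+1)-hop neighbors, for k = 0..K-1.
    Each hop level is aggregated separately (eq. 5), then all hop-level
    features are concatenated so each level stays independent (eq. 6).
    """
    levels = [h0]
    for k in range(K):
        levels.append(np.mean(hop_neighbor_feats[k], axis=0))  # eq. 5
    return np.concatenate(levels)                              # eq. 6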
and optimizing, by the server system, a combination of the direct and skip neighborhood embeddings for obtaining a final node representation associated with the node based, at least in part, on a neural network model. (Xue, pages 85-86, section "Attention Mechanism", teaches using an attention mechanism to weight the different hop-level embeddings and produce a final node representation. Equation 7 teaches computing an attention score between the 0-hop and k-hop features, which covers both direct and skip neighbor nodes; equation 9 normalizes the attention scores; and equation 10 gives the final node embedding.)
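For illustration only, a minimal Python sketch of combining hop-level embeddings with attention in the spirit of Xue's equations 7, 9, and 10; the dot-product score against the 0-hop (self) features is an assumption standing in for Xue's exact scoring function:

import numpy as np

def attention_combine(h_levels):
    """Combine hop-level embeddings into a final node embedding.

    h_levels[0] is the 0-hop (self) embedding; h_levels[k] is the k-hop
    aggregate. Each level is scored against the self features (cf. eq. 7),
    the scores are softmax-normalized (cf. eq. 9), and the weighted sum is
    returned as the final node embedding (cf. eq. 10).
    """
    h0 = h_levels[0]
    scores = np.array([h0 @ hk for hk in h_levels])  # assumed scoring rule
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax normalization
    return sum(w * hk for w, hk in zip(weights, h_levels))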
It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify the teachings of He with those of Xue in order to allow for executing direct and skip neighborhood aggregation, as both references deal with using graph neural networks and performing hop traversals. The benefit of doing so is that it allows for integrating information from further multi-hop nodes and for controlling the importance of hop-level features using an attention mechanism, creating a more efficient and accurate system.
In regards to claim 3, He in view of Xue discloses the computer-implemented method as claimed in claim 1, wherein the plurality of operations further comprises: combining, by the server system, the direct and skip neighborhood embeddings of the node to generate a comprehensive node embedding associated with the node based, at least in part, on an attention mechanism. (Xue, equation 6, shows the comprehensive node embedding, as it combines both the direct and skip neighborhood node embeddings. Also, the abstract teaches the use of an attention mechanism, wherein it cites "Besides, MHGNNs also use attention mechanism during the integrated step, which mines latent relationships among hops and adaptively selects important hop-level features.")
In regards to claim 4, He in view of Xue discloses the computer-implemented method as claimed in claim 3, wherein the attention mechanism defines attention weights of the direct and skip neighborhood embeddings of the node based, at least in part, on corresponding correlation values of the direct and skip neighborhood embeddings with self-node features of the node. (See Xue, page 4, section B, "Attention Mechanism", which teaches that different hop levels have different importance for a current node (self-node) and thus have learnable weights. This teaches weights corresponding to correlation values of the direct and skip node embeddings with the self-node. Also see section III, which gives various aggregation methods and equations along with the use of weight matrices.)
In regards to claim 10, He in view of Xue discloses a server system configured to perform the computer-implemented method as claimed in claim 1. (He, page 7, section 4.2, last paragraph, teaches experiments conducted on a GPU server.)
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over He et al. ("Bipartite Graph Neural Networks for Efficient Node Representation Learning" – hereinafter He) in view of Xue et al. ("Multi-hop Hierarchical Graph Neural Networks" – hereinafter Xue) and in further view of Ribeiro et al. ("Sampling Directed Graphs with Random Walks" – hereinafter Ribeiro).
In regards to claim 2, He in view of Xue discloses the computer-implemented method as claimed in claim 1, but fails to explicitly disclose wherein the neighborhood sampling method defines a probability function that is proportional to a strength of a weighted edge between a neighboring node and the node.
Ribeiro teaches a sampling method that defines a probability function proportional to the strength of a weighted edge between nodes. (Ribeiro, page 3, section C, cites "In a weight graph a walker traverses a given edge with probability proportional to the weight of this edge.". This means that sampling a node, or traversing from one node to the next, occurs with probability proportional to the edge weight between the two nodes.)
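For illustration only, a minimal Python sketch of the traversal rule Ribeiro describes, in which a walker moves across an edge with probability proportional to that edge's weight; the adjacency-dictionary representation is an assumption made for illustration:

import random

def weighted_step(adj, node):
    """Take one weighted random-walk step from node.

    adj[node] maps each neighbor v to the edge weight w(node, v). The
    next node is drawn with probability proportional to the edge weight,
    i.e. P(v | u) = w(u, v) / sum over v' of w(u, v').
    """
    neighbors = list(adj[node])
    weights = [adj[node][v] for v in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]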
It would have been obvious to one of ordinary skill in the art before the earliest effective filing date to modify the teachings of He in view of Xue with those of Ribeiro in order to allow the neighborhood sampling to follow a probability function proportional to the weighted edge, as all of the references deal with using graphs of nodes connected by edges. The benefit of doing so is that it allows the most important and most likely connections to be sampled first, as they are the most often used.
Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over He et al. ("Bipartite Graph Neural Networks for Efficient Node Representation Learning" – hereinafter He) in view of Xue et al. ("Multi-hop Hierarchical Graph Neural Networks" – hereinafter Xue) and in further view of Belle et al. ("Representation Learning in Graphs for Credit Card Fraud Detection" – hereinafter Belle).
In regards to claim 8, He in view of Xue discloses the computer-implemented method as claimed in claim 1, but does not explicitly disclose wherein the first set of entities represents a plurality of merchants and the second set of entities represents a plurality of cardholders who have performed at least one payment transaction with at least one of the plurality of merchants.
Belle discloses wherein the first set of entities represents a plurality of merchants and the second set of entities represents a plurality of cardholders who have performed at least one payment transaction with at least one of the plurality of merchants. (Belle, page 4, section 3.1, titled "Graph Structure", cites "Transaction networks, in contrast, contain two distinctive types of entities: cardholders and merchants. Transactions can be represented as edges connecting merchants and cardholders." And the caption of figure 1 on page 1 cites "Fig. 1. Left: bipartite graph with cardholders and merchants connected through edges representing transactions.".)
It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify the teachings of He in view of Xue with those of Belle in order to allow for creating a bipartite graph from purchases between merchants and cardholders, as both He and Belle deal with the use of bipartite graphs. The benefit of doing so is that it allows for the detection of credit card fraud through use of the bipartite graph, thus creating a more secure network.
In regards to claim 9, He in view of Xue discloses the computer-implemented method as claimed in claim 1, but does not explicitly disclose wherein the plurality of graph context prediction tasks comprises at least one of: (a) predict fraudulent or non-fraudulent payment transactions, (b) calculate an account intelligence score, and (c) calculate a carbon footprint score.
Belle discloses wherein the plurality of graph context prediction tasks comprises at least one of: (a) predict fraudulent or non-fraudulent payment transactions, (b) calculate an account intelligence score, and (c) calculate a carbon footprint score. (Belle, second paragraph, cites "In this paper, we scrutinize how representation learning can improve automated, ML-based fraud detection."; page 2, last paragraph, to page 3, first paragraph, cites "Machine learning techniques for fraud detection can be categorized as either supervised or unsupervised…, supervised fraud detection starts from a labeled dataset containing transactions which are known to either be genuine or fraudulent."; and page 13, section 6, cites "The major objective of this work was to assess the feasibility for graph representation learning in a credit card fraud detection setting.". These citations show that Belle teaches predicting fraudulent credit card transactions, satisfying option (a).)
It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify the teachings of He in view of Xue with those of Belle in order to allow for creating a bipartite graph from purchases between merchants and cardholders and using that bipartite graph for predicting fraudulent credit card transactions, as both He and Belle deal with the use of bipartite graphs. The benefit of doing so is that it allows for the detection of credit card fraud through use of the bipartite graph, thus creating a more secure network.
Allowable Subject Matter
Claims 5-7 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: none of the cited prior art references, alone or in combination, disclose wherein the neural network model includes a decoder that is configured to maximize the mutual information between a comprehensive node embedding and the self-node features of the node. As such, claim 5 and all claims dependent on it have been objected to as being dependent on rejected claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAULINHO E SMITH whose telephone number is (571)270-1358. The examiner can normally be reached Mon-Fri. 10AM-6PM CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abdullah Kawsar can be reached at 571-270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAULINHO E SMITH/Primary Examiner, Art Unit 2127