Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claim(s)
Claim(s) 1-20 were previously pending and were rejected in the previous Office action. Claim(s) 1, 6-7, 9-10, 15-16, and 18-19 were amended. Claim(s) 2-5, 8, 11-14, 17, and 20 were left as originally/previously presented. Claim(s) 1-20 are currently pending and have been examined.
Priority
Acknowledgment is made of applicant’s claim for foreign priority filed in China on August 08, 2021, under 35 U.S.C. 119(a)-(d).
Information Disclosure Statement
The information disclosure statement (IDS) submitted on October 29, 2025, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Claim Objections
Applicant’s arguments, see page 9 of Applicant’s Response, filed January 21, 2026, with respect to the Claim Objections have been fully considered and are persuasive. The claim objection(s) have been withdrawn.
Claim Rejections - 35 USC § 101
Applicant’s arguments, see pages 9-11 of Applicant’s Response, filed January 21, 2026, with respect to the 35 USC § 101 rejection of Claim(s) 1-20 have been fully considered but they are not persuasive.
First, Applicant argues, on pages 9-10, that the amended Independent Claim(s) 1, 10, and 19 do not fall within the revised Step 2A Prong 1 framework under the grouping of “Certain Methods of Organizing Human Activity.” Examiner respectfully disagrees.
As an initial matter, Courts have provided that the various sub-groupings within the organizing human activity grouping encompass both activity of a single person (for example, a person following a set of instructions or a person signing a contract online) and activity that involves multiple people (such as a commercial interaction), and thus, certain activity between a person and a computer (for example, a method of anonymous loan shopping that a person conducts using a mobile phone) may fall within the "certain methods of organizing human activity" grouping. It is also noted that the number of people involved in the activity is not dispositive as to whether a claim limitation falls within this grouping. Instead, the determination should be based on whether the activity itself falls within one of the sub-groupings, see MPEP 2106.04(a)(2)(II).
Examiner respectfully notes that the following specific limitation(s) fall within the subject matter groupings of the abstract idea. Independent Claim(s) 1, 10, and 19 recite “determining node representations of a first graph node and a second graph node based on performing node representation propagation and node representation aggregation starting from the first graph node and the second graph node,” “generating a node relationship representation between the first graph node and the second graph node in graph data based on the node representations,” “wherein the node representation aggregation comprising a plurality of iterations, a node propagation representation of a previous iteration of each source graph node in a source graph node set of a current iteration is propagated to each target graph node set comprises neighboring graph nodes of the source graph node,” “wherein node propagation representations of the current iteration of target graph nodes are generated based on node propagation representations received by the target graph nodes and node propagation representations of the target graph nodes of the previous iteration,” and “wherein a node representation of a current iteration of an aggregation graph node is generated based on a node representation of the previous iteration of the aggregation graph node and a node representation of the previous iteration of a neighboring graph node of the aggregation graph node, the aggregation graph node comprises the first graph node or the second graph node, and an initial node representation of a graph node is generated based on a node propagation representation of the graph node and an original feature of the graph node, wherein the node representation of the current iteration of the aggregation graph node is generated based on aggregating the node representation of the previous iteration of the aggregation graph node and the node representation of the previous iteration of then neighboring graph node of the aggregation graph node by using a function.” These step(s)/function(s) are merely certain methods of organizing human activity: fundamental economic principles or practices, and/or commercial or legal interactions (e.g., marketing or sales activities or behaviors and/or business relations) and/or managing personal behavior or relationships or interactions between people (e.g., including following rules or instructions).
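For illustration only, the recited propagation and aggregation steps can be sketched at the level of generality at which they are claimed. The sketch below is a minimal, hypothetical reading using toy scalar values; the function names, the toy graph, and the mean-based combining function are assumptions introduced for illustration and are not taken from applicant’s claims or specification.

```python
# Toy graph: adjacency list mapping each node to its neighboring nodes.
GRAPH = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

def propagate_step(prop, graph):
    """Each source node sends its previous-iteration propagation
    representation to its neighboring (target) nodes; each target combines
    what it received with its own previous representation (here, a mean)."""
    received = {n: [] for n in graph}
    for src, neighbors in graph.items():
        for tgt in neighbors:
            received[tgt].append(prop[src])
    return {
        n: (sum(msgs) + prop[n]) / (len(msgs) + 1)
        for n, msgs in received.items()
    }

def aggregate_step(rep, graph, agg=lambda xs: sum(xs) / len(xs)):
    """A node's current-iteration representation is generated from its own
    previous-iteration representation and those of its neighbors, combined
    by a function (here, an unweighted mean as a placeholder)."""
    return {n: agg([rep[n]] + [rep[m] for m in graph[n]]) for n in graph}

# Initial representations generated from (hypothetical) original features.
features = {"A": 1.0, "B": 2.0, "C": 3.0}
prop = dict(features)
rep = dict(features)
for _ in range(2):  # "a plurality of iterations"
    prop = propagate_step(prop, GRAPH)
    rep = aggregate_step(rep, GRAPH)

# A node relationship representation between the first and second graph
# nodes, here simply paired from the two final node representations.
relationship = (rep["A"], rep["C"])
```

As the sketch shows, each step is an arithmetic combination over small sets of values, which is the level of generality at issue in the eligibility analysis above.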
Similar to Credit Acceptance Corp. v. Westlake Services, where the court found that processing a credit application between a customer and a dealer, where the business relation is the relationship between the customer and the dealer during the vehicle purchase, was merely a commercial transaction, which is a form of certain methods of organizing human activity. In this case, the claim(s) are similar to an entity that is able to determine a service relationship prediction using a graph or chart, which is merely a business relation. Thus, applicant’s claims fall within at least the enumerated grouping of certain methods of organizing human activity.
Furthermore, the courts do not distinguish between mental processes that are performed by humans and claims that recite mental processes performed on a computer, see MPEP 2106.04(a)(2)(III). As the Federal Circuit has explained, "[c]ourts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015).
Similar to Electric Power Group, LLC v. Alstom S.A., where the court found that a claim to "collecting information, analyzing it, and displaying certain results of the collection and analysis" recited data analysis steps at a high level of generality such that they could practically be performed in the human mind. Here, applicant’s claim limitations are recited at a high level of generality such that they can be performed in the human mind, when the limitations recite determining node representations of a first graph node and a second graph node based on performing node representation propagation and node representation aggregation starting from the first graph node and the second graph node (e.g., analyzing). The system will generate a node relationship representation between the first graph node and the second graph node in graph data based on the node representations (e.g., analyzing). The node representation aggregation comprises a plurality of iterations: a node propagation representation of a previous iteration of each source graph node in a source graph node set of a current iteration is propagated to each target graph node set that comprises neighboring graph nodes of the source graph node, wherein node propagation representations of the current iteration of target graph nodes are generated based on node propagation representations received by the target graph nodes and node propagation representations of the target graph nodes of the previous iteration (e.g., analyzing).
The node representation of the current iteration of an aggregation graph node is generated based on a node representation of the previous iteration of the aggregation graph node and a node representation of the previous iteration of a neighboring graph node of the aggregation graph node; the aggregation graph node comprises the first graph node or the second graph node, and an initial node representation of a graph node is generated based on a node propagation representation of the graph node and an original feature of the graph node, wherein the node representation of the current iteration of the aggregation graph node is generated based on aggregating the node representation of the previous iteration of the aggregation graph node and the node representation of the previous iteration of the neighboring graph node of the aggregation graph node (e.g., analyzing). Thus, analyzing that information via graph node traversal to determine information is merely a mental process that can be performed in the human mind or using pencil and paper. Therefore, the claim(s) recite at least an abstract idea of mental processes. However, even assuming, arguendo, that applicant’s position that the claims cannot be performed mentally has merit, the claims would still fall under certain methods of organizing human activity, see the above analysis.
Second, applicant argues, on pages 9-12 of applicant’s arguments, that the application is now integrated into a practical application. Examiner respectfully disagrees with applicant’s arguments.
As an initial matter, it is important to note that first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. That is, the claim includes the components or steps of the invention that provide the improvement described in the specification. The claim itself does not need to explicitly recite the improvement described in the specification (e.g., "thereby increasing the bandwidth of the channel"), see MPEP 2106.04(d)(1). An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107. In this respect, the improvement consideration overlaps with other considerations, specifically the particular machine consideration (see MPEP § 2106.05(b)), and the mere instructions to apply an exception consideration (see MPEP § 2106.05(f)). Thus, evaluation of those other considerations may assist examiners in making a determination of whether a claim satisfies the improvement consideration.
Here, the specification discloses that the system improves the accuracy of predicting a service relationship between the graph nodes, see applicant’s specification paragraph(s) 0004, 0047, and 0094-0095. This is, at best, an improvement to the business process (e.g., the abstract idea) itself rather than a technological improvement.
First, the step(s) of accomplishing this desired improvement are set forth in the specification in a blanket, conclusory manner, merely making a bare assertion of the improvement without any details of how the graph neural network improves the graph node predictions using a non-conventional and non-generic arrangement of components and/or architectures, see applicant’s specification paragraph 0047 and applicant’s arguments on pages 10-11; thus, when the specification states the improvement in a conclusory manner, the examiner should not determine the claim improves technology.
Furthermore, while the specification discloses that the GNN includes LSTM aggregators and Attention mechanisms, which are specific, complex neural network architectures that focus computational resources, see applicant’s arguments on pages 10-11, this is at best an improvement to the abstract idea itself (e.g., making accurate predictions for service relationships) rather than a technological improvement.
Also, see "[P]atents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101." See, Recentive Analytics, Inc. v. Fox Corp. Here, applicant(s) specification nor claim amendments provide how this particular architecture of the graph neural network using the LSTM aggregator and Attention mechanism is improved. In fact, applicant’s specification provides a list of well-known machine-learning graph neural networks such that the graph neural network can include, but are not limited to, a graph convolutional network (GCN), a graph attention network (GAT), etc., see applicant’s specification paragraph(s) 0058, thus any algorithm can be used to achieve such functions/steps.
While applicant argues, see applicant’s arguments on pages 10-11, that the GNN can iteratively refine the neighbor propagation, which in turn provides an improvement to the computer’s ability to represent latent relationships in sparse graph data, Examiner respectfully disagrees.
The court in Recentive Analytics, Inc. v. Fox Corp., 134 F.4th 1205 (Fed. Cir. 2025), stated “[t]he requirements that the machine learning model be ‘iteratively trained’ or dynamically adjusted in the Machine Learning Training patents do not represent a technological improvement” because “[i]terative training using selected training material and dynamic adjustments based on real-time changes are incident to the very nature of machine learning.” Id. at 1212. Here, the GNN function is not an improvement when the GNN is merely iteratively trained using previously determined nodes, because merely using a GNN algorithm/function to iteratively train on node changes is inherent to the nature of machine learning and is not, in itself, a technological improvement. Thus, applicant’s argument is not persuasive.
Also, another important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107. In this respect, the improvement consideration overlaps with other considerations, specifically the particular machine consideration (see MPEP §2106.05(b)), and the mere instructions to apply an exception consideration (see MPEP § 2106.05(f)). Thus, evaluation of those other considerations may assist examiners in making a determination of whether a claim satisfies the improvement consideration.
Similar to Affinity Labs v. DirecTV, the court has held that the use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. Here, applicant’s limitations merely determine, generate, iterate, and aggregate, respectively, information using computer components that operate in their ordinary capacity (e.g., an aggregation function, a non-transitory, computer-readable medium, a computer system, computer memory devices, and computers), which is no more than “applying” the judicial exception.
Also, see the recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). In this case, the claims lack details as to how the interaction of the graph neural network’s LSTM aggregator and attention mechanism is improved. The claims lack details as to how the GNN layers and aggregation functions interact with one another and how the node features are being used to improve the system. Therefore, applicant’s arguments are not persuasive.
Claim Rejections - 35 USC § 103
Applicant’s arguments with respect to Claim(s) 1-20 have been considered but are moot because the arguments do not apply to the combination of references being used in the current rejection.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 2A Prong 1: Independent Claim(s) 1, 10, and 19 recites an entity using a graph to then predict a service relationship. Independent Claim(s) 1, 10, and 19, as a whole recites limitation(s) that are directed to the abstract idea(s) of certain methods of organizing human activity: commercial or legal interactions (e.g., behaviors and/or business relations) and/or managing personal behavior or relationships or interactions between people (e.g., following rules or instructions) and/or mental processes (e.g., observation, evaluation, judgment, and/or opinion).
Independent Claim(s) 1, 10, and 19 recite “determining node representations of a first graph node and a second graph node based on performing node representation propagation and node representation aggregation starting from the first graph node and the second graph node,” “generating a node relationship representation between the first graph node and the second graph node in graph data based on the node representations,” “wherein the node representation aggregation comprising a plurality of iterations, a node propagation representation of a previous iteration of each source graph node in a source graph node set of a current iteration is propagated to each target graph node set comprises neighboring graph nodes of the source graph node,” “wherein node propagation representations of the current iteration of target graph nodes are generated based on node propagation representations received by the target graph nodes and node propagation representations of the target graph nodes of the previous iteration,” and “wherein a node representation of a current iteration of an aggregation graph node is generated based on a node representation of the previous iteration of the aggregation graph node and a node representation of the previous iteration of a neighboring graph node of the aggregation graph node, the aggregation graph node comprises the first graph node or the second graph node, and an initial node representation of a graph node is generated based on a node propagation representation of the graph node and an original feature of the graph node, wherein the node representation of the current iteration of the aggregation graph node is generated based on aggregating the node representation of the previous iteration of the aggregation graph node and the node representation of the previous iteration of then neighboring graph node of the aggregation graph node by using a function.” These step(s)/function(s) are merely certain methods of organizing human activity: commercial or 
legal interactions (e.g., behaviors and/or business relations) and/or managing personal behavior or relationships or interactions between people (e.g., following rules or instructions) and/or mental processes (e.g., observation, evaluation, judgment, and/or opinion).
Furthermore, as explained in the MPEP and the October 2019 Update, where a series of step(s) recite judicial exceptions, examiners should combine all recited judicial exceptions and treat the claim as containing a single judicial exception for purposes of further eligibility analysis. (See MPEP 2106.04, 2106.05(II) and October 2019 Update at Section I.B.). For instance, in this case, Independent Claim(s) 1, 10, and 19 are similar to an entity that generates a graph to predict service relationships. The mere recitation of generic computer components (Claim 1: an aggregation function; Claim 10: a non-transitory, computer-readable medium, a computer system, and an aggregation function; and Claim 19: one or more computers, one or more computer memory devices, tangible, non-transitory, machine-readable media, and an aggregation function) does not take the claims out of the enumerated grouping of certain methods of organizing human activity. Therefore, Independent Claim(s) 1, 10, and 19 recite the above abstract idea(s).
Step 2A Prong 2: This judicial exception is not integrated into a practical application because the claims as a whole describe how to generally “apply” the concept(s) of “determining,” “generating,” “iteration,” and “aggregating,” respectively, information in a computer environment. The limitations that amount to “apply it” are as follows (Claim 1: an aggregation function; Claim 10: a non-transitory, computer-readable medium, a computer system, and an aggregation function; and Claim 19: one or more computers, one or more computer memory devices, tangible, non-transitory, machine-readable media, and an aggregation function). Examiner notes that the non-transitory, computer-readable medium, computer system, one or more computers, one or more computer memory devices, aggregation function, and tangible, non-transitory, machine-readable media, respectively, are recited so generically that they represent no more than mere instructions to apply the judicial exception on a computer.
Similar to Affinity Labs v. DirecTV, the court has held that the use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. Here, applicant’s limitations merely determine, generate, and aggregate node information using computer components that operate in their ordinary capacity (e.g., a non-transitory, computer-readable medium, a computer system, one or more computers, one or more computer memory devices, an aggregation function, and tangible, non-transitory, machine-readable media), which is no more than using instructions to implement the abstract idea using generic computer components, wherein the focus of the claim as a whole is directed to a result or effect that itself is the abstract idea, thus merely amounting to “applying” the judicial exception.
Also, see the recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015).
Also, see Recentive Analytics, Inc. v. Fox Corp., 134 F.4th 1205 (Fed. Cir. 2025). In that case, similar to here, “[t]he requirements that the machine learning model be ‘iteratively trained’ or dynamically adjusted in the Machine Learning Training patents do not represent a technological improvement” because “[i]terative training using selected training material and dynamic adjustments based on real-time changes are incident to the very nature of machine learning.” Id. at 1212. Each of the above limitations simply implements an abstract idea and is no more than mere instructions to apply the exception using a generic computer component, which is not a practical application of the abstract idea. Therefore, when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application and the claims are directed to the above abstract idea(s).
Step 2B: The claim(s) do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as noted previously, the claims as a whole merely describe how to generally “apply,” the abstract idea in a computer environment. Thus, even when viewed as a whole, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are ineligible.
Claim(s) 3-4, 6, 9, 12-13, 15, and 18: The various metrics of Dependent Claim(s) 3-4, 6, 9, 12-13, 15, and 18 merely narrow the previously recited abstract idea limitations. For the reasons described above with respect to Independent Claim(s) 1 and 10, these judicial exceptions are not meaningfully integrated into a practical application, nor significantly more than an abstract idea.
Claim(s) 2, 11, and 20: The additional limitations of “generating” and “splicing” are further directed to a certain method of organizing human activity and/or mental processes, as described in Independent Claim(s) 1, 10, and 19. The non-transitory, computer-readable medium is recited so generically that it represents no more than mere instructions to apply the judicial exception on a computer. Claim(s) 2, 11, and 20 recite “wherein generating the node relationship representation between the first graph node and the second graph node comprises: splicing the node representations of the first graph node and the second graph node to generate the node relationship representation between the first graph node and the second graph node.” These step(s)/function(s) fall within the enumerated grouping of certain methods of organizing human activity: commercial or legal interactions (e.g., business relations) and/or managing personal behavior or relationships or interactions between people (e.g., following rules or instructions) and/or mental processes (e.g., evaluation, judgment, observation, and/or opinion).
Similar to Affinity Labs v. DirecTV, the court has held that tasks to receive, store, or transmit data are additional elements that amount to no more than “applying” the judicial exception. (MPEP 2106.05(f)). Here, the above additional elements merely generate and splice node information, which is no more than “applying” the judicial exception.
Also, see the recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). Therefore, for the reasons described above with respect to Claim(s) 2, 11, and 20 the judicial exception is not meaningfully integrated into a practical application, or significantly more than the abstract idea.
Claim(s) 5 and 14: The graph neural network is recited so generically that it represents no more than mere instructions to apply the judicial exception on a computer.
Similar to Affinity Labs v. DirecTV, the court has held that tasks to receive, store, or transmit data are additional elements that amount to no more than “applying” the judicial exception. (MPEP 2106.05(f)).
Also, see the recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). Therefore, for the reasons described above with respect to Claim(s) 5 and 14 the judicial exception is not meaningfully integrated into a practical application, or significantly more than the abstract idea.
Claim(s) 7 and 16: The graph neural network and long short-term memory (LSTM) aggregator are recited so generically that they represent no more than mere instructions to apply the judicial exception on a computer.
Similar to Affinity Labs v. DirecTV, the court has held that tasks to receive, store, or transmit data are additional elements that amount to no more than “applying” the judicial exception. (MPEP 2106.05(f)).
Also, see the recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). Therefore, for the reasons described above with respect to Claim(s) 7 and 16, the judicial exception is not meaningfully integrated into a practical application, or significantly more than the abstract idea.
Claim(s) 8 and 17: The additional limitations of “aggregating” and “performing” are further directed to a certain method of organizing human activity and/or mental processes, as described in Independent Claim(s) 1 and 10. The LSTM aggregator is recited so generically that it represents no more than mere instructions to apply the judicial exception on a computer. Claim(s) 8 and 17 recite “wherein the node propagation representations of the current iteration of the target graph nodes are generated,” “aggregating, by using an Attention operation, the node propagation representations received by the target graph nodes to obtain neighboring-node propagation representations of the target graph nodes,” and “performing, LSTM aggregation on the neighboring-node propagation representations of the target graph nodes and the node propagation representations of the previous iteration of the target graph nodes to generate the node propagation representations of the current iteration of the target graph nodes.” These step(s)/function(s) fall within the enumerated grouping of certain methods of organizing human activity: commercial or legal interactions (e.g., business relations) and/or managing personal behavior or relationships or interactions between people (e.g., following rules or instructions) and/or mental processes (e.g., evaluation, judgment, observation, and/or opinion).
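For illustration only, the two recited steps can be sketched as follows; the softmax scoring and the single-gate update below are simplified stand-ins for the claimed Attention operation and LSTM aggregation, and all names and values are assumptions introduced for illustration, not taken from applicant’s disclosure.

```python
import math

def attention_aggregate(query, received):
    """Softmax-weight the received propagation representations by a toy
    similarity score against the target node's own representation, then
    take the weighted sum (a simplified Attention-style aggregation)."""
    scores = [query * r for r in received]       # toy similarity score
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * r for w, r in zip(weights, received))

def gated_update(prev, neighbor_agg, gate=0.5):
    """Simplified stand-in for LSTM aggregation: blend the target node's
    previous-iteration representation with the aggregated neighbor
    representation via a single fixed gate."""
    return gate * prev + (1.0 - gate) * neighbor_agg

prev = 1.0                  # target node's previous-iteration representation
received = [0.5, 1.5]       # propagation representations received
neighbor_agg = attention_aggregate(prev, received)
current = gated_update(prev, neighbor_agg)
```

A full LSTM aggregator would add learned input/forget/output gates and a cell state, but the data flow, weighting received representations and then combining them with the previous-iteration state, follows the same shape as this sketch.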
Similar to Affinity Labs v. DIRECTV, the court has held that tasks to receive, store, or transmit data are additional elements that amount to no more than "applying" the judicial exception. (MPEP 2106.05(f)). Here, the above additional elements merely aggregate and perform node propagation/aggregation operations, which is no more than "applying" the judicial exception.
Additionally, the recitation of claim limitations that attempt to cover any solution to an identified problem, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more, because this type of recitation is equivalent to the words "apply it". See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). Therefore, for the reasons described above with respect to Claim(s) 8 and 17, the judicial exception is not meaningfully integrated into a practical application, nor does it amount to significantly more than the abstract idea.
The dependent Claim(s) 2-9, 11-18, and 20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element(s) in the dependent claim(s) above are no more than mere instructions to apply the exception using generic computer component(s), which does not provide an inventive concept. Therefore, Claim(s) 1-20 are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-2, 4-5, 9-11, 13-14, and 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Leskovee et al. (US 11,922,308 B2)(Continuation filed on February 12, 2019) in view of Silva et al. (US 2021/0117978 A1).
Regarding Claim 1, Leskovee et al., teaches a computer-implemented method comprising:
determining node representations of a first graph node and a second graph node based on performing node representation aggregation starting from the first graph node and the second graph node. (Column 11, Lines 54-67) and (Column 12, Lines 1-21)(Leskovee et al. teaches an aggregated embedding vector for a target node, Node J (e.g., first graph node). The system will then generate an aggregated embedding vector to input an aggregation of Node J’s neighbors that includes Nodes G, H, I, L, and P. Leskovee et al., further, teaches an aggregation process will be used for Node G’s (e.g., second graph node) neighbors including Nodes E, J, Q, and S)
generating a node relationship representation between the first graph node and the second graph node in graph data based on the node representations. (Column 5, Lines 37-51)(Leskovee et al. teaches generation of aggregated embedding vectors for nodes in a graph. The system contains connections or edges between nodes (e.g., first graph and second graph node) indicating relationship between the nodes)
the node representation aggregation comprising a plurality of iterations. (Column 12, Lines 63-67) and (Column 13, Lines 1-53)(Leskovee et al. teaches generating the results for nodes in a graph is to iterate (e.g., plurality of iterations) through all the nodes and for each node to generate an aggregated embedding vector)
the aggregation graph node comprises the first graph node or the second graph node, and an initial node representation of a graph node is generated based on a node propagation representation of the graph node and an original feature of the graph node. (Column 11, Lines 35-67)(Leskovee et al. teaches an aggregated embedding vector of the target node’s neighbors, which the embedding vectors are combined/aggregated into neighborhood embedding information)
With respect to the above limitations: while Leskovee et al. teaches a neural network that iteratively aggregates nodes, which is then used to determine node relationships, Leskovee et al. doesn't explicitly teach a node propagation representation of a previous iteration of the source graph nodes that is propagated to each target graph node that includes neighboring graph nodes. Leskovee et al. also doesn't explicitly teach that node propagation representations of the current iteration of target graph nodes are generated based on node propagation representations received by the target graph nodes and node propagation representations of the target graph nodes of the previous iteration. Leskovee et al. also doesn't explicitly teach an aggregation graph node that is based on a previous iteration of the aggregation graph node and a previous iteration of neighboring graph nodes. Leskovee et al. also doesn't explicitly teach wherein the node representation of the current iteration of the aggregation graph node is generated based on aggregating the node representation of the previous iteration of the aggregation graph node and the node representation of the previous iteration of the neighboring graph node of the aggregation graph node by using an aggregation function.
But, Silva et al. in the analogous art of propagating graph nodes, teaches
a node propagation representation of a previous iteration of each source graph node in a source graph node set of a current iteration is propagated to each target graph node in a target graph node set of the source graph node, and the target graph node set comprises neighboring graph nodes of the source graph node. (Paragraph(s) 0099-0107)(Silva et al. teaches previously updated Node 1 can send a message to each of its neighbor nodes, such as Node 2 and Node 4. Node 1 can send both nodes an interest level, which Node 2 and Node 4 can receive as update information from Node 1, and the system can determine if the nodes' conditions are satisfied and Node 2 and Node 4 can be updated)
node propagation representations of the current iteration of target graph nodes are generated based on node propagation representations received by the target graph nodes and node propagation representations of the target graph nodes of the previous iteration. (Paragraph(s) 0061 and 0086-0087)(Silva et al. teaches that in a first iteration, seeds pass messages to their neighbors, and in subsequent iterations the nodes that were updated in a previous iteration pass messages to their neighbors. Silva et al., further, teaches that after all the messages for a current iteration have been exchanged, each node updates its node interest. The interest function will then give a previous node interest a weight using different aggregators)
a node representation of a current iteration of an aggregation graph node is generated based on a node representation of the previous iteration of the aggregation graph node and a node representation of the previous iteration of a neighboring graph node of the aggregation graph node. (Paragraph(s) 0099-0108)(Silva et al. teaches previous updated Node 1 can send a message to each of its neighbor nodes, such as Node 2 and Node 4. Node 1 can send both nodes an interest level, which Node 2 and Node 4 can receive update information from Node 1 and the system can determine if the nodes conditions are satisfied and Node 2 and Node 4 can be updated. The system can continue the process through various iterations. Silva et al., further, teaches neighboring node 7 can receive previous updated Node 4 and Node 5 messages, which Node 7 can calculate an interest score based on previous neighboring Nodes 4 and Node 5 and Node 7 can then be updated)
wherein the node representation of the current iteration of the aggregation graph node is generated based on aggregating the node representation of the previous iteration of the aggregation graph node and the node representation of the previous iteration of the neighboring graph node of the aggregation graph node by using an aggregation function. (Paragraph(s) 0043, 0059, and 0099-0108)(Silva et al. teaches previously updated Node 1 can send a message to each of its neighbor nodes, such as Node 2 and Node 4. Node 1 can send both nodes an interest level, which Node 2 and Node 4 can receive as update information from Node 1, and the system can determine if the nodes' conditions are satisfied and Node 2 and Node 4 can be updated. Silva et al., further, teaches an aggregation function that combines the interest from the previous iterations with the interest received from the neighbors in the current iteration)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system that iteratively aggregates nodes of Leskovee et al. by incorporating the teachings of updating nodes based on neighboring nodes and previously updated nodes of Silva et al., with the motivation in the prior art that would have led one of ordinary skill to combine the prior art reference teachings to arrive at the claimed invention in order to reduce processing time for messaging nodes. (Silva et al.: Paragraph 0058)
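For illustration only, the iterative aggregation mapped above, in which a node representation of the current iteration is generated by applying an aggregation function to the node's previous-iteration representation together with its neighbors' previous-iteration representations, can be sketched as follows. This is a minimal sketch with hypothetical names and an assumed mean aggregator; it is not drawn from either reference.

```python
def aggregate_mean(vectors):
    """Element-wise mean of equal-length vectors (the assumed aggregation function)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def iterate_representations(reps, neighbors, num_iterations):
    """Run a plurality of iterations over node representations.

    reps:      {node: vector} of initial node representations
    neighbors: {node: [neighbor nodes]} adjacency
    """
    for _ in range(num_iterations):
        new_reps = {}
        for node, nbrs in neighbors.items():
            # Current-iteration representation aggregates the node's own
            # previous-iteration representation with its neighbors' previous ones.
            new_reps[node] = aggregate_mean([reps[node]] + [reps[n] for n in nbrs])
        reps = new_reps
    return reps

# Toy graph loosely patterned after the Node J / Node G discussion above.
graph = {"J": ["G", "H"], "G": ["J"], "H": ["J"]}
initial = {"J": [1.0, 0.0], "G": [0.0, 1.0], "H": [0.0, 1.0]}
final = iterate_representations(initial, graph, num_iterations=2)
```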
Regarding Claim 2, Leskovee et al./Silva et al., teaches wherein generating the node relationship representation between the first graph node and the second graph node comprises: splicing the node representations of the first graph node and the second graph node to generate the node relationship representation between the first graph node and the second graph node. (Column 7, Lines 13-29)(Leskovee teaches the nodes in a graph are directly or indirectly connected (e.g., splicing) to the target node via at least one relationship/edge, which have the greatest relationship and/or impact to the target node)
Regarding Claim 4, Leskovee et al./Silva et al., teaches all the limitations as applied to Claim 1.
However, Leskovee et al., doesn’t explicitly teach wherein the initial node representation of a graph node is generated based on splicing the node propagation representation of the graph node and the original feature of the graph node.
But, Silva et al. in the analogous art of propagating graph nodes, teaches wherein the initial node representation of a graph node is generated based on splicing the node propagation representation of the graph node and the original feature of the graph node. (Paragraph 0059)(Silva et al. teaches the system can propagate interest using a certain amount of iterations that take into account both the node’s interest and the edge interest. The nodes and the nodes interest (e.g., features) can be split (e.g., splicing) among their neighbor nodes and then update the neighbor nodes interest according to the interest aggregation function)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system that iteratively aggregates nodes of Leskovee et al. by incorporating the teachings of splitting nodes and the nodes' interest of Silva et al., with the motivation in the prior art that would have led one of ordinary skill to combine the prior art reference teachings to arrive at the claimed invention in order to reduce processing time for messaging nodes. (Silva et al.: Paragraph 0058)
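For illustration only, the splicing mapped above for Claim 4, generating an initial node representation by concatenating a node's propagation representation with its original feature vector, can be sketched as follows. The names and values are hypothetical and are not drawn from the references.

```python
def splice_initial_representation(propagation_rep, original_feature):
    """Splice (concatenate) the node propagation representation with the
    node's original feature vector to form the initial node representation."""
    return list(propagation_rep) + list(original_feature)

# Hypothetical 2-d propagation representation and 3-d original feature.
init_rep = splice_initial_representation([0.2, 0.8], [1.0, 0.0, 1.0])
```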
Regarding Claim 5, Leskovee et al./Silva et al., teaches wherein the computer-implemented method is implemented based on a graph neural network. (Column 5, Lines 11-24)(Leskovee teaches a graph convolutional network (e.g., GNN))
Regarding Claim 9, Leskovee et al./Silva et al., teaches wherein the graph data is generated based on service data, and wherein the service data comprises one of:
social data. (Column 4, Lines 48-64)(Leskovee et al. teaches the graphs of content can represent user/people relationships as maintained by social networking services. Examiner respectfully notes that based on BRI this claim merely requires one of the service data factors to be taught (i.e., comprises one of))
financial transaction data.
product transaction data.
enterprise supply relationship data.
Regarding Claim 10, Leskovee et al./Silva et al., teaches a non-transitory, computer-readable medium storing one or more instructions executable by a computer system to perform operations comprising:
determining node representations of a first graph node and a second graph node based on performing node representation aggregation starting from the first graph node and the second graph node. (See, relevant rejection of Claim 1(a))
generating a node relationship representation between the first graph node and the second graph node in graph data based on the node representations. (See, relevant rejection of Claim 1(b))
the node representation aggregation comprising a plurality of iterations. (See, relevant rejection of Claim 1(c))
a node propagation representation of a previous iteration of each source graph node in a source graph node set of a current iteration is propagated to each target graph node in a target graph node set of the source graph node, and the target graph node set comprises neighboring graph nodes of the source graph node. (See, relevant rejection of Claim 1(d))
node propagation representations of the current iteration of target graph nodes are generated based on node propagation representations received by the target graph nodes and node propagation representations of the target graph nodes of the previous iteration. (See, relevant rejection of Claim 1(e))
a node representation of a current iteration of an aggregation graph node is generated based on a node representation of the previous iteration of the aggregation graph node and a node representation of the previous iteration of a neighboring graph node of the aggregation graph node. (See, relevant rejection of Claim 1(f))
the aggregation graph node comprises the first graph node or the second graph node, and an initial node representation of a graph node is generated based on a node propagation representation of the graph node and an original feature of the graph node. (See, relevant rejection of Claim 1(g))
wherein the node representation of the current iteration of the aggregation graph node is generated based on aggregating the node representation of the previous iteration of the aggregation graph node and the node representation of the previous iteration of the neighboring graph node of the aggregation graph node by using an aggregation function. (See, relevant rejection of Claim 1(h))
Regarding Claim 11, Leskovee et al./Silva et al., teaches all the limitations of Claim 10 and wherein generating the node relationship representation between the first graph node and the second graph node comprises: splicing the node representations of the first graph node and the second graph node to generate the node relationship representation between the first graph node and the second graph node. (See, relevant rejection(s) of Claim(s) 2 and 10)
Regarding Claim 13, Leskovee et al./Silva et al., teaches all the limitations as applied to Claim 10 and wherein the initial node representation of a graph node is generated based on splicing the node propagation representation of the graph node and the original feature of the graph node. (See, relevant rejection(s) of Claim(s) 4 and 10)
Regarding Claim 14, Leskovee et al./Silva et al., teaches all the limitations of Claim 10 and wherein the computer-implemented method is implemented based on a graph neural network. (See, relevant rejection of Claim(s) 5 and 10)
Regarding Claim 18, Leskovee et al./Silva et al., teaches all the limitations as applied to Claim 10 and wherein the graph data is generated based on service data, and wherein the service data comprises one of:
social data. (See, relevant rejection(s) of Claim(s) 9(a) and 10)
financial transaction data.
product transaction data. (See, relevant rejection(s) of Claim(s) 9(c) and 10)
enterprise supply relationship data.
Regarding Claim 19, Leskovee et al./Silva et al., teaches a computer implemented system, comprising:
one or more computers.
one or more computer memory devices interoperably coupled with the one or more computers and having tangible, non-transitory, machine-readable media storing one or more instructions that, when executed by the one or more computers, perform one or more operations comprising:
determining node representations of a first graph node and a second graph node based on performing node representation aggregation starting from the first graph node and the second graph node. (See, relevant rejection of Claim 1(a))
generating a node relationship representation between the first graph node and the second graph node in graph data based on the node representations. (See, relevant rejection of Claim 1(b))
the node representation aggregation comprising a plurality of iterations. (See, relevant rejection of Claim 1(c))
a node propagation representation of a previous iteration of each source graph node in a source graph node set of a current iteration is propagated to each target graph node in a target graph node set of the source graph node, and the target graph node set comprises neighboring graph nodes of the source graph node. (See, relevant rejection of Claim 1(d))
node propagation representations of the current iteration of target graph nodes are generated based on node propagation representations received by the target graph nodes and node propagation representations of the target graph nodes of the previous iteration. (See, relevant rejection of Claim 1(e))
a node representation of a current iteration of an aggregation graph node is generated based on a node representation of the previous iteration of the aggregation graph node and a node representation of the previous iteration of a neighboring graph node of the aggregation graph node. (See, relevant rejection of Claim 1(f))
the aggregation graph node comprises the first graph node or the second graph node, and an initial node representation of a graph node is generated based on a node propagation representation of the graph node and an original feature of the graph node. (See, relevant rejection of Claim 1(g))
wherein the node representation of the current iteration of the aggregation graph node is generated based on aggregating the node representation of the previous iteration of the aggregation graph node and the node representation of the previous iteration of the neighboring graph node of the aggregation graph node by using an aggregation function. (See, relevant rejection of Claim 1(h))
Regarding Claim 20, Leskovee et al./Silva et al., teaches all the limitations of Claim 19 and wherein generating the node relationship representation between the first graph node and the second graph node comprises: splicing the node representations of the first graph node and the second graph node to generate the node relationship representation between the first graph node and the second graph node. (See, relevant rejection(s) of Claim(s) 2 and 19)
Claim(s) 3 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Leskovee et al. (US 11,922,308 B2) in view of Silva et al. (US 2021/0117978 A1) and further in view of Zhu et al. (US 10,387,788 B2).
Regarding Claim 3, Leskovee et al./Silva et al., teaches all the limitations as applied to Claim 1.
However, Leskovee et al./Silva et al., do not explicitly teach wherein the node propagation representations of the current iteration of the target graph nodes are generated based on node propagation representations received from neighboring graph nodes, edge relationship features between the target graph nodes and the neighboring graph nodes, and the node propagation representations of the target graph nodes of the previous iteration.
But, Zhu et al. in the analogous art of predicting results using a node graph, teaches wherein the node propagation representations of the current iteration of the target graph nodes are generated based on node propagation representations received from neighboring graph nodes, edge relationship features between the target graph nodes and the neighboring graph nodes, and the node propagation representations of the target graph nodes of the previous iteration. (Column 3, Lines 38-67); (Column 4, Lines 1-30 and 43-67); (Column 5, Lines 50-67); (Column 6, Lines 1-9); (Column 7, Lines 40-67); (Column 8, Lines 1-57); and (Column 10, Lines 1-31)(Zhu et al. teaches a process for using graph nodes to predict results based on several iterations of propagations. The system can assign values to all the nodes. The system will also determine a relatedness of nodes in the graph, such as the relationship of customers to sales. The system can connect nodes based on edge distance values between nodes (e.g., edge relationship features between target graph nodes and neighboring graph nodes). Zhu et al., further, teaches after each propagation of neighbor node values a node will be selected and the values for those nodes can be determined based on previous iterations of neighbor nodes (e.g., node propagation representations of the target graph nodes of the previous iteration). Zhu et al., also, teaches the process will repeat until all nodes and the neighbor nodes are selected, after which the system can provide predicted results to a user, see Column 10, Lines 1-31)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify using a GCN model that iteratively aggregates nodes of Leskovee et al. and updating nodes based on previously updated nodes iteratively using an aggregation function of Silva et al., by incorporating the teachings of propagating a graph using nodes, neighbor nodes, and edge values to assign values to nodes using previous iterations to then determine predicted results of Zhu et al., with the motivation in the prior art that would have led one of ordinary skill to combine the prior art reference teachings to arrive at the claimed invention in order to increase the accuracy of predicted results. (Zhu et al.: Abstract)
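For illustration only, the edge-feature-weighted propagation mapped above for Claim 3, where a target node's current-iteration propagation representation is generated from neighbor messages, edge relationship features, and the target's previous-iteration representation, can be sketched as follows. Scalar representations, the `retain` blending factor, and all names are hypothetical simplifications, not taken from Zhu et al.

```python
def propagate_with_edges(prev_reps, neighbors, edge_weights, retain=0.5):
    """One propagation step; edge_weights[(u, v)] is the edge relationship
    feature scaling the message sent from node u to node v."""
    new_reps = {}
    for node, rep in prev_reps.items():
        # Edge-weighted sum of the representations received from neighbors.
        message = 0.0
        for nbr in neighbors[node]:
            message += edge_weights[(nbr, node)] * prev_reps[nbr]
        # Combine with the node's own previous-iteration representation.
        new_reps[node] = retain * rep + (1.0 - retain) * message
    return new_reps

reps = {"A": 1.0, "B": 0.0}
adj = {"A": ["B"], "B": ["A"]}
weights = {("A", "B"): 0.9, ("B", "A"): 0.1}
next_reps = propagate_with_edges(reps, adj, weights)
```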
Regarding Claim 12, Leskovee et al./Silva et al./Zhu et al., teaches all the limitations as applied to Claim 10 and wherein the node propagation representations of the current iteration of the target graph nodes are generated based on node propagation representations received from neighboring graph nodes, edge relationship features between the target graph nodes and the neighboring graph nodes, and then node propagation representations of the target graph nodes of the previous iteration. (See, relevant rejection(s) of Claim(s) 3 and 10)
Claim(s) 6-8 and 15-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Leskovee et al. (US 11,922,308 B2) in view of Silva et al. (US 2021/0117978 A1) and further in view of Wang (CN-111382843-A).
Regarding Claim 6, Leskovee et al./Silva et al., teaches all the limitations as applied to Claim 1.
However, Leskovee et al./Silva et al., do not explicitly teach wherein the graph neural network comprises a graph neural network having an Attention mechanism.
But, Wang in the analogous art of aggregating nodes using a graph neural network, teaches wherein the graph neural network comprises a graph neural network having an Attention mechanism. (Page 21: “Optionally..,”; and Page 22: “Aggregating…, and Inputting…,”)(Wang teaches a graph neural network model that includes an attention mechanism for aggregating)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify using a GCN model that iteratively aggregates nodes of Leskovee et al. and updating nodes based on previously updated nodes iteratively using an aggregation function of Silva et al., by incorporating the teachings of using a graph neural network attention mechanism to aggregate nodes to obtain a next iteration for a neighbor node and inputting vectors into a node and neighboring nodes to obtain feature expressions of the next neighboring nodes using an LSTM operator of Wang, with the motivation in the prior art that would have led one of ordinary skill to combine the prior art reference teachings to arrive at the claimed invention in order to improve GNN parallel processing of nodes. (Wang: Page 25: “The foregoing…,”)
Regarding Claim 7, Leskovee et al./Silva et al.,/Wang teaches all the limitations as applied to Claim 6.
However, Leskovee et al./Silva et al., do not explicitly teach wherein the graph neural network has a long-short term memory (LSTM) memory.
But, Wang in the analogous art of aggregating nodes using a graph neural network, teaches wherein the graph neural network has a long-short term memory (LSTM) memory. (Page 21: “Optionally..,”; and Page 22: “Aggregating…, and Inputting…,”)(Wang teaches a graph neural network model that includes an attention mechanism for aggregating and an LSTM operator)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify using a GCN model that iteratively aggregates nodes of Leskovee et al. and updating nodes based on previously updated nodes iteratively using an aggregation function of Silva et al., by incorporating the teachings of using a graph neural network attention mechanism to aggregate nodes to obtain a next iteration for a neighbor node and inputting vectors into a node and neighboring nodes to obtain feature expressions of the next neighboring nodes using an LSTM operator of Wang, with the motivation in the prior art that would have led one of ordinary skill to combine the prior art reference teachings to arrive at the claimed invention in order to improve GNN parallel processing of nodes. (Wang: Page 25: “The foregoing…,”)
Regarding Claim 8, Leskovee et al./Silva et al./Wang, teaches all the limitations as applied to Claim 7.
However, Leskovee et al./Silva et al., do not explicitly teach
aggregating, by using an Attention operation, the node propagation representations received by the target graph nodes to obtain neighboring-node propagation representations of the target graph nodes.
performing, by using the LSTM aggregator, LSTM aggregation on the neighboring-node propagation representations of the target graph nodes and the node propagation representations of the previous iteration of the target graph nodes to generate the node propagation representations of the current iteration of the target graph nodes.
But, Wang in the analogous art of aggregating nodes using a graph neural network, teaches
aggregating, by using an Attention operation, the node propagation representations received by the target graph nodes to obtain neighboring-node propagation representations of the target graph nodes. (Page 9: “Step 204: Perform…,” and “Step 206: Aggregate…,”)(Wang teaches performing node embedding vector expressions on a supply enterprise network. The system can aggregate the vector input of the enterprise node based on an adaptive function of the attention mechanism (e.g., Attention operation). The system can obtain the aggregation feature expression of the next iteration process of the neighbor nodes aggregated by the enterprise node)
performing, by using the LSTM aggregator, LSTM aggregation on the neighboring-node propagation representations of the target graph nodes and the node propagation representations of the previous iteration of the target graph nodes to generate the node propagation representations of the current iteration of the target graph nodes. (Page 10: “Step 208: Input…,”)(Wang teaches inputting the vector of the enterprise node and the aggregate feature expression of the next iteration process of the neighbor nodes aggregated by the enterprise node into the depth adaptation function based on the LSTM operator (e.g., LSTM aggregator) to obtain the feature expression of the next iteration process of the enterprise node)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify using a GCN model that iteratively aggregates nodes of Leskovee et al. and updating nodes based on previously updated nodes iteratively using an aggregation function of Silva et al., by incorporating the teachings of using an attention mechanism to aggregate nodes to obtain a next iteration for a neighbor node and inputting vectors into a node and neighboring nodes to obtain feature expressions of the next neighboring nodes of Wang, with the motivation in the prior art that would have led one of ordinary skill to combine the prior art reference teachings to arrive at the claimed invention in order to improve GNN parallel processing of nodes. (Wang: Page 25: “The foregoing…,”)
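For illustration only, the two-stage aggregation mapped above for Claim 8, an Attention operation over the representations a target node receives, followed by combination with the node's previous-iteration representation, can be sketched as follows. A production LSTM aggregator carries learned weight matrices and a cell state; this sketch substitutes a simple gated blend and shows only the data flow, with all names and values hypothetical rather than taken from Wang.

```python
import math

def attention_aggregate(query, received):
    """Attention operation: softmax-weighted sum of received vectors,
    scored by their dot product with the target node's query vector."""
    scores = [sum(q * r for q, r in zip(query, vec)) for vec in received]
    exp = [math.exp(s) for s in scores]
    total = sum(exp)
    weights = [e / total for e in exp]
    dim = len(query)
    return [sum(w * vec[i] for w, vec in zip(weights, received)) for i in range(dim)]

def gated_update(prev_rep, neighbor_rep, gate=0.5):
    """Simplified LSTM-style gated blend of the previous-iteration
    representation and the aggregated neighboring-node representation."""
    return [gate * p + (1.0 - gate) * n for p, n in zip(prev_rep, neighbor_rep)]

prev = [1.0, 0.0]                       # previous-iteration representation
received = [[0.0, 1.0], [1.0, 1.0]]     # representations received from neighbors
nbr_rep = attention_aggregate(prev, received)
current = gated_update(prev, nbr_rep)   # current-iteration propagation representation
```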
Regarding Claim 15, Leskovee et al./Silva et al./Wang, teaches all the limitations as applied to Claim 14 and wherein the graph neural network comprises a graph neural network having an Attention mechanism. (See, relevant rejection(s) of Claim(s) 6 and 14)
Regarding Claim 16, Leskovee et al./Silva et al./Wang, teaches all the limitations as applied to Claim 15 and wherein the graph neural network has a long-short term memory (LSTM) memory. (See, relevant rejection(s) of Claim(s) 7 and 15)
Regarding Claim 17, Leskovee et al./Silva et al./Wang, teaches all the limitations as applied to Claim 16 and
aggregating, by using an Attention operation, the node propagation representations received by the target graph nodes to obtain neighboring-node propagation representations of the target graph nodes. (See, relevant rejection(s) of Claim(s) 8(a) and 16)
performing, by using the LSTM aggregator, LSTM aggregation on the neighboring-node propagation representations of the target graph nodes and the node propagation representations of the previous iteration of the target graph nodes to generate the node propagation representations of the current iteration of the target graph nodes. (See, relevant rejection(s) of Claim(s) 8(b) and 16)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Shekhar et al. (US 2021/0233080 A1). Shekhar et al. teaches utilizing a graph convolutional neural network to generate similarity probabilities between pairs of digital identities associated with digital transactions based on time dependencies.
Seigel et al. (US 2020/0250231 A1). Seigel et al. teaches generating a graph based upon ingested data. The nodes of the graph may comprise expressions tied to specific data and/or data fields of an organization. The system can aggregate transactional data into the nodes. The nodes can be connected to other nodes based on their relationships.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN A HEFLIN whose telephone number is (571)272-3524. The examiner can normally be reached 7:30 - 5:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sarah Monfeldt can be reached at (571) 270-1833. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.A.H./Examiner, Art Unit 3628
/MICHAEL P HARRINGTON/Primary Examiner, Art Unit 3628