Prosecution Insights
Last updated: April 18, 2026

Application No. 17/562,080: Graph Neural Network Ensemble Learning
Latest event: Non-Final OA (§103, §112)

Filed: Dec 27, 2021
Examiner: BREEN, JAKE TIMOTHY
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)

Forecast: 70% grant probability (Favorable); 3-4 OA rounds expected; 3y 11m to grant; 99% grant probability with interview.
Examiner Intelligence

Career allow rate: 70% (7 granted / 10 resolved), +15.0% vs TC avg; grants above average.
Interview lift: +75.0% (strong), based on resolved cases with interview.
Typical timeline: 3y 11m avg prosecution; 24 applications currently pending.
Career history: 34 total applications across all art units.

Statute-Specific Performance

§101: 30.5% (-9.5% vs TC avg)
§103: 35.2% (-4.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 25.2% (-14.8% vs TC avg)

Comparisons are against estimated Tech Center averages; based on career data from 10 resolved cases.
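The headline examiner statistics above reduce to simple arithmetic over resolved-case records. The sketch below is illustrative only: the case records and field names are invented so the computation is visible (they reproduce the 70% allow rate, but not this report's exact +75% interview lift, which comes from the underlying dataset).

```python
# Hypothetical resolved-case records: 10 resolved, 7 granted.
cases = (
    [{"granted": True, "interview": True}] * 5
    + [{"granted": False, "interview": True}] * 1
    + [{"granted": True, "interview": False}] * 2
    + [{"granted": False, "interview": False}] * 2
)

resolved = len(cases)                       # 10 resolved cases
granted = sum(c["granted"] for c in cases)  # 7 granted
allow_rate = granted / resolved             # 0.70 career allow rate

with_iv = [c for c in cases if c["interview"]]
without_iv = [c for c in cases if not c["interview"]]
rate_iv = sum(c["granted"] for c in with_iv) / len(with_iv)
rate_no = sum(c["granted"] for c in without_iv) / len(without_iv)

# "Interview lift" = relative improvement in allowance rate when an
# interview occurred (a +75% lift would mean rate_iv = 1.75 * rate_no).
lift = (rate_iv - rate_no) / rate_no
print(f"allow rate {allow_rate:.0%}, interview lift {lift:+.1%}")
```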

Office Action

Rejection grounds: §103, §112
DETAILED ACTION

This action is in response to the filing on 12/05/2025. Claims 1-20 are pending and have been considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation "the training dataset" in line 12. There is insufficient antecedent basis for this limitation in the claim. Claim 1 previously recited “a modified training dataset” (emphasis added) on line 7, but line 12 recites “the training dataset”, which is different from “the modified training dataset” (emphasis added).

Claim 8 recites the limitation "the training dataset" in line 11. There is insufficient antecedent basis for this limitation in the claim. Claim 8 previously recited “a modified training dataset” (emphasis added) on line 5, but line 11 recites “the training dataset”, which is different from “the modified training dataset” (emphasis added).

Claim 14 recites the limitation "the training dataset" in lines 7-8. There is insufficient antecedent basis for this limitation in the claim.
Claim 14 previously recited “a modified training dataset” (emphasis added) on line 2, but lines 7-8 recite “the training dataset”, which is different from “the modified training dataset” (emphasis added).

Claims 2-7, 9-13, and 15-20 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for depending upon an indefinite parent claim.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claims 7 and 20 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claims upon which they depend, or for failing to include all the limitations of the claims upon which they depend.

Claim 7 recites “wherein the sampling of the plurality of subgraphs and a sampling of a subset of nodes from each of the plurality of subgraphs is random”; however, claim 1 has been amended to recite “randomly sampling a plurality of subgraphs from the modified training dataset” and “randomly sampling feature space from the sampled subgraphs”. As such, claim 7 fails to further limit the subject matter of claim 1 from which it depends.
Claim 20 recites similar limitations as claim 7 above and fails to further limit the subject matter of claim 14 for similar reasons. Thus, claim 20 is rejected for the same reasons as claim 7 above.

Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 8-11, 13-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Molloy et al. (US 2021/0287141 A1, first cited in previous office action filed 03/07/2025), hereinafter Molloy, in view of Chen et al. (US 2021/0067549 A1), hereinafter Chen, and further in view of Gurwicz et al. (US 2019/0042917 A1, first cited in previous office action filed 03/07/2025), hereinafter Gurwicz, and further in view of Zhao et al. (CN 113095592 A, first cited in previous office action filed 03/07/2025), hereinafter Zhao, and further in view of Lewis et al. (US 2023/0060352 A1), hereinafter Lewis.

Regarding claim 1, Molloy teaches:

A computer system comprising: a processor set; one or more computer-readable storage media; and program instructions stored on the one or more computer-readable storage media to cause the processor set to perform operations comprising (In one illustrative embodiment, a method is provided, in a data processing system comprising a processor and a memory, the memory comprising instructions which are executed by the processor to specifically configure the processor to implement a hardened ensemble artificial intelligence (AI) model generator. [see Molloy, para. 6]);

mitigating an adversarial attack (Molloy discloses mitigating an adversarial attack by preventing an adversarial attack in one AI model from transferring to another model in the ensemble [see Molloy, para. 6]);

training two or more neural networks, each trained from the feature space (Molloy discloses co-training at least two AI models [see Molloy, Abstract], that training neural networks is known in the art so any training methodology can be used, and that training generally involves modifying weights by various features scored by nodes [see Molloy, para. 58]);

building an ensemble with the trained two or more neural networks (The hardened ensemble AI model generator co-trains at least two AI models. [see Molloy, Abstract]);

applying a dataset to the ensemble, executing the ensemble and constructing an output from the executed ensemble, wherein the output is a control signal that controls an operatively coupled device, and wherein each NN in the NN ensemble has a respective output and each respective output is aggregated by a voting algorithm (Molloy discloses applying data to the cognitive computing system to construct an output signal to control other systems [see Molloy, para. 76], such that each model in the ensemble constructs an output and the final output is generated through maximum voting [see Molloy, para. 41]).

However, Molloy fails to teach: an adversarial attack introduced by a modified training dataset; processing the modified training dataset, comprising representing the modified training dataset in a graph, wherein the graph comprises a plurality of nodes and edges, wherein an edge, in the plurality of edges, connecting two nodes represents an affinity between the two nodes, wherein the training dataset includes data in a graph format; randomly sampling a plurality of subgraphs from the modified training dataset; randomly sampling feature space from the sampled subgraphs; training two or more graph neural networks (GNNs), each GNN trained from the sampled feature space; building a GNN ensemble with the trained two or more GNNs; and applying a testing dataset to the GNN ensemble, executing the GNN ensemble and constructing an output from the executed GNN ensemble, wherein the output controls a product dispensing rate of an operatively coupled device.

In the same field of endeavor, Chen teaches:

an adversarial attack introduced by a modified training dataset (A method for detecting and responding to an intrusion in a computer network includes generating an adversarial training data set that includes original samples and adversarial samples, by perturbing one or more of the original samples with an integrated gradient attack to generate the adversarial samples. [see Chen, para. 4]);

processing the modified training dataset, comprising representing the modified training dataset in a graph (A method for detecting and responding to an intrusion in a computer network includes generating an adversarial training data set that includes original samples and adversarial samples, by perturbing one or more of the original samples with an integrated gradient attack to generate the adversarial samples. The original and adversarial samples are encoded to generate respective original and adversarial graph representations, based on node neighborhood aggregation. A graph-based neural network is trained to detect anomalous activity in a computer network, using the adversarial training data set. [see Chen, para. 4]), wherein the graph comprises a plurality of nodes and edges, wherein an edge, in the plurality of edges, connecting two nodes represents an affinity between the two nodes (In general, both graph structure and node features can be represented with binary values, for example characterizing a connection between nodes with a ‘1’, and representing the fact that a node has a particular feature or attribute with a ‘1’. A perturbation can therefore flip either of these representations, from a 1 to a 0, or from 0 to a 1. The present embodiments compute integrated gradients of a prediction score for a target class. The integrated gradients are then used as metrics to measure the priority of perturbing specific features or edges in a graph G. For example, if the perturbation adds or removes an edge from G, then the prediction score for the target class will change. [see Chen, para. 32-33]), wherein the training dataset includes data in a graph format (Chen discloses generating the adversarial training dataset by perturbing a data set of original samples to include adversarial samples [see Chen, para. 4], and training on graph data including the speculated adversarial instances as part of the training data [see Chen, para. 30]);

the modified training dataset (A method for detecting and responding to an intrusion in a computer network includes generating an adversarial training data set that includes original samples and adversarial samples, by perturbing one or more of the original samples with an integrated gradient attack to generate the adversarial samples. [see Chen, para. 4]).

It would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the invention to incorporate an adversarial attack introduced by a modified training dataset; processing the modified training dataset, comprising representing the modified training dataset in a graph, wherein the graph comprises a plurality of nodes and edges, wherein an edge, in the plurality of edges, connecting two nodes represents an affinity between the two nodes, wherein the training dataset includes data in a graph format; and the modified training dataset as suggested in Chen into Molloy because both systems protect against adversarial attacks (see Molloy, para. 6; see Chen, para. 20). Incorporating the teaching of Chen into Molloy would provide an effective adversarial defense system to protect against adversarial attacks (see Chen, para. 20).

However, the combination of Molloy and Chen fails to teach: randomly sampling a plurality of subgraphs from the training dataset; randomly sampling feature space from the sampled subgraphs; training two or more graph neural networks (GNNs), each GNN trained from the sampled feature space; building a GNN ensemble with the trained two or more GNNs; and applying a testing dataset to the GNN ensemble, executing the GNN ensemble and constructing an output from the executed GNN ensemble, wherein the output controls a product dispensing rate of an operatively coupled device.

In the same field of endeavor, Gurwicz teaches:

randomly sampling a plurality of subgraphs from the training dataset (the logic may determine a collection of sample sets from a dataset. In various such embodiments, each sample set may be drawn randomly for the dataset with replacement between drawings. In some embodiments, the logic may partition a graph into multiple subgraph sets based on each of the sample sets. [see Gurwicz, para. 14]);

training two or more graph neural networks (GNNs) (bootstrapping may include a statistical technique that samples several subsets from a single given data set (e.g., sample set collection 104 from dataset 102), and then conducts the task (e.g., training a classifier, partitioning a subgraph) independently for each subset. In several embodiments, this may result in an ensemble of classifiers or subgraphs instead of one. [see Gurwicz, para. 24]);

building a GNN ensemble with the trained two or more GNNs (In one or more embodiments, graphs determined based on the graphical model tree may be converted into topologies for neural networks. Several embodiments may produce and/or utilize an ensemble of different neural network topologies, leading to improved classification capabilities. [see Gurwicz, para. 18]);

applying a testing dataset to the GNN ensemble, executing the GNN ensemble and constructing an output from the executed GNN ensemble (a test sample may then be evaluated based on each of the classifiers/subgraphs independently, and the final decision may be based on the ensemble. [see Gurwicz, para. 24]).

It would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the invention to incorporate randomly sampling a plurality of subgraphs from the training dataset as suggested in Gurwicz into the combination of Molloy and Chen, to arrive at randomly sampling a plurality of subgraphs from the modified training dataset, because Chen teaches the modified training data set being in a graph format (see Chen, para. 4 and para. 30) while Gurwicz teaches randomly sampling subgraphs from the training data set (see Gurwicz, para. 14); thus, the combination would randomly sample a plurality of subgraphs from a dataset which is a modified training dataset in graph format.

It would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the invention to further incorporate training two or more graph neural networks (GNNs); building a GNN ensemble with the trained two or more GNNs; and applying a testing dataset to the GNN ensemble, executing the GNN ensemble and constructing an output from the executed GNN ensemble as suggested in Gurwicz into the combination of Molloy and Chen because both systems employ an ensemble of machine learning models (see Molloy, Abstract; see Gurwicz, para. 18). Incorporating the teaching of Gurwicz into the combination of Molloy and Chen would enable reliable and efficient optimization of neural networks to achieve improved performance and increased accuracy of the neural networks (see Gurwicz, para. 18).

It would have been further obvious to one of ordinary skill in the art at the time before the effective filing date of the invention to combine wherein each NN in the NN ensemble has a respective output and each respective output is aggregated by a voting algorithm as taught by Molloy above with the GNN ensemble as taught by Gurwicz, to arrive at wherein each GNN in the GNN ensemble has a respective output and each respective output is aggregated by a voting algorithm.

However, the combination of Molloy, Chen, and Gurwicz fails to teach: randomly sampling feature space from the sampled subgraphs; each GNN trained from the sampled feature space; and wherein the output controls a product dispensing rate of an operatively coupled device.

In the same field of endeavor, Zhao teaches:

sampling feature space from the sampled subgraphs (after obtaining multiple subgraphs related to the target node, the sub-feature representations of the target node can be learned based on each subgraph using the GNN corresponding to each subgraph [see Zhao, n0079]);

each GNN trained from the sampled feature space (after obtaining multiple subgraphs related to the target node, the sub-feature representations of the target node can be learned based on each subgraph using the GNN corresponding to each subgraph [see Zhao, n0079]).

It would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the invention to incorporate sampling feature space from the sampled subgraphs as disclosed by Zhao, to arrive at randomly sampling feature space from the sampled subgraphs, because Zhao discloses sampling sub-feature representations of a target node from each subgraph [see Zhao, n0079] to aggregate them to obtain a feature representation of a target node [see Zhao, n0084]. It would have been obvious to one of ordinary skill in the art before the effective filing date that the target node can be randomly selected, such that the sub-feature representation is randomly sampled in each subgraph to learn the feature representation of a randomly sampled feature. Similar to how, in Gurwicz, samples are randomly drawn from the dataset [see Gurwicz, para. 14 and 27], features can be randomly drawn from the dataset or from the subgraphs such that their sub-feature representations are then randomly sampled.

It would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the invention to further incorporate each GNN trained from the sampled feature space as suggested in Zhao into the combination of Molloy, Chen, and Gurwicz because both systems train a plurality of machine learning models (see Molloy, Abstract; see Zhao, n0005). Incorporating the teaching of Zhao into the combination of Molloy, Chen, and Gurwicz would allow a machine learning model with better performance to be trained and would effectively improve the accuracy of the prediction results (see Zhao, n0050).

However, the combination of Molloy, Chen, Gurwicz, and Zhao fails to teach wherein the output controls a product dispensing rate of an operatively coupled device.

In the same field of endeavor, Lewis teaches:

wherein the output controls a product dispensing rate of an operatively coupled device (Lewis discloses the dispensing system having operating parameters including the frequency at which fluid volumes are dispensed [see para. 86], and adjusting operating parameters based on the classification output and the input/output relationships (representing the input to the dispensing system and the output of the dispensing system, respectively) [see para. 118-119]. Thus, at least the operating parameter of the frequency at which fluid volumes are dispensed may be adjusted by the machine learning tool to control the product dispensing rate of the dispensing system).

It would have been obvious to one of ordinary skill in the art at the time before the effective filing date of the invention to incorporate wherein the output controls a product dispensing rate of an operatively coupled device as disclosed by Lewis into the combination of Molloy, Chen, Gurwicz, and Zhao because both methods use machine learning to control systems (see Molloy, para. 76; see Lewis, Abstract). Incorporating the teaching of Lewis into the combination of Molloy, Chen, Gurwicz, and Zhao would detect and correct for defects associated with the dispensed portions to improve quality and production efficiency (see Lewis, Abstract).
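For readers less familiar with the claimed technique, the pipeline the rejection pieces together across the references (random subgraph sampling, random feature-space sampling, one model per sample, then vote- or average-based aggregation) can be sketched as follows. This is a minimal illustration, not the applicant's or any reference's implementation: the toy ring graph is invented, and each ensemble member's "GNN" is replaced by a trivial stand-in predictor.

```python
import random

def sample_subgraph(nodes, edges, k, rng):
    """Randomly sample k nodes and keep the induced edges (a subgraph)."""
    sub = set(rng.sample(sorted(nodes), k))
    return sub, [(u, v) for (u, v) in edges if u in sub and v in sub]

def sample_features(num_features, m, rng):
    """Randomly sample m feature indices (the sampled 'feature space')."""
    return sorted(rng.sample(range(num_features), m))

def majority_vote(labels):
    """Maximum-vote aggregation: the label most members agree on wins."""
    return max(set(labels), key=labels.count)

def average_posteriors(vectors):
    """Alternative aggregation: average per-member probability vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

rng = random.Random(0)
nodes = set(range(12))
edges = [(i, (i + 1) % 12) for i in range(12)]  # a toy 12-node ring graph

# Each ensemble member gets its own random subgraph and feature subset;
# a real system would train one GNN per member on that sample.
members = []
for _ in range(5):
    subgraph = sample_subgraph(nodes, edges, k=6, rng=rng)
    feature_space = sample_features(num_features=8, m=3, rng=rng)
    members.append((subgraph, feature_space))

# Stand-in "inference": each member emits a label, aggregated by voting.
votes = [int(0 in sub) for (sub, _), _ in members]
ensemble_label = majority_vote(votes)
print(ensemble_label, average_posteriors([[0.34, 0.66, 0.0], [0.5, 0.5, 0.0]]))
```

The two aggregation helpers mirror the alternatives quoted from Molloy (maximum vote, and averaging per-model confidence vectors such as [0.34, 0.66, 0.0]).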
Regarding claim 2, the combination of Molloy, Chen, Gurwicz, Zhao, and Lewis as applied in claim 1 above teaches all the limitations of claim 1 and further teaches: further comprising dynamically configuring and issuing a control signal, the control signal is configured based on the constructed output to the operatively coupled device, wherein the device being a physical hardware device, wherein the control signal selectively controls a physical state of the operatively coupled device (This final classification output is then used as a basis for performing additional cognitive operations by the other mechanisms of the cognitive computing system using artificial intelligence to provide useful results, e.g., medical diagnoses, treatment recommendations, control signals to control other systems, e.g., braking systems of an automobile, collision warning notifications for a vehicle, controlling entry to physical premises based on facial image recognition, biometric authentication, etc., or any of a plethora of other cognitive computing operations. [see Molloy, para. 76]).

Regarding claim 3, the combination of Molloy, Chen, Gurwicz, Zhao, and Lewis as applied in claim 1 above teaches all the limitations of claim 1 and further teaches: wherein the execution of the GNN ensemble includes output prediction values from the two or more GNNs, and wherein a combination of the prediction values produces an ensemble value (The n number of AI models may be assembled into an ensemble of AI models in which the outputs of the various AI models, all operating on the same input, may be combined to generate a final output of the ensemble. The combining of the outputs of the various AI models in the ensemble may be performed in any suitable manner, such as using an averaging technique, i.e. averaging the outputs from the various models to generate a single output value, a maximum vote technique, i.e. determining which output value is supported by the majority of AI models in the ensemble, or any other suitable combination function. [see Molloy, para. 41]).

Regarding claim 4, the combination of Molloy, Chen, Gurwicz, Zhao, and Lewis as applied in claim 1 above teaches all the limitations of claim 3 and further teaches: wherein a production of the ensemble value leverages a machine learning voting algorithm to select a value as the ensemble value (Using a maximum vote technique, each of the vector outputs of each of the AI models may be indicative of a single resulting output classification or result, e.g., “dog”, and the maximum vote technique would determine which classification has the most “votes” from the AI models, e.g., the majority of the AI models indicate that “dog” is the right classification for the input image. Thus, the ensemble leverages the processing of multiple AI models to generate a correct classification or output result from processing the input. [see Molloy, para. 42]).

Regarding claim 6, the combination of Molloy, Chen, Gurwicz, Zhao, and Lewis as applied in claim 1 above teaches all the limitations of claim 3 and further teaches: wherein the execution of the GNN ensemble assesses a posterior probability for the output prediction value from each GNN in the GNN ensemble and average the posterior probabilities (the AI models may each output a vector output in which the vector output comprises a plurality of vector slots, each vector slot corresponding to a particular classification or output result. Values in each of the vector slots represent a confidence, or probability, score generated by the corresponding AI model indicating the AI model's prediction that the corresponding classification or output result is the correct classification or output result for the input to the AI model. Thus, for example, in an image recognition AI model, the output vector may have a plurality of vector slots, each corresponding to a possible classification of the input image, e.g., cat, dog, horse, etc. The AI model calculates a value for each of these classifications indicative of the AI model's prediction that the corresponding classification applies to the input, e.g., [0.34, 0.66, 0.0] indicating a 34% probability the input image is a cat, 66% probability the input image is a dog, and 0% probability the image is a horse. Each AI model may process the same input data and generate its own vector output, and the values for each of the classifications or output results may be averaged to generate an averaged confidence or probability score in a final output vector of the ensemble. [see Molloy, para. 42]).

Regarding claim 8, claim 8 contains substantially similar limitations to those found in claim 1. Therefore, it is rejected for the same reasons as claim 1 above. Additionally, the combination of Molloy, Chen, Gurwicz, Zhao, and Lewis further teaches: a computer program product comprising (In other illustrative embodiments, a computer program product comprising a computer useable or readable medium having a computer readable program is provided. The computer readable program, when executed on a computing device, causes the computing device to perform various ones of, and combinations of, the operations outlined above with regard to the method illustrative embodiment. [see Molloy, para. 7]).

Regarding claim 14, claim 14 contains substantially similar limitations to those found in claim 1 above. Consequently, claim 14 is rejected for the same reasons.

Regarding claims 9 and 15, claims 9 and 15 contain substantially similar limitations to those found in claim 2 above. Consequently, claims 9 and 15 are rejected for the same reasons.

Regarding claims 10 and 16, claims 10 and 16 contain substantially similar limitations to those found in claim 3 above. Consequently, claims 10 and 16 are rejected for the same reasons.

Regarding claims 11 and 17, claims 11 and 17 contain substantially similar limitations to those found in claim 4 above. Consequently, claims 11 and 17 are rejected for the same reasons.

Regarding claims 13 and 19, claims 13 and 19 contain substantially similar limitations to those found in claim 6 above. Consequently, claims 13 and 19 are rejected for the same reasons.

Claims 5, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Molloy et al. (US 2021/0287141 A1, first cited in previous office action filed 03/07/2025), hereinafter Molloy, in view of Chen et al. (US 2021/0067549 A1), hereinafter Chen, and further in view of Gurwicz et al. (US 2019/0042917 A1, first cited in previous office action filed 03/07/2025), hereinafter Gurwicz, and further in view of Zhao et al. (CN 113095592 A, first cited in previous office action filed 03/07/2025), hereinafter Zhao, and further in view of Lewis et al. (US 2023/0060352 A1), hereinafter Lewis, as applied in claim 1 above, and further in view of Li et al. (Ensemble-model-based link prediction of complex networks, published January 2020, first cited in previous office action filed 03/07/2025), hereinafter Li.

Regarding claim 5, the combination of Molloy, Gurwicz, Zhao, and Lewis as applied in claim 1 above teaches all the limitations of claim 4 and further teaches: wherein the selected value is selected from the group consisting of a node classification (Using a maximum vote technique, each of the vector outputs of each of the AI models may be indicative of a single resulting output classification or result, e.g., “dog”, and the maximum vote technique would determine which classification has the most “votes” from the AI models, e.g., the majority of the AI models indicate that “dog” is the right classification for the input image. Thus, the ensemble leverages the processing of multiple AI models to generate a correct classification or output result from processing the input. [see Molloy, para. 42]).

However, the combination of Molloy, Gurwicz, Zhao, and Lewis fails to teach wherein the selected value is selected from the group consisting of a predicted link.

In the same field of endeavor, Li teaches: wherein the selected value is selected from the group consisting of a predicted link (Based on the existing similarity index, in this section, we propose an ensemble-model-based link prediction algorithm based on model integration (EMLP algorithm). Because some similarity indexes consider the same structural characteristics of a network and have strong correlations, over-fitting will easily occur if the machine-learning algorithm is used directly. Therefore, we select several similarity indexes that can represent different information about the network structure. Based on this concept, this paper regards the link prediction of complex networks as a binary classification problem, and considers four characteristics of each two nodes, namely, local information, path, random walk and matrix forest index (i.e., CN, LHN-II, COS+, and MFI). First, the similarity index is considered as the feature of any two nodes in the network. Then, several machine-learning algorithms are selected to construct multiple base models. Finally, using the idea of stacking in model integration, the prediction results of the base model are re-optimized and learned to obtain a better model. Fig. 1 displays the flow diagram of the EMLP algorithm. [see Li, Section 3. EMLP algorithm of complex networks, para. 1]).
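Li's idea of casting link prediction as binary classification over similarity-index features, with base models combined by stacking, can be sketched as follows. Everything here is an illustrative stand-in, not Li's EMLP implementation: a toy 4-node graph, common-neighbors and Jaccard indices in place of Li's CN/LHN-II/COS+/MFI features, and a fixed-weight blend in place of a learned stacking model.

```python
from itertools import combinations

# Toy undirected graph as an adjacency dict (invented for illustration).
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}

def common_neighbors(u, v):
    """CN similarity index: number of shared neighbors of u and v."""
    return len(adj[u] & adj[v])

def jaccard(u, v):
    """A second structural index, so base models see different information."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

# Two "base models": here each one just maps a similarity index to [0, 1].
def base_cn(u, v):
    return min(common_neighbors(u, v) / 2.0, 1.0)

def base_jc(u, v):
    return jaccard(u, v)

def stacked_score(u, v):
    """Stacking stand-in: blend base-model scores into one link prediction
    (a real stacker would learn these weights from the base outputs)."""
    return 0.5 * base_cn(u, v) + 0.5 * base_jc(u, v)

# Score only non-edges, i.e. candidate links to predict.
for u, v in combinations(adj, 2):
    if v not in adj[u]:
        print((u, v), round(stacked_score(u, v), 3))
```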
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate wherein the selected value is selected from the group consisting of a predicted link, as suggested in Li, into the combination of Molloy, Gurwicz, Zhao, and Lewis to teach wherein the selected value is selected from the group consisting of a predicted link and a node classification, because Molloy selects the value consisting of a classification (see Molloy, para. 42) and Li selects the value consisting of a predicted link (see Li, Section 3. EMLP algorithm of complex networks, para. 1), so the combination would select the value from the group consisting of a predicted link or a classification.

It would further have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Li into the combination of Molloy, Chen, Gurwicz, Zhao, and Lewis because both systems use ensemble machine learning to predict classifications (see Molloy, para. 42; see Li, Section 3. EMLP algorithm of complex networks, para. 1). Incorporating the teaching of Li into the combination of Molloy, Gurwicz, and Zhao would have provided better stability and accuracy (see Li, Abstract).

Regarding claims 12 and 18: claims 12 and 18 contain substantially similar limitations to those found in claim 5 above. Consequently, claims 12 and 18 are rejected for the same reasons.

Response to Amendment

The amendment to the specification, filed 11/14/2025, has been fully considered and is accepted; the objection to the specification is respectfully withdrawn.

Response to Arguments

Applicant's arguments, filed 11/14/2025, with respect to the indefiniteness of claims 1-20 under 35 U.S.C. 112(b) have been fully considered and are persuasive in part; however, the rejections are partially maintained. Claims 1, 8, and 14 still lack antecedent basis with respect to "the training dataset," as outlined in the 35 U.S.C. 112(b) section above.

Applicant's arguments, filed 11/14/2025, with respect to claims 1-20 under 35 U.S.C. 101 have been fully considered and are persuasive, and the rejections are respectfully withdrawn.

Applicant's arguments, filed 11/14/2025, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Molloy, Chen, Gurwicz, Zhao, and Lewis, as indicated above in the 35 U.S.C. 103 section.

Applicant argues, on pp. 15-17, that none of Molloy, Chen, Gurwicz, Zhao, and Li, taken alone or in combination, teaches "the output is a control signal that controls a product dispensing rate of an operatively coupled device". Examiner agrees. Molloy teaches "the output is a control signal that controls an operatively coupled device", and Lewis et al. (US 2023/0060352 A1) teaches "the output controls a product dispensing rate of an operatively coupled device", as indicated in the 35 U.S.C. 103 section above. Thus, the combination of Molloy and Lewis teaches "the output is a control signal that controls a product dispensing rate of an operatively coupled device".

Applicant argues, on pp. 15-17, that Molloy, Chen, Gurwicz, Zhao, and Li, taken alone or in combination, fail to teach "wherein each GNN in the GNN ensemble has a respective output and each respective output is aggregated by a voting algorithm". Examiner respectfully disagrees. As indicated above in the 35 U.S.C. 103 section, Molloy teaches "wherein each NN in the NN ensemble has a respective output and each respective output is aggregated by a voting algorithm", and Gurwicz teaches a GNN ensemble. Thus, the combination of Molloy and Gurwicz teaches "wherein each GNN in the GNN ensemble has a respective output and each respective output is aggregated by a voting algorithm".
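For context, the disputed limitation, each ensemble member producing its own output that a voting algorithm aggregates, reduces to a simple majority (or plurality) vote in its plainest form. The sketch below shows only that generic aggregation step; the claim language does not specify a particular voting scheme, and the model outputs here are hypothetical placeholders.

```python
from collections import Counter

# Minimal sketch of the claimed aggregation step: each ensemble member
# emits its own output, and a voting algorithm combines those outputs
# into a single ensemble prediction. Plain plurality voting is shown as
# one possible scheme; the outputs below are illustrative placeholders.

def majority_vote(outputs):
    """Return the most frequent output among the ensemble members."""
    return Counter(outputs).most_common(1)[0][0]

# Three hypothetical GNN outputs for one input graph:
print(majority_vote(["link", "link", "no-link"]))  # → link
```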
For at least the aforementioned reasons, the rejections of claims 1-20 under 35 U.S.C. 103 are respectfully maintained.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. KIM et al. (US 20210142806 A1) teaches an AI dispensing system that contains a machine learning model trained to control water dispensing information (water dispensing amount).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAKE BREEN, whose telephone number is (571) 272-0456. The examiner can normally be reached Monday - Friday, 7:00 AM - 3:00 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.T.B./ Examiner, Art Unit 2143
/JENNIFER N WELCH/ Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Dec 27, 2021
Application Filed
Mar 06, 2025
Non-Final Rejection — §103, §112
Jun 02, 2025
Interview Requested
Jun 10, 2025
Examiner Interview Summary
Jun 10, 2025
Applicant Interview (Telephonic)
Jun 13, 2025
Response Filed
Sep 08, 2025
Final Rejection — §103, §112
Nov 14, 2025
Response after Non-Final Action
Dec 05, 2025
Request for Continued Examination
Dec 18, 2025
Response after Non-Final Action
Jan 02, 2026
Non-Final Rejection — §103, §112
Mar 03, 2026
Interview Requested
Mar 17, 2026
Examiner Interview Summary
Mar 17, 2026
Applicant Interview (Telephonic)
Mar 26, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602577
NEURON CORE WITH TIME-EMBEDDED FLOATING POINT ARITHMETIC
2y 5m to grant Granted Apr 14, 2026
Patent 12555650
SYSTEM AND METHOD FOR MOLECULAR PROPERTY PREDICTION USING EDGE-CONDITIONED GRAPH ATTENTION NEURAL NETWORK
2y 5m to grant Granted Feb 17, 2026
Patent 12518136
INFERENCE EXECUTION METHOD FOR CANDIDATE NEURAL NETWORKS AND SWITCHING NEURAL NETWORKS
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+75.0%)
3y 11m
Median Time to Grant
High
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
