Prosecution Insights
Last updated: April 19, 2026
Application No. 17/656,644

Computer Security Systems and Methods Using Self-Supervised Consensus-Building Machine Learning

Final Rejection (§101, §103)
Filed
Mar 26, 2022
Examiner
LEY, SALLY THI
Art Unit
2147
Tech Center
2100 — Computer Architecture & Software
Assignee
Bitdefender IPR Management Ltd.
OA Round
2 (Final)
Grant Probability: 15% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 10m
Grant Probability With Interview: 44%

Examiner Intelligence

Career Allow Rate: 15% (5 granted / 33 resolved; -39.8% vs TC avg)
Interview Lift: +28.8% for resolved cases with interview
Avg Prosecution: 3y 10m (35 currently pending)
Total Applications: 68 across all art units
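As a sanity check, the headline figures above are internally consistent. A minimal sketch of the arithmetic follows; note that deriving the 44% "with interview" rate as base rate plus lift is an assumption about how the dashboard computes it, not something stated on the page.

```python
# Reconstructing the dashboard arithmetic from the figures above.
# Assumption: the "with interview" probability is the career allow
# rate plus the +28.8 point interview lift.

granted, resolved = 5, 33

career_allow_rate = granted / resolved        # shown as 15%
with_interview = career_allow_rate + 0.288    # shown as 44%

print(f"career: {career_allow_rate:.1%}")          # career: 15.2%
print(f"with interview: {with_interview:.1%}")     # with interview: 44.0%
```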

Statute-Specific Performance

§101: 29.2% (-10.8% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 33 resolved cases
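The deltas above can be cross-checked against the statute allow rates. A small sketch, assuming each delta equals the examiner's statute rate minus the Tech Center average; the ~40% average is inferred from these numbers, not stated on the page.

```python
# Cross-checking the "vs TC avg" deltas: each should equal the
# examiner's statute allow rate minus the Tech Center average.
# Working backwards, all four statutes imply a TC average of ~40%.

examiner_rate = {"101": 29.2, "103": 50.2, "102": 10.8, "112": 9.8}
delta_vs_tc   = {"101": -10.8, "103": 10.2, "102": -29.2, "112": -30.2}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # all four come out to 40.0
```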

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims: This Office Action is in response to the communication filed on 30 December 2025. Claims 1-21 are being considered on the merits.

Claim Rejections - 35 USC § 101

Claims 1, 10, and 19: Step 1: Independent claims 1, 10, and 19 recite a method, a computer system, and a non-transitory computer-readable medium, respectively, and therefore each falls under one of the four statutory categories of patent-eligible subject matter.

Step 2A Prong 1: determining a plurality of values of the selected attribute, each value determined by a NN module associated with a distinct edge of the plurality of incoming edges, and (Mental process: determining values is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting a “NN module”, nothing in this claim element precludes the step from practically being performed in the mind. For example, a person can look at nodes and edges and determine a value based on the graph.)
Step 2A Prong 2: This judicial exception is not integrated into a practical application.

A computer security method comprising employing at least one hardware processor of a computer system to: (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

A computer system comprising at least one hardware processor configured to: (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

A non-transitory computer readable medium storing instructions which, when executed by at least one hardware processor of a computer system, cause the computer system to: (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Carry out a consensus-building training of a plurality of artificial neural networks (NN) configured to evaluate a plurality of attributes of a set of training data, (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

wherein each NN of the plurality of artificial NNs is configured to evaluate one attribute of the plurality of attributes according to another attribute of the plurality of attributes, (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

wherein each of a selected subset of the plurality of NNs is configured to evaluate a selected attribute of the plurality of attributes according to a distinct attribute of the plurality of attributes, and wherein the consensus-building training comprises: (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

adjusting a set of parameters of the selected subset of NNs according to a measure of consensus of the plurality of values; and (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Executing the selected subset of NNs to determine a plurality of values of the selected attribute of the training data, each value of the plurality of values computed by a distinct NN of the selected subset of NNs, and (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

in response to carrying out the consensus-building training, transmit adjusted values of the set of parameters to a threat detector configured to employ trained instances of the plurality of artificial NNs to determine whether a set of target data is indicative of a computer security threat.
(Insignificant extra-solution activity to the judicial exception: Receiving or transmitting data over a network – see MPEP § 2106.05(g))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

A computer security method comprising employing at least one hardware processor of a computer system to: (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

A computer system comprising at least one hardware processor configured to: (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

A non-transitory computer readable medium storing instructions which, when executed by at least one hardware processor of a computer system, cause the computer system to: (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Carry out a consensus-building training of a plurality of artificial neural networks (NN) configured to evaluate a plurality of attributes of a set of training data, (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

wherein each NN of the plurality of artificial NNs is configured to evaluate one attribute of the plurality of attributes according to another attribute of the plurality of attributes, (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

wherein each of a selected subset of the plurality of NNs is configured to evaluate a selected attribute of the plurality of attributes according to a distinct attribute of the plurality of attributes, and wherein the consensus-building training comprises: (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

adjusting a set of parameters of the selected subset of NNs according to a measure of consensus of the plurality of values; and (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Executing the selected subset of NNs to determine a plurality of values of the selected attribute of the training data, each value of the plurality of values computed by a distinct NN of the selected subset of NNs, and (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

in response to carrying out the consensus-building training, transmit adjusted values of the set of parameters to a threat detector configured to employ trained instances of the plurality of artificial NNs to determine whether a set of target data is indicative of a computer security threat. (Insignificant extra-solution activity: Receiving or transmitting data over a network is well-understood, routine, conventional activity – see Berkheimer evidence, MPEP § 2106.05(d))

Claims 2 and 11: Step 2A Prong 1: See the rejection of claims 1 and 10 above.
The same rationale applies to this dependent claim.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.

wherein the consensus-building training comprises adjusting the set of parameters to bring the plurality of values of the selected attribute closer together. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the consensus-building training comprises adjusting the set of parameters to bring the plurality of values of the selected attribute closer together. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Claims 3 and 12: Step 2A Prong 1: See the rejection of claims 1 and 10 above. The same rationale applies to this dependent claim.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.

wherein the measure of consensus is determined according to a distance between each value of the plurality of values and a reference value of the selected attribute. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the measure of consensus is determined according to a distance between each value of the plurality of values and a reference value of the selected attribute. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Claims 4 and 13: Step 2A Prong 1: See the rejection of claims 3 and 12 above. The same rationale applies to this dependent claim.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.

wherein the reference value of the selected attribute is determined by an expert model according to the training data, the expert model distinct from the selected subset of NNs. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the reference value of the selected attribute is determined by an expert model according to the training data, the expert model distinct from the selected subset of NNs. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Claims 5 and 14: Step 2A Prong 1: See the rejection of claims 3 and 12 above. The same rationale applies to this dependent claim.

wherein the reference value comprises a selected value of the plurality of values. (Mental process: Determining a measure of consensus is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind; nothing in this claim element precludes the step from practically being performed in the mind. For example, a person can determine a measure of consensus after looking at distances between values.)
Step 2A Prong 2 and Step 2B: The claim does not include additional elements.

Claims 6 and 15: Step 2A Prong 1: See the rejection of claims 1 and 10 above. The same rationale applies to this dependent claim.

wherein the reference value of the selected attribute comprises an average of the plurality of values. (Mental process: Determining a measure of consensus is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind; nothing in this claim element precludes the step from practically being performed in the mind. For example, a person can determine a measure of consensus after looking at distances between values.)

Step 2A Prong 2 and Step 2B: The claim does not include additional elements.

Claims 7 and 16: Step 2A Prong 1: See the rejection of claims 1 and 10 above. The same rationale applies to this dependent claim.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.

further comprising employing at least one hardware processor of the computer system to execute the threat detector. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

further comprising employing at least one hardware processor of the computer system to execute the threat detector. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Claims 8 and 17: Step 2A Prong 1: See the rejection of claims 1 and 10 above. The same rationale applies to this dependent claim.

wherein the set of target data comprises a web page, and wherein the threat detector is configured to determine whether the web page comprises malicious content. (Mental process: Determining whether a web page comprises malicious content is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting a “threat detector” and a “web page”, nothing in this claim element precludes the step from practically being performed in the mind. For example, a person can review a web page and make a determination about whether it comprises malicious content.)

Step 2A Prong 2 and Step 2B: The claim does not include additional elements.

Claims 9 and 18: Step 2A Prong 1: See the rejection of claims 1 and 10 above. The same rationale applies to this dependent claim.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.

wherein the set of target data comprises an indicator of a computer process, and wherein the threat detector is configured to determine whether the computer process comprises malware. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the set of target data comprises an indicator of a computer process, and wherein the threat detector is configured to determine whether the computer process comprises malware. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Claims 20 and 21: Step 2A Prong 1: See the rejection of claims 1 and 10 above. The same rationale applies to this dependent claim.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.

wherein the plurality of artificial NNs are interconnected to form a graph having a plurality of nodes and a plurality of edges connecting the plurality of nodes, wherein each node of the plurality of nodes represents a distinct attribute of the plurality of attributes, wherein each of the plurality of edges represents a distinct NN of the plurality of artificial NNs. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

wherein the plurality of artificial NNs are interconnected to form a graph having a plurality of nodes and a plurality of edges connecting the plurality of nodes, wherein each node of the plurality of nodes represents a distinct attribute of the plurality of attributes, wherein each of the plurality of edges represents a distinct NN of the plurality of artificial NNs. (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f))

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9-16, and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2021/0067549 A1; hereinafter, “Chen”) in view of Beser et al. (US 2019/0012595 A1; hereinafter, “Beser”).

Claims 1, 10, and 19: Chen and Beser disclose:

A computer security method comprising employing at least one hardware processor of a computer system to: (Chen, para. 0005: “A system for detecting and responding to an intrusion in a computer network includes a hardware processor and a memory.”)

Carry out a consensus-building training of a plurality of artificial neural networks (NN) configured to evaluate a plurality of attributes of a set of training data, (Beser, para. 0005 and 0022: “Systems, methods, and computer readable mediums (collectively, the “system”) are disclosed for the consensus and updating of computing models for artificial neural networks.” “In various embodiments, computing model updates and divergences may not all come from the same location or computing node.
Merging of updates to a computing model may be based on the input from multiple distributed observations. The updates may be shared as training data or as differences to the computing model based on problematic or new data. One or more processing nodes may receive the updates in different orders”)

wherein each NN of the plurality of artificial NNs is configured to evaluate one attribute of the plurality of attributes according to another attribute of the plurality of attributes, (Beser, para. 0048: “In various embodiments, each node 110-1, 110-2, 110-3, and/or validation node 130-1, 130-2 may each be assigned cryptographic keys (e.g., asymmetric keys) used to digitally sign and/or encrypt transmissions in system 100.” Examiner notes Beser teaches a cryptographic key as an attribute assigned to one node being evaluated in accordance with an asymmetric key of another node).

wherein each of a selected subset of the plurality of NNs is configured to evaluate a selected attribute of the plurality of attributes according to a distinct attribute of the plurality of attributes, and (Beser, para. 0005, 0019, and 0027: “Systems, methods, and computer readable mediums (collectively, the “system”) are disclosed for the consensus and updating of computing models for artificial neural networks. The neural network may comprise one or more nodes and one or more validation nodes.” “In various embodiments, not all nodes in the ANN need to participate in the validation. The nodes participating in the validation accept or reject the new computing model (e.g., full model, difference set, etc.), The proposed model would consist of a combination of the received updates from peer validation nodes.” “In various embodiments, system 100 may comprise one or more computing nodes 110 (e.g., a first node 110-1, a second node 110-2, a third node 110-3, etc.) and one or more validation nodes 130 (e.g., a first validation node 130-1, a second validation node 130-2, etc.)
Examiner notes Beser teaches a subset of neural networks being validation nodes which are configured to evaluate computing neural networks’ update events, i.e., attributes.)

wherein the consensus-building training comprises: (Beser, para. 0005 and 0022 above teaches a consensus-building training of neural networks)

Executing the selected subset of NNs to determine a plurality of values of the selected attribute of the training data, (Beser, para. 0019: “In various embodiments, not all nodes in the ANN need to participate in the validation. The nodes participating in the validation accept or reject the new computing model (e.g., full model, difference set, etc.), The proposed model would consist of a combination of the received updates from peer validation nodes.” Beser teaches a set of validation nodes determining a plurality of values comprising a new computing model),

each value of the plurality of values computed by a distinct NN of the selected subset of NNs, and (Beser, para. 0035: “In various embodiments, and with reference to FIG. 2, a system 200 may comprise a plurality of validation networks, with each validation network comprising (and/or sharing) one or more validation nodes. In that respect, each validation network may be configured to validate and update different computing models and/or computing sub-models.” Examiner notes Beser teaches each validation node validating different computing models or sub-models).

adjusting a set of parameters of the selected subset of NNs according to a measure of consensus of the plurality of values; and (Beser, para. 0005 and 0057: “The neural network may comprise one or more nodes and one or more validation nodes. The nodes may each comprise a computing model. The validation node may receive model update data corresponding to the computing model. The validation node may validate the model update data by establishing consensus with at least a second validation node in the neural network.
The validation node may write the model update data to a model blockchain. The validation node may generate an updated computing model based on the model update data. The validation node may broadcast the updated computing model to a first node in the neural network.” “In various embodiments, validation node 130 generates an updated computing model (step 314) based on the model update data. For example, validation node 130 may be configured to generate the update computing model by merging the preexisting computing model with the model update data. For example, the preexisting computing model may be recomputed or retrained using the new training data, or parameters of the preexisting computing model may be changed based on the model update data.” Examiner notes Beser teaches only sending an update for parameters validated by validation nodes to computing nodes, and Beser teaches validation nodes validating by consensus).

in response to carrying out the consensus-building training (Beser, para. 0005 and 0022 above teaches a consensus-building training of neural networks), transmit adjusted values of the set of parameters to a threat detector configured to employ trained instances of the plurality of artificial NNs (Beser, para. 003 teaches a plurality of artificial neural networks) to determine whether a set of target data is indicative of a computer security threat (Chen, para. 0025 and 0035: “GNN-based intrusion detection 44, which uses a trained GNN to detect anomalous behavior in network gathered network information” “In some embodiments, given a graph g(0)=(X(0), A(0)), with node features X(0), corresponding node labels y(0) and A, a classifier f(0) can be trained to distinguish between benign and anomalous nodes. Adversarial samples can be generated, and a perturbed graph G1=(X(1), A(1)) can be constructed. The labels of an attacked node are changed from benign to anomaly.” Examiner notes that each NN module is a node, and Chen teaches nodes being updated and changed to determine whether the features, i.e., data, constitute an anomaly, i.e., a threat).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Beser into Chen. Chen teaches detecting and responding to an intrusion in a computer network including generating an adversarial training data set that includes original samples and adversarial samples, by perturbing one or more of the original samples with an integrated gradient attack to generate the adversarial samples; Beser teaches consensus and updating of computing models for artificial neural networks. One of ordinary skill would have been motivated to combine the teachings of Beser into Chen in order to prevent a malicious actor or third party from inserting malicious code or incorrect models that may be detrimental to the overall operation of the models and/or system (Beser, para. 0024).

Claims 2 and 11: Chen, as modified, teaches claims 1 and 10 above. Chen, as modified, further discloses:

wherein the consensus-building training (Beser, para. 0005 and 0022 above teaches a consensus-building training of neural networks) comprises adjusting the set of parameters to bring the plurality of values of the selected attribute closer together. (Chen, para. 0042: “This formulation includes a discriminator function S and an adversarial sample generator function G. Contrasted to a generative adversarial network (GAN), which generates adversarial samples that are close to the original samples, the present generator function G may generate adversarial samples as hard negative samples. These hard negative samples can help the system to learn a better discriminator for distinguishing the positive and negative pairs.
The functions S and G are trained in a joint manner, adjusting the parameters of G to maximize log(1−S(G(x+|y−))), and adjusting the parameters of S to minimize log S(x, y).”)

Claims 3 and 12: Chen, as modified, teaches claims 1 and 10 above. Chen further discloses:

wherein the measure of consensus is determined according to a distance between each value of the plurality of values and a reference value of the selected attribute. (Chen, para. 0042: “This formulation includes a discriminator function S and an adversarial sample generator function G. Contrasted to a generative adversarial network (GAN), which generates adversarial samples that are close to the original samples, the present generator function G may generate adversarial samples as hard negative samples. These hard negative samples can help the system to learn a better discriminator for distinguishing the positive and negative pairs. The functions S and G are trained in a joint manner, adjusting the parameters of G to maximize log(1−S(G(x+|y−))), and adjusting the parameters of S to minimize log S(x, y).” Examiner notes that Chen teaches the distance between the positive sample and a generated sample to teach a GAN, and where such distance can be applied to the measure of consensus via validation taught by Beser by determining whether the distance is small enough to be valid).

Claims 4 and 13: Chen, as modified, teaches claims 3 and 12 above. Chen further discloses:

wherein the reference value of the selected attribute is determined by an expert model according to the training data, the expert model distinct from the selected subset of NNs. (Chen, para. 0087: “Given a sequence of attributed graphs 1100 for a set of nodes 1104, where each node 1104 has a unique class label over a period of time, the present embodiments predict the labels of unlabeled nodes 1102 by learning from the labeled ones. In some embodiments, this can be used to detect anomalous network traffic. Given a network's historical records, a sequence of communication graphs can be constructed, where each node is a computational device and each edge indicates a communication. Each node can be associated with characteristic features, such as a network address (e.g., an IP address or MAC address) and a device type. The present embodiments can then classify the nodes into an anomalous class and a normal class, using the labeled historical data. The labeled network graph can then be used to identify anomalous behavior that may, for example, be indicative of a network failure or intrusion.” Examiner notes that Chen teaches a graph comprised of labeled historical records, i.e., the expert model, where unlabeled nodes are labeled, i.e., values determined according to the historical records, which expert model is distinct from a consensus validation model as taught by Beser).

Claims 5 and 14: Chen, as modified, teaches claims 3 and 12 above. Chen further discloses:

wherein the reference value comprises a selected value of the plurality of values. (Chen, para. 0087: “Given a sequence of attributed graphs 1100 for a set of nodes 1104, where each node 1104 has a unique class label over a period of time, the present embodiments predict the labels of unlabeled nodes 1102 by learning from the labeled ones. In some embodiments, this can be used to detect anomalous network traffic. Given a network's historical records, a sequence of communication graphs can be constructed, where each node is a computational device and each edge indicates a communication. Each node can be associated with characteristic features, such as a network address (e.g., an IP address or MAC address) and a device type. The present embodiments can then classify the nodes into an anomalous class and a normal class, using the labeled historical data.
The labeled network graph can then be used to identify anomalous behavior that may, for example, be indicative of a network failure or intrusion.” Examiner notes that Chen teaches a graph comprised of labeled historical records wherein each node can be any type of feature of a plurality of features i.e. can be any label of a plurality of labels) Claims 6 and 15, Chen, as modified, teaches claims 3 and 12 above. Chen further discloses: wherein the reference value of the selected attribute comprises an average of the plurality of values. (Chen, para. 0048: “Neighborhood aggregators of GNNs may include sum, max, and mean functions. The sum aggregator sums up the features within the neighborhood N0 to capture the full neighborhood. The max aggregator generates the aggregated representation by element-wise max-pooling. It captures neither the exact structure, nor the distribution of the neighborhood Nv. As the size of the neighborhood Nv can vary from one node to the next, the mean aggregator averages out individual element features. As contrasted to the sum and max aggregators, the mean aggregator can capture the distribution of the features in the neighborhood Nv.”) Claims 7 and 16, Chen, as modified, teaches claims 1 and 10 above. Chen further discloses: further comprising employing at least one hardware processor (Chen, para. 0060: “A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus.”) of the computer system to execute the threat detector. (Chen, para. 0017 and Fig. 10: “FIG. 10 is a block diagram of an intrusion detection system that uses a graph neural network intrusion detection model, trained using adversarial samples, to identify and respond to anomalous network nodes in a manner that is resistant to adversarial training attacks;”) Claims 9 and 18, Chen, as modified, teaches claims 1 and 10 above. 
Chen further discloses: wherein the set of target data comprises an indicator of a computer process, and wherein the threat detector (Chen, para. 0025: “GNN-based intrusion detection 44, which uses a trained GNN to detect anomalous behavior in network gathered network information”) is configured to determine whether the computer process comprises malware. (Chen, para. 0026: “For example, network detectors monitor the topology of network connections and report an alert if a suspicious client suddenly connects to a stable server. Meanwhile, process-file detectors may generate an alert if an unseen process accesses a sensitive file.”)

Regarding claims 20 and 21, Chen, as modified, teaches claims 1 and 10 above. Chen further discloses: wherein the plurality of artificial NNs (Beser, para. 0005: “Systems, methods, and computer readable mediums (collectively, the “system”) are disclosed for the consensus and updating of computing models for artificial neural networks.”) are interconnected to form a graph having a plurality of nodes and a plurality of edges connecting the plurality of nodes, wherein each node of the plurality of nodes represents a distinct attribute of the plurality of attributes, wherein each of the plurality of edges represents a distinct NN of the plurality of artificial NNs. (Chen, fig. 11 and para. 0082: “The graph 1100 captures the topological structure of a dynamic network of objects, represented as nodes 1104. As noted above, in some embodiments, such objects may represent physical objects in, e.g., a physical system. In some embodiments, the objects 1104 may represent atoms or ions in a molecule. In yet other embodiments, the objects 1104 may represent computing systems within a communications network.” Examiner notes that Chen teaches a graph structure consisting of nodes and edges comprised of any objects, including computing systems, where Beser teaches neural networks on a computing system.)

Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chen, in view of Beser, and further in view of L. Ouyang and Y. Zhang (“Phishing Web Page Detection with HTML-Level Graph Neural Network,” 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Shenyang, China, 2021, pp. 952-958, doi: 10.1109/TrustCom53373.2021.00133; hereinafter, “Ouyang”).

Regarding claims 8 and 17, Chen, as modified, teaches claims 1 and 10 above. Chen further discloses: wherein the set of target data comprises a web page, and wherein the threat detector (Chen, para. 0025: “GNN-based intrusion detection 44, which uses a trained GNN to detect anomalous behavior in network gathered network information”) is configured to determine whether the web page comprises malicious content. (Ouyang, Sec. I: “We propose a novel graph neural network-based phishing web page detection method. Our proposed method effectively utilize the inherent tree structure of HTML to improve detection accuracy by combining the advantages of RNN in extracting local features and the advantages of GNN in capturing long-range semantics.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Ouyang into Chen, as modified. Ouyang teaches a GNN that models the long-range relations between nodes based on these local features and the graph structure. One of ordinary skill would have been motivated to combine the teachings of Ouyang into Chen, as modified, in order to achieve a significant accuracy improvement in network-based phishing web page detection (Ouyang, sec. I).

Response to Applicant's Remarks/Arguments

Starting on page 11 of applicant's remarks, applicant argues that the claims traverse the 35 USC § 101 rejection because they represent an improvement to a computer technology, and particularly computer security.
Applicant argues that the claimed system provides a multi-faceted view of the training data and removes the burden of annotating the training data. However, the claim limitations do not recite anything in particular regarding supervised or unsupervised learning, i.e., annotation of training data. Similarly, the claims do not recite any limitations particular to computer security. Instead, the limitations largely recite generic computer processes.

At the bottom of page 12 of applicant's remarks, applicant further argues that the consensus-building training substantially reduces the computational costs of training. However, none of the claim limitations recites particular processes that reduce computational costs of training; rather, the limitations largely recite generic computer processes.

At the top of page 12 of applicant's remarks, applicant argues that none of the claim limitations can be practically performed in the human mind. However, some of the claim limitations, including steps such as selecting a value, can be performed within the human mind.

Toward the bottom of page 13 of applicant's remarks, applicant argues that the claims as amended traverse the 35 U.S.C. § 103 rejection. In particular, applicant asserts that the prior art, Chen in view of Beser, does not teach “multiple neural networks computing the same end attribute according to distinct start attributes.” However, as set forth above, Beser does teach, at paragraph 0035, validation nodes each validating different computing models or sub-models such that each different sub-model combines into the same whole model.

Toward the bottom of page 15 of applicant's remarks, applicant further argues that Beser does not teach using consensus in training: “adjusting NN parameters according to the consensus of values.” However, the broadest reasonable interpretation of “consensus” is simply general agreement. Moreover, adjusting neural network parameters is the process of training a neural network.
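Under that broadest reasonable interpretation (consensus as general agreement among the values), the disputed limitation can be illustrated numerically. The sketch below is an editorial illustration only: the function names, the averaged reference value (cf. claims 6 and 15), the distance-based measure of consensus (cf. claims 3 and 12), and the proportional update rule are all assumptions, not the claimed method or Beser's disclosure.

```python
# Hypothetical sketch of "adjusting NN parameters according to the
# consensus of values". All names and the update rule are illustrative.

def consensus_reference(values):
    # Reference value of the selected attribute: average of the values.
    return sum(values) / len(values)

def consensus_measure(values, reference):
    # Mean distance to the reference; smaller = stronger consensus.
    return sum(abs(v - reference) for v in values) / len(values)

def adjust_toward_consensus(values, rate=0.5):
    # Nudge each module's output value toward the reference. In a real
    # system this nudge would drive a parameter update of each NN module.
    ref = consensus_reference(values)
    return [v + rate * (ref - v) for v in values]

values = [0.2, 0.4, 0.9]            # outputs of three NN modules
ref = consensus_reference(values)   # 0.5
adjusted = adjust_toward_consensus(values)
assert consensus_measure(adjusted, ref) < consensus_measure(values, ref)
```

On these toy numbers the mean distance to the reference drops from about 0.267 to about 0.133 after one adjustment, i.e., the values move into closer general agreement.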
Applicant argues that Beser does not teach consensus in training or adjusting neural network parameters. However, Beser teaches both consensus and parameter adjustment in accordance with the claims' broadest reasonable interpretations.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sally T. Ley, whose telephone number is (571) 272-3406. The examiner can normally be reached Monday - Thursday, 10:00am - 6:00pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STL/
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147

Prosecution Timeline

Mar 26, 2022
Application Filed
Aug 04, 2025
Non-Final Rejection — §101, §103
Dec 30, 2025
Response Filed
Jan 24, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443830
COMPRESSED WEIGHT DISTRIBUTION IN NETWORKS OF NEURAL PROCESSORS
Granted Oct 14, 2025 (2y 5m to grant)
Patent 12135927
EXPERT-IN-THE-LOOP AI FOR MATERIALS DISCOVERY
Granted Nov 05, 2024 (2y 5m to grant)
Patent 11880776
GRAPH NEURAL NETWORK (GNN)-BASED PREDICTION SYSTEM FOR TOTAL ORGANIC CARBON (TOC) IN SHALE
Granted Jan 23, 2024 (2y 5m to grant)
Based on 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
15%
Grant Probability
44%
With Interview (+28.8%)
3y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
