Prosecution Insights
Last updated: April 19, 2026
Application No. 16/710,266

ROBUST RECURRENT ARTIFICIAL NEURAL NETWORKS

Non-Final OA — §103
Filed: Dec 11, 2019
Examiner: NGUYEN, HENRY K
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Inait SA
OA Round: 7 (Non-Final)
Grant Probability: 57% (Moderate)
Expected OA Rounds: 7-8
Time to Grant: 4y 7m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 57% (90 granted / 158 resolved; +2.0% vs TC avg)
Interview Lift: +31.4% (strong), measured across resolved cases with interview
Avg Prosecution: 4y 7m typical timeline; 26 applications currently pending
Career History: 184 total applications across all art units
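
The headline figures can be cross-checked from the counts shown. Below is a minimal sketch in Python, assuming (as the page implies) that the career allow rate is simply granted over resolved and that the interview lift is additive percentage points on top of that baseline; it is not the vendor's actual calculation.

```python
# Hypothetical cross-check of the dashboard figures above.
granted, resolved = 90, 158
career_allow_rate = granted / resolved                # ~0.570 -> displayed as 57%
with_interview = 0.88                                 # displayed "With Interview" probability
interview_lift = with_interview - career_allow_rate   # ~+0.31; the displayed +31.4% likely uses unrounded inputs

print(f"Career allow rate: {career_allow_rate:.1%}")  # 57.0%
print(f"Interview lift:    {interview_lift:+.1%}")    # +31.0%
```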

Statute-Specific Performance

§101: 21.6% (-18.4% vs TC avg)
§103: 51.4% (+11.4% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 158 resolved cases

Office Action

§103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/24/2025 has been entered. Response to Arguments Applicant’s arguments with respect to claim(s) 1-2, 4-7, 14-19, 21-24, 26-31, and 33-36 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant argues: Pogorelik does not include background signal transmission activity that is not responsive to input data. Examiner response: Examiner respectfully disagrees. Pogorelik discloses randomly firing neurons (i.e., background transmission activity) without needing the input data (para [0397] “Accordingly, the present disclosure provides an inference system that enables efficient activation without requiring computation to be done on the input data during inference time, leading to a data agnostic system.” Para [0402] “The remining (second parameter−first parameter) activation percent may be fulfilled by randomly firing the required number of neurons. This method can enable efficient activations, yet no computation is done during inference time on the data itself (hence data agnostic).”). Arguments are not persuasive. Applicant argues: Pogorelik’s random firing nodes do not introduce a degree of variability into transmissions of information between deterministically defined nodes. Examiner response: Examiner respectfully disagrees. Randomly firing a neuron introduces a degree of variability as it is not known whether the neuron will transmit a signal or not, therefore the transmissions between the nodes have a degree of variability. Arguments are not persuasive. Allowable Subject Matter Claims 14-19, 21-23, and 33 are allowed. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. 
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “an output configured to” in claim 1. “an output configured to” in claim 14. “an output configured to” in claim 24. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Sufficient structural support and corresponding function appears to be supported on pg. 10 lines 10-16, pg. 11 lines 16-27, and pg. 
25 lines 28-30 of Applicant’s specification filed 12/11/2019. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-2, 6, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Rawal et al. (US-20190180187-A1) in view of Pogorelik et al. (US-20210319098-A1) and Hoffmann (US-20190286074-A1). Regarding Claim 1, Rawal teaches a system comprising: a plurality of nodes and links arranged in a trained recurrent artificial neural network (para [0132] “For example in case of recurrent networks, the training time would be cut down to one validation loss instead of 40th.”), wherein during inference the trained recurrent artificial neural network is configured to transmit information between individual nodes in response to an input of new input data into one or more layers of nodes that are distributed throughout the trained recurrent artificial neural network (para “As illustrated in FIG. 10C, a recurrent node usually has two types of outputs. The first output is the main recurrent output at a given time, represented as h(t) and the second output is a native memory cell output at a given time c(t). The value of h(t) is weighted and fed (propagated) to three locations: (1) to a higher layer of the network at the same time step (e.g., RNN layer 2), (2) to other nodes in the network (e.g., RNN layer 1) at the next time step, and (3) to the node itself at the next time step. 
Before propagation, h(t) is combined with weighted activations from the previous layer, such as input word embeddings in language modeling, to generate eight node inputs (termed as base eight by Zoph et al.).”), wherein i) the plurality of nodes and links arranged in the trained recurrent artificial neural network (para [0130] “Standard recurrent networks consist of layers formed by repetition of a single type of node.” And para [0132] “In both node and network architecture search, it can about two hours to fully train a network until 40 epochs.” Network is trained.) are deterministically defined (para [0060] “The fully-connected neural network module 4 has the following local topology hyperparameters—number of neurons in each neuron layer, number of neuron layers (e.g., L1, L2, Ln), and interconnections and interconnection weights between the neurons.” Number of neurons (i.e. nodes) and interconnections (i.e. links) are predefined in the hyperparameter (i.e. deterministic).), and Rawal does not explicitly disclose ii) the trained recurrent artificial neural network includes background signal transmission activity that is not responsive to the input of the new input data into the one or more layers of nodes that are distributed throughout the trained recurrent artificial neural network, wherein the background signal transmission activity is present prior to the input of the new input data in an absence of the new input data, wherein the background signal transmission activity introduces a degree of variability into transmissions of information between the deterministically defined nodes and causes either transmissions of information along the deterministically-defined links to be non-deterministic or decisions at the deterministically-defined nodes to be non- deterministic; and an output configured to output indications of occurrences of topological patterns of signal transmission activity in the trained recurrent artificial neural network that are responsive to new input data, wherein the topological patterns represent a result of information processing by the trained recurrent artificial neural network in response to the new input data and the output indications are non-deterministic. However, Pogorelik (US 20210319098 A1) teaches ii) the trained recurrent artificial neural network (para [0402] “Also, this method may be applied post-hoc to already trained models and doesn't require fine tuning.” And para [0561] discloses the embodiments may be applied to a recurrent neural network.) includes background signal transmission activity that is not responsive to the input of the new input data into the one or more layers of nodes that are distributed throughout the trained recurrent artificial neural network (para [0402] “For inference at runtime, circuitry 6210 may execute instructions 6228 to analyze the activation statistics 6223. The actual activation operation at inference time may be composed of passing all neurons that are in the corresponding saved locations map (e.g., activation maps 6231) for the processed layer. 
The remining (second parameter−first parameter) activation percent may be fulfilled by randomly firing the required number of neurons.”), wherein the background signal transmission activity is present prior to the input of the new input data in an absence of the new input data (para [0397] “Accordingly, the present disclosure provides an inference system that enables efficient activation without requiring computation to be done on the input data during inference time, leading to a data agnostic system.”), wherein the background signal transmission activity introduces a degree of variability into transmissions of information between the deterministically defined nodes (para [0324] “In general, nodes 4891 receive input(s) via connections 4893 and derive an output based on an activation function. Any of a variety of activation functions (e.g., identity, binary step, tangent, arc-tangent, sigmoid, logistic or soft sigmoid, gaussian, or the like) can be used.” The nodes are deterministically defined by an activation function.) and causes either transmissions of information along the deterministically-defined links to be non-deterministic or decisions at the deterministically-defined nodes to be non-deterministic (para [0402] “The remining (second parameter−first parameter) activation percent may be fulfilled by randomly firing the required number of neurons.” Decision to randomly fire (i.e. non-deterministic transmission of information).); and …wherein the topological patterns represent a result of information processing by the trained recurrent artificial neural network in response to the input data (para [0400] “For example, first, circuitry 6210 can execute instructions 6228 to take inference model 6222 (e.g., a trained deep neural network (DNN) model) and the input data 6224 (e.g., a training dataset with samples) and initializing an empty activation matrix 6229 that corresponds with each output feature map 6225 for all the layers. Typically, each training sample in the input data 6224 may be forward-propagated while all the activations for all the layers are accumulated in the corresponding activation matrix 6229 to produce activation maps 6231. In various examples, activation statistics 6223 may include the completed activation matrices 6229 and/or activation maps.” Activation map (i.e. topological pattern).) and the output indications are non-deterministic (para [0402] randomly firing neurons (i.e. non-deterministic).). Rawal and Pogorelik are analogous because they are both directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Hoffmann with the random activations of Pogorelik. Doing so would allow for efficient activations requiring little overhead storage while preserving accuracy and may run orders of magnitude faster than approximation-based activations (Pogorelik para [0402]). Hoffmann teaches an output configured to output indications of occurrences of topological patterns of signal transmission activity in the trained recurrent artificial neural network that are responsive to new input data (para [0063] describes firing neurons (i.e. 
signal transmission activity) resulting in a pattern of activated neurons.), wherein the topological patterns represent a result of information processing by the trained recurrent artificial neural network in response to the new input data (para [0059] “As a preparatory step, signatures of known objects are stored in a database for later use For identification, a new signature of an object is presented, which is processed 306 to generate neural activation patterns 308 using, for example, a neural network with Gaussian activations functions (tuning curves as described below) to map the input data provided in several real-valued variables onto binary neural activation patterns.” Activation pattern (i.e. output configured to output indications of topological patterns).) and the output indications are non-deterministic (para [0063] “With a given probability, p.sub.S, e.g., p.sub.S=0.1, a directed connection is formed from an active neuron 600 in the input layer 400 to a hidden neuron 402: for each pair of active neuron and element in the set of h hidden neurons, a connection 602 is formed if a uniform random number in the interval [0,1) is smaller than p.sub.S.” random (i.e., non-deterministic.). Rawal and Hoffmann are analogous because they are directed towards the field of recurrent neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the RNN of Rawal with the method of storing topological activation patterns of Hoffmann. Doing so would allow for efficiently storing and retrieving neural network activation patterns for object recognition tasks (Hoffmann para [0069]). Regarding Claim 2, Hoffmann, Rawal, and Pogorelik teach the system of claim 1. Hoffman further teaches wherein decision thresholds of the nodes have a degree of randomness (para [0063] With a given probability, p.sub.S, e.g., p.sub.S=0.1, a directed connection is formed from an active neuron 600 in the input layer 400 to a hidden neuron 402: for each pair of active neuron and element in the set of h hidden neurons, a connection 602 is formed if a uniform random number in the interval [0,1) is smaller than p.sub.S. A connection is formed between neurons (i.e. nodes) if a random number is less than a threshold.). Rawal and Hoffmann are analogous because they are both directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Hoffmann with the random activations of Pogorelik. Doing so would allow for efficient activations requiring little overhead storage while preserving accuracy and may run orders of magnitude faster than approximation-based activations (Pogorelik para [0402]). Regarding Claim 6, Rawal, Pogorelik, and Hoffmann teach the system of claim 1. Hoffmann further teaches comprising an application trained to process the indications of the occurrences of topological patterns of signal transmission activity (para [0059] For identification, a new signature of an object is presented, which is processed 306 to generate neural activation patterns 308 using, for example, a neural network with Gaussian activations functions (tuning curves as described below) to map the input data provided in several real-valued variables onto binary neural activation patterns.), Rawal, Pogorelik, and Hoffmann are analogous because they are both directed to the same field of endeavor of neural networks. 
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Hoffmann with the random activations of Pogorelik. Doing so would allow for efficient activations requiring little overhead storage while preserving accuracy and may run orders of magnitude faster than approximation-based activations (Pogorelik para [0402]). Pogorelik further teaches an application trained to process the indications of the occurrences of topological patterns of signal transmission activity (para [0244] Discloses that the inference model (i.e. application) is used to process activation maps/network activations maps which indicates patterns of signal transmission activity. para [0400] “For example, first, circuitry 6210 can execute instructions 6228 to take inference model 6222 (e.g., a trained deep neural network (DNN) model) and the input data 6224 (e.g., a training dataset with samples) and initializing an empty activation matrix 6229 that corresponds with each output feature map 6225 for all the layers. Typically, each training sample in the input data 6224 may be forward-propagated while all the activations for all the layers are accumulated in the corresponding activation matrix 6229 to produce activation maps 6231. In various examples, activation statistics 6223 may include the completed activation matrices 6229 and/or activation maps.” Activation map (i.e. topological pattern).), wherein the application is trained using the non-deterministic output indications from the trained recurrent artificial neural network (para [0413] The present disclosure provides using additional inference models (referred to as “adversarial models” or “adversaries”) during training where the main inference model is trained based on feedback from the adversarial models. The main inference model (i.e. application) is trained based on the feedback resulting from the non-deterministic outputs of the adversarial models (i.e. trained recurrent neural network). para [0400]-[0401] The inference model randomly fires its neuron creating a non-deterministic output. The main inference model (i.e. application) is trained using the adversarial model which has non-deterministic outputs (i.e. trained recurrent neural network).). Rawal, Pogorelik, and Hoffmann are analogous because they are both directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the machine learning model of Hoffmann with the adversarial networks of Pogorelik. Doing so would allow for hardening the neural network by creating adversarial inputs designed to “fool” the neural network. This would allow for the network to recognize such attacks making the neural network more resilient to malicious input (Pogorelik para [0098]). Regarding Claim 24, Rawal teaches a system comprising: a plurality of deterministically defined nodes and deterministically defined links arranged in a trained recurrent artificial neural network (para [0130] “Standard recurrent networks consist of layers formed by repetition of a single type of node.” And para [0132] “In both node and network architecture search, it can about two hours to fully train a network until 40 epochs.” Network is trained.) 
are deterministically defined (para [0060] “The fully-connected neural network module 4 has the following local topology hyperparameters—number of neurons in each neuron layer, number of neuron layers (e.g., L1, L2, Ln), and interconnections and interconnection weights between the neurons.” Number of neurons (i.e. nodes) and interconnections (i.e. links) are predefined in the hyperparameter (i.e. deterministic).), wherein during inference the trained recurrent artificial neural network (para [0132] “For example in case of recurrent networks, the training time would be cut down to one validation loss instead of 40th.”) is configured to transmit information between individual nodes in response to an input of new input data into the trained recurrent artificial neural network (para “As illustrated in FIG. 10C, a recurrent node usually has two types of outputs. The first output is the main recurrent output at a given time, represented as h(t) and the second output is a native memory cell output at a given time c(t). The value of h(t) is weighted and fed (propagated) to three locations: (1) to a higher layer of the network at the same time step (e.g., RNN layer 2), (2) to other nodes in the network (e.g., RNN layer 1) at the next time step, and (3) to the node itself at the next time step. Before propagation, h(t) is combined with weighted activations from the previous layer, such as input word embeddings in language modeling, to generate eight node inputs (termed as base eight by Zoph et al.).”) and Rawal does not explicitly disclose the trained recurrent artificial neural network includes background signal transmission activity (para [0402] “For inference at runtime, circuitry 6210 may execute instructions 6228 to analyze the activation statistics 6223. The actual activation operation at inference time may be composed of passing all neurons that are in the corresponding saved locations map (e.g., activation maps 6231) for the processed layer. The remining (second parameter−first parameter) activation percent may be fulfilled by randomly firing the required number of neurons.”) that is not responsive to the input of the new input data and is present in the trained recurrent artificial neural network prior to the input of the new input data in the absence of the new input data, wherein the background signal transmission activity introduces a degree of variability into transmissions of information between the deterministically defined nodes and causes either transmissions of information along the deterministically-defined links to be non-deterministic or decisions at the deterministically-defined nodes to be non-deterministic; and an output configured to output indications of occurrences of topological patterns of signal transmission activity in the trained recurrent artificial neural network that are responsive to new input data, wherein the topological patterns represent a result of information processing by the trained recurrent artificial neural network in response to the new input data and the output indications are non-deterministic. However, Pogorelik (US 20210319098 A1) teaches the trained recurrent artificial neural network (para [0402] “Also, this method may be applied post-hoc to already trained models and doesn't require fine tuning.” And para [0561] discloses the embodiments may be applied to a recurrent neural network.) 
includes background signal transmission activity that is not responsive to the input of the new input data and is present in the trained recurrent artificial neural network prior to the input of the new input data in the absence of the new input data (para [0397] “Accordingly, the present disclosure provides an inference system that enables efficient activation without requiring computation to be done on the input data during inference time, leading to a data agnostic system.”), wherein the background signal transmission activity introduces a degree of variability into transmissions of information between the deterministically defined nodes (para [0324] “In general, nodes 4891 receive input(s) via connections 4893 and derive an output based on an activation function. Any of a variety of activation functions (e.g., identity, binary step, tangent, arc-tangent, sigmoid, logistic or soft sigmoid, gaussian, or the like) can be used.” The nodes are deterministically defined by an activation function.) and causes either transmissions of information along the deterministically-defined links to be non-deterministic or decisions at the deterministically-defined nodes to be non-deterministic (para [0402] “The remining (second parameter−first parameter) activation percent may be fulfilled by randomly firing the required number of neurons.” Decision to randomly fire (i.e. non-deterministic transmission of information).); Rawal and Pogorelik are analogous because they are both directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Hoffmann with the random activations of Pogorelik. Doing so would allow for efficient activations requiring little overhead storage while preserving accuracy and may run orders of magnitude faster than approximation-based activations (Pogorelik para [0402]). Hoffmann teaches an output configured to output indications of occurrences of topological patterns of signal transmission activity in the trained recurrent artificial neural network that are responsive to new input data (para [0063] describes firing neurons (i.e. signal transmission activity) resulting in a pattern of activated neurons.), wherein the topological patterns represent a result of information processing by the trained recurrent artificial neural network in response to the new input data (para [0059] “As a preparatory step, signatures of known objects are stored in a database for later use For identification, a new signature of an object is presented, which is processed 306 to generate neural activation patterns 308 using, for example, a neural network with Gaussian activations functions (tuning curves as described below) to map the input data provided in several real-valued variables onto binary neural activation patterns.” Activation pattern (i.e. output configured to output indications of topological patterns).) and the output indications are non-deterministic (para [0063] “With a given probability, p.sub.S, e.g., p.sub.S=0.1, a directed connection is formed from an active neuron 600 in the input layer 400 to a hidden neuron 402: for each pair of active neuron and element in the set of h hidden neurons, a connection 602 is formed if a uniform random number in the interval [0,1) is smaller than p.sub.S.” random (i.e., non-deterministic.). Rawal and Hoffmann are analogous because they are directed towards the field of recurrent neural networks. 
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the RNN of Rawal with the method of storing topological activation patterns of Hoffmann. Doing so would allow for efficiently storing and retrieving neural network activation patterns for object recognition tasks (Hoffmann para [0069]). Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rawal/Pogorelik/Hoffmann, as applied above, and further in view of Paik et al. (US-20180197076-A1). Regarding Claim 4, Rawal, Pogorelik, and Hoffmann teach the system of claim 1. Rawal, Pogorelik, and Hoffmann do not explicitly disclose wherein either a timing of signal arrival at a destination node or a signal amplitude at the destination node has the degree of randomness. Paik (US 20180197076 A1) teaches wherein either a timing of signal arrival at a destination node or a signal amplitude at the destination node has the degree of randomness (para [0060] “Initially, all of the input neurons form a temporal pattern with a predetermined length, for example, 100 ms, such that every input neuron fires once with a random timing.”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Rawal, Pogorelik, and Hoffmann with the neuron spikes of Paik Doing so would allow for measuring memory efficiency of the neural network (Paik para [0060]). Claims 5 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Hoffmann/Rawal/Pogorelik, as applied above, and further in view of Hoffmann et al. (US-9336239-B1; hereinafter Hoffmann2). Regarding Claim 5, Rawal, Pogorelik, and Hoffmann teach the system of claim 1. Rawal, Pogorelik, and Hoffmann do not explicitly disclose wherein at least some pairs of nodes are linked by multiple links. However, Hoffmann2 teaches wherein at least some pairs of nodes are linked by multiple links (col. 8 lines 46-50; “FIG. 3B depicts a realization of a MagicNet in a neural network of two layers: an input layer and an output layer. The connections between input and output neurons have variable delays, and, as in this example, multiple connections are allowed between two neurons. For each element in the alphabet of an input stream, the network has one input neuron. To store a new pattern, a new output neuron is added and connections to this neuron are created and their delays set depending on the sequence of characters within the pattern.” Figure 3B shows multiple connections between a pair of nodes.). Rawal, Pogorelik, Hoffmann, and Hoffmann2 are analogous because they are directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings of Hoffmann with the neural network connections of Hoffmann2. Doing so would allow for improvements in power usage through implementing neuromorphic hardware, such as use of physical connections instead of multiplexed connections (Hoffmann2 col. 12 lines 51-57;) Regarding Claim 26, Rawal, Pogorelik, and Hoffmann teach the system of claim 24. Rawal, Pogorelik, and Hoffmann do not explicitly disclose wherein at least some pairs of nodes are linked by multiple connections from a source node to a destination node. However, Hoffmann2 teaches wherein at least some pairs of nodes are linked by multiple connections from a source node to a destination node (col. 8 lines 46-50; “FIG. 
3B depicts a realization of a MagicNet in a neural network of two layers: an input layer and an output layer. The connections between input and output neurons have variable delays, and, as in this example, multiple connections are allowed between two neurons.”); Rawal, Pogorelik, Hoffmann, and Hoffmann2 are analogous because they are directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the teachings of Rawal, Pogorelik, and Hoffmann with the neural network connections of Hoffmann2. Doing so would allow for improvements in power usage through implementing neuromorphic hardware, such as use of physical connections instead of multiplexed connections (Hoffmann2 col. 12 lines 51-57). Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rawal/Pogorelik/Hoffmann/Hoffmann2, as applied above, and further in view of Nugent et al. (US-20090043722-A1). Regarding Claim 27, Rawal, Pogorelik, Hoffmann, and Hoffmann2 teach the system of claim 26. Rawal, Pogorelik, Hoffmann, and Hoffmann2 do not explicitly disclose wherein the multiple connections comprise between 3 and 10 excitatory links. However, Nugent teaches wherein the multiple connections comprise between 3 and 10 excitatory links (para [0184] “For example, connections 1914, 1923, 1936, 1945, 1951, and 1962 are excitatory, and connections 1912, 1934, 1956, and 1961 are inhibitory.” There are 6 excitatory connections connecting nodes 1910 and 1940.). Rawal, Pogorelik, Hoffmann, Hoffmann2, and Nugent are analogous because they are directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Rawal, Pogorelik, Hoffmann, and Hoffmann2 with the neuronal connections of Nugent. Doing so would allow for connecting neuron circuits together to form certain topologies that result in desired properties, such as maximizing internal feed-back and memory retention (Pogorelik para [0222]). Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rawal/Pogorelik/Hoffmann/Hoffmann2, as applied above, and further in view of Cherubini et al. (US-20190392303-A1). Regarding Claim 28, Rawal, Pogorelik, Hoffmann, and Hoffmann2 teach the system of claim 26. Rawal, Pogorelik, Hoffmann, and Hoffmann2 do not explicitly disclose wherein the multiple connections comprise between 10 and 30 inhibitory links. However, Cherubini (US 20190392303 A1) teaches wherein the multiple connections comprise between 10 and 30 inhibitory links (para [0041] “In FIG. 4, output neurons are connected to each other via all-to-all lateral inhibitory connections 36, while input neurons 31 are connected to output neurons 32 via all-to-all excitatory connections 35.” Fig. 4 shows at least 11 and less than 30 inhibitory connections between neurons 27 and 32.). Rawal, Pogorelik, Hoffmann, Hoffmann2, and Cherubini are analogous because they are directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Rawal, Pogorelik, Hoffmann, and Hoffmann2 with the inhibitory connection of Cherubini. Doing so would allow for improving class representation (Cherubini para [0048]). Claim 29 is rejected under 35 U.S.C. 
103 as being unpatentable over the combination of Rawal/Pogorelik/Hoffmann, as applied above, and further in view of Furber (US-20080267188-A1). Regarding Claim 29, Rawal, Pogorelik, and Hoffmann teach the system of claim 24. Rawal, Pogorelik, and Hoffmann do not explicitly disclose wherein each node is coupled to output signals to between 10^3 and 10^5 other nodes and to receive signals from between 10^3 and 10^5 other nodes. However, Furber teaches wherein each node is coupled to output signals to between 10^3 and 10^5 other nodes and to receive signals from between 10^3 and 10^5 other nodes (para [0003] In neural systems of interconnected neurons, neurons typically have very high connectivity with one another. It is often the case that a given neuron accepts data from between 1,000 and 10,000 other neurons, and outputs data to a similar number of neurons.). Rawal, Pogorelik, Hoffmann, and Furber are analogous because they are directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Rawal, Pogorelik, and Hoffmann with the neuron connections of Furber. Doing so would allow for a multi-cast communication in the neural network wherein data can be transmitted from a single neuron to multiple neurons but not all of the neurons allowing for a reduced bandwidth in communication of the neural network (Furber para [0004]). Claims 7, 30-31 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rawal/Pogorelik/Hoffmann, as applied above, and further in view of Reimann et al. (US-20180197069-A1). Regarding Claim 7, Rawal, Pogorelik, and Hoffmann teach the system of claim 1. Rawal, Pogorelik, and Hoffmann do not explicitly disclose wherein the topological patterns of signal transmission activity are clique patterns of signal transmission activity. However, Reimann teaches wherein the topological patterns of signal transmission activity are clique patterns of signal transmission activity (para [0037] “In particular, the neural network devices can include highly prominent motifs of directed cliques of up to eight neurons.”). Rawal, Pogorelik, Hoffmann, and Riemann are analogous because they are both directed towards neural network activation patterns. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of identifying activation patterns of Rawal, Pogorelik, and Hoffmann with the network subgraph of Reimann. Doing so would allow for structural characterizations of neural networks in the construction or reconstruction of a neural network. Reconstruction can allow for mimicking at least some of the structure of the first neural network in order to create a simpler version of the first network (Reimann para [0035]). Regarding Claim 30, Rawal, Pogorelik, and Hoffmann teach the system of claim 24. Rawal, Pogorelik, and Hoffmann do not explicitly disclose wherein the topological patterns of signal transmission activity are clique patterns of signal transmission activity. However, Reimann teaches wherein the topological patterns of signal transmission activity are clique patterns of signal transmission activity (para [0037] “In particular, the neural network devices can include highly prominent motifs of directed cliques of up to eight neurons.”). Rawal, Pogorelik, Hoffmann, and Riemann are analogous because they are both directed towards neural network activation patterns. 
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of identifying activation patterns of Rawal, Pogorelik, and Hoffmann with the network subgraph of Reimann. Doing so would allow for structural characterizations of neural networks in the construction or reconstruction of a neural network. Reconstruction can allow for mimicking at least some of the structure of the first neural network in order to create a simpler version of the first network (Reimann para [0035].). Regarding Claim 31, Rawal, Pogorelik, and Hoffmann teach the system of claim 1. Rawal, Pogorelik, and Hoffmann do not explicitly disclose wherein the topological patterns of signal transmission activity are directed clique patterns. However, Reimann (US 20180197069 A1) teaches wherein the topological patterns of signal transmission activity are directed clique patterns (para [0037] “In particular, the neural network devices can include highly prominent motifs of directed cliques of up to eight neurons.”). Rawal, Pogorelik, Hoffmann, and Riemann are analogous because they are both directed towards neural network activation patterns. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of identifying activation patterns of Rawal, Pogorelik, and Hoffmann with the network subgraph of Reimann. Doing so would allow for structural characterizations of neural networks in the construction or reconstruction of a neural network. Reconstruction can allow for mimicking at least some of the structure of the first neural network in order to create a simpler version of the first network (Reimann para [0035].). Regarding Claim 34, Rawal, Pogorelik, and Hoffmann teach the system of claim 24. Rawal, Pogorelik, and Hoffmann do not explicitly disclose wherein the topological patterns of signal transmission activity are directed clique patterns. However, Reimann further teaches wherein the topological patterns of signal transmission activity are directed clique patterns (para [0037] “In particular, the neural network devices can include highly prominent motifs of directed cliques of up to eight neurons.”). Rawal, Pogorelik, Hoffmann, and Riemann are analogous because they are both directed towards neural network activation patterns. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method of identifying activation patterns of Rawal, Pogorelik, and Hoffmann with the network subgraph of Reimann. Doing so would allow for structural characterizations of neural networks in the construction or reconstruction of a neural network. Reconstruction can allow for mimicking at least some of the structure of the first neural network in order to create a simpler version of the first network (Reimann para [0035].). Claims 35-36 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rawal/Pogorelik/Hoffmann, as applied above, and further in view of Chen et al. (US-20030140020-A1). Regarding Claim 35, Hoffmann, Rawal, Pogorelik, and Chen teach the method of claim 1. Chen further teaches wherein a deterministically defined node comprises one or more of a predetermined decision threshold (para [0043] “4. E-step: compute the action potentials of hidden neurons and normalize into [0,1] (Step 304). 
If the activation level of a neuron is larger than threshold, .alpha., then it fires (Step 306).”) or a predetermined time constant and a deterministically defined link comprises one or more of a predetermined transmission time or signal amplitude attenuation (para [0063] “When a neuron receives current input from other neurons, its membrane potential will increase. The neurons in an assembling group compete for action, if membrane potential passes a threshold, a spike will generate. The neuron that receives stronger signals will fire first, and produce stronger amplitude.”). Hoffmann and Chen are analogous because they are both directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Hoffmann with the neural network architecture of Chen. Doing so would allow for performing inference and learning while retaining many characteristics of a biological neural networks (Chen Abs.). Regarding Claim 36, Hoffmann, Rawal, Pogorelik, and Chen teach the system of claim 24. Chen further teaches wherein a deterministically defined node comprises one or more of a predetermined decision threshold (para [0043] “4. E-step: compute the action potentials of hidden neurons and normalize into [0,1] (Step 304). If the activation level of a neuron is larger than threshold, .alpha., then it fires (Step 306).”) or a predetermined time constant and a deterministically defined link comprises one or more of a predetermined transmission time or signal amplitude attenuation (para [0063] “When a neuron receives current input from other neurons, its membrane potential will increase. The neurons in an assembling group compete for action, if membrane potential passes a threshold, a spike will generate. The neuron that receives stronger signals will fire first, and produce stronger amplitude.”). Hoffmann and Chen are analogous because they are both directed to the same field of endeavor of neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the neural network of Hoffmann with the neural network architecture of Chen. Doing so would allow for performing inference and learning while retaining many characteristics of a biological neural networks (Chen Abs.). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY K NGUYEN whose telephone number is (571)272-0217. The examiner can normally be reached Mon - Fri 7:00am-4:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B Zhen can be reached at 5712723768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HENRY NGUYEN/Examiner, Art Unit 2121
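
The Pogorelik dispute above turns on paragraph [0402]'s "randomly firing the required number of neurons" at inference time. The following is a minimal sketch of that mechanism as the Office Action characterizes it; the function name, array layout, and parameters are illustrative assumptions, not code from Pogorelik or the application.

```python
import numpy as np

def data_agnostic_activation(layer_size, saved_active_idx, target_fraction, rng=None):
    """Illustrative only: fire the neurons recorded in a saved activation map, then
    randomly fire additional neurons until a target activation fraction is reached,
    without inspecting the input data (the behavior the rejection reads onto
    Pogorelik para [0402])."""
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros(layer_size, dtype=bool)
    mask[list(saved_active_idx)] = True                   # neurons from the saved locations map
    shortfall = int(target_fraction * layer_size) - int(mask.sum())
    if shortfall > 0:
        idle = np.flatnonzero(~mask)
        mask[rng.choice(idle, size=shortfall, replace=False)] = True  # random firing
    return mask
```

Because the randomly fired indices change from run to run, two inferences over identical inputs can activate different neuron sets; that run-to-run variation is the "degree of variability" the examiner maps onto the claimed non-deterministic transmissions.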
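The Reimann citations likewise describe the claimed "topological patterns" as directed cliques of up to eight neurons. As a hedged sketch of what detecting such patterns in a graph of active transmissions could look like (a hypothetical helper using networkx, not code from Reimann or the application):

```python
import networkx as nx

def directed_cliques(active_edges, max_size=8):
    """Illustrative only: enumerate directed cliques, i.e. node sets in which every
    pair is connected and the connections admit an acyclic orientation, up to the
    'up to eight neurons' motif size quoted from Reimann para [0037]."""
    g = nx.DiGraph(active_edges)
    found = []
    for nodes in nx.enumerate_all_cliques(g.to_undirected()):
        # keep sets where every pair is linked and the directed subgraph has no cycle
        if 2 <= len(nodes) <= max_size and nx.is_directed_acyclic_graph(g.subgraph(nodes)):
            found.append(tuple(nodes))
    return found

# Toy activity graph: a directed 3-clique 0 -> 1 -> 2 (with 0 -> 2) plus a stray edge.
print(directed_cliques([(0, 1), (1, 2), (0, 2), (3, 0)]))
```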

Prosecution Timeline

Dec 11, 2019: Application Filed
Nov 10, 2022: Non-Final Rejection — §103
Feb 17, 2023: Response Filed
Apr 07, 2023: Final Rejection — §103
Jul 13, 2023: Response after Non-Final Action
Oct 12, 2023: Request for Continued Examination
Oct 18, 2023: Response after Non-Final Action
Jan 16, 2024: Non-Final Rejection — §103
Apr 25, 2024: Response Filed
Jul 30, 2024: Final Rejection — §103
Nov 07, 2024: Request for Continued Examination
Nov 13, 2024: Response after Non-Final Action
Feb 08, 2025: Non-Final Rejection — §103
May 13, 2025: Response Filed
Jul 21, 2025: Final Rejection — §103
Sep 23, 2025: Response after Non-Final Action
Oct 24, 2025: Request for Continued Examination
Oct 27, 2025: Response after Non-Final Action
Feb 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585933: TRANSFER LEARNING WITH AUGMENTED NEURAL NETWORKS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12572776: Method, System, and Computer Program Product for Universal Depth Graph Neural Networks
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12547484: Methods and Systems for Modifying Diagnostic Flowcharts Based on Flowchart Performances
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12541676: NEUROMETRIC AUTHENTICATION SYSTEM
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12505470: SYSTEMS, METHODS, AND STORAGE MEDIA FOR TRAINING A MACHINE LEARNING MODEL
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 57%
With Interview (+31.4%): 88%
Median Time to Grant: 4y 7m
PTA Risk: High
Based on 158 resolved cases by this examiner. Grant probability derived from career allow rate.
