Prosecution Insights
Last updated: April 19, 2026

Application No. 17/294,430
Spiking Neural Network
Status: Final Rejection (§103)

Filed: May 17, 2021
Examiner: LEY, SALLY THI
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Innatera Nanosystems B.V.
OA Round: 4 (Final)

Grant Probability: 15% (At Risk)
OA Rounds: 5-6
To Grant: 3y 10m
With Interview: 44%

Examiner Intelligence

Career Allow Rate: 15% (5 granted / 33 resolved; -39.8% vs TC avg)
Interview Lift: +28.8% allowance-rate lift among resolved cases with an interview
Avg Prosecution: 3y 10m (typical timeline)
Career History: 68 total applications across all art units; 35 currently pending
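The derived figures in the Examiner Intelligence panel can be reproduced from the raw counts it reports (a sketch; the 44% with-interview allowance rate is taken from the report itself, since the underlying interview case counts are not disclosed):

```python
# Reproduce the dashboard's derived metrics from the counts shown above.
granted = 5            # applications allowed by this examiner
resolved = 33          # total resolved (allowed + abandoned)
with_interview = 44.0  # % allowed among resolved cases with an interview (from the report)

career_allow_rate = 100.0 * granted / resolved       # career allowance rate
interview_lift = with_interview - career_allow_rate  # lift attributable to an interview

print(f"{career_allow_rate:.1f}% / {interview_lift:+.1f}%")  # 15.2% / +28.8%
```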

Statute-Specific Performance

§101: 29.2% (-10.8% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)

Tech Center averages are estimates • Based on career data from 33 resolved cases

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the communication filed on 02 Dec 2025. Claims 1-13, 19-21, 25, 27, and 31-46 are being considered on the merits.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 2025-12-02 has been considered. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, initialed and dated copies of Applicant's IDS form 1499 are attached to the instant Office action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8, 19-21, 25, 27, 31-34, 36-37, and 40-44 are rejected under 35 U.S.C. 103 as being unpatentable over Van Der Made, Peter AJ (US 2018/0225562 A1; hereinafter “Van Der Made”) in view of Gottfried et al.
(US 2018/0075345 A1; hereinafter “Gottfried”).

Regarding claim 1, Van Der Made teaches: A spiking neural network, (Van Der Made, para. 0001: “The present invention generally relates to an improved method for machine learning and automated pattern recognition using neural networks and more particularly to a dynamic machine learning and feature extraction system using a multilayer Spiking Neural Network with a Spike Timing Dependent Plasticity learning rule capable of spontaneous learning.”) comprising a plurality of spiking neurons and a plurality of synaptic elements interconnecting the spiking neurons to form the network, (Van Der Made, para. 0006: “The first artificial neural network and the second artificial neural network are single layered or a multilayered hierarchical spiking neural network. The labeling of repeating patterns by the second artificial neural network is carried out by mapping temporally and spatially distributed spikes, generated by the first neural network and representing learned features to output labels within a predetermined knowledge domain...The first artificial neural network and the second artificial neural network comprises a plurality of digital neuron circuits interconnected by a plurality of dynamic synapse circuits.”) wherein the plurality of spiking neurons and the plurality of synaptic elements are implemented using analog circuit elements or digital hardwired logic circuits; (Van Der Made, para. 0041: “Each of the first digital Spiking Neural Network 104 and the second digital spiking neural network 106 comprises a plurality of digital artificial neurons connected to each other through digital synapses connected as a hierarchical Artificial Neural Network. Further, each of the plurality of digital artificial neuron comprises of binary logic gates.”) wherein each synaptic element is adapted to receive a synaptic input signal and apply a weight to the synaptic input signal to generate a synaptic output signal, (Van Der Made, para.
0005: “Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron. During each cycle of the Neural Network, the synapses that received spikes from the pre-synaptic neuron that contributed to an output event in the post-synaptic neuron have their weight values increased, while all other synapses, connected to the same post-synaptic neuron have their weights reduced. This changes the response of the neuron, increasing the likelihood that the neuron activates, and thus produces an output spike, when the same pattern reoccurs.”) the synaptic elements being configurable to adjust the weight applied by each synaptic element, (Van Der Made, para. 0038: “Each of the first spiking neural network 104 and the second spiking neural network 106 is composed of the first plurality of artificial neurons that are connected to other artificial neurons via a second plurality of configurable synapse circuits. Both the connectivity and the strength of synapses are configurable through digital registers that can be accessed externally.”) and wherein each of the spiking neurons is adapted to receive one or more of the synaptic output signals from one or more of the synaptic elements, and generate a spatio-temporal spike train output signal in response to the received one or more synaptic output signals, (Van Der Made, para. 0006: “The first artificial neural network and the second artificial neural network are single layered or a multilayered hierarchical spiking neural network. The labeling of repeating patterns by the second artificial neural network is carried out by mapping temporally and spatially distributed spikes, generated by the first neural network and representing learned features to output labels within a predetermined knowledge domain. 
The system further comprises an input unit having a network of artificial sensory neurons connected to the first artificial neural network, said input unit converts the data captured in form of changes in contrast, specific frequency domains or digital or analog values by a sensor array into temporally and spatially distributed spikes.”) wherein the first sub-network is adapted to generate a sub-network output pattern signal from the first sub-set of spiking neurons, in response to a sub-network input pattern signal applied to the first sub-set of synaptic elements, (Van Der Made, para. 0005: “The system comprising: a hierarchical arrangement of a first artificial neural network and a second artificial neural network, said first artificial neural network spontaneously learns to recognize any repeating pattern in an input stream and the second artificial neural network is trained to interpret and label the response from the first artificial neural network. The first artificial neural network spontaneously learns the repeating pattern through a combination of Spike Timing Dependent Plasticity (STDP) in dynamic synapses and lateral inhibition between neurons. Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron”) wherein the weights of the first sub-set of synaptic elements are configured by training the sub-network on a training set of sub-network input pattern signals, so that the sub-network output pattern signal is unique for every unique sub-network input pattern signal of the training set, wherein the degree of uniqueness is controllable through operating parameters of the first sub-set of spiking neurons and synaptic elements. (Van Der Made, para. 
0005 and 0029: “The system comprising: a hierarchical arrangement of a first artificial neural network and a second artificial neural network, said first artificial neural network spontaneously learns to recognize any repeating pattern in an input stream and the second artificial neural network is trained to interpret and label the response from the first artificial neural network. The first artificial neural network spontaneously learns the repeating pattern through a combination of Spike Timing Dependent Plasticity (STDP) in dynamic synapses and lateral inhibition between neurons. Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron…The synaptic weights of all neurons that did not activate are not updated. This causes the neurons to become selective to a specific pattern after only three to five repetitions. On each activation, the neighboring neurons are inhibited so that each neuron learns a unique pattern. Multiple neurons may activate on patterns that in combination constitute a feature.” “The first spiking neural network 104 learns the repeating features in the input data stream 306 that characterizes an applied dataset through a function known as Spike Timing Dependent Plasticity (commonly abbreviated as STDP). STDP modifies the characteristics of the synapses depending on the timing of pre-synaptic spikes to post-synaptic spikes. The first spiking neural network utilizes lateral inhibition. In lateral inhibition, the first neuron, to respond to a specific pattern, inhibits other neurons within the same lateral layer in order for these neurons to learn different features.” Examiner notes that Van Der Made teaches neurons that, in response to a specific pattern, inhibit other neurons so that other neurons can learn different features, such that each neuron responds (i.e., outputs) to a unique pattern).
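The STDP update quoted above (Van Der Made, paras. 0005 and 0029) can be sketched in a few lines. This is an illustrative simplification, not Van Der Made's implementation; the function name, learning rates, and sample values are assumptions:

```python
def stdp_update(weights, pre_spikes, fired, lr_up=0.05, lr_down=0.01):
    """One cycle of the update described in Van Der Made (para. 0005).

    weights    -- synaptic weights feeding one post-synaptic neuron
    pre_spikes -- booleans, True where the pre-synaptic neuron spiked
    fired      -- True if the post-synaptic neuron produced an output event
    """
    if not fired:
        # "The synaptic weights of all neurons that did not activate are not updated."
        return list(weights)
    # Synapses whose pre-synaptic spikes contributed to the output event are
    # strengthened; all other synapses on the same neuron are weakened.
    return [w + lr_up if spiked else w - lr_down
            for w, spiked in zip(weights, pre_spikes)]

w = stdp_update([0.5, 0.5, 0.5], [True, False, True], fired=True)
print([round(x, 2) for x in w])  # [0.55, 0.49, 0.55]
```

Repeating the update on the same input pattern raises the contributing weights further, which is the mechanism by which a neuron "becomes selective to a specific pattern after only three to five repetitions."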
Van Der Made does not explicitly disclose: wherein the spiking neural network comprises a first sub-network comprising a first sub-set of the spiking neurons connected to receive synaptic output signals from a first sub-set of the synaptic elements, the first sub-set of the spiking neurons comprising a plurality of the spiking neurons and the first sub-set of the synaptic elements comprising a plurality of the synaptic elements. However, Gottfried teaches: wherein the spiking neural network comprises a first sub-network comprising a first sub-set of the spiking neurons connected to receive synaptic output signals from a first sub-set of the synaptic elements. (Gottfried, paras. 0026, 0036 and Fig 1: “The state space detectors are configured to receive signals from the input state detectors and transmit spiking signals to one or more output nodes (e.g., 142, 144, and 145) over one or more temporal synapse circuits 130 when certain activation conditions are present. The state space detectors can be neuron circuits in one or more neuromorphic hardware devices” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. p 150) from the activated cells a.” Examiner notes that first sub-network is a first layer of the three layer feed-forward network as taught by Gottfried) the first sub-set of the spiking neurons comprising a plurality of the spiking neurons and the first sub-set of the synaptic elements comprising a plurality of the synaptic elements. (Gottfried, paras. 
0026, 0036 and Fig 1: “The state space detectors are configured to receive signals from the input state detectors and transmit spiking signals to one or more output nodes (e.g., 142, 144, and 145) over one or more temporal synapse circuits 130 when certain activation conditions are present. The state space detectors can be neuron circuits in one or more neuromorphic hardware devices” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. p 150) from the activated cells a.” Examiner notes that the first set of spiking neurons is contained in a first layer and synaptic elements are taught by Van Der Made as set forth above in the form of pre- and post- synaptic neurons). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made. Van Der Made teaches artificial neural networks to simulate a biological nervous system; Gottfried teaches implementing temporal and spatio-temporal spiking neural networks (SNNs) using neuromorphic hardware devices. One of ordinary skill would have been motivated to combine the teachings of Gottfried into Van Der Made in order to enable local generalization and fast learning in a robust manner (Gottfried, para. 0021).

Regarding claim 2, Van Der Made, as modified, teaches claim 1 above. Gottfried further teaches: wherein a respective distance in an output pattern metric between each unique sub-network output pattern signal is larger than a predetermined threshold value. (Gottfried, para.
0037 and 0038: “The transformation f 162 can realize a particular non-linear quantization of the input space. For example, every input s can be mapped to a binary vector a, where aj=1 indicates that the input lies within the receptive field of a cell j, and vice versa. The width of the receptive fields can quantify the local generalization of the network and can correspond to the number of active input state space detectors. The number of active state space detectors is denoted herein by the parameter G. In at least some embodiments, the connections between the input state detectors and the state space detectors can be regular and sparse. In such an embodiment, every input state detector leads to an activation of exactly L state space detectors in A, one in each of the organized L subsets 121-123. Alternative definitions of f 162 are also possible.” “In such a case, as distance in the input space increases, a number of jointly activated state space detectors can decrease and, ultimately reach zero when the distance exceeds G.” Examiner notes that the broadest reasonable interpretation of “output pattern metric” means some measurement of an output pattern, including a measurement of distance in the input space). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1.

Regarding claim 3, Van Der Made, as modified, teaches claim 1 above. Gottfried further teaches: wherein the spiking neurons and synaptic elements are configured such that respective distances, as measured by an output pattern metric, between two output pattern signals of the sub-network generated in response to two respective different sub-network input pattern signals are maximized for all sub-network input pattern signals of the training set. (Gottfried, para.
0035 and 0038: “In at least some embodiments, input state detectors that are located close to one another can be connected to at least some of the same state space detectors. In such an embodiment, the connections can be configured to produce local generalizations, i.e., similar inputs can produce similar outputs while distant inputs can produce (nearly) independent outputs. This can lead to very efficient and fast learning processes that allow for real-time applications, such as adaptive control.” “In at least some cases, the described mapping can imply nice characteristics. Firstly, the connectivity can be static and does not need to change after it is initialized. Secondly, neighboring inputs can lead to similar activation patterns in the association layer A. In such a case, as distance in the input space increases, a number of jointly activated state space detectors can decrease and, ultimately reach zero when the distance exceeds G.” Examiner notes that for examination purposes only, this claim is interpreted as referring to the distance between any two signals such as taught by Gottfried). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1.

Regarding claim 4, Van Der Made, as modified, teaches claim 3 above. Gottfried further teaches: Wherein each respective distance is maximized until the output pattern signals meet at least a first minimum sensitivity threshold required for distinguishing between features of the input pattern signals. (Gottfried, para.
0004: “The spiking neural network comprises a state space detection layer comprising multiple neuron circuits configured to transmit spiking signals to connected synapse circuits; the synapse circuits comprising stored weight values and configured to: apply the stored weights to the spiking signals, transmit the weighted spiking signals to connected output neuron circuits, and dynamically adjust the stored weights when a connected state space detection layer neuron circuit transmits a spiking signal or when a connected output neuron circuit transmits an output spiking signal; an output layer comprising the output neuron circuits, wherein the output neuron circuits comprise membrane potentials and are configured to: accumulate the received weighted spiking signals from the synapse circuits at the membrane potentials, and transmit output spiking signals when the values of the membrane potentials are greater than specified thresholds;” Examiner notes that the broadest reasonable interpretation of “features of an input pattern signal” includes a signal strength, wherein the strength of a signal at any given time is a feature of that signal). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1.

Regarding claim 5, Van Der Made, as modified, teaches claim 1 above. Van Der Made further teaches: by inducing spikes at a desired output neuron at a specific time and at other neurons at a time concomitant with the application of the sub-network input pattern signals, the sub-network is configured to respond with the desired sub-network output pattern signal. (Van Der Made, para. 0006: “A spike is defined as a short burst of electrical energy that has precise timing relative to other such spikes. Data is represented in the timing and distribution of spikes.
The sensor array comprises a plurality of sensors for capturing data of interest and generating the input stream. The first artificial neural network and the second artificial neural network comprises a plurality of digital neuron circuits interconnected by a plurality of dynamic synapse circuits.”) Van Der Made does not specifically disclose: wherein the weights of the first sub-set of synaptic elements are configured by training the subnetwork on a training set of sub-network input pattern signals using a semi-supervised training methodology However, Gottfried teaches: wherein the weights of the first sub-set of synaptic elements are configured by training the subnetwork on a training set of sub-network input pattern signals using a semi-supervised training methodology (Gottfried, para. 0097: “A training data set comprising pairs of one or more training inputs and associated training outputs can be used to train a temporal or spatio-temporal neural network as described herein. One or more training inputs can be converted to input signals and provided to input state detectors and an associated training output can be provided to the neural network as a target signal. An output signal generated by the network can be used in combination with the target signal to adjust weights associated with temporal synapse circuits, as described herein. After training, input signals can be provided to the network and the generated output signal can be used as a predicted value. For at least some classification scenarios, one or more features can be provided as input signals to the input state detectors and an associated class can be provided as a target signal. 
For at least some regression scenarios, one or more independent variable values can be provided as input signals and an associated dependent variable value can be provided as the target signal.” Examiner notes that Gottfried teaches initial use of training sets converted into signals to train a network where an output signal generated by the trained network is used to further adjust weights, implementing a semi-supervised training methodology). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1.

Regarding claim 6, Van Der Made, as modified, teaches claim 5 above. Van Der Made further teaches: wherein a desired spike is artificially induced at a specific time to obtain a unique response to a certain input pattern. (Van Der Made, para. 0005: “The first artificial neural network spontaneously learns the repeating pattern through a combination of Spike Timing Dependent Plasticity (STDP) in dynamic synapses and lateral inhibition between neurons. Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron. During each cycle of the Neural Network, the synapses that received spikes from the pre-synaptic neuron that contributed to an output event in the post-synaptic neuron have their weight values increased, while all other synapses, connected to the same post-synaptic neuron have their weights reduced. This changes the response of the neuron, increasing the likelihood that the neuron activates, and thus produces an output spike, when the same pattern reoccurs. The synaptic weights of all neurons that did not activate are not updated. This causes the neurons to become selective to a specific pattern after only three to five repetitions. On each activation, the neighboring neurons are inhibited so that each neuron learns a unique pattern.
Multiple neurons may activate on patterns that in combination constitute a feature.”)

Regarding claim 8, Van Der Made, as modified, teaches claim 1 above. Van Der Made further teaches: wherein the weights of the first sub-set of synaptic elements are configured using a causal-chain spike-timing-dependent plasticity (CC-STDP) learning rule, which enables identification of causal relationships between desired output neurons and neurons in preceding layers of the sub-network, and causes the weights of intervening synaptic elements along this path of causation to be adjusted. (Van Der Made, para. 0005 and 0040: “The first artificial neural network spontaneously learns the repeating pattern through a combination of Spike Timing Dependent Plasticity (STDP) in dynamic synapses and lateral inhibition between neurons. Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron. During each cycle of the Neural Network, the synapses that received spikes from the pre-synaptic neuron that contributed to an output event in the post-synaptic neuron have their weight values increased, while all other synapses, connected to the same post-synaptic neuron have their weights reduced” “The digital neurons are connected to each other via synapses and receive a corresponding synaptic input through a number of synaptic circuits via a synapse input event bus. The output of the plurality of synapses is integrated by dendrite circuits and the soma circuit. The output of the soma circuit is applied to the input of the axon circuit. Each of the digital neuron present in the array of artificial digital neurons consists of the axon circuit. The axon circuit emits one or more output spikes governed by the strength of the soma output value. From the axon circuit, events are generated for the next layer of digital neurons, or to output neurons in case the last neuron layer is an output layer, via status output bus.
The output spike of the axon circuit is transmitted to the plurality of connected synapses using a proprietary communication protocol in the next layer.”) Regarding claim 19, Van Der Made, as modified, teaches claim 1 above. Gottfried further teaches: wherein the network comprises a second sub-network comprising a second sub-set of the spiking neurons connected to receive synaptic outputs from a second sub-set of the synaptic elements, (Gottfried, para. 0006, 0036 and Figure 1: “The spatio-temporal spiking neural network comprises multiple neuron circuits organized into two or more subsets, wherein the multiple neuron circuits are configured to: receive multiple input signals, for at least one of the subsets, select a neuron circuit within the subset that received a greatest number of input signals with respect to other neuron circuits within the subset, and transmit one or more spiking signals from the at least one selected neuron circuit to at least one synapse circuit connected to the at least one selected neuron circuit” “FIG. 1 is a block diagram depicting an example system 100 comprising an example spatio-temporal spiking neural network. In the example, multiple input state detectors (e.g., 112, 114, 116, and 118) are configured to transmit one or more input signals to multiple state space detectors (e.g., 124, 126, 127, and 128) when certain input stimuli (not shown) are detected.” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. 
p 150) from the activated cells a.” Examiner notes that the second sub-network is a second layer of the three layer feed-forward network as taught by Gottfried) wherein the second sub-network is adapted to receive a second sub-network input pattern signal applied to the second sub-set of synaptic elements (Gottfried, para. 0006 and Figure 1: “The spatio-temporal spiking neural network comprises multiple neuron circuits organized into two or more subsets, wherein the multiple neuron circuits are configured to: receive multiple input signals, for at least one of the subsets, select a neuron circuit within the subset that received a greatest number of input signals with respect to other neuron circuits within the subset, and transmit one or more spiking signals from the at least one selected neuron circuit to at least one synapse circuit connected to the at least one selected neuron circuit” “FIG. 1 is a block diagram depicting an example system 100 comprising an example spatio-temporal spiking neural network. In the example, multiple input state detectors (e.g., 112, 114, 116, and 118) are configured to transmit one or more input signals to multiple state space detectors (e.g., 124, 126, 127, and 128) when certain input stimuli (not shown) are detected.”), and generate a corresponding second sub-network output pattern signal from the second sub-set of neurons, and (Gottfried, para. 0032: “The temporal synapse circuits transmitting the spiking signals to the output nodes can be associated with different weights that are applied to the different spiking signals. For example, the different weights can be applied by the temporal synapse circuits to amplify the spiking signals by different amounts.
This can cause one output node receiving spiking signals from one temporal synapse circuit to transmit an output spiking signal at a different point in time than another output node receiving a different spiking signal from a different temporal synapse circuit.” Examiner notes that Gottfried teaches different spiking signals depending on different weights). wherein the configurations of the second sub-set of synaptic elements are adjusted so that the second sub-network output pattern signal is unique for every unique feature in the second sub-network input pattern signals, (Gottfried, para. 0032: “The temporal synapse circuits transmitting the spiking signals to the output nodes can be associated with different weights that are applied to the different spiking signals. For example, the different weights can be applied by the temporal synapse circuits to amplify the spiking signals by different amounts. This can cause one output node receiving spiking signals from one temporal synapse circuit to transmit an output spiking signal at a different point in time than another output node receiving a different spiking signal from a different temporal synapse circuit.” Examiner notes that Gottfried teaches different spiking signals depending on different weights (i.e. parameters)). wherein the network comprises a third sub-network comprising a third sub-set of the spiking neurons connected to receive synaptic outputs from a third sub-set of the synaptic elements, (Gottfried, para. 
0006, 0036 and Figure 1: “The spatio-temporal spiking neural network comprises multiple neuron circuits organized into two or more subsets, wherein the multiple neuron circuits are configured to: receive multiple input signals, for at least one of the subsets, select a neuron circuit within the subset that received a greatest number of input signals with respect to other neuron circuits within the subset, and transmit one or more spiking signals from the at least one selected neuron circuit to at least one synapse circuit connected to the at least one selected neuron circuit” “FIG. 1 is a block diagram depicting an example system 100 comprising an example spatio-temporal spiking neural network. In the example, multiple input state detectors (e.g., 112, 114, 116, and 118) are configured to transmit one or more input signals to multiple state space detectors (e.g., 124, 126, 127, and 128) when certain input stimuli (not shown) are detected.” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. p 150) from the activated cells a.” Examiner notes that the third sub-network is a third layer of the three layer feed-forward network as taught by Gottfried) wherein the first and second sub-network output pattern signals are input pattern signals of the third sub-network, and (Gottfried, para.
0006, 0036 and Figure 1: “The spatio-temporal spiking neural network comprises multiple neuron circuits organized into two or more subsets, wherein the multiple neuron circuits are configured to: receive multiple input signals, for at least one of the subsets, select a neuron circuit within the subset that received a greatest number of input signals with respect to other neuron circuits within the subset, and transmit one or more spiking signals from the at least one selected neuron circuit to at least one synapse circuit connected to the at least one selected neuron circuit” “FIG. 1 is a block diagram depicting an example system 100 comprising an example spatio-temporal spiking neural network. In the example, multiple input state detectors (e.g., 112, 114, 116, and 118) are configured to transmit one or more input signals to multiple state space detectors (e.g., 124, 126, 127, and 128) when certain input stimuli (not shown) are detected.” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. p 150) from the activated cells a.” Examiner notes that the third layer is an output layer receiving input pattern signals from the first and second layers (i.e. sub-networks)). wherein the configurations of the third sub-set of synaptic elements are adjusted such that the third sub-network output pattern signal is unique for every unique feature in the input pattern signal from both the first and second sub-network and unique combinations of them, such that the features that are present in the input pattern signals from both the first and second sub-network are encoded by the third sub-network. 
(Gottfried, para. 0032 and 0036: “The temporal synapse circuits transmitting the spiking signals to the output nodes can be associated with different weights that are applied to the different spiking signals. For example, the different weights can be applied by the temporal synapse circuits to amplify the spiking signals by different amounts. This can cause one output node receiving spiking signals from one temporal synapse circuit to transmit an output spiking signal at a different point in time than another output node receiving a different spiking signal from a different temporal synapse circuit.” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. p 150) from the activated cells a.” Examiner notes that Gottfried teaches different spiking signals depending on different weights (i.e. parameters); Examiner further notes that the third sub-network of the three-layer feed-forward network includes receiving and converting (i.e. encoding) signals from the first and second sub-networks into an output). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1. Regarding claim 20, Van Der Made, as modified, teaches claim 19 above. Gottfried further teaches: wherein the synaptic elements of the third sub-network are configured such that the input pattern signals from the first and second sub-network are weighted according to importance of the specific features in the input pattern signals. (Gottfried, para.
0006, 0036 and Figure 1: “The spatio-temporal spiking neural network comprises multiple neuron circuits organized into two or more subsets, wherein the multiple neuron circuits are configured to: receive multiple input signals, for at least one of the subsets, select a neuron circuit within the subset that received a greatest number of input signals with respect to other neuron circuits within the subset, and transmit one or more spiking signals from the at least one selected neuron circuit to at least one synapse circuit connected to the at least one selected neuron circuit” “FIG. 1 is a block diagram depicting an example system 100 comprising an example spatio-temporal spiking neural network. In the example, multiple input state detectors (e.g., 112, 114, 116, and 118) are configured to transmit one or more input signals to multiple state space detectors (e.g., 124, 126, 127, and 128) when certain input stimuli (not shown) are detected.” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. p 150) from the activated cells a.” Examiner notes that a person of ordinary skill in the art would recognize that the weights and processing of previous layers in a neural network affect later layers). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1. Regarding claim 21, Van Der Made, as modified, teaches claim 1 above.
Van Der Made further teaches: wherein the network comprises multiple sub-networks of synaptic elements and spiking neurons, for which the sub-network output pattern signal is unique for every unique feature in the sub-network input pattern signals, (Van Der Made, sec. 0029: “The temporally and spatially distributed output spikes of the artificial sensory neurons 304 are then forwarded as an input to the first spiking neural network 104 that performs the spontaneous learning and the feature extraction functions. Feature extraction is also known as unsupervised feature learning. The first spiking neural network 104 learns the repeating features in the input data stream 306 that characterizes an applied dataset through a function known as Spike Timing Dependent Plasticity (commonly abbreviated as STDP). STDP modifies the characteristics of the synapses depending on the timing of pre-synaptic spikes to post-synaptic spikes. The first spiking neural network utilizes lateral inhibition. In lateral inhibition, the first neuron, to respond to a specific pattern, inhibits other neurons within the same lateral layer in order for these neurons to learn different features. In the present embodiment, the applied dataset may contain the features of handwritten characters. Thus, the feature extraction module, which is the first neural network 104, learns the features of letters, digits, patterns, and sequences of data.”). Van Der Made does not explicitly disclose: wherein the network can be divided in multiple layers having a particular sequential order in the network, and However, Gottfried teaches: wherein the network can be divided in multiple layers having a particular sequential order in the network, and (Gottfried, paras. 
0026, 0035, 0036 and Fig 1: “The state space detectors are configured to receive signals from the input state detectors and transmit spiking signals to one or more output nodes (e.g., 142, 144, and 145) over one or more temporal synapse circuits 130 when certain activation conditions are present. The state space detectors can be neuron circuits in one or more neuromorphic hardware devices” “In at least some embodiments, the state space detectors (e.g., 124, 126, 127, and 128) can comprise membrane potentials. In such an embodiment, a state space detector can be configured to generate a spiking signal when a value of its potential is greater than or equal to a spiking threshold associated with the state space detector.” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. p 150) from the activated cells a.”). wherein the multiple sub-networks are instantiated in the particular sequential order of the multiple layers each respective sub-network belongs to. (Gottfried, paras. 0026, 0035, 0036 and Fig 1: “The state space detectors are configured to receive signals from the input state detectors and transmit spiking signals to one or more output nodes (e.g., 142, 144, and 145) over one or more temporal synapse circuits 130 when certain activation conditions are present. The state space detectors can be neuron circuits in one or more neuromorphic hardware devices” “In at least some embodiments, the state space detectors (e.g., 124, 126, 127, and 128) can comprise membrane potentials. 
In such an embodiment, a state space detector can be configured to generate a spiking signal when a value of its potential is greater than or equal to a spiking threshold associated with the state space detector.” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. p 150) from the activated cells a.” Examiner notes that the first sub-network is the first layer of the three-layer feed-forward network as taught by Gottfried). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1. Regarding claim 25, Van Der Made, as modified, teaches claim 1 above. Van Der Made further teaches: wherein the neural network is configured to take as input one or multiple sampled analog or digital input signals (Van Der Made, para. 0006: “The system further comprises an input unit having a network of artificial sensory neurons connected to the first artificial neural network, said input unit converts the data captured in form of changes in contrast, specific frequency domains or digital or analog values by a sensor array into temporally and spatially distributed spikes”) and convert the input signals into a representative set of spiking neural network input pattern signals. (Van Der Made, para. 0005: “Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron.
During each cycle of the Neural Network, the synapses that received spikes from the pre-synaptic neuron that contributed to an output event in the post-synaptic neuron have their weight values increased, while all other synapses, connected to the same post-synaptic neuron have their weights reduced. This changes the response of the neuron, increasing the likelihood that the neuron activates, and thus produces an output spike, when the same pattern reoccurs.”) Regarding claim 27, Van Der Made teaches: A method for configuring a spiking neural network (Van Der Made, para. 0001: “The present invention generally relates to an improved method for machine learning and automated pattern recognition using neural networks and more particularly to a dynamic machine learning and feature extraction system using a multilayer Spiking Neural Network with a Spike Timing Dependent Plasticity learning rule capable of spontaneous learning.”) comprising a plurality of spiking neurons implemented in hardware or a combination of hardware and software, (Van Der Made, para. 0020: “ In an embodiment of the present invention, a system for feature extraction using two or more artificial neural networks implemented in a digital hardware is provided.”) and a plurality of synaptic elements interconnecting the spiking neurons to form the network, (Van Der Made, para. 0006: “The first artificial neural network and the second artificial neural network are single layered or a multilayered hierarchical spiking neural network. 
The labeling of repeating patterns by the second artificial neural network is carried out by mapping temporally and spatially distributed spikes, generated by the first neural network and representing learned features to output labels within a predetermined knowledge domain...The first artificial neural network and the second artificial neural network comprises a plurality of digital neuron circuits interconnected by a plurality of dynamic synapse circuits.”) wherein the plurality of spiking neurons and the plurality of synaptic elements are implemented using analog circuit elements or digital hardwired logic circuits; (Van Der Made, para. 0041: “Each of the first digital Spiking Neural Network 104 and the second digital spiking neural network 106 comprises a plurality of digital artificial neurons connected to each other through digital synapses connected as a hierarchical Artificial Neural Network. Further, each of the plurality of digital artificial neuron comprises of binary logic gates.”) wherein each synaptic element is adapted to receive a synaptic input signal and apply a weight to the synaptic input signal to generate a synaptic output signal, (Van Der Made, para. 0005: “Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron. During each cycle of the Neural Network, the synapses that received spikes from the pre-synaptic neuron that contributed to an output event in the post-synaptic neuron have their weight values increased, while all other synapses, connected to the same post-synaptic neuron have their weights reduced. This changes the response of the neuron, increasing the likelihood that the neuron activates, and thus produces an output spike, when the same pattern reoccurs.”) the synaptic elements being configurable to adjust the weight applied by each synaptic element, (Van Der Made, para. 
0038: “Each of the first spiking neural network 104 and the second spiking neural network 106 is composed of the first plurality of artificial neurons that are connected to other artificial neurons via a second plurality of configurable synapse circuits. Both the connectivity and the strength of synapses are configurable through digital registers that can be accessed externally.”) and wherein each of the spiking neurons is adapted to receive one or more of the synaptic output signals from one or more of the synaptic elements, and generate a spatio-temporal spike train output signal in response to the received one or more synaptic output signals, (Van Der Made, para. 0006: “The first artificial neural network and the second artificial neural network are single layered or a multilayered hierarchical spiking neural network. The labeling of repeating patterns by the second artificial neural network is carried out by mapping temporally and spatially distributed spikes, generated by the first neural network and representing learned features to output labels within a predetermined knowledge domain. The system further comprises an input unit having a network of artificial sensory neurons connected to the first artificial neural network, said input unit converts the data captured in form of changes in contrast, specific frequency domains or digital or analog values by a sensor array into temporally and spatially distributed spikes.”) configuring the weights of the first sub-set of synaptic elements by training the sub-network on a training set of sub-network input pattern signals, so that the sub-network output pattern signal is unique for every unique sub-network input pattern signal of the training set, wherein the degree of uniqueness is controllable through operating parameters of the first sub-set of spiking neurons and synaptic elements. (Van Der Made, para. 
0005: “The system comprising: a hierarchical arrangement of a first artificial neural network and a second artificial neural network, said first artificial neural network spontaneously learns to recognize any repeating pattern in an input stream and the second artificial neural network is trained to interpret and label the response from the first artificial neural network. The first artificial neural network spontaneously learns the repeating pattern through a combination of Spike Timing Dependent Plasticity (STDP) in dynamic synapses and lateral inhibition between neurons. Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron…The synaptic weights of all neurons that did not activate are not updated. This causes the neurons to become selective to a specific pattern after only three to five repetitions. On each activation, the neighboring neurons are inhibited so that each neuron learns a unique pattern. Multiple neurons may activate on patterns that in combination constitute a feature.”) Van Der Made does not explicitly disclose: the method comprising: defining a first sub-network of the spiking neural network, the first sub-network comprising a first sub-set of the spiking neurons connected to receive synaptic output signals from a first sub-set of the synaptic elements, the first sub-set of the spiking neurons comprising a plurality of the spiking neurons and the first sub-set of the synaptic elements comprising a plurality of the synaptic elements; However, Gottfried teaches: the method comprising: defining a first sub-network of the spiking neural network, the first sub-network comprising a first sub-set of the spiking neurons connected to receive synaptic output signals from a first sub-set of the synaptic elements; (Gottfried, paras. 
0026, 0036 and Fig 1: “The state space detectors are configured to receive signals from the input state detectors and transmit spiking signals to one or more output nodes (e.g., 142, 144, and 145) over one or more temporal synapse circuits 130 when certain activation conditions are present. The state space detectors can be neuron circuits in one or more neuromorphic hardware devices” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. p 150) from the activated cells a.” Examiner notes that first sub-network is a first layer of the three layer feed-forward network as taught by Gottfried) the first sub-set of the spiking neurons comprising a plurality of the spiking neurons and the first sub-set of the synaptic elements comprising a plurality of the synaptic elements;(Gottfried, paras. 0026, 0036 and Fig 1: “The state space detectors are configured to receive signals from the input state detectors and transmit spiking signals to one or more output nodes (e.g., 142, 144, and 145) over one or more temporal synapse circuits 130 when certain activation conditions are present. The state space detectors can be neuron circuits in one or more neuromorphic hardware devices” “The example system 100 can be organized in a three-layer feed-forward embodiment. Such an embodiment is depicted in FIG. 1. Such a three-layer feed-forward network can comprise two consecutive transformations f 162: S→A and g 164: A→P. 
The transformation f 162 can map an input vector sεS (e.g., 114 and 118) to a binary hidden (association) cell vector aεA (e.g., 126, 127, and 128), while g 164 can compute the network response pεP (e.g. p 150) from the activated cells a.” Examiner notes that the first set of spiking neurons is contained in a first layer and synaptic elements are taught by Van Der Made as set forth above in the form of pre- and post-synaptic neurons). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made. Van Der Made teaches artificial neural networks to simulate a biological nervous system; Gottfried teaches implementing temporal and spatio-temporal spiking neural networks (SNNs) using neuromorphic hardware devices. One of ordinary skill would have been motivated to combine the teachings of Gottfried into Van Der Made in order to enable local generalization and fast learning in a robust manner (Gottfried, para. 0021). Regarding claim 31, Van Der Made, as modified, teaches claim 1 above. Gottfried further teaches: The spiking neural network of claim 1, wherein the uniqueness is at least a function of the number of spikes generated by each output neuron of the cell. (Gottfried, para. 0032 and 0052: “The output spiking signal accumulator p 150 can be configured to detect the different times at which output signals are received from different output nodes and to generate an output signal based on the different times. For example, a timeline 151 is depicted in FIG. 1. In this example, an output spiking signal is received from the output node P1 145 at time 152, a second output spiking signal is received from the output node P2 144 at time 154, and a third output spiking signal is received from the output node PL 142 at time 156.
The output spiking signal accumulator p 150 is configured to generate an output signal 158 based on the times at which the output spiking signals were received. For example, the output signal can be a weighted average of the different times. The output signal 158 can be transmitted as an output of the network. The output spiking signal accumulator p 150 can comprise one or more hardware and/or software components.” “An output neuron circuit can receive more than one adjusted spiking signal. For example, in the presence of input stimuli, a neuron circuit in the state space detection layer may transmit multiple spiking signals to an output neuron circuit over a period of time.” Examiner notes that Gottfried teaches output nodes (P1, P2, PL) each outputting a spiking signal at a different time, as well as transmission of spiking signals over a period of time, such that the number of spike timings over a fixed period of time equals the number of spikes). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1. Regarding claim 32, Van Der Made, as modified, teaches claim 1 above. Gottfried further teaches: The method of claim 27, wherein the uniqueness is at least a function of the number of spikes generated by each output neuron of the cell. (Gottfried, para. 0032 and 0052: “The output spiking signal accumulator p 150 can be configured to detect the different times at which output signals are received from different output nodes and to generate an output signal based on the different times. For example, a timeline 151 is depicted in FIG. 1. In this example, an output spiking signal is received from the output node P1 145 at time 152, a second output spiking signal is received from the output node P2 144 at time 154, and a third output spiking signal is received from the output node PL 142 at time 156.
The output spiking signal accumulator p 150 is configured to generate an output signal 158 based on the times at which the output spiking signals were received. For example, the output signal can be a weighted average of the different times. The output signal 158 can be transmitted as an output of the network. The output spiking signal accumulator p 150 can comprise one or more hardware and/or software components.” “An output neuron circuit can receive more than one adjusted spiking signal. For example, in the presence of input stimuli, a neuron circuit in the state space detection layer may transmit multiple spiking signals to an output neuron circuit over a period of time.” Examiner notes that Gottfried teaches output nodes (P1, P2, PL) each outputting a spiking signal at a different time, as well as transmission of spiking signals over a period of time, such that the number of spike timings over a fixed period of time equals the number of spikes). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1. Regarding claim 33, Van Der Made, as modified, teaches claim 1 above. Van Der Made further teaches: wherein the analog circuit element or the digital hardwired logic circuit that implements a particular spiking neuron or a particular synaptic element is only activated when the synaptic output signal or the synaptic input signal is respectively received at the particular spiking neuron or the particular synaptic element, such that the spiking neural network performs event-driven processing. (Van Der Made, paras. 0004 and 0005: “A digital neuron consists of dendrites that receive one or more synaptic inputs and the axon that shapes an output spike signal. Neurons are connected through synapse that receives feedback from the post-synaptic neuron which causes the efficacy of the connection to be modified.
The output of the plurality of synapses is integrated by dendrite circuits and the soma circuit. The output of the soma circuit is applied to the input of an axon circuit. The axon circuit emits one or more output spikes governed by the soma output value. The output spike of the axon circuit is transmitted to the plurality of synapses in the next layer.” “The first artificial neural network spontaneously learns the repeating pattern through a combination of Spike Timing Dependent Plasticity (STDP) in dynamic synapses and lateral inhibition between neurons. Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron. During each cycle of the Neural Network, the synapses that received spikes from the pre-synaptic neuron that contributed to an output event in the post-synaptic neuron have their weight values increased, while all other synapses, connected to the same post-synaptic neuron have their weights reduced.” Examiner notes Van Der Made teaches a synapse receiving feedback only from pre-synaptic neurons that contributed to an output event) Regarding claim 34, Van Der Made, as modified, teaches claim 1 above. Van Der Made further teaches: wherein the sub-network output pattern signal is unique for every unique sub-network input pattern signal of the training set by being encoded as a distinct spiking response of the first sub-set of spiking neurons, the distinct spiking response comprising a distinct number of spikes and/or distinct spike timing generated by output neurons of the first sub-network. (Van Der Made, para. 0005 and 0029: “The system comprising: a hierarchical arrangement of a first artificial neural network and a second artificial neural network, said first artificial neural network spontaneously learns to recognize any repeating pattern in an input stream and the second artificial neural network is trained to interpret and label the response from the first artificial neural network. 
The first artificial neural network spontaneously learns the repeating pattern through a combination of Spike Timing Dependent Plasticity (STDP) in dynamic synapses and lateral inhibition between neurons. Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron…The synaptic weights of all neurons that did not activate are not updated. This causes the neurons to become selective to a specific pattern after only three to five repetitions. On each activation, the neighboring neurons are inhibited so that each neuron learns a unique pattern. Multiple neurons may activate on patterns that in combination constitute a feature.” “The first spiking neural network 104 learns the repeating features in the input data stream 306 that characterizes an applied dataset through a function known as Spike Timing Dependent Plasticity (commonly abbreviated as STDP). STDP modifies the characteristics of the synapses depending on the timing of pre-synaptic spikes to post-synaptic spikes. The first spiking neural network utilizes lateral inhibition. In lateral inhibition, the first neuron, to respond to a specific pattern, inhibits other neurons within the same lateral layer in order for these neurons to learn different features.” Examiner notes that Van Der Made teaches neurons that, in response to a specific pattern, inhibit other neurons so that those neurons can learn different features, such that each neuron responds (i.e. outputs) to a unique pattern). Regarding claim 36, Van Der Made, as modified, teaches claim 35 above. Van Der Made further teaches: wherein testing the uniqueness criterion comprises computing a distance between the spiking response generated for a given sub-network input pattern signal and spiking responses generated for one or more other sub-network input pattern signals. (Van Der Made, para.
0003: “The neural encoder generates a temporal-based pulse coded representation of spikes in the neural signal based on integrate-and-fire coding of the received neural signal and can include spike detection and encode features of the spikes as timing between pulses such that the timing between pulses represents features of the spikes.” Examiner notes Van Der Made teaches distance between spikes in terms of the timing between pulses). Regarding claim 37, Van Der Made, as modified, teaches claim 36 above. Gottfried further teaches: The spiking neural network of claim 36, wherein the distance is computed using a spike-train distance metric. (Gottfried, para. 0004: “an output spiking signal accumulator configured to: receive the output spiking signals, determine times at which the output spiking signals are received, and generate a network output signal based on the times at which the output spiking signals are transmitted by the output neuron circuits.” Examiner notes Gottfried teaches computing distance as accumulating spiking signals based on output spiking signals transmitted by output neuron circuits). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made as set forth above with respect to claim 1. Regarding claim 40, Van Der Made, as modified, teaches claim 27 above. Van Der Made further teaches: wherein the analog circuit element or the digital hardwired logic circuit that implements a particular spiking neuron or a particular synaptic element is only activated when the synaptic output signal or the synaptic input signal is respectively received at the particular spiking neuron or the particular synaptic element, such that the spiking neural network performs event-driven processing. (Van Der Made, para. 0005: “Synapses receive inputs from the pre-synaptic neuron and the post-synaptic neuron.
During each cycle of the Neural Network, the synapses that received spikes from the pre-synaptic neuron that contributed to an output event in the post-synaptic neuron have their weight values increased, while all other synapses, connected to the same post-synaptic neuron have their weights reduced. This changes the response of the neuron, increasing the likelihood that the neuron activates, and thus produces an output spike, when the same pattern reoccurs. The synaptic weights of all neurons that did not activate are not updated. This causes the neurons to become selective to a specific pattern after only three to five repetitions. On each activation, the neighboring neurons are inhibited so that each neuron learns a unique pattern. Multiple neurons may activate on patterns that in combination constitute a feature.”) Regarding claim 41, Van Der Made, as modified, teaches claim 27 above. Van Der Made further teaches: Wherein the sub-network output pattern signal is unique for every unique sub-network input pattern signal of the training set by being encoded as a distinct spiking response of the first sub-set of spiking neurons, the distinct spiking response comprising a distinct number of spikes and/or distinct spike timing generated by output neurons of the first sub-network. (Van Der Made, para. 0005: “During each cycle of the Neural Network, the synapses that received spikes from the pre-synaptic neuron that contributed to an output event in the post-synaptic neuron have their weight values increased, while all other synapses, connected to the same post-synaptic neuron have their weights reduced. This changes the response of the neuron, increasing the likelihood that the neuron activates, and thus produces an output spike, when the same pattern reoccurs. The synaptic weights of all neurons that did not activate are not updated. This causes the neurons to become selective to a specific pattern after only three to five repetitions. 
On each activation, the neighboring neurons are inhibited so that each neuron learns a unique pattern. Multiple neurons may activate on patterns that in combination constitute a feature.”) Regarding claim 43, Van Der Made, as modified, teaches claim 42 above. Gottfried further teaches: wherein testing the uniqueness criterion comprises computing a distance between spiking responses generated for different sub-network input pattern signals. (Gottfried, para. 0004: “an output spiking signal accumulator configured to: receive the output spiking signals, determine times at which the output spiking signals are received, and generate a network output signal based on the times at which the output spiking signals are transmitted by the output neuron circuits.” Examiner notes Gottfried teaches computing distance as accumulating spiking signals based on output spiking signals transmitted by output neuron circuits). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made as set forth above with respect to claim 1. Regarding claim 44, Van Der Made, as modified, teaches claim 43 above. Gottfried further teaches: wherein the distance is computed using a spike-train distance metric. (Gottfried, para. 0004: “an output spiking signal accumulator configured to: receive the output spiking signals, determine times at which the output spiking signals are received, and generate a network output signal based on the times at which the output spiking signals are transmitted by the output neuron circuits.” Examiner notes Gottfried teaches computing distance as accumulating spiking signals based on output spiking signals transmitted by output neuron circuits). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made as set forth above with respect to claim 1.
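Claims 37 and 44 recite computing the distance "using a spike-train distance metric," but neither cited reference names a particular metric. As an illustration only (not drawn from the cited art), the following sketch computes the Victor-Purpura distance, one standard spike-train metric; the function name and the cost parameter q are illustrative choices.

```python
# Illustrative sketch of one standard spike-train distance metric
# (Victor-Purpura); the cited references do not specify a metric.
# Cost model: 1.0 to delete or insert a spike; q * |t_a - t_b| to shift one.

def victor_purpura(train_a, train_b, q):
    """Distance between two sorted lists of spike times, with shift cost q."""
    n, m = len(train_a), len(train_b)
    # G[i][j] = distance between the first i spikes of train_a
    # and the first j spikes of train_b (edit-distance dynamic program).
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        G[i][0] = float(i)  # delete all i spikes
    for j in range(1, m + 1):
        G[0][j] = float(j)  # insert all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(
                G[i - 1][j] + 1.0,  # delete a spike from train_a
                G[i][j - 1] + 1.0,  # insert a spike from train_b
                G[i - 1][j - 1]
                + q * abs(train_a[i - 1] - train_b[j - 1]),  # shift a spike
            )
    return G[n][m]

# Identical trains are at distance 0; a shifted spike costs q * shift
# when that is cheaper than one deletion plus one insertion.
print(victor_purpura([1.0, 5.0], [1.0, 5.0], q=0.5))  # 0.0
print(victor_purpura([1.0], [2.0], q=0.5))            # 0.5
print(victor_purpura([1.0], [2.0], q=10.0))           # 2.0
```

The parameter q sets the timescale of sensitivity: for large q the metric approaches a count of non-coincident spikes, while for q near zero it reduces to the difference in spike counts, which is consistent with the claim 31/32 recitation that uniqueness is at least a function of the number of spikes.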
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Van Der Made, in view of Gottfried, and further in view of Walter, F., Röhrbein, F. & Knoll, A. (Computation by Time. Neural Process Lett 44, 103–124, 17 Nov 2015; hereinafter “Walter”). Regarding claim 7, Van Der Made, as modified, teaches claim 5 above. Van Der Made, as modified, does not explicitly disclose: wherein by inducing a spike artificially at a neuron n of the sub-network at a desired time, the nature of the relationship between neurons in preceding layers of the sub-network and the neuron n is established as causal, anti-causal, or non-causal. However, Walter teaches: wherein by inducing a spike artificially at a neuron n of the sub-network at a desired time, the nature of the relationship between neurons in preceding layers of the sub-network and the neuron n is established as causal, anti-causal, or non-causal. (Walter, sec. 2.2.1, 2.2.3, and Figure 4: “STDP adapts the efficacy of a synapse based on the relative timing of spike emission by its two adjacent neurons. This is illustrated in Fig. 4. If the postsynaptic neuron fires a spike after the presynaptic neuron within a certain time window, i.e. Δt > 0, the synaptic connection is strengthened. Otherwise, if Δt < 0, there is no causal relationship between the two spikes and the synapse is depressed. Based on the original learning window from Fig. 4, a huge number of different variations has been proposed [49]. For example, in anti-STDP non-causal spikes yield potentiation and causal spikes elicit synaptic depression” “The synaptic weight change d/dt w_ij(t) between the postsynaptic neuron i and the presynaptic neuron j is computed as a sum of an STDP process X_dj(t) and an anti-STDP process X_ij(t).
While the former models the correlation between the desired output and the input spike train provided by the teacher, the latter models the anti-causal correlation between the presynaptic and postsynaptic spikes.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Walter into Van Der Made, as modified. Walter teaches methods using spiking neurons modeled to approximate the complex dynamic behavior of biological neurons. One of ordinary skill would have been motivated to combine the teachings of Walter into Van Der Made, as modified, in order to more accurately model short-term or long-term synaptic plasticity of biological neurons (Walter, sec. 2.2.1). Claims 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over Van Der Made in view of Gottfried, and further in view of Imam et al. (US 2018/0174041 A1; hereinafter “Imam”). Regarding claim 9, Van Der Made, as modified, teaches claim 8 above. Imam further teaches: wherein using the CC-STDP learning rule, the sub-network output pattern signal can be steered towards a different population of the sub-network's output neurons that fired in response to a particular sub-network input pattern signal, and their precise firing times, in order to reach a particular sub-network output pattern signal. (Imam, paras. 0050 and 0051: “Continuing with the example of FIGS. 4A-4B, a neuromorphic computing device may support implementation of an SNN modeling STDP. In some implementations, a programmer of the SNN may programmatically define attributes governing how STDP will be implemented in a given SNN. For instance, STDP behavior may be defined by setting one or more of the parameters δ1, δ2 and d as utilized in the example of Equation (9). In the example of FIG.
4A, as shown in the block diagram 410, STDP modeling for the SNN may be defined such that the synaptic weights of those synapses which carried a spike in the sequence of spike messages 415 will decrease, while the other synapses, which did not carry a spike message in the sequence will have their respective synaptic weights increased, as represented in 410. For instance, this behavior can be achieved by setting δ1 to a negative value and δ2 to a positive value.” “Turning to FIG. 4B, a set of block diagrams are shown illustrating one example implementation of the STDP mechanism introduced in FIG. 4A. In one example, a STDP learning rule may be configured for a particular SNN such that the synaptic weight of a given synapse increases if a presynaptic spike is sent on that synapse by a neuron within 1 unit of time (e.g., 1 time step, 1 ms, etc.) of that neuron receiving a postsynaptic spike (e.g., from another neuron).”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Imam into Van Der Made, as modified. Imam teaches a neuromorphic computing system 105, which may accept as inputs data from one or a variety of sources to generate outputs. One of ordinary skill would have been motivated to combine the teachings of Imam into Van Der Made, as modified, in order to enable a neural network to adjust the strength of connections between biological neurons based on the relative timing of a particular neuron's output and input action potentials (or spikes) received by the particular neuron from other neurons (Imam, para. 0029). Regarding claim 10, Van Der Made, as modified, teaches claim 8 above. Imam further teaches: wherein the CC-STDP learning rule adjusts weights of the first sub-set of synaptic elements on the basis of a firing event only if the firing event contributes to firing of neurons in subsequent layers of the subnetwork. (Imam, para. 0050 and FIGS.
4A-4B: “Continuing with the example of FIGS. 4A-4B, a neuromorphic computing device may support implementation of an SNN modeling STDP. In some implementations, a programmer of the SNN may programmatically define attributes governing how STDP will be implemented in a given SNN. For instance, STDP behavior may be defined by setting one or more of the parameters δ1, δ2 and d as utilized in the example of Equation (9). In the example of FIG. 4A, as shown in the block diagram 410, STDP modeling for the SNN may be defined such that the synaptic weights of those synapses which carried a spike in the sequence of spike messages 415 will decrease, while the other synapses, which did not carry a spike message in the sequence will have their respective synaptic weights increased, as represented in 410. For instance, this behavior can be achieved by setting δ1 to a negative value and δ2 to a positive value. Such a configuration of the STDP modeling in this SNN may provide for synaptic weights to be increased based on a negative correlation between input and output spikes. Using such a configuration, STDP may be leveraged to induce weight asymmetry in one of the two directions connecting any two neurons in this portion of an example SNN. Accordingly, should a spike or activation occur at a neuron (e.g., 401-405) upstream from the neuron (e.g., 406) that triggered the initial sequence of spike messages 415, the weight asymmetry may bias the sending neurons to propagate a sequence of spikes back up the chain in a direction opposite that of the initial sequence of spike messages 415.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Imam into Van Der Made, as modified, as set forth above with respect to claim 9. Regarding claim 11, Van Der Made, as modified, teaches claim 8 above.
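The Imam passages describe STDP behavior configured by parameters δ1 and δ2 (e.g., δ1 negative so that one class of synapses is depressed, δ2 positive so that the other is potentiated). A minimal pair-based sketch of such a parameterized rule, assuming an exponential timing kernel and weight clamping; the function name and default values are illustrative assumptions, not taken from Imam:

```python
import math

def stdp_update(weight, dt, delta1=-0.01, delta2=0.01, tau=20.0,
                w_min=0.0, w_max=1.0):
    # dt = t_post - t_pre. A causal pair (dt > 0) applies delta2
    # (potentiation when delta2 > 0); an anti-causal pair (dt < 0)
    # applies delta1 (depression when delta1 < 0). Each change decays
    # exponentially with the timing difference, and the resulting
    # weight is clamped to [w_min, w_max].
    if dt > 0:
        weight += delta2 * math.exp(-dt / tau)
    elif dt < 0:
        weight += delta1 * math.exp(dt / tau)
    return min(max(weight, w_min), w_max)
```

Flipping the signs of delta1 and delta2 would give the anti-STDP variant Walter describes, in which causal pairs are depressed and anti-causal pairs potentiated.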
Imam further teaches: wherein the training using the CC-STDP learning rule comprises inducing and/or inhibiting spike generation at neurons of the first sub-set of spiking neurons at specific times. (Imam, paras. 0050 and 0051: “Continuing with the example of FIGS. 4A-4B, a neuromorphic computing device may support implementation of an SNN modeling STDP. In some implementations, a programmer of the SNN may programmatically define attributes governing how STDP will be implemented in a given SNN. For instance, STDP behavior may be defined by setting one or more of the parameters δ1, δ2 and d as utilized in the example of Equation (9). In the example of FIG. 4A, as shown in the block diagram 410, STDP modeling for the SNN may be defined such that the synaptic weights of those synapses which carried a spike in the sequence of spike messages 415 will decrease, while the other synapses, which did not carry a spike message in the sequence will have their respective synaptic weights increased, as represented in 410. For instance, this behavior can be achieved by setting δ1 to a negative value and δ2 to a positive value.” “Turning to FIG. 4B, a set of block diagrams are shown illustrating one example implementation of the STDP mechanism introduced in FIG. 4A. In one example, a STDP learning rule may be configured for a particular SNN such that the synaptic weight of a given synapse increases if a presynaptic spike is sent on that synapse by a neuron within 1 unit of time (e.g., 1 time step, 1 ms, etc.) of that neuron receiving a postsynaptic spike (e.g., from another neuron).”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Imam into Van Der Made, as modified, as set forth above with respect to claim 9. Regarding claim 12, Van Der Made, as modified, teaches claim 11 above.
Gottfried further teaches: wherein inducing spike generation in a neuron comprises driving a membrane of the neuron to a voltage that exceeds the firing threshold of the membrane, by means of a bias voltage input, thereby inducing the generation of a spike, or injecting an artificial spike into the neuron's output at the specific time. (Gottfried, para. 0052: “At 230, the one or more adjusted spiking signals are received by the one or more output neuron circuits and used to adjust capacities of the one or more output neuron circuits. An output neuron circuit can comprise a capacity, such as a charge capacity or potential, to which an adjusted spiking signal can be added. When an adjusted capacity of an output neuron circuit is greater than or equal to a specified threshold, at 240 the output neuron circuit transmits an output spiking signal.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1. Regarding claim 13, Van Der Made, as modified, teaches claim 11 above. Gottfried further teaches: wherein inhibiting spike generation in a neuron comprises driving a membrane of the neuron to its refractory voltage, or lowest possible voltage, preventing the generation of a spike, or disabling spike generation at the neuron. (Gottfried, para. 0039: “Neurons (e.g., neuron circuits) in an SNN can be connected by either inhibitory or excitatory synapses (e.g., synapse circuits, such as temporal synapse circuits).
A spike can leave a decaying trace at a synapse, over which the neuron can integrate its capacity state, which itself can decay back to a predefined rest-capacity.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Gottfried into Van Der Made, as modified, as set forth above with respect to claim 1. Claims 35, 38-39, 42, and 45-46 are rejected under 35 U.S.C. 103 as being unpatentable over Van Der Made, in view of Gottfried, and further in view of Ahmed et al. (“Simulation of bayesian learning and inference on distributed stochastic spiking neural networks," 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 2016, pp. 1044-1051, doi: 10.1109/IJCNN.2016.7727313; hereinafter, “Ahmed”). Regarding claim 35, Van Der Made, as modified, teaches claim 1 above. Ahmed further teaches: wherein, during training of the first sub-network, (Ahmed, sec. IV: “SpNSim has the ability to simultaneously simulate and train heterogeneous neural networks, i.e., networks consisting of different spiking neuron models with different behaviors including activation functions and STDP rules. This is a key feature in implementing complex neural networks with distinct subnetworks.”) a spiking response generated in response to each presented sub-network input pattern signal is tested against a uniqueness criterion (Ahmed, sec.
IV(A): “A ReLU neuron based inhibition layer is attached to the output layer which realizes hard WTA function to ensure that only one feature will be activated for each kernel so that each Bayesian neuron learns a unique feature.” Examiner notes a uniqueness criterion is interpreted as a standard for determining uniqueness such as a winner-take-all standard taught by Ahmed for the purpose of ensuring a unique feature), and the weights of the first sub-set of synaptic elements are iteratively adapted until the uniqueness criterion is satisfied. (Van Der Made, para. 0005: “During each cycle of the Neural Network, the synapses that received spikes from the pre-synaptic neuron that contributed to an output event in the post-synaptic neuron have their weight values increased, while all other synapses, connected to the same post-synaptic neuron have their weights reduced. This changes the response of the neuron, increasing the likelihood that the neuron activates, and thus produces an output spike, when the same pattern reoccurs. The synaptic weights of all neurons that did not activate are not updated. This causes the neurons to become selective to a specific pattern after only three to five repetitions. On each activation, the neighboring neurons are inhibited so that each neuron learns a unique pattern. Multiple neurons may activate on patterns that in combination constitute a feature.” Examiner notes Ahmed explicitly teaches subnetworks and a uniqueness standard and Van Der Made teaches iterative learning and modification of weights such that neurons become selective to a specific—i.e. unique pattern as taught by Ahmed—after some iterations). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Ahmed into Van Der Made as modified.
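Ahmed's "uniqueness criterion" is read onto a hard winner-take-all (WTA) inhibition layer: on each presentation, the most strongly activated output neuron fires and all neighbors are inhibited, so each neuron comes to respond to a unique pattern. A minimal sketch of hard WTA, for illustration only (not code from Ahmed; the function name is hypothetical):

```python
def hard_wta(activations):
    # Hard winner-take-all: only the neuron with the largest activation
    # fires; every other neuron is inhibited (output 0), so each input
    # pattern is associated with a single winning output neuron.
    winner = max(range(len(activations)), key=lambda i: activations[i])
    return [1 if i == winner else 0 for i in range(len(activations))]
```

Because exactly one neuron wins per presentation, repeated application during training drives different neurons to specialize on different input patterns.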
Ahmed teaches an efficient, scalable and flexible spiking neural network simulator, which supports learning through spike-timing dependent plasticity. One of ordinary skill would have been motivated to combine the teachings of Ahmed into Van Der Made as modified in order to enhance unsupervised learning and decision making as well as increasing the fault tolerance and noise resilience of a spiking neural network system (Ahmed, sec. I). Regarding claim 38, Van Der Made, as modified, teaches claim 1 above. Ahmed further teaches: wherein training of the first sub-network comprises inducing one or more spikes at selected output neurons at specified times in order to steer the spiking response toward a desired unique spiking response. (Ahmed, sec. V(A): “The neurons in the input layer fire, facilitating the Bayesian neurons to fire. Based on their relative spike-timing, the weight of the synapse is updated. A ReLU neuron based inhibition layer is attached to the output layer which realizes hard WTA function to ensure that only one feature will be activated for each kernel so that each Bayesian neuron learns a unique feature”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Ahmed into Van Der Made, as modified, as set forth above with respect to claim 35. Regarding claim 39, Van Der Made, as modified, teaches claim 1 above. Ahmed further teaches: wherein the distinct spiking response is defined within a discrete time bin corresponding to a temporal window extending from an onset of an input stimulus to a final spike generated by an output neuron of the first sub-network. (Ahmed, sec. V(A): “The learning rate is fixed at 0.01, and the STDP period is 30 ticks for the experiments.
The duration of STDP window is in the range of 10ms in a biological system”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Ahmed into Van Der Made, as modified, as set forth above with respect to claim 35. Regarding claim 42, Van Der Made, as modified, teaches claim 27 above. Ahmed further teaches: further comprising, during training of the first sub-network (Ahmed, sec. IV: “SpNSim has the ability to simultaneously simulate and train heterogeneous neural networks, i.e., networks consisting of different spiking neuron models with different behaviors including activation functions and STDP rules. This is a key feature in implementing complex neural networks with distinct subnetworks.”), testing a spiking response generated in response to each presented sub-network input pattern signal against a uniqueness criterion (Ahmed, sec. IV(A): “A ReLU neuron based inhibition layer is attached to the output layer which realizes hard WTA function to ensure that only one feature will be activated for each kernel so that each Bayesian neuron learns a unique feature.” Examiner notes a uniqueness criterion is interpreted as a standard for determining uniqueness such as a winner-take-all standard taught by Ahmed for the purpose of ensuring a unique feature), and iteratively adapting weights of the first sub-set of synaptic elements until the uniqueness criterion is satisfied (Van Der Made, para. 0005: “During each cycle of the Neural Network, the synapses that received spikes from the pre-synaptic neuron that contributed to an output event in the post-synaptic neuron have their weight values increased, while all other synapses, connected to the same post-synaptic neuron have their weights reduced. This changes the response of the neuron, increasing the likelihood that the neuron activates, and thus produces an output spike, when the same pattern reoccurs.
The synaptic weights of all neurons that did not activate are not updated. This causes the neurons to become selective to a specific pattern after only three to five repetitions. On each activation, the neighboring neurons are inhibited so that each neuron learns a unique pattern. Multiple neurons may activate on patterns that in combination constitute a feature.” Examiner notes Ahmed explicitly teaches subnetworks and a uniqueness standard and Van Der Made teaches iterative learning and modification of weights such that neurons become selective to a specific—i.e. unique pattern as taught by Ahmed—after some iterations). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Ahmed into Van Der Made, as modified, as set forth above with respect to claim 35. Regarding claim 45, Van Der Made, as modified, teaches claim 27 above. Ahmed further teaches: wherein training the first sub-network comprises inducing one or more spikes at selected output neurons at specified times to steer the spiking response toward a desired unique spiking response. (Ahmed, sec. V(A): “The neurons in the input layer fire, facilitating the Bayesian neurons to fire. Based on their relative spike-timing, the weight of the synapse is updated. A ReLU neuron based inhibition layer is attached to the output layer which realizes hard WTA function to ensure that only one feature will be activated for each kernel so that each Bayesian neuron learns a unique feature”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Ahmed into Van Der Made, as modified, as set forth above with respect to claim 35. Regarding claim 46, Van Der Made, as modified, teaches claim 27 above.
Ahmed further teaches: wherein the distinct spiking response is defined within a discrete time bin corresponding to a temporal window extending from an onset of an input stimulus to a final spike generated by an output neuron of the first sub-network. (Ahmed, sec. V(A): “The learning rate is fixed at 0.01, and the STDP period is 30 ticks for the experiments. The duration of STDP window is in the range of 10ms in a biological system”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Ahmed into Van Der Made, as modified, as set forth above with respect to claim 35. Response to Applicant Remarks/Argument 35 U.S.C. § 101 In light of applicant’s amendments, the previously asserted 35 U.S.C. § 101 rejection has been withdrawn. 35 U.S.C. § 103 Applicant argues that amendments made to current claims traverse the previously asserted 35 U.S.C. § 103 rejections. However, such amended claims continue to stand rejected for the reasons set forth above. Applicant explains that claim 1 requires weights of a first sub-network are configured by training such that the sub-network output pattern signal is unique for every unique sub-network input pattern signal and wherein the degree of uniqueness is controllable through operating parameters of spiking neurons and synaptic elements. For examination purposes, “degree of uniqueness” does not convey or claim any concept other than simply uniqueness. Something is unique or not. A degree of difference may convey how different two things are, but something cannot be more unique than something else where unique is defined as one-of-a-kind. If applicant intends to claim a degree of difference rather than uniqueness, further clarity of claims is needed. Applicant further argues that Van Der Made does not teach a “sub-network” because Van Der Made does not describe a neuron as a sub-network nor a multi-neuron output representation. Applicant’s argument is unclear.
The first sub-network and sub-set is taught by Gottfried as a first layer of a three-layer feed-forward network, and the first sub-set of synaptic elements is taught by Van Der Made as a set of pre- and post-synaptic neurons. Where applicant defines a sub-network as “a first sub-set of spiking neurons connected to receive synaptic signals from a first subset of synaptic elements”, the combination of Gottfried and Van Der Made teaches such definition. Towards the bottom of page 18 of applicant’s remarks, applicant further argues that Gottfried does not teach configuring weights through training until output spike patterns are unique for each unique input pattern. However, a person of ordinary skill in the art would understand training a neural network necessarily results in an overall modification of weights. Moreover, Van Der Made teaches such training and updating of weights to learn a unique pattern at paragraph 0005: “The synaptic weights of all neurons that did not activate are not updated. This causes the neurons to become selective to a specific pattern after only three to five repetitions. On each activation, the neighboring neurons are inhibited so that each neuron learns a unique pattern”. On page 18, applicant argues that Van Der Made specifically does not teach spike pattern outputs that are unique for each unique input, stating that “Van Der Made expressly allows that multiple neurons may activate on patterns that in combination constitute a feature, which is inconsistent with the claimed requirement of uniqueness”. However, that multiple neurons may activate does not mean that any combination of specific neurons or the patterns of activation of any particular neurons is not unique. In fact, as set forth above, Van Der Made teaches that “on each activation, the neighboring neurons are inhibited so that each neuron learns a unique pattern”. Conclusion THIS ACTION IS MADE FINAL.
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sally T. Ley whose telephone number is (571)272-3406. The examiner can normally be reached Monday - Thursday, 10:00am - 6:00pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /STL/Examiner, Art Unit 2147 /VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147

Prosecution Timeline

May 17, 2021
Application Filed
Aug 19, 2024
Non-Final Rejection — §103
Feb 20, 2025
Response Filed
May 01, 2025
Final Rejection — §103
Jun 24, 2025
Request for Continued Examination
Jun 25, 2025
Response after Non-Final Action
Jul 21, 2025
Non-Final Rejection — §103
Jan 27, 2026
Response Filed
Feb 15, 2026
Final Rejection — §103
Apr 15, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443830
COMPRESSED WEIGHT DISTRIBUTION IN NETWORKS OF NEURAL PROCESSORS
2y 5m to grant Granted Oct 14, 2025
Patent 12135927
EXPERT-IN-THE-LOOP AI FOR MATERIALS DISCOVERY
2y 5m to grant Granted Nov 05, 2024
Patent 11880776
GRAPH NEURAL NETWORK (GNN)-BASED PREDICTION SYSTEM FOR TOTAL ORGANIC CARBON (TOC) IN SHALE
2y 5m to grant Granted Jan 23, 2024
Based on this examiner's 3 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
15%
Grant Probability
44%
With Interview (+28.8%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
