Prosecution Insights
Last updated: April 19, 2026
Application No. 17/446,685

SPIKING NEURAL NETWORK CIRCUIT

Status: Non-Final OA (§103)
Filed: Sep 01, 2021
Examiner: HONORE, EVEL NMN
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 3 (Non-Final)
Grant Probability: 39% (At Risk)
Expected OA Rounds: 3-4
Time To Grant: 4y 5m
Grant Probability With Interview: 85%

Examiner Intelligence

Career Allow Rate: 39% (grants only 39% of cases; 7 granted / 18 resolved; -16.1% vs TC avg)
Interview Lift: strong, +46.4% for resolved cases with interview
Typical Timeline: 4y 5m avg prosecution; 38 currently pending
Career History: 56 total applications across all art units
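The headline figures above are related by simple arithmetic: the 39% career allow rate is 7 grants out of 18 resolved cases, and the +46.4% interview lift is the percentage-point gap between the with-interview grant rate (~85%) and the baseline rate. A minimal sketch, assuming the lift is computed as that simple difference (the vendor's actual model is not shown here, and the 0.853 with-interview value is an assumed underlying figure behind the rounded "85%"):

```python
# Reproduce the dashboard's headline numbers from the raw counts shown above.
granted, resolved = 7, 18

career_allow_rate = granted / resolved                    # 7/18 = 0.3888... -> "39%"
with_interview_rate = 0.853                               # assumed value behind "85%"
interview_lift = with_interview_rate - career_allow_rate  # ~0.464 -> "+46.4%"

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Interview lift:    {interview_lift:+.1%}")
```

Note the lift is expressed in percentage points, not as a relative increase.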

Statute-Specific Performance

§101: 42.6% (+2.6% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 6.6% (-33.4% vs TC avg)
§112: 1.1% (-38.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 18 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to the Application filed on 09/05/2025. Claims 1-16 are pending in the case. Claims 1 and 10 are independent claims. Claims 1 and 10 have been currently amended.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/05/2025 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Akopyan et al. (Pub. No.: 20130073497 A1), hereinafter referred to as Akopyan, in view of LAM et al. (Pub. No.: 20210406650 A1), hereinafter referred to as LAM, and further in view of CRUZ-ALBRECHT et al. (Pub. No.: 20160364643 A1), hereinafter referred to as CRUZ-ALBRECHT.
With respect to claim 1, Akopyan discloses:

A spiking neural network circuit comprising: an axon circuit which generates an input spike signal configured to transmit via an input line having a predetermined length (In paragraph [0076], Akopyan discloses that the invention includes a neural network made up of many connected core circuits. Each core circuit has a group of electronic synapses that link several digital neurons. A synapse connects the axon of one neuron to the dendrite of another. When a neuron gets enough input signals, it creates a signal called a spike. The neural network also has a scheduler that gets this spike and sends it to a chosen axon in the synapse array at a specific time.);

a first synapse zone and a second synapse zone each including one or more synapses, wherein each of the synapses performs an operation based on the input spike signal and each weight (In paragraph [0029], Akopyan discloses that each synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. Each neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. In paragraph [0005] and paragraph [0052], Akopyan discloses the synaptic weight matrix state of pre-synaptic (Pre) neurons and post-synaptic (Post) neurons.);

a neuron circuit which generates an output spike signal based on operation results of the synapses (In paragraph [0082], Akopyan discloses that each neuron gets one input signal, checks if it has a spike, and sends one spike output to the AER system. Each neuron has its own settings for how it reacts and how it responds over time. Outputs from the crossbar come together for each axon and are combined with a shared indicator for excitement or inhibition. All input signals must finish before a neuron decides to spike. The spike sent to the AER system and back to the crossbar is handled before the neuron can create the next spike (within one clock cycle).);
wherein each of the branch nodes of the tree structure includes a driving buffer (In paragraph [0066], Akopyan discloses that neurons can perform their spike computation at the start of a timestep, or they can perform it continuously, spiking whenever their input drives them above a threshold. An axon requires buffering using a buffering circuit (such as memory), such that it can hold events from two timesteps.);

wherein the input spike signal is transferred to each of the synapses through a hierarchical structure of each driving buffer, where the input spike signal may be selectively transferred to each of the synapses based on each weight of each of the synapses (In paragraph [0048], Akopyan discloses that when a neuron sends a signal, an axon 15 sends a message to the crossbar 12. The crossbar checks the weight matrix W and sends messages to certain neurons 11 based on these weights. In one case, the crossbar also sends messages back from the neurons 11 to the axons 15. When a neuron sends a message, the crossbar reads the weight matrix W and sends messages to all axons connected to those neurons. This back-and-forth communication uses a weight matrix that can be switched.);

wherein each driving buffer is activated or deactivated based on each weight of each of the synapses (In paragraph [0070], Akopyan discloses that the controller 6 turns on its X port based on whether the X_internal is even or odd. This helps axon selector 21 choose the right axon bank 15 (Axon.Even or Axon.Odd).);

wherein a quantity of each driving buffer inserted into the input line is determined based on synapses connected to the axon circuit (In paragraph [0067], Akopyan discloses that computation and communication may be implemented in parallel using axon circuits that provide buffering for two events. During each cycle of the clock, each axon buffers events it receives from any neurons that spiked in timestep t (max of one) in a buffer.);

and wherein, when all weights of the synapses included in the first synapse zone are zero, the first synapse zone is deactivated, and when the weights are not zero, the first synapse zone is activated (In paragraph [0047], Akopyan discloses that the synapses are binary memory devices, wherein each synapse can have a weight "0" indicating it is non-conducting, or a weight "1" indicating it is conducting.).

With respect to claim 1, Akopyan does not explicitly disclose: wherein the input spike signal is transferred to the first synapse zone and the second synapse zone through a tree structure; and wherein each synapse includes a current source, a transistor and a weight memory, where the weight memory is configured to store a weight bit corresponding to each weight.

However, LAM discloses: wherein the input spike signal is transferred to the first synapse zone and the second synapse zone through a tree structure (In Fig. 1 and paragraph [0038], LAM discloses the artificial neuromorphic circuit where the axon driver of the pre-neuron sends a spike, which is transmitted to the dendrites of the post-neuron via the synapse circuit to stimulate the post-neuron.).

Akopyan and LAM are analogous art because both references concern a neuron that integrates input spikes and generates a spike event in response to the integrated input spikes. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Akopyan, in which each synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron, with the post-neuron circuit comprising a first control circuit for generating the firing signal, as taught by LAM.
The motivation for doing so would have been to improve spike computation and communication to run in parallel (See [0066] of Akopyan).

With respect to claim 1, Akopyan and LAM do not explicitly disclose: wherein each synapse includes a current source, a transistor and a weight memory, where the weight memory is configured to store a weight bit corresponding to each weight.

However, CRUZ-ALBRECHT discloses: wherein each synapse includes a current source, a transistor and a weight memory, where the weight memory is configured to store a weight bit corresponding to each weight (In paragraph [0067], CRUZ-ALBRECHT discloses a weight memory in which the synaptic weights are stored. In the network there are synapses between the input neurons and the output neuron.).

Akopyan, LAM, and CRUZ-ALBRECHT are analogous art because all three references concern a neuron that integrates input spikes and generates a spike event in response to the integrated input spikes. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Akopyan, in view of LAM, with a weight memory for storing synaptic weights, the weight memory coupled to the synapse circuit, and an interconnect fabric for interconnections to, from, and between the neuron circuits, as taught by CRUZ-ALBRECHT. The motivation for doing so would have been to improve the neural circuits and synapses with spike timing dependent plasticity (See [0011] of CRUZ-ALBRECHT).

With respect to claim 10, Akopyan discloses:

A spiking neural network circuit comprising: an axon circuit which generates an input spike signal configured to transmit via an input line having a predetermined length (In paragraph [0076], Akopyan discloses that the invention includes a neural network made up of many connected core circuits. Each core circuit has a group of electronic synapses that link several digital neurons. A synapse connects the axon of one neuron to the dendrite of another. When a neuron gets enough input signals, it creates a signal called a spike. The neural network also has a scheduler that gets this spike and sends it to a chosen axon in the synapse array at a specific time.);

wherein each of the synapses performs an operation based on the input spike signal and each weight (In paragraph [0029], Akopyan discloses that each synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. Each neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. In paragraph [0005] and paragraph [0052], Akopyan discloses the synaptic weight matrix state of pre-synaptic (Pre) neurons and post-synaptic (Post) neurons.);

a neuron circuit which generates an output spike signal based on operation results of the synapses (In paragraph [0082], Akopyan discloses that each neuron gets one input signal, checks if it has a spike, and sends one spike output to the AER system. Each neuron has its own settings for how it reacts and how it responds over time. Outputs from the crossbar come together for each axon and are combined with a shared indicator for excitement or inhibition. All input signals must finish before a neuron decides to spike. The spike sent to the AER system and back to the crossbar is handled before the neuron can create the next spike (within one clock cycle).);

wherein the input spike signal is transferred to each of the synapses through a hierarchical structure of each driving buffer, where the input spike signal may be selectively transferred to each of the synapses based on each weight of each of the synapses (In paragraph [0048], Akopyan discloses that when a neuron sends a signal, an axon 15 sends a message to the crossbar 12. The crossbar checks the weight matrix W and sends messages to certain neurons 11 based on these weights. In one case, the crossbar also sends messages back from the neurons 11 to the axons 15. When a neuron sends a message, the crossbar reads the weight matrix W and sends messages to all axons connected to those neurons. This back-and-forth communication uses a weight matrix that can be switched.);

wherein each driving buffer is activated or deactivated based on each weight of each of the synapses (In paragraph [0070], Akopyan discloses that the controller 6 turns on its X port based on whether the X_internal is even or odd. This helps axon selector 21 choose the right axon bank 15 (Axon.Even or Axon.Odd).);

wherein a quantity of each driving buffer inserted into the input line is determined based on the predetermined length of the input line connected to the axon circuit or the number of synapses connected to the axon circuit (In paragraph [0067], Akopyan discloses that computation and communication may be implemented in parallel using axon circuits that provide buffering for two events. During each cycle of the clock, each axon buffers events it receives from any neurons that spiked in timestep t (max of one) in a buffer.);

wherein, when all weights of the synapses included in each of the synapse zones are zero, each of the synapse zones is deactivated (In paragraph [0047], Akopyan discloses that the synapses are binary memory devices, wherein each synapse can have a weight "0" indicating it is non-conducting, or a weight "1" indicating it is conducting.);

and wherein, when at least one of the weights of the synapses included in each of the synapse zones is not zero, each of the synapse zones is activated (In paragraph [0049], Akopyan discloses an implementation in which the synapses are changed, the weight going from 0 to 1 (or 1 to 0), depending on the time difference.).

With respect to claim 10, Akopyan does not explicitly disclose: synapse zones each including one or more synapses; and wherein each synapse includes a current source, a transistor and a weight memory, where the weight memory is configured to store a weight bit corresponding to each weight.

However, LAM discloses: synapse zones each including one or more synapses (In Fig. 1 and paragraph [0038], LAM discloses the artificial neuromorphic circuit where the axon driver of the pre-neuron sends a spike, which is transmitted to the dendrites of the post-neuron via the synapse circuit to stimulate the post-neuron.).

Akopyan and LAM are analogous art because both references concern a neuron that integrates input spikes and generates a spike event in response to the integrated input spikes. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Akopyan, in which each synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron, with the post-neuron circuit comprising a first control circuit for generating the firing signal, as taught by LAM. The motivation for doing so would have been to improve spike computation and communication to run in parallel (See [0066] of Akopyan).

With respect to claim 10, Akopyan and LAM do not explicitly disclose: wherein each synapse includes a current source, a transistor and a weight memory, where the weight memory is configured to store a weight bit corresponding to each weight.

However, CRUZ-ALBRECHT discloses: wherein each synapse includes a current source, a transistor and a weight memory, where the weight memory is configured to store a weight bit corresponding to each weight (In paragraph [0067], CRUZ-ALBRECHT discloses a weight memory in which the synaptic weights are stored. In the network there are synapses between the input neurons and the output neuron.).
Akopyan, LAM, and CRUZ-ALBRECHT are analogous art because all three references concern a neuron that integrates input spikes and generates a spike event in response to the integrated input spikes. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Akopyan, in view of LAM, with a weight memory for storing synaptic weights, the weight memory coupled to the synapse circuit, and an interconnect fabric for interconnections to, from, and between the neuron circuits, as taught by CRUZ-ALBRECHT. The motivation for doing so would have been to improve the neural circuits and synapses with spike timing dependent plasticity (See [0011] of CRUZ-ALBRECHT).

Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Akopyan in view of LAM, CRUZ-ALBRECHT, and further in view of Liu et al. (US Patent No. 9,887,698 B2), hereinafter referred to as Liu.

Regarding claim 2, Akopyan, in view of LAM and CRUZ-ALBRECHT, discloses the elements of claim 1. Akopyan, in view of LAM and CRUZ-ALBRECHT, does not explicitly disclose: The spiking neural network circuit of claim 1, wherein the tree structure includes OR gates which output an enable signal to a corresponding driving buffer.

However, Liu discloses the limitation (In FIG. 2 and Col. 3, lines 20-31, Liu discloses a circuit diagram of the latch that includes a pair of compound logic gates. The compound logic gates are OR-AND-Invert (OAI) logic gates.).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Akopyan, in view of LAM and CRUZ-ALBRECHT, to include Liu's OAI logic gates, in which the output of the OR gate is input to the NAND gate of the OAI logic gate, as taught by Liu. The motivation for doing so would have been to reduce the dynamic power consumption of the clock distribution networks (See Col. 2, lines 25-27 of Liu).
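The enable-signal behavior at issue in claims 2, 11, and 13 — a driving buffer gated by an OR over the weight bits of its synapse zone, going logic low only when every weight is '0' — can be modeled in a few lines. This is an illustrative sketch of the claimed logic, not circuitry from any cited reference; the function names (`zone_enable`, `drive`) are invented here:

```python
def zone_enable(weight_bits):
    """Logic high (1) iff at least one synapse weight bit is non-zero;
    all-zero weights yield logic low (0), per the claim 13 limitation."""
    return int(any(weight_bits))

def drive(input_spike, weight_bits):
    """A driving buffer forwards the input spike only when its zone is
    enabled; a deactivated buffer outputs a constant logic-low signal."""
    return input_spike & zone_enable(weight_bits)

# All-zero zone: buffer deactivated, the spike is not propagated.
assert drive(1, [0, 0, 0, 0]) == 0
# Any non-zero weight: buffer activated, the spike passes through.
assert drive(1, [0, 1, 0, 0]) == 1
```

In hardware terms, `zone_enable` is simply the OR gate recited in claim 2 and `drive` is the buffer it gates.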
Regarding claim 11, Akopyan, in view of LAM and CRUZ-ALBRECHT, discloses the elements of claim 10. Akopyan, in view of LAM and CRUZ-ALBRECHT, does not explicitly disclose: The spiking neural network circuit of claim 10, wherein each of the branch nodes of the tree structure includes a driving buffer which receives the input spike signal from a driving buffer of an upper layer and transfers the input spike signal to driving buffers of a lower layer in response to an enable signal, and wherein the tree structure includes OR gates which generate a corresponding enable signal to a corresponding driving buffer.

However, Liu discloses the limitation (In Col. 4, lines 6-20, Liu discloses that when the clock signal goes from high to low and the latch-enable signal goes from low to high, the output of the first OR gate is high and goes to its NAND gate. The output of the second OR gate is low and goes to its own NAND gate. Since one input is low, the second NAND gate outputs a high signal called QN. With the first OR gate's output being high and QN also being high, the first NAND gate outputs a low control signal called Q.).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Akopyan, in view of LAM and CRUZ-ALBRECHT, to include Liu's OAI logic gates, in which the output of the OR gate is input to the NAND gate of the OAI logic gate, as taught by Liu. The motivation for doing so would have been to reduce the dynamic power consumption of the clock distribution networks (See Col. 2, lines 25-27 of Liu).

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Akopyan in view of LAM, CRUZ-ALBRECHT, and further in view of Brezzo et al. (Pub. No.: 20120259804 A1), hereinafter referred to as Brezzo.

Regarding claim 3, Akopyan, in view of LAM and CRUZ-ALBRECHT, discloses the elements of claim 1.
Akopyan, in view of LAM and CRUZ-ALBRECHT, does not explicitly disclose: The spiking neural network circuit of claim 1, wherein the tree structure includes a first layer, a second layer, and a first OR gate, wherein the first layer includes a first driving buffer which receives the input spike signal, and wherein the first OR gate outputs a first enable signal to the first driving buffer based on enable signals output from the second layer.

However, Brezzo discloses the limitation (In paragraph [0056], Brezzo discloses that the synapse crossbar array has a grid of synapses 31 for N neurons 5, with two layers E1 and E2 of electronic neurons, which include excitatory neurons (Ne) and inhibitory neurons (Ni). The global finite state machine 102 has a setting that puts the chip in either E1-E2 mode or fully connected mode. When starting up, the weights of the synapses 31 in a diagonal section are set to 0 (as shown in the top part of FIG. 11), and they do not change. Each neuron 5 has a bit that shows if it is an E1 neuron or an E2 neuron. When a neuron fires, a signal is set in the priority encoder 101 to show if the firing neuron is E1 or E2. This information helps other neurons update their synapses.).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Akopyan, in view of LAM and CRUZ-ALBRECHT, to include Brezzo's neurons, which integrate input spikes and generate a signal when the integrated inputs exceed a threshold, as taught by Brezzo. The motivation for doing so would have been to reduce read/write latency to the synapse array from each neuron (See [0063] of Brezzo).

Regarding claim 12, Akopyan, in view of LAM and CRUZ-ALBRECHT, discloses the elements of claim 10.
Akopyan, in view of LAM and CRUZ-ALBRECHT, does not explicitly disclose: The spiking neural network circuit of claim 10, wherein the tree structure includes a first layer and a second layer, wherein the second layer includes a first branch node corresponding to a first synapse zone of the synapse zones and a second branch node corresponding to a second synapse zone of the synapse zones, wherein the first branch node includes a first driving buffer which transfers the input spike signal transferred from the first layer to the first synapse zone in response to a first enable signal, and wherein the first enable signal is based on weights of synapses of the first synapse zone.

However, Brezzo discloses the limitation (In paragraph [0056], Brezzo discloses that the synapse crossbar array has a grid of synapses 31 for N neurons 5, with two layers E1 and E2 of electronic neurons, which include excitatory neurons (Ne) and inhibitory neurons (Ni). The global finite state machine 102 has a setting that puts the chip in either E1-E2 mode or fully connected mode. When starting up, the weights of the synapses 31 in a diagonal section are set to 0 (as shown in the top part of FIG. 11), and they do not change. Each neuron 5 has a bit that shows if it is an E1 neuron or an E2 neuron. When a neuron fires, a signal is set in the priority encoder 101 to show if the firing neuron is E1 or E2. This information helps other neurons update their synapses.).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Akopyan, in view of LAM and CRUZ-ALBRECHT, to include Brezzo's neurons, which integrate input spikes and generate a signal when the integrated inputs exceed a threshold, as taught by Brezzo. The motivation for doing so would have been to reduce read/write latency to the synapse array from each neuron (See [0063] of Brezzo).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Akopyan in view of LAM, CRUZ-ALBRECHT, Brezzo, and further in view of Liu et al. (US Patent No. 9,887,698 B2), hereinafter referred to as Liu.

Regarding claim 13, Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, discloses the elements of claim 12. Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, does not explicitly disclose: The spiking neural network circuit of claim 12, wherein the first enable signal corresponds to a logic low in response to weights of all synapses in the first synapse zone being '0', and corresponds to a logic high in response to at least one of the weights of the synapses in the first synapse zone being non-zero.

However, Liu discloses the limitation (In Col. 6, lines 40-44, Liu discloses that the logic high level or high voltage level of the signals and nodes is referred to as logic "1," and the logic low level or low voltage level of the signals and nodes is referred to as logic "0.").

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, to include Liu's OAI logic gates, in which the output of the OR gate is input to the NAND gate of the OAI logic gate, as taught by Liu. The motivation for doing so would have been to reduce the dynamic power consumption of the clock distribution networks (See Col. 2, lines 25-27 of Liu).

Claims 4-8 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Akopyan in view of LAM, CRUZ-ALBRECHT, Brezzo, and further in view of LEE et al. (Pub. No.: 20180300612 A1), hereinafter referred to as LEE.

Regarding claim 4, Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, discloses the elements of claim 3.
Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, does not explicitly disclose: The spiking neural network circuit of claim 3, wherein the second layer includes: a second driving buffer including an input terminal connected to an output terminal of the first driving buffer and an output terminal connected to the first synapse zone; and a third driving buffer including an input terminal connected to the output terminal of the first driving buffer and an output terminal connected to the second synapse zone, and wherein the tree structure includes: a second OR gate which outputs a second enable signal to the second driving buffer; and a third OR gate which outputs a third enable signal to the third driving buffer.

However, LEE discloses the limitation:

a second driving buffer including an input terminal connected to an output terminal of the first driving buffer and an output terminal connected to the first synapse zone (In paragraph [0083], LEE discloses that the first post-synaptic neuron signals may be provided to the second synapse layer.);

a third driving buffer including an input terminal connected to the output terminal of the first driving buffer and an output terminal connected to the second synapse zone, and wherein the tree structure includes: a second OR gate which outputs a second enable signal to the second driving buffer (In paragraph [0083], LEE discloses that each of the first post-synaptic neurons 20_1 may provide one of the first post-synaptic neuron signals to a corresponding one of the second pre-synaptic neurons 10_2 of the second synapse layer L2.);

and a third OR gate which outputs a third enable signal to the third driving buffer (In paragraph [0083], LEE discloses that the third synapse layer L3 may include third row lines R3 and third column lines C3. A total number of the third column lines C3 may be less than the total number of the third row lines R3. The total number of the second column lines C2 may be equal to the total number of the third row lines R3.).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, to include LEE's neuromorphic device, which includes a post-synaptic neuron having a subtracting circuit and a transfer function circuit, as taught by LEE. The motivation for doing so would have been to include a post-synaptic neuron having the subtracting circuit and the transfer function circuit (See [0002] of LEE).

Regarding claim 5, Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, discloses the elements of claim 4. In addition, Akopyan discloses: The spiking neural network circuit of claim 4, wherein the second OR gate receives weights of synapses of the first synapse zone, and outputs the second enable signal based on the weights of the synapses of the first synapse zone (In paragraph [0052], Akopyan discloses that the synaptic weights can be represented as a matrix W corresponding to the synapses 31. FIG. 5 shows the synaptic weight matrix state for pre-synaptic (Pre) neurons and post-synaptic (Post) neurons 11, wherein the matrix W is set and reset by said neurons 11.).

Regarding claim 6, Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, discloses the elements of claim 4.
In addition, Akopyan discloses: The spiking neural network circuit of claim 4, wherein the second driving buffer is activated or deactivated in response to the second enable signal, wherein, when the second driving buffer is activated, the second driving buffer transfers the input spike signal received from the first driving buffer to the synapses of the first synapse zone, and wherein, when the second driving buffer is deactivated, the second driving buffer transfers a signal corresponding to a first logic to the synapses of the first synapse zone (In paragraph [0070], Akopyan discloses that the controller 6 changes its state by flipping a variable called X_internal. At the same time, the neurons are working on their spikes (not shown). When all the spikes are ready, the controller gets a signal called compute_spk. Depending on whether the X_internal is even or odd, the controller 6 turns on its X port to choose the right axon bank (Axon.Even or Axon.Odd). After the axon selector 21 finishes its choice and the neurons have sent out their spikes, the controller 6 marks the end of that time step (clk).).

Regarding claim 7, Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, discloses the elements of claim 4. In addition, Akopyan discloses: The spiking neural network circuit of claim 4, wherein a first synapse of the first synapse zone includes a current source which outputs a current signal based on a weight of the first synapse and a transistor which receives the current signal, and wherein an output terminal of the second driving buffer is connected to a gate of the transistor of the first synapse (In paragraph [0048], Akopyan discloses that when a neuron sends a signal, an axon 15 sends a message to the crossbar 12. The crossbar checks the weight matrix W and sends messages to certain neurons 11 based on these weights. In one case, the crossbar also sends messages back from the neurons 11 to the axons 15. When a neuron sends a message, the crossbar reads the weight matrix W and sends messages to all axons connected to those neurons. This back-and-forth communication uses a weight matrix that can be switched.).

Regarding claim 8, Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, discloses the elements of claim 7. Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, does not explicitly disclose: The spiking neural network circuit of claim 7, wherein the second driving buffer transfers the input spike signal to the gate of the transistor of the first synapse in response to the second enable signal, and wherein the transistor is turned on in response to the input spike signal and outputs the current signal to the neuron circuit.

However, LEE discloses the limitation (In paragraph [0057], LEE discloses that the pre-synaptic neurons 10 may transmit electrical pulses to the synapses 30 through the row lines R in a learning mode, a reset mode, or a read-out mode. The post-synaptic neurons 20 may transmit electrical pulses to the synapses 30 through the column lines C in the learning mode or the reset mode, and may receive electrical pulses from the synapses 30 through the column lines C in the read-out mode. Each of the synapses 30 may include a 2-terminal device such as a variable resistive device. For example, each of the synapses 30 may include a first electrode, which is electrically connected with one of the pre-synaptic neurons 10, and a second electrode, which is electrically connected with one of the post-synaptic neurons 20.).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo, to include LEE's neuromorphic device, which includes a post-synaptic neuron having a subtracting circuit and a transfer function circuit, as taught by LEE.
The motivation for doing so would have been to include a post-synaptic neuron having the subtracting circuit and the transfer function circuit (see paragraph [0002] of LEE).

Regarding claim 14, Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo disclose the elements of claim 13. In addition, Akopyan discloses: The spiking neural network circuit of claim 13, wherein the first driving buffer is deactivated in response to the first enable signal corresponding to the logic low, and transfers the input spike signal to the first synapse zone in response to the first enable signal corresponding to the logic high (in paragraph [0070], Akopyan discloses that the controller 6 changes its state by flipping a variable called X_internal while, at the same time, the neurons are working on their spikes (not shown); when all the spikes are ready, the controller receives a signal called compute_spk and, depending on whether X_internal is even or odd, turns on its X port to choose the corresponding axon bank (Axon.Even or Axon.Odd); after the axon selector 21 finishes its selection and the neurons have sent out their spikes, the controller 6 marks the end of that time step (clk)).

Regarding claim 15, Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo disclose the elements of claim 12.
Akopyan, in view of LAM, CRUZ-ALBRECHT and Brezzo do not explicitly disclose: The spiking neural network circuit of claim 12, wherein the second branch node includes a second driving buffer which transfers the input spike signal transferred from the first layer to the second synapse zone in response to a second enable signal, wherein the second enable signal is based on weights of synapses in the second synapse zone, wherein the first layer includes a third branch node connected to the first branch node and the second branch node, wherein the third branch node includes a third driving buffer which transfers the input spike signal to the first driving buffer and the second driving buffer in response to a third enable signal, and wherein the third enable signal is based on the first enable signal and the second enable signal.

However, LEE discloses this limitation (in paragraph [0083], LEE discloses that the first post-synaptic neuron signals may be provided to the second synapse layer; that each of the first post-synaptic neurons 20_1 may provide one of the first post-synaptic neuron signals to a corresponding one of the second pre-synaptic neurons 10_2 of the second synapse layer L2; and that the third synapse layer L3 may include third row lines R3 and third column lines C3, where a total number of the third column lines C3 may be less than a total number of the third row lines R3, and the total number of the second column lines C2 may be equal to the total number of the third row lines R3).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Akopyan in view of LAM, CRUZ-ALBRECHT and Brezzo, to include LEE, with a neuromorphic device including a post-synaptic neuron having a subtracting circuit and a transfer function circuit as taught by LEE.
The motivation for doing so would have been to include a post-synaptic neuron having the subtracting circuit and the transfer function circuit (see paragraph [0002] of LEE).

Claims 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Akopyan in view of LAM, CRUZ-ALBRECHT, Brezzo and LEE, and further in view of Kim et al. (Pub No.: 20210142847 A1), hereinafter referred to as Kim.

Regarding claim 9, Akopyan, in view of LAM, CRUZ-ALBRECHT, Brezzo and LEE disclose the elements of claim 4. Akopyan, in view of LAM, CRUZ-ALBRECHT, Brezzo and LEE do not explicitly disclose: The spiking neural network circuit of claim 4, wherein the first OR gate outputs the first enable signal to the first driving buffer, based on the second enable signal and the third enable signal, wherein the first driving buffer is activated or deactivated in response to the first enable signal, wherein, when the first driving buffer is activated, the first driving buffer transfers the input spike signal to the second driving buffer and the third driving buffer, and wherein, when the first driving buffer is deactivated, the first driving buffer transfers a signal corresponding to a first logic to the second driving buffer and the third driving buffer.

However, Kim discloses this limitation (in paragraph [0084], Kim discloses that the synapse element SE in the second row may be connected with a word line WL2a and an inverted word line /WL2a and may be connected with a word line WL2b and an inverted word line /WL2b; the synapse element SE in the third row may be connected with a word line WL3a and an inverted word line /WL3a and may be connected with a word line WL3b and an inverted word line /WL3b; the synapse elements SE in the m-th row may be connected with an m-th word line WLma and an m-th inverted word line /WLma and may be connected with an m-th word line WLmb and an m-th inverted word line /WLmb.
However, the example embodiments are not limited thereto, and other arrangements and/or configurations of the synapse elements may be utilized.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Akopyan in view of LAM, CRUZ-ALBRECHT, Brezzo and LEE, to include Kim, with synapse circuits disposed at intersections of the first direction lines and the second direction lines as taught by Kim. The motivation for doing so would have been to reduce the size of the circuit for realizing the neuromorphic processor (see Kim, col. 14, lines 1-4).

Regarding claim 16, Akopyan, in view of LAM, CRUZ-ALBRECHT, Brezzo and LEE disclose the elements of claim 15. Akopyan, in view of LAM, CRUZ-ALBRECHT, Brezzo and LEE do not explicitly disclose: The spiking neural network circuit of claim 15, wherein the third enable signal corresponds to a logic high in response to at least one of the first enable signal and the second enable signal corresponding to the logic high, and corresponds to a logic low in response to both the first enable signal and the second enable signal corresponding to the logic low, and wherein the third driving buffer is deactivated in response to the third enable signal corresponding to the logic low, and transfers the input spike signal to the second branch node and the third branch node in response to the third enable signal corresponding to the logic high.

However, Kim discloses this limitation (in paragraph [0084], Kim discloses that the synapse element SE in the second row may be connected with a word line WL2a and an inverted word line /WL2a and may be connected with a word line WL2b and an inverted word line /WL2b; the synapse element SE in the third row may be connected with a word line WL3a and an inverted word line /WL3a and may be connected with a word line WL3b and an inverted word line /WL3b.
The synapse elements SE in the m-th row may be connected with an m-th word line WLma and an m-th inverted word line /WLma and may be connected with an m-th word line WLmb and an m-th inverted word line /WLmb; however, the example embodiments are not limited thereto, and other arrangements and/or configurations of the synapse elements may be utilized.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Akopyan in view of LAM, CRUZ-ALBRECHT, Brezzo and LEE, to include Kim, with synapse circuits disposed at intersections of the first direction lines and the second direction lines as taught by Kim. The motivation for doing so would have been to reduce the size of the circuit for realizing the neuromorphic processor (see Kim, col. 14, lines 1-4).

Response to Arguments

Applicant's arguments filed 09/05/2025 have been fully considered but were not persuasive.

Pertaining to the rejection under § 101: the rejections of claims 1-16 under 35 U.S.C. § 101 are withdrawn.

Pertaining to the rejection under § 103: Applicant's arguments regarding the examiner's rejections under 35 U.S.C. 103 are moot in view of the new grounds of rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVEL HONORE, whose telephone number is (703) 756-1179. The examiner can normally be reached Monday-Friday, 8 a.m.-5:30 p.m. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mariela D. Reyes, can be reached at (571) 270-1006.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

EVEL HONORE
Examiner, Art Unit 2142

/Mariela Reyes/
Supervisory Patent Examiner, Art Unit 2142
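The gating behavior recited in claims 9 and 16 — a driving buffer that passes the input spike signal when its enable signal is high and outputs a fixed logic level when low, with the third enable signal acting as the logical OR of the first and second enable signals — can be sketched as a minimal behavioral model. This is an illustrative sketch only, not the claimed circuit or any cited reference's implementation; the function names and the choice of logic low as the fixed "first logic" level are assumptions.

```python
# Behavioral sketch of the OR-gated driving buffers described in
# claims 9 and 16. All names here are hypothetical illustrations.

FIRST_LOGIC = 0  # assumed fixed output level of a deactivated buffer

def driving_buffer(spike: int, enable: int) -> int:
    """Pass the input spike when enabled; otherwise output FIRST_LOGIC."""
    return spike if enable else FIRST_LOGIC

def third_enable(first_enable: int, second_enable: int) -> int:
    """OR-gate behavior: high when at least one branch enable is high,
    low only when both branch enables are low (claim 16)."""
    return first_enable | second_enable

# The third driving buffer forwards the spike to the branch nodes only
# when at least one of the two synapse-zone branches is enabled.
spike = 1
for e1, e2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    e3 = third_enable(e1, e2)
    print(f"enables=({e1},{e2}) -> e3={e3}, output={driving_buffer(spike, e3)}")
```

Under this model, the buffer output follows the spike only in the three enable combinations where the OR is high, matching the "deactivated in response to the logic low" language of claim 16.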

Prosecution Timeline

Sep 01, 2021
Application Filed
Sep 24, 2024
Non-Final Rejection — §103
Dec 20, 2024
Response Filed
Apr 29, 2025
Final Rejection — §103
Aug 26, 2025
Response after Non-Final Action
Sep 12, 2025
Request for Continued Examination
Oct 06, 2025
Response after Non-Final Action
Jan 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566942
System and Method For Generating Parametric Activation Functions
2y 5m to grant Granted Mar 03, 2026
Patent 12547946
SYSTEMS AND METHODS FOR FIELD EXTRACTION FROM UNLABELED DATA
2y 5m to grant Granted Feb 10, 2026
Patent 12547906
METHOD, DEVICE, AND PROGRAM PRODUCT FOR TRAINING MODEL
2y 5m to grant Granted Feb 10, 2026
Patent 12536156
UPDATING METADATA ASSOCIATED WITH HISTORIC DATA
2y 5m to grant Granted Jan 27, 2026
Patent 12406483
ONLINE CLASS-INCREMENTAL CONTINUAL LEARNING WITH ADVERSARIAL SHAPLEY VALUE
2y 5m to grant Granted Sep 02, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
39%
Grant Probability
85%
With Interview (+46.4%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
