Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1-21-2026 has been entered.
Response to Amendment
In the previous Office Action issued 10-23-2025 (hereinafter “the previous Office Action”), claims 1-25 were pending.
This action is in response to the amendment and remarks filed 1-21-2026. In the amendment, claims 11-12 were amended, no claims were canceled, and no claims were added. Thus, claims 1-25 remain pending.
The rejections of claims 11-12 under 35 U.S.C. § 103, set forth in the previous Office Action, have been withdrawn in view of Applicant’s amendments and remarks.
Claim Objections
Claims 11-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4-10, 13-15, and 18-25 are rejected under 35 U.S.C. 103 as being unpatentable over Chinese Patent Application Publication No. CN110163365A, hereinafter Yida, in view of Bohnstingl (US 20220004851), hereinafter Bohnstingl, and further in view of Narayanan (US 11038520), hereinafter Narayanan.
Regarding Claim 1:
Yida discloses:
each of the multiple layers1 comprising: a bidirectional Synaptic Network Channel (SNC)2
Yida, [0022], “The spike pulse generating unit is used to generate the spike pulse signal of the neuron and is connected to the input and output ports. The pulse input and output port is a bidirectional channel with two states: input and output: inputting the pulse signals of other neurons into the integral ignition unit, or outputting the pulses of the spike pulse generating unit of this neuron to other parts, and adjusting its own state according to the control signal of the integral ignition unit.”
[i.e., the pulse input and output port is a bidirectional Synaptic Network Channel that transmits neuron signals]
as an elastic wave superposition of inputs, x_F^t's and y_B^t's, respectively
Yida, [0022], “The pulse input and output port is a bidirectional channel with two states: input and output: inputting the pulse signals of other neurons into the integral ignition unit, or outputting the pulses of the spike pulse generating unit of this neuron to other parts, and adjusting its own state”
[0023], “the rectangular positive and negative pulses enter the signal integration module respectively, driving the subsequent field effect transistor to realize signal integration and generate continuous positive and negative pulse pairs”
[i.e., the positive and negative pulses being integrated to produce paired pulse signals aligns with the signal combining behavior of elastic wave superposition]
a Hybrid Coupler (HC) to connect the bidirectional SNC and the unidirectional SR units
Yida, [0025], “FIG4 shows a functional schematic diagram of an interactive port, which is mainly used for interconnection between neurons and is mainly composed of an interface module, an input module, an output module, and a control module. Since transmission in biological neurons is bidirectional, there is no distinction between input ports and output ports. Therefore, the present invention designs an interactive port to achieve bidirectional transmission of impulses between neurons…the control module realizes the input and output control of the pulse according to the control signal…that is, the pulse input control of the integral ignition unit and the pulse output control of the Spike pulse generating unit. The interactive port can be implemented using a common two-way switch”
[0030], “After receiving the signal, the Spike pulse generating unit generates a spike pulse of this signal and transmits it to the input and output ports; at the same time, the port also receives the output of the integral ignition unit and changes the state to the output state”
[i.e., the interaction port's connecting of the input/output port (bidirectional SNC) and the spike pulse generating unit and integral ignition unit (unidirectional SR units) is functionally the same as the claimed Hybrid Coupler].
Yida does not explicitly disclose:
A neural network processing system having multiple layers
for concurrently transmitting weighted sums, y_F^t's and x_B^t's
each of the inputs multiplied and added with corresponding weights w’s
unidirectional Signal Reshaping (SR) units, I’s…for inference
unidirectional Signal Reshaping (SR) units…L’s for…learning, respectively
encoded in variable splitters and combiners in forward and backward directions, respectively
by generating inputs for a following layer in the forward and backward directions from a current layer's weighted sums y_F^t's and x_B^t's, respectively
a weight update unit to calculate each weight difference Δw_ij using an input y_Bi^t or a weighted sum y_Fi^t and an input x_Fj^t to update a weight w_ij for a current layer
However, in the same field, analogous art Bohnstingl teaches:
A neural network processing system having multiple layers
Bohnstingl, [0030]-[0031], “According to some embodiments, the spiking neural network includes spiking neuron apparatuses (e.g. neuromorphic neuron apparatuses described in FIG. 4)…For example, the spiking neural network may include multiple layers of neurons, whereon each neuron of the neurons is the neuron apparatus.”
for concurrently transmitting weighted sums, y_F^t's and x_B^t's
Bohnstingl, [0074]-[0075], “The summing block 405 is configured to receive weighted input values W(x1)*x1, W(x2)*x2 . . . W(xn)*xn representative of an object at time t (e.g. an image). The summing block 405 may be configured to perform the sum of the received weighted values x(t)=W(x1)*x1+W(x2)*x2+ . . . W(xn)*xn, and the resulting variable value x(t) is provided or output by the summing block 405 to the accumulation block 401. The accumulation block 401 includes an adder circuit 420, multiplication circuit 411, and activation circuit 412… The accumulation block 401 may be configured to output at the branching point 414, the computed state variable in parallel to the output generation block 403 and to the multiplication logic”
[i.e., FIG. 4 shows that the summed weight value x(t) generated by the summing block 405 is provided to the accumulation block 401, which outputs the value in parallel]
[0141], “For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved”
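For illustration only, the summing-and-branching behavior quoted from Bohnstingl's FIG. 4 can be sketched as follows; the function names, the decay constant, and the input values are assumptions for exposition, not elements of the reference:

```python
# Illustrative sketch of Bohnstingl's summing block 405 and accumulation
# block 401 ([0074]-[0075]); names and constants here are assumed.
def summing_block(inputs, weights):
    """Compute x(t) = W(x1)*x1 + W(x2)*x2 + ... + W(xn)*xn."""
    return sum(w * x for w, x in zip(weights, inputs))

def accumulation_block(x_t, prev_state, decay=0.9):
    """Accumulate the weighted sum x(t) into a state variable s(t)."""
    s_t = decay * prev_state + x_t
    # Branching point 414: the same computed state is provided in parallel
    # to the output generation block and to the multiplication logic.
    return s_t, s_t

x_t = summing_block([1.0, 0.5, -0.25], [0.2, 0.4, 0.8])
to_output_gen, to_mult_logic = accumulation_block(x_t, prev_state=0.0)
```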
each of the inputs multiplied and added with corresponding weights w’s
Bohnstingl, [0074], “The summing block 405 is configured to receive weighted input values W(x1)*x1, W(x2)*x2 . . . W(xn)*xn representative of an object at time t (e.g. an image). The summing block 405 may be configured to perform the sum of the received weighted values x(t)=W(x1)*x1+W(x2)*x2+ . . . W(xn)*xn, and the resulting variable value x(t) is provided or output by the summing block 405 to the accumulation block 401”
unidirectional Signal Reshaping (SR) units, I’s…for inference
Bohnstingl, [0063]-[0066], “In variants, a time-to-spike (TTS) approach can be used… The TTS integrator 335 receives and processes incoming signals such as incoming spikes…integrates a corresponding value of the received signal into a membrane state variable (e.g. referred to herein as membrane potential variable) Vm of the TTS integrator 335… The selection unit 305 selects for each received signal xi a weight value (or modulating term) αi that corresponds to the arrival time of the received signal xi and performs a multiplication of the selected weight value and a value of the received signal”
[0122]-[0124]: “As another example of application, similarity measures can be computed using a simple PWM circuitry 335 [i.e., another name for the TTS integrator component]. The generation of read/write weights may require dot products and norms to be computed, i.e., to measure distances according to EQUATION 15…Also, in this case, the procedure to compute the similarity measure can potentially be implemented in a single crossbar operation. The pulses transmitted as reference points in the TTS scheme can be utilized for the L1 Norm Parallel Read (input vector contains all ones). The dot product parallel read can be implemented using the second pulse and the TTS integrator scheme.”
[i.e., the similarity measure quantifies the distance between the input vectors and the stored memory, effectively performing inference by evaluating how closely the input matches encoded data in memory]
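As a non-limiting illustration of the quoted time-to-spike (TTS) scheme, the sketch below encodes larger input values as earlier spikes and weights each spike by a modulating term selected from its arrival time; the time resolution and the weight table are assumptions, not values from Bohnstingl:

```python
# Toy time-to-spike encoding and integration, loosely following
# Bohnstingl [0063]-[0066]; resolution and weights are assumed.
def tts_encode(value, t_max=10):
    """Encode a value in [0, 1] as a spike time: larger value, earlier spike."""
    return max(0, t_max - int(round(value * t_max)))

def tts_integrate(spike_times, alpha):
    """Sum the modulating term alpha[t] selected by each spike's arrival time."""
    return sum(alpha[t] for t in spike_times)

alpha = [2 ** -t for t in range(11)]        # assumed arrival-time weights
spikes = [tts_encode(v) for v in (0.9, 0.4, 0.1)]
v_m = tts_integrate(spikes, alpha)          # membrane potential variable Vm
```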
unidirectional Signal Reshaping (SR) units…L’s for…learning, respectively
Bohnstingl, [0059], “The electronic devices 333 are programmed so as to incrementally change states of the devices 333. This is achieved by coupling write signals into one or more of the input lines 331 of the crossbar array structure, e.g., memory 103. The write signals are generated based on write weight vectors that are generated by the interface 102. The write weight vectors are themselves generated according to write instructions from the controller 101”
[0061], “Programming the electronic devices 333 results in incrementally change states of the devices 333 (e.g., change the electrical conductances of the devices 333). The states of the electronic devices 333 correspond to certain values, which determine data as stored on the memory 103” and paragraph [0120]: “For example, in embodiments, a memristive crossbar structure (with PCM cells) is used together with optimized read/write heads to achieve an external memory for the controller 101 and its processing unit. The controller is aimed at executing a neural network, be it to train the latter or perform inferences based on the trained network. Such a neural network can thus be augmented with memory built on memristive devices 333”
[i.e., learning occurs through each incremental change in electrical conductance, encoding a trained weight update in external storage used by the controller during training of a neural network].
Yida, Bohnstingl, and the instant application are analogous art because they are all directed to the implementation of temporal neural networks (Yida, [0002]; Bohnstingl, Abstract).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yida’s neuromorphic computing based on memristors with Bohnstingl’s spiking neural network system and apparatus having multiple layers in order to “enhance a spiking neural network with an external memory”, “combine advantages from neural network data processing and persistent storage” and “reduce the communication in the memory-augmented system”, as suggested by Bohnstingl (Bohnstingl, [0021]-[0022]).
Additionally, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yida to incorporate Bohnstingl to concurrently transmit weights that are multiplied and added with inputs of the neural network. Doing so would have allowed Yida to use Bohnstingl’s method to “enable an accurate and efficient processing of temporal data” as suggested by Bohnstingl (Bohnstingl, [0032]).
Lastly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yida to incorporate Bohnstingl to use unidirectional signal reshaping units for inference and learning. Doing so would have allowed Yida to use Bohnstingl’s method to “include, for example, a training or inference of the spiking neural network.” as suggested by Bohnstingl (Bohnstingl, [0024]).
Yida in view of Bohnstingl does not explicitly disclose:
encoded in variable splitters and combiners in forward and backward directions, respectively
by generating inputs for a following layer in the forward and backward directions from a current layer's weighted sums y_F^t's and x_B^t's, respectively
a weight update unit to calculate each weight difference Δw_ij using an input y_Bi^t or a weighted sum y_Fi^t and an input x_Fj^t to update a weight w_ij for a current layer
However, in the same field, analogous art Narayanan teaches:
encoded in variable splitters and combiners in forward and backward directions, respectively
Narayanan, col. 7, lines 8-18, “Input voltages V1, V2, V3 are applied to row wires 302, 304, 306, respectively. Each column wire 308, 310, 312, 314 sums the currents I1, I2, I3, I4 generated by each neuromorphic device along the particular column wire. For example, the current I4 generated by column wire 314 is according to the equation I4=V1/σ41+V2/σ42+V3/σ43. Thus, the crossbar array 300 computes the forward matrix multiplication by multiplying the values stored in the neuromorphic devices by the row wire inputs, which are defined by voltages V1, V2, V3”
[i.e., input applied to row and column wires in forward multiplication aligns with weights split in a forward direction]
Col. 7, lines 19-25, “In backward matrix multiplication, voltages are applied at column wires 308, 310, 312, 314 and then read from row wires 302, 304, 306. For weight updates, which are described in greater detail below, voltages are applied to column wires and row wires at the same time, and the conductance values stored in the relevant cross-point synaptic devices all update in parallel. Accordingly, the multiplication and addition operations required to perform weight updates are performed locally at each neuromorphic device 320 of crossbar array 300”
[i.e., weights are encoded in the neuromorphic device consisting of variable splitter and combiner functionality by the crossbar array]
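As a non-authoritative illustration, the forward and backward reads quoted from Narayanan can be modeled as matrix products over a stored conductance matrix; the conductance and voltage values below are assumed for exposition:

```python
import numpy as np

# Hypothetical model of the crossbar reads at Narayanan, col. 7.
G = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.5, 0.6, 0.7, 0.8],
              [0.9, 1.0, 1.1, 1.2]])      # 3 row wires x 4 column wires

v_rows = np.array([1.0, 0.5, 0.25])       # V1, V2, V3 applied to row wires
i_cols = v_rows @ G                       # forward read: each column sums currents

v_cols = np.array([0.3, 0.1, 0.2, 0.4])   # voltages applied to column wires
i_rows = G @ v_cols                       # backward read: results on row wires
```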
by generating inputs for a following layer in the forward and backward directions from a current layer's weighted sums y_F^t's and x_B^t's, respectively
Narayanan, col. 6, lines 57-59, “Multiple crossbars may be needed for a single neural network layer, or in some cases, multiple neural network layers are implemented on the same crossbar array”
Col. 7, lines 8-21, “Input voltages V1, V2, V3 are applied to row wires 302, 304, 306, respectively. Each column wire 308, 310, 312, 314 sums the currents I1, I2, I3, I4 generated by each neuromorphic device along the particular column wire. For example, the current I4 generated by column wire 314 is according to the equation I4=V1/σ41+V2/σ42+V3/σ43. Thus, the crossbar array 300 computes the forward matrix multiplication by multiplying the values stored in the neuromorphic devices by the row wire inputs, which are defined by voltages V1, V2, V3. The backward matrix multiplication is very similar. In backward matrix multiplication, voltages are applied at column wires 308, 310, 312, 314 and then read from row wires 302, 304, 306”
a weight update unit to calculate each weight difference Δw_ij using an input y_Bi^t or a weighted sum y_Fi^t and an input x_Fj^t to update a weight w_ij for a current layer
Narayanan, col. 6, lines 24-30, “during weight updates, the input neurons 202 and hidden neurons 206 apply a first weight update, and the output neurons 208 and hidden neurons 206 apply a second weight update through the network 200. The combinations of these voltages create a state change within each weight 204, causing the weights 204 to take on a new resistance value”
Col. 7, lines 21-30, “For weight updates, which are described in greater detail below, voltages are applied to column wires and row wires at the same time, and the conductance values stored in the relevant cross-point synaptic devices all update in parallel. Accordingly, the multiplication and addition operations required to perform weight updates are performed locally at each neuromorphic device 320 of crossbar array 300.”
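The simultaneous row/column update quoted above amounts to an outer-product update performed locally at every cross point; a minimal sketch under assumed values (the learning rate and signal vectors are illustrative, not from Narayanan) follows:

```python
import numpy as np

# Hedged sketch of the parallel weight update at Narayanan, col. 7:
# row-side signals x_Fj^t and column-side signals y_Bi^t applied at the
# same time update every device, i.e., an outer-product update.
eta = 0.01                                  # assumed learning rate
x_fwd = np.array([1.0, 0.5, 0.25])          # row-side inputs x_Fj^t
y_back = np.array([0.2, -0.1, 0.05, 0.3])   # column-side signals y_Bi^t

W = np.zeros((3, 4))                        # weight (conductance) array
delta_w = eta * np.outer(x_fwd, y_back)     # Δw_ij for every device at once
W += delta_w                                # all cross points update in parallel
```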
Yida, Bohnstingl, Narayanan, and the instant application are analogous art because they are all directed to the optimization of neural network architecture (Yida, [0008]; Bohnstingl, [0003]; Narayanan, col. 1, Summary).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yida and Bohnstingl with Narayanan to use forward variable splitters and backward combiners that encode weights and to generate inputs for a following layer in the forward and backward directions from a current layer's weighted sums. Doing so would have allowed Yida and Bohnstingl to use Narayanan’s method in order to “apply a computation based on an input data point from an input of the neuron, and to produce a result of the computation as an output data point at an output of the neuron” and “apply a computation based on an error data point and a derivative of the computation of the feed forward chain from an input and to produce an error data point at an output” as suggested by Narayanan (Narayanan, col. 4, lines 54-61).
Additionally, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yida and Bohnstingl to incorporate the teachings of Narayanan to include a weight update unit to calculate weight differences using an input or a weighted sum and an input, to update a weight for a current layer. Doing so would have allowed Yida and Bohnstingl to use Narayanan’s method in order to “produce a weight update data point in accordance with a local error value (‘weight’)” as suggested by Narayanan (Narayanan, col. 4, lines 62-64).
Regarding Claim 4:
As discussed above, Yida in view of Bohnstingl and further in view of Narayanan teaches [the] neural network processing system of claim 1, and Yida further discloses:
wherein said unidirectional SR units perform inference and learning using a Spike Time-Dependent Plasticity algorithm
Yida, [0002], “The present invention relates to the field of neuromorphic computing based on memristors, and in particular to a neuron circuit in a spiking neural network based on STDP (Spiking timing dependent plasticity) rules”
[0023], “Figure 2 shows the Spike pulse generating unit in the neuron. This unit is only sensitive to rising edge signals… generates positive and negative pulses with adjustable signal amplitude and waveform, and uses a high-power amplifier to improve the circuit's load capacity”
[0024], “Figure 3 shows the integral ignition unit, which accumulates and compares the spike excitations of other neurons to generate a pulse signal, which is input into the spike generation module and generates the spike excitation signal of this neuron”
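For exposition, an STDP rule of the general kind Yida’s circuit implements may be sketched as below; the exponential window and its constants are assumptions for illustration, not parameters disclosed by Yida:

```python
import math

# Assumed exponential STDP window: a presynaptic spike that precedes the
# postsynaptic spike potentiates the weight; the reverse order depresses it.
def stdp_delta_w(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    dt = t_post - t_pre
    if dt > 0:      # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau)
    else:           # post before (or with) pre -> depression
        return -a_minus * math.exp(dt / tau)

print(stdp_delta_w(t_pre=10.0, t_post=15.0))   # positive Δw
print(stdp_delta_w(t_pre=15.0, t_post=10.0))   # negative Δw
```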
Regarding Claim 5:
As discussed above, Yida in view of Bohnstingl and further in view of Narayanan teaches [the] neural network processing system of claim 1, and Narayanan further discloses:
wherein said unidirectional SR units perform inference and learning using a backpropagation algorithm
Narayanan, col. 14, lines 9-12, “However, when considering non-monotonic activation functions such as sech2 function (the derivative of tan h), which can be used during the backpropagation step, any given output interval could correspond to one or more input intervals, as shown by 515 in FIG. 7…”
Col. 15 lines 13-16: “the ability to directly implement non-monotonic functions without needing to use dedicated digital hardware is critical, especially for ANN backpropagation and training”
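As a non-authoritative illustration of the quoted backpropagation step, the sketch below applies tanh in the forward pass and scales the backward error by its derivative sech^2; the shapes and names are illustrative only:

```python
import numpy as np

# Minimal single-layer forward/backward pass; sech^2(z) = d/dz tanh(z).
def forward(x, W):
    return np.tanh(W @ x)

def backward(x, W, err_out):
    z = W @ x
    sech2 = 1.0 / np.cosh(z) ** 2      # non-monotonic derivative of tanh
    delta = err_out * sech2            # local error at this layer
    grad_W = np.outer(delta, x)        # weight gradient
    err_in = W.T @ delta               # error passed to the preceding layer
    return grad_W, err_in
```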
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yida in view of Bohnstingl to incorporate the teachings of Narayanan to use a backpropagation algorithm for inference and learning in Bohnstingl’s unidirectional signal reshaping units. Doing so would have allowed Yida in view of Bohnstingl to use Narayanan’s method “to apply a computation based on an error data point and a derivative of the computation of the feed forward chain from an input and to produce an error data point at an output” and “provide a voltage back across the array of weights” as suggested by Narayanan (Narayanan, col. 4, lines 58-61, and col. 6, lines 8-9).
Regarding Claim 6:
As discussed above, Yida in view of Bohnstingl and further in view of Narayanan teaches [the] neural network processing system of claim 1, and Yida further discloses:
wherein said unidirectional SR units reshape fragments of spike energies from a plurality of preceding neurons into a single spike signal for next-stage neural communication in a following layer
Yida, [0023], “Figure 2 shows the Spike pulse generating unit in the neuron. This unit is only sensitive to rising edge signals and is mainly composed of…a signal shaping module…it enters the signal shaping module to convert the edge jump signal into a rectangular positive pulse with a certain width…extracts the falling edge of the upper signal output and adjusts it to a rectangular negative pulse; then, the rectangular positive and negative pulses enter the signal integration module respectively, driving the subsequent field effect transistor to realize signal integration and generate continuous positive and negative pulse pairs”
[i.e., the rising and falling edges represent fragments of spike energies from multiple preceding neurons that are reshaped and integrated into an output pulse pair (single signal)],
[0024], “Figure 3 shows the integral ignition unit, which accumulates and compares the spike excitations of other neurons to generate a pulse signal which is input into the spike generation module and generates the spike excitation signal of this neuron”
Regarding Claim 7:
As discussed above, Yida in view of Bohnstingl and further in view of Narayanan teaches [the] neural network processing system of claim 1, and Yida further discloses:
wherein said unidirectional SR units generate inputs for a following layer in the forward and backward directions from the current layer’s y_F^t's or x_F^t's, and x_B^t's, respectively
Yida, [0025], “Since transmission in biological neurons is bidirectional, there is no distinction between input ports and output ports.”
[0026], “The present invention not only realizes the integral ignition function, but also possesses the network characteristics of biological neurons such as resting period, lateral inhibition, and bidirectional transmission; it can realize the network construction between neurons without external digital control signals”
[i.e., bidirectional transmission in neural networks refers to the backwards and forwards directional flow of data points or generated inputs from layer to layer]
[0030], “After receiving the signal, the Spike pulse generating unit generates a spike pulse of this signal and transmits it to the input and output ports; at the same time, the port also receives the output of the integral ignition unit and changes the state to the output state. The pulse output of the Spike pulse generating unit can be smoothly transmitted from this neuron, and the port is converted back to the input state.”
Bohnstingl further teaches:
by calculating a cross correlation function for y_F^t and x_B^t, or for x_F^t and x_B^t, and integrating the cross correlation function for a given period of time
Bohnstingl, [0122], “As another example of application, similarity measures can be computed using a simple PWM circuitry 335. The generation of read/write weights may require dot products and norms to be computed, i.e., to measure distances according to EQUATION 15…”
[0124], “Note, a time-to-spike scheme can be used for the input vector presentation at the rows/columns of the crossbar array, instead of using DACs or a PWM circuitry, which allows the energy required to transmit the input to be reduced”
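For exposition, the claimed operation of calculating a cross correlation function and integrating it over a period can be sketched as follows; at zero lag the integral reduces to the dot product underlying Bohnstingl’s similarity measure ([0122]-[0124]). The signal values and helper name are assumptions:

```python
import numpy as np

# Cross-correlation of two spike trains at a chosen lag, integrated
# (summed) over the observation window.
def integrated_cross_correlation(y_f, x_b, lag=0):
    if lag > 0:
        y_f, x_b = y_f[lag:], x_b[:-lag]
    elif lag < 0:
        y_f, x_b = y_f[:lag], x_b[-lag:]
    return float(np.sum(y_f * x_b))    # integration over the period

y_f = np.array([0, 1, 0, 1, 1], dtype=float)   # forward signal y_F^t
x_b = np.array([1, 1, 0, 0, 1], dtype=float)   # backward signal x_B^t
corr = integrated_cross_correlation(y_f, x_b)  # zero lag: dot product
```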
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yida to incorporate the teachings of Bohnstingl to calculate and integrate a cross correlation function for a given period of time in the STDP algorithm. Doing so would have allowed Yida to use Bohnstingl’s method so that “the procedure to compute the similarity measure can potentially be implemented in a single crossbar operation”, as suggested by Bohnstingl (Bohnstingl, [0124]).
Regarding Claim 8:
As discussed above, Yida in view of Bohnstingl and further in view of Narayanan teaches [the] neural network processing system of claim 1, and Bohnstingl further teaches:
wherein input and output signals to and from the neural network processing system are unidirectional signals
Bohnstingl, [0059], “FIG. 3A is a block diagram of a memristive crossbar array of a neuromorphic memory device…The crossbar array structure, e.g., memory 103, includes input lines 331 and output lines 332, where the lines 331, 332 are interconnected at junctions via electronic devices 333 (e.g., memristive devices)…This is achieved by coupling write signals into one or more of the input lines 331 of the crossbar array structure, e.g., memory 103…This is achieved by coupling read signals input lines 331, based on read weight vectors generated by the interface 102”
[i.e., distinct input and output lines for read and write signals imply a unidirectional signal flow]
[0124], “Note, a time-to-spike scheme can be used for the input vector presentation at the rows/columns of the crossbar array, instead of using DACs or a PWM circuitry, which allows the energy required to transmit the input to be reduced”
[i.e., time-to-spike scheme is known to utilize unidirectional signal input]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yida to incorporate the teachings of Bohnstingl to transmit unidirectional input and output signals in the spiking neural network. Doing so would have allowed Yida to use Bohnstingl’s method to “reduce the communication in the memory-augmented system. For example, only spikes may be transmitted between the access heads and the controller network…throughout the controller network itself and from the controller network to the output layer”, as suggested by Bohnstingl (Bohnstingl, [0022]).
Regarding Claim 9:
As discussed above, Yida in view of Bohnstingl and further in view of Narayanan teaches [the] neural network processing system of claim 1, and Yida further discloses:
wherein both input x_Fi^t and weighted sum x_Bi^t coexist as independent signals in the bidirectional SNC until they are decoupled by the HC
Yida, [0024], “the integral ignition unit, which accumulates and compares the spike excitations of other neurons to generate a pulse signal…The third stage is the signal integration and isolation module, which isolates and integrates the previous signal.”
[i.e., the integral ignition accumulates signals to compute and transmit a weighted sum as a discharge output]
[0025], “FIG4 shows a functional schematic diagram of an interactive port, which is mainly used for interconnection between neurons and is mainly composed of…and a control module. Since transmission in biological neurons is bidirectional, there is no distinction between input ports and output ports. Therefore, the present invention designs an interactive port to achieve bidirectional transmission of impulses between neurons… the control module realizes the input and output control of the pulse according to the control signal (for example, the output signal of the integral ignition unit, etc.), that is, the pulse input control of the integral ignition unit and the pulse output control of the Spike pulse generating unit. The interactive port can be implemented using a common two-way switch.”
[i.e., the input and weighted sums are independent signals, and the interactive port (HC) controls the splitting of input and output signals, enabling bidirectional transmission of pulses].
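A toy linear model can illustrate the coexistence and decoupling at issue; the assumptions of ideal additive superposition and perfect cancellation are for exposition only and are not taken from Yida:

```python
# Toy model: forward input x_Fi^t and backward weighted sum x_Bi^t coexist
# additively on one bidirectional channel; the coupler recovers the remote
# signal by subtracting the locally known transmitted signal.
def snc_superpose(x_f, x_b):
    return x_f + x_b                    # both signals share the channel

def hc_decouple(channel, known_local_tx):
    return channel - known_local_tx     # echo-cancellation-style separation

line = snc_superpose(x_f=0.7, x_b=-0.2)
recovered_fwd = hc_decouple(line, known_local_tx=-0.2)   # -> 0.7
```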
Regarding Claim 10:
As discussed above, Yida in view of Bohnstingl and further in view of Narayanan teaches [the] neural network processing system of claim 1, and Yida further discloses:
wherein both input y_Fi^t and weighted sum y_Bi^t coexist as independent signals in the bidirectional SNC until they are decoupled by the HC
See paras. [0024]-[0025] as addressed above in claim 9. Claims 9 and 10 recite the same bidirectional SNC encompassing coexisting independent inputs and weighted sums that are eventually decoupled by the HC. Although claim 9 and claim 10 use different terms for their inputs and weighted sums, a person of ordinary skill in the art would recognize that substituting either set of signals into the same port architecture is routine and expected.
Regarding Claim 13:
As discussed above, Yida in view of Bohnstingl and further in view of Narayanan teaches [the] neural network processing system of claim 1, and Yida further discloses:
wherein the HC selectively transmits forward and backward signals to interface with unidirectional and bidirectional components
Yida, [0025], “FIG4 shows a functional schematic diagram of an interactive port, which is mainly used for interconnection between neurons and is mainly composed of an interface module, an input module, an output module, and a control module. Since transmission in biological neurons is bidirectional, there is no distinction between input ports and output ports. Therefore, the present invention designs an interactive port to achieve bidirectional transmission of impulses between neurons…the input module is used to transmit the pulses of external neuron input received by the interface module to the integral ignition unit of the current neuron; the output module is used to transmit the output pulses generated by the Spike pulse generating unit of the current neuron to the interface module, and then to the external neuron…The interactive port can be implemented using a common two-way switch”
Regarding Claim 14:
As discussed above, Yida in view of Bohnstingl and further in view of Narayanan teaches [the] neural network processing system of claim 1, and Bohnstingl further teaches:
wherein the weight update unit calculates a cross correlation function of the inputs y_B^t's and x_F^t's
Bohnstingl, [0122], “As another example of application, similarity measures can be computed using a simple PWM circuitry 335. The generation of read/write weights may require dot products and norms to be computed, i.e., to measure distances according to EQUATION 15…”
[0123], “In EQUATION 15, k represents the input vector and M represents the memory. Such computations can potentially be performed using a single generation of PWM input signals. A fixed part, representing 1, is added to the PWM signal corresponding to the value of k to compute the norm ∥M∥1. Two read accesses from the device are needed, where the first access corresponds to the norm ∥M∥ 1 and the second access corresponds to a vector-matrix multiplication kM... In variants the integrators operate continuously and after the fixed part has been processed at the input, the current value is stored in an auxiliary memory. After the full input has been processed, the previously stored value needs to be subtracted from the total result to obtain ∥M∥ 1 and kM”
[0124]: “Note, a time-to-spike scheme can be used for the input vector presentation…The dot product parallel read can be implemented using the second pulse and the TTS integrator scheme…As a consequence, a single TTS read returns both the norm of M and the value of kM”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yida to incorporate the teachings of Bohnstingl to use a weight update unit that calculates a cross correlation function of inputs. Doing so would have allowed Yida to use Bohnstingl’s method that “allows the energy required to transmit the input to be reduced. Also, in this case, the procedure to compute the similarity measure can potentially be implemented in a single crossbar operation”, as suggested by Bohnstingl (Bohnstingl, [0124]).
Regarding Claim 15:
Claim 15 is a computer-implemented method claim corresponding to the neural network processing system of claim 1 and is rejected for at least the same reasons as given in the rejection of claim 1, with the exception of the following limitations.
Bohnstingl teaches:
A computer-implemented method for concurrent machine learning, comprising:
Bohnstingl, paragraphs [0074]-[0075], [0141], and [0125]: “In some embodiments, the neural network system 900 provides instructions for the aforementioned methods and/or functionalities to a client machine such that the client machine executes the method, or a portion of the method, based on the instructions provided by the neural network system 900. In some embodiments, the neural network system 900 includes software executing on hardware incorporated into multiple devices”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yida to incorporate the teachings of Bohnstingl to concurrently transmit weights that are multiplied and added with inputs of the neural network. Doing so would have allowed Yida to use Bohnstingl’s method to “enable an accurate and efficient processing of temporal data” as suggested by Bohnstingl (Bohnstingl, [0032]).
Additionally, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yida to incorporate the teachings of Bohnstingl to use unidirectional signal reshaping units for inference and learning. Doing so would have allowed Yida to use Bohnstingl’s method to “include, for example, a training or inference of the spiking neural network.” as suggested by Bohnstingl (Bohnstingl, [0024]).
Regarding Claims 18-23:
Claims 18-23 are computer-implemented method claims corresponding to the neural network processing system of claims 4-7 and 9-10 and are rejected for at least the same reasons as given in the rejection of claims 4-7 and 9-10. In particular, claim 18 corresponds to claim 4, claim 19 to claim 5, claim 20 to claim 6, claim 21 to claim 7, claim 22 to claim 9, and claim 23 to claim 10.
Regarding Claim 24:
Claim 24 is a computer program product claim corresponding to the neural network processing system of claim 1 and is rejected for at least the same reasons as given in the rejection of claim 1, with the exception of the following limitations.
Bohnstingl teaches:
A computer program product for concurrent machine learning, the computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to perform a method comprising:
Bohnstingl, paragraphs [0074]-[0075], [0141], and [0139]: “These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein includes an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yida to incorporate the teachings of Bohnstingl to concurrently transmit weights that are multiplied and added with inputs of the neural network. Doing so would have allowed Yida to use Bohnstingl’s method to “enable an accurate and efficient processing of temporal data” as suggested by Bohnstingl (Bohnstingl, [0032]).
Additionally, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yida to incorporate the teachings of Bohnstingl to use unidirectional signal reshaping units for inference and learning. Doing so would have allowed Yida to use Bohnstingl’s method to “include, for example, a training or inference of the spiking neural network.” as suggested by Bohnstingl (Bohnstingl, [0024]).
Regarding Claim 25:
Claim 25 is a neural network processing system claim corresponding to the neural network processing system of claim 1 and is rejected for at least the same reasons as given in the rejection of claim 1, with the exception of the following limitations.
Yida discloses:
wherein the SR units I's decide whether to transmit forward spike signals into the following layer during inference by thresholding and reshaping weighted sums y_Fi^t's
Yida, [0023], “Figure 2 shows the Spike pulse generating unit in the neuron. This unit is only sensitive to rising edge signals and is mainly composed of a rising edge detection module…When a square wave signal is input, it first enters the rising edge detection module to extract the edge jump signal; then, it enters the signal shaping module to convert the edge jump signal into a rectangular positive pulse with a certain width [i.e., the signal shaping module thresholds and reshapes signals]…the rectangular positive and negative pulses enter the signal integration module respectively, driving the subsequent field effect transistor to realize signal integration and generate continuous positive and negative pulse pairs”
[i.e., the rising edge detection module extracts edge jump signals deciding whether to transmit forward spikes, and the subsequent FET corresponds to a downstream layer in a neural network]
and wherein the SR units L's decide whether to transmit backward spike signals into the preceding layer by thresholding and reshaping weighted sums x_Bi^t's
Yida, [0024], “Figure 3 shows the integral ignition unit, which accumulates and compares the spike excitations of other neurons to generate a pulse signal…The third stage is the signal integration and isolation module, which isolates and integrates the previous signal [i.e., integration of prior signal implies accumulating backward spike signals into weighted sums]. The fourth stage is the threshold detection module, which takes the internal trigger and external trigger signals (non-simultaneous) as its input and generates a discharge signal when the voltage exceeds the threshold…The control end of the voltage-controlled switch is connected to a lateral inhibition signal, wherein the output of the lateral inhibition signal is connected to the output end of the integral ignition unit”
[i.e., lateral inhibition is used to modulate/threshold backward pulse signals].
Although Yida does not explicitly mention the term “preceding layer,” a person having ordinary skill in the art would recognize that, in the context of a spiking neural network based on STDP (spike-timing-dependent plasticity) rules, the “spike excitations of other neurons” are pre-synaptic spikes that come from the preceding layer.
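For illustration of the threshold-and-reshape behavior mapped above, applicable in either the forward or backward direction, a minimal sketch follows; the threshold and the canonical pulse amplitude are assumed values:

```python
# An SR unit forwards a spike only when the weighted sum crosses a
# threshold, and reshapes any transmitted spike to a canonical pulse
# regardless of the input amplitude.
def sr_unit(weighted_sum, threshold=1.0, pulse=1.0):
    return pulse if weighted_sum >= threshold else 0.0

print(sr_unit(1.7))   # 1.0 -> spike transmitted to the next layer
print(sr_unit(0.4))   # 0.0 -> no spike transmitted
```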
Claims 2 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Yida and Bohnstingl in view of Narayanan, and further in view of Chinese Patent Application Publication No. CN113408714A to Zhao, hereinafter Zhao.
Regarding Claim 2:
As discussed above, Yida in view of Bohnstingl, and further in view of Narayanan teaches [the] neural network processing system of claim 1, but does not explicitly disclose:
wherein the weights are shared between inference and learning
However, in the same field, analogous art Zhao teaches:
wherein the weights are shared between inference and learning
Zhao, [0087], “The presynaptic pulse will be input into the plasticity learning module, which will read the corresponding synaptic weight from the synaptic array module…and write the updated synaptic weight into the synaptic array module…In the recognition stage…the weights of the synapses are summed to find the neuron with the largest membrane potential in the output layer”
[i.e., the same synapse weights are used during training (learning phase) and during inference (recognition phase) showing shared usage]
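As a non-limiting sketch of Zhao’s shared-weight arrangement, the code below writes one synaptic array during a learning phase and reads the same array during a recognition phase; the array shape and learning rate are assumptions:

```python
import numpy as np

synapse_array = np.random.rand(4, 3)    # one shared weight store

def learn(pre_spikes, post_spikes, eta=0.05):
    """Learning phase: update (write) the shared synaptic weights."""
    synapse_array[:] += eta * np.outer(pre_spikes, post_spikes)

def recognize(input_spikes):
    """Recognition phase: read the same weights and sum them to find the
    output neuron with the largest membrane potential."""
    potentials = synapse_array.T @ input_spikes
    return int(np.argmax(potentials))
```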
Yida, Bohnstingl, Narayanan, Zhao, and the instant application are analogous art because they are all directed to optimizing neural network architecture (Yida, [0008]; Bohnstingl, [0003]; Narayanan, col. 1, Summary; Zhao, [0004]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yida, Bohnstingl and Narayanan to incorporate the teachings of Zhao to use shared weights for both inference and learning. Doing so would have allowed the combination of Yida, Bohnstingl and Narayanan to use Zhao’s method in order to “control the reading and writing of synaptic weights in the learning phase” and “control the reading and writing of synaptic weights in the recognition stage”, as suggested by Zhao (Zhao, [0009]).
Regarding Claim 16:
Claim 16 is a computer-implemented method claim corresponding to the neural network processing system of claim 2 and is rejected for at least the same reasons as given in the rejection of claim 2.
Claims 3 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Yida and Bohnstingl in view of Narayanan, and further in view of Nugent (US 9269043 B2), hereinafter Nugent.
Regarding Claim 3:
As discussed above, Yida in view of Bohnstingl, and further in view of Narayanan teaches [the] neural network processing system of claim 1, but does not explicitly disclose:
wherein said unidirectional SR units perform inference and learning using a Hebbian algorithm
However, in the same field, analogous art Nugent teaches:
wherein said unidirectional SR units perform inference and learning using a Hebbian algorithm
Nugent, col. 12, lines 25-35: “Read Phase—Anti-Hebbian…The application of a read voltage V will damage the synaptic state… We say that this change in the synaptic state is anti-Hebbian because the change of the synaptic weight will occur in such a direction as to prevent the next read operation from evaluating to the same state” and “Write Phase—Hebbian…To undue [sic – undo] the damage done via the act of reading of the state, we may (but need not) apply a “rewarding” feedback to the “winner” memristor… We say that this change in the synaptic state is Hebbian, since it reinforces the synaptic state. The longer the feedback is applied, the more the synaptic weight is strengthened”
[i.e., the anti-Hebbian read phase and the Hebbian write phase adjust the synaptic weight in proportion to correlated activity, teaching inference (reading) and learning (writing) using a Hebbian algorithm]
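For exposition, a rate-based Hebbian update (an assumed simplification of Nugent’s analog read/write phases) may be sketched as:

```python
# Hebbian reinforcement: the weight change is proportional to the product
# of presynaptic and postsynaptic activity, strengthening correlated
# activity as in Nugent's write ("Hebbian") phase.
def hebbian_update(w, pre, post, eta=0.01):
    return w + eta * pre * post

w = 0.5
w = hebbian_update(w, pre=1.0, post=0.8)   # correlated activity -> stronger w
```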
Yida, Bohnstingl, Narayanan, Nugent, and the instant application are analogous art because they are all directed to neuromorphic machine learning (Yida, [0008]; Bohnstingl, [0003]; Narayanan, col. 1, Summary; Nugent, col. 1-2, Field of the Invention).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Yida, Bohnstingl and Narayanan to incorporate the teachings of Nugent to utilize a Hebbian algorithm in unidirectional signal reshaping units for performing inference and learning. Doing so would have allowed the combination of Yida, Bohnstingl and Narayanan to use Nugent’s method in order to “accelerate[] the full spectrum of machine learning algorithms, from optimal classification to clustering, combinatorial optimization, and robotic control to name a few”, as suggested by Nugent (Nugent, col. 6, line 67 through col. 7, line 3).
Regarding Claim 17:
Claim 17 is a computer-implemented method claim corresponding to the neural network processing system of claim 3 and is rejected for at least the same reasons as given in the rejection of claim 3.
Response to Arguments
Applicant's arguments filed 1-21-2026 (“Remarks”) have been fully considered but, except with respect to claims 11 and 12 as noted below, they are not persuasive.
35 U.S.C. § 103:
Remarks, pgs. 10-11. Applicant argues Examiner presents two interpretations that do not teach claim 1. Examiner respectfully disagrees. Examiner is using a single, consistent interpretation: that Bohnstingl discloses parallel transmission. In response to Applicant's observation of a time lag between parallel outputs in para. 75 of Bohnstingl, Examiner notes that the state variables s(t-1) and y(t-1) are transmitted at the same time step (t), which indicates that para. 75 of Bohnstingl does not teach away from the concept of parallel transmission, since both are sent at the same time step. As stated by Applicant on pg. 12 of the Remarks, the weights disclosed by Bohnstingl are denoted x(t), and, as cited under the 35 U.S.C. § 103 section above in line with paras. 74-75, Bohnstingl teaches outputting the summed weights x(t) in parallel at time step t. Therefore, Bohnstingl is construed as teaching concurrently transmitting weighted sums.
Remarks, pgs. 11-12. Applicant further argues that parallel and concurrent do not have the same meaning and that a parallel output does not teach a concurrent output. In particular, Applicant argues that parallel refers to sending data through different resources, and that one of ordinary skill would not apply parallelism to claim 1’s “bidirectional Synaptic Network Channel (SNC)” because parallelism would use a plurality of SNC channels to transmit data instead of a single SNC. Examiner respectfully disagrees. As discussed above and under the § 103 section, Bohnstingl teaches parallel transmission of weighted sums, and although Yida does not explicitly disclose concurrent transmission of signals, a person of ordinary skill in the art would understand that a channel designed for continuous bidirectional communication of signals is inherently structurally capable of concurrent signal transmission in both directions. Therefore, it is the combination of Yida’s channel for continuous bidirectional communication of signals with Bohnstingl’s parallel transmission of weighted sums that teaches the “concurrent transmission” limitation.
Remarks, pgs. 12-13. Applicant argues Bohnstingl does not teach or suggest claim 1’s “concurrent transmission” because s(t-1) and y(t-1) are not weights. Examiner agrees that s(t-1) and y(t-1) are not weights but respectfully disagrees with Applicant’s conclusion. As discussed above, Examiner was merely stating that the state variables s(t-1) and y(t-1) are transmitted at the same time step (t), which indicates that para. 75 of Bohnstingl still teaches toward the concept of parallel transmission. Further, the weights disclosed by Bohnstingl are denoted x(t), and, as cited under the 35 U.S.C. § 103 section in line with paras. 74-75, Bohnstingl teaches outputting the summed weights x(t) at time step t in parallel. Therefore, Bohnstingl is construed as teaching concurrently transmitting weighted sums.
Remarks, pgs. 13-16. Applicant’s arguments with respect to amended claims 11 and 12 have been fully considered and are persuasive. The previous rejections of claims 11 and 12 have been withdrawn.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Regarding claims 11 and 12, Lee et al. (US 20210232900), para. [0068] teaches a final neuron layer that performs both the forward and backward propagation using a forward and backward neuron. The input signal of the forward neuron (forward propagation) and the backward neuron (backward propagation) is further said to be partially different.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN PHUNG whose telephone number is (703) 756-1499. The examiner can normally be reached Monday-Thursday: 9:00AM-4:00PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KAMRAN AFSHAR can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEVEN PHUNG/Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125
1 In the context of the multilayer neural network processing system disclosed by Bohnstingl, the structural Synaptic Network Channel of Yida provides a suitable component for implementing the bidirectional communication between those layers.
2 While Yida does not explicitly name the transmission of ‘weighted sums’ or associated weight values, relied upon by Bohnstingl below, a person of ordinary skill in the art would understand that the neuron signals transmitted by Yida’s circuitry in an STDP (Spike-Timing-Dependent Plasticity) system are capable of carrying or inherently having associated weight values for network computations as explicitly taught by Bohnstingl.