Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitation(s) are in claim 1:
“multi-layered spiked neural network module and is configured to transmit the input voltage pulse signal to the memristive”
“wherein the multi-layer spiked neural network is configured to perform a layer-by-layer calculation and conversion on the input voltage pulse signal to complete an on-chip learning to obtain an output signal;”
“wherein the multi-layer spiked neural network is configured to transmit”
The specification recites in para 0058 “Error-triggered learning (Equation (6)) requires signals that are both local and non-local to the SNN. The ternary nature of the rule enables a natural distribution of the computations across core boundaries, while significantly reducing the communication overhead. An exemplary hardware architecture 600 contains Neuromorphic Cores (NCs) and Processing Cores (PCs) as depicted in FIG. 6A. The NCs are responsible for implementing the neuron and synapse dynamics described in Equation (1). Each core additionally contains circuits that are needed for implementing training. In various embodiments, the error signals are calculated on the PCs and communicated asynchronously to the NCs. Thus, each core can function independently without affecting each other” and “In addition to data and control buses, the PC contains four main blocks, namely for error calculation 610, error encoding 620, arbitration 630, and handshaking 640. The PC can be shared among several NCs, where communication across the two types of cores is mediated using the same address event routing conventions as the NCs.” The NCs and PCs are therefore interpreted as the corresponding structure that performs the claimed SNN calculation and output limitations.
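For illustration only, the error-triggered scheme quoted above can be read as a gated update: the PC quantizes the error to a ternary event and the NC applies a local weight change only on nonzero events. The following is a minimal, hypothetical sketch of that reading; the function names and the threshold theta are illustrative and do not appear in the specification.

```python
import numpy as np

# Hypothetical sketch of para 0058's error-triggered, ternary rule: the PC
# quantizes the error to {-1, 0, +1}; only nonzero events are communicated
# to the NC, which applies a local outer-product update on those events.
def ternary_error(error, theta):
    """Quantize a real-valued error vector to ternary events."""
    return np.where(error > theta, 1, np.where(error < -theta, -1, 0))

def nc_apply_update(weights, pre_activity, events, lr=0.01):
    """NC-side update; columns whose event is 0 are left untouched."""
    return weights + lr * np.outer(pre_activity, events)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))            # synaptic weights held on the NC
x = rng.random(4)                      # local presynaptic activity
err = rng.normal(scale=0.5, size=3)    # error computed on the PC
w = nc_apply_update(w, x, ternary_error(err, theta=0.3))
```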
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 and 11 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Gokmen (US 2018/0075338).
Regarding claim 1 and analogous claim 11, Gokmen teaches a neural network learning system comprising: an input circuitry module (Gokmen para 0002 line 1-8, The present invention relates in general to novel configurations of trainable resistive crosspoint devices, which are referred to herein as resistive processing units (RPUs). More specifically, the present invention relates to artificial neural networks (ANNs) formed from crossbar arrays of two-terminal RPUs that provide local data storage and local data processing without the need for additional processing elements beyond the two-terminal RPU, thereby accelerating the ANN's ability to learn and implement algorithms such as online neural network training, matrix inversion, matrix decomposition and the like [A neural network learning system].
Para 0004 line 13-15, transmitting voltage pulses corresponding to error of the output maps of the convolution layer to the RPU array.
Para 0105 line 5-7, FIG. 18B illustrates a block diagram of a neuron, which is used as a neuron 1800 of a neural network, such as a CNN. The neuron can represent any of the input neurons, the hidden neurons, or the output neurons (see FIG. 16). It should be noted that FIG. 18B shows components to address all three phases of operation: feed forward, back propagation, and weight update. [an input circuitry module])
a multi-layer spiked neural network with memristive neuromorphic hardware; a weight update circuitry module (Gokmen para 0048 line 1-9, Crosspoint devices, in effect, function as the ANN's weighted connections between neurons. Nanoscale two-terminal devices, for example memristors having "ideal" conduction state switching characteristics [memristive neuromorphic hardware], are often used as the crosspoint devices in order to emulate synaptic plasticity with high energy efficiency. The conduction state (e.g., resistance) of the ideal memristor material can be altered by controlling the voltages applied between individual wires of the row and column wires [a multi-layer spiked neural network]
Gokmen Fig. 7A,
para 0071 line 1-8, FIG. 6 illustrates an example flowchart for training a CNN with one or more convolutional layers 500. The example logic can be implemented by a processor, such as a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), or any other processor or a combination thereof. Alternatively or in addition, the training can be performed by a system that is equipped with an RPU array as described herein.
para 0076, FIG. 7A depicts a simplified illustration of a typical read-process-write weight update operation, wherein CPU/GPU cores (i.e., simulated "neurons") read a memory (i.e., a simulated "synapse") and perform weight update processing operations, then write the updated weights back to memory [a weight update circuitry module].), and
wherein the input circuitry module is configured to receive an input current signal and convert the input current signal to an input voltage pulse signal utilized by the memristive neuromorphic hardware of the multi-layered spiked neural network module and is configured to transmit the input voltage pulse signal to the memristive neuromorphic hardware of the multi-layered spiked neural network module (Gokmen para 0089 line 1-10, Input voltages V1, V2, V3 are applied to row wires 802, 804, 806, respectively. Each column wire 808, 810, 812, 814 sums the currents I1, I2, I3, I4 generated by each RPU along the particular column wire. For example, as shown in FIG. 8, the current I4 generated by column wire 814 is according to the equation I4 = V1·σ41 + V2·σ42 + V3·σ43. Thus, array 800 computes the forward matrix multiplication by multiplying the values stored in the RPUs by the row wire inputs, which are defined by voltages V1, V2, V3. The backward matrix multiplication is very similar.
para 0107, During back propagation mode, an error signal is generated. The error signal can be generated at an output neuron 1808 or can be computed by a separate unit that accepts inputs from the output neurons 1808 and compares the output to a correct output based on the training data. Otherwise, if the neuron 1800 is a hidden neuron 1806, it receives back propagating information from the array of weights 1804 and compares the received information with the reference signal at difference block 1810 to provide a continuously valued, signed error signal. This error signal is multiplied by the derivative of the non-linear function from the previous feed forward step stored in memory 1809 using a multiplier 1812, with the result being stored in the storage 1813. The value determined by the multiplier 1812 is converted to a backwards propagating voltage pulse proportional to the computed error at back propagation generator 1814 [wherein the input circuitry module is configured to receive an input current signal and convert the input current signal to an input voltage pulse signal utilized by the memristive neuromorphic hardware of the multi-layered spiked neural network module], which applies the voltage to the previous array. The error signal propagates in this way by passing through multiple layers of arrays and neurons until it reaches the input layer of neurons [and is configured to transmit the input voltage pulse signal to the memristive neuromorphic hardware of the multi-layered spiked neural network module]);
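As a worked illustration of the forward read quoted above (not code from Gokmen), each column current is a vector-matrix product of the row voltages and the crosspoint conductances; the values below are arbitrary.

```python
import numpy as np

# Worked illustration of the forward read quoted from para 0089: each
# column current sums the row voltages weighted by the crosspoint
# conductances, e.g. I4 = V1*g41 + V2*g42 + V3*g43. Values are arbitrary.
V = np.array([0.2, 0.5, 0.1])                 # row input voltages V1..V3
G = np.random.default_rng(1).random((3, 4))   # conductances g_ij
I = V @ G                                     # column currents I1..I4
```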
wherein the multi-layer spiked neural network is configured to perform a layer-by-layer calculation and conversion on the input voltage pulse signal to complete an on-chip learning to obtain an output signal (Gokmen Fig. 3,
para 0066 line 1-12, Referring to FIG. 4, neurons in layer-1 410 are connected to neurons in a next layer, layer-2 420, as described earlier (see FIG. 3). The neurons in FIG. 4 are as described with reference to FIG. 1. A neuron in layer-2 420, consequently, receives an input value from each of the neurons in layer-1 410. The input values are then summed and this sum compared to a bias [wherein the multi-layer spiked neural network is configured to perform a layer-by-layer calculation]. If the value exceeds the bias for a particular neuron, that neuron then holds a value, which can be used as input to neurons in the next layer of neurons. This computation continues through the various layers 430-450 of the CNN, until it reaches a final layer 460, referred to as "output" in FIG. 4. In an example of a CNN used for character recognition, each value in the layer is assigned to a particular character. The network is configured to end with the output layer having only one large positive value in one neuron, which then demonstrates which character the network has computed to be the most likely handwritten input character [conversion on the input voltage pulse signal to complete an on-chip learning to obtain an output signal]);
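For illustration only, a minimal sketch of the layer-by-layer computation quoted above, under the assumption that "holds a value" means the summed input is passed on only when it exceeds the bias; the function name and values are illustrative.

```python
import numpy as np

# Hypothetical sketch of para 0066: each neuron sums its inputs and holds
# a value only if the sum exceeds its bias; held values feed the next layer.
def layer_forward(x, W, bias):
    s = x @ W                          # summed inputs per next-layer neuron
    return np.where(s > bias, s, 0.0)  # value held only above the bias

rng = np.random.default_rng(2)
x = rng.random(5)                      # input layer values
for W in (rng.random((5, 4)), rng.random((4, 3))):
    x = layer_forward(x, W, bias=1.0)  # propagate through two layers
```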
wherein the multi-layer spiked neural network is configured to transmit the output signal to the weight update circuitry module (Gokmen
Para 0005, The processor also performs update pass computations for the CNN via the RPU array by transmitting voltage pulses corresponding to the input data of the convolution layer and the error of the output maps to the RPU array; and update weights of RPU devices of the RPU array.
para 0071 line 1-8, FIG. 6 illustrates an example flowchart for training a CNN with one or more convolutional layers 500. The example logic can be implemented by a processor, such as a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), or any other processor or a combination thereof. Alternatively or in addition, the training can be performed by a system that is equipped with an RPU array as described herein.
para 0083, the memristor is in effect used to store the weight value, and the pair of transistors is used to compute a local multiplication operation that is needed for the weight updates, wherein the result of the weight update modifies the memristor's conduction state. The Soudry et al. article describes, in effect, a four terminal device composed of a memristor and two transistors, which are used to make a 2D array of the 4 terminal devices in order to implement the back-propagation training of the neural network hardware [to transmit the output signal to the weight update circuitry module]);
wherein the weight update circuitry module is configured to implement a synaptic function by using a conductance modulation characteristic of the memristive neuromorphic hardware and is configured to calculate an error signal and based on a magnitude of the error signal, trigger an adjustment of a conductance value of the memristive neuromorphic hardware so as to update synaptic weight values stored by the memristive neuromorphic hardware (Gokmen Fig. 9,
para 0093, FIG. 9 illustrates a comparison of the update operation of an exemplary known floating point (FP) weight update rule against the described stochastic-RPU (SRPU) update rule. The FP weight update rule requires calculating a vector-vector outer product which is equivalent to a multiplication operation and an incremental weight update to be performed locally at each cross-point as shown in FIG. 9. The FP weight update rule can be expressed as
w_ij ← w_ij + η·x_i·δ_j, wherein w_ij represents the weight value for the i-th row and the j-th column, x_i is the activity at the input neuron, δ_j is the error computed by the output neuron, and η is the global learning rate.
Para 0094, As shown in FIG. 9, the FP weight update rule uses a FP crosspoint device 902 located at a crosspoint between a row wire 904 and a column wire 906 of a crossbar array (not shown) … The FP weight update rule provides accuracy but requires either a read-write-process update operation (e.g., shown in FIG. 7A) or relatively complex and power consuming local processing components having more than two terminals. [wherein the weight update circuitry module is configured to implement a synaptic function by using a conductance modulation characteristic of the memristive neuromorphic hardware].
Para 0096 line 12-19, RPU 820A calculates the new value of w_ij using the stochastic bit streams, the non-linear characteristics of the RPU 820A, an AND operation 918 and an addition operation 920. More specifically, RPU 820A causes an incremental conductance change that is equivalent to a weight change Δw_min for every coincidence event and adds Δw_min to the stored weight value to arrive at the updated weight value w_ij [and is configured to calculate an error signal and based on a magnitude of the error signal, trigger an adjustment of a conductance value of the memristive neuromorphic hardware so as to update synaptic weight values stored by the memristive neuromorphic hardware]).
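For comparison only, a minimal NumPy sketch (not code from Gokmen) contrasting the reconstructed FP outer-product rule with the stochastic, coincidence-based update described in para 0096; the bit-stream length and values are arbitrary, and the stochastic result matches the FP update only in expectation.

```python
import numpy as np

rng = np.random.default_rng(3)
eta, bl = 0.1, 1000        # learning rate; stochastic bit-stream length
x = np.array([0.3, 0.7])                 # input activities x_i (in [0, 1])
delta = np.array([0.6, -0.2])            # output errors delta_j (|.| <= 1)

# FP rule: the rank-1 outer-product update w_ij <- w_ij + eta*x_i*delta_j.
W_fp = eta * np.outer(x, delta)

# SRPU-style rule: each side streams random bits with probability equal to
# its magnitude; an AND coincidence triggers a fixed increment dw_min,
# signed by the error polarity.
dw_min = eta / bl
x_bits = (rng.random((bl, 2)) < x).astype(float)              # row streams
d_bits = (rng.random((bl, 2)) < np.abs(delta)).astype(float)  # column streams
W_srpu = dw_min * (x_bits.T @ d_bits) * np.sign(delta)        # AND counts
```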
Regarding claim 2, Gokmen teaches the system of claim 1.
Gokmen further teaches wherein the memristive neuromorphic hardware comprises memristive crossbar arrays (Gokmen para 0047, Crossbar arrays, also known as crosspoint arrays, crosswire arrays, or RPU arrays, are high density, low cost circuit architectures used to form a variety of electronic circuits and devices, including ANN architectures, neuromorphic microchips and ultra-high density nonvolatile memory. A basic crossbar array configuration includes a set of conductive row wires and a set of conductive column wires formed to intersect the set of conductive row wires. The intersections between the two sets of wires are separated by so-called crosspoint devices, which can be formed from thin film material.
Para 0048, Crosspoint devices, in effect, function as the ANN's weighted connections between neurons. Nanoscale two-terminal devices, for example memristors having "ideal" conduction state switching characteristics, are often used as the crosspoint devices in order to emulate synaptic plasticity with high energy efficiency. The conduction state (e.g., resistance) of the ideal memristor material can be altered by controlling the voltages applied between individual wires of the row and column wires [memristive crossbar arrays]).
Regarding claim 3, Gokmen teaches the system of claim 2.
Gokmen further teaches wherein a row of a memristive crossbar array comprises a plurality of memristive devices (Gokmen para 0047, Crossbar arrays, also known as crosspoint arrays, crosswire arrays, or RPU arrays, are high density, low cost circuit architectures used to form a variety of electronic circuits and devices, including ANN architectures, neuromorphic microchips and ultra-high density nonvolatile memory. A basic crossbar array configuration includes a set of conductive row wires and a set of conductive column wires formed to intersect the set of conductive row wires. The intersections between the two sets of wires are separated by so-called crosspoint devices, which can be formed from thin film material.
Para 0048, Crosspoint devices, in effect, function as the ANN's weighted connections between neurons. Nanoscale two-terminal devices, for example memristors having "ideal" conduction state switching characteristics, are often used as the crosspoint devices in order to emulate synaptic plasticity with high energy efficiency. The conduction state (e.g., resistance) of the ideal memristor material can be altered by controlling the voltages applied between individual wires of the row and column wires [plurality of memristive devices]).
Regarding claim 4 and analogous 12, Gokmen teaches the system of claim 3.
Gokmen further teaches wherein the error signal is generated for each row of the memristive crossbar array, wherein for an individual error signal, each of the plurality of memristive devices of a row associated with the individual error signal is updated together based on a magnitude of the individual error signal (Gokmen para 0123, During back propagation, the output neurons provide a voltage back across the array of RPU devices. The output layer compares the generated network response to training data and computes an error. The error is applied to the RPU array 800 as a voltage pulse, where the height and/or duration of the pulse is modulated proportional to the error value [wherein the error signal is generated for each row of the memristive crossbar array]. In this example, a row of RPU devices receives a voltage from a respective output neuron in parallel and converts that voltage into a current which adds column-wise to provide an input to hidden neurons. The hidden neurons combine the weighted feedback signal with a derivative of its feed-forward calculation and stores an error value before outputting a feedback signal voltage to its respective column of RPU devices. This back propagation travels through the entire RPU array until all hidden neurons and the input neurons have stored an error value [wherein for an individual error signal, each of the plurality of memristive devices of a row associated with the individual error signal is updated together based on a magnitude of the individual error signal]).
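An illustrative reading of the back propagation quoted above (not code from Gokmen): error voltages applied on the output side drive a transposed read of the same crossbar, and each hidden neuron scales the weighted feedback by its saved activation derivative before storing it. Values and names below are arbitrary.

```python
import numpy as np

# Sketch of para 0123's backward pass: a transposed read of the crossbar
# followed by scaling with the saved feed-forward derivative.
rng = np.random.default_rng(4)
W = rng.random((4, 3))                    # conductances of one RPU array
delta_out = np.array([0.1, -0.3, 0.05])   # error voltages, output neurons
dfdz = rng.random(4)                      # derivatives saved in forward pass
delta_hidden = (W @ delta_out) * dfdz     # stored per-neuron error values
```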
Regarding claim 6 and analogous 13, Gokmen teaches the system of claim 1.
Gokmen further teaches wherein the weight update circuitry module is configured to generate a signal to update the synaptic weight values or to bypass updating the synaptic weight values based on the magnitude of the error signal (Gokmen para 0005 line 19-24, The processor also performs update pass computations for the CNN via the RPU array by transmitting voltage pulses corresponding to the input data of the convolution layer and the error of the output maps to the RPU array; and update weights of RPU devices of the RPU array.
Para 0073 line 4-6, the training can be deemed completed if the CNN identifies the inputs according to the expected outputs with a predetermined error threshold [wherein the weight update circuitry module is configured to generate a signal to update the synaptic weight values]).
Regarding claim 7, Gokmen teaches the system of claim 6.
Gokmen further teaches wherein the weight update circuitry module increases the synaptic weight values (Gokmen para 0090 line 1-4, Continuing with the diagram of FIG. 8, in accordance with one or more embodiments, the operation of a positive weight update methodology for RPU 820 and its corresponding weight [increases the synaptic weight values]).
Regarding claim 8, Gokmen teaches the system of claim 6.
Gokmen further teaches wherein the weight update circuitry module decreases the synaptic weight values (Gokmen para 0091 line 15-19, After the positive weight updates are performed, a separate set of sequences with the polarity of the respective voltages reversed can be used to update weights in a negative direction for those weights that need such correction [decreases the synaptic weight values]).
Regarding claim 9 and analogous 14, Gokmen teaches the system of claim 1.
Gokmen further teaches wherein updating of synaptic weights are triggered based on a comparison of the magnitude of the error signal within an error threshold value (Gokmen para 0073 line 4-9, For example, the training can be deemed completed if the CNN identifies the inputs according to the expected outputs with a predetermined error threshold [based on a comparison of the magnitude of the error signal within an error threshold value]. If the training is not yet completed, another iteration, or training epoch is performed using the modified convolutional kernels from the most recent iteration.
Para 0076, FIG. 7A depicts a simplified illustration of a typical read-process-write weight update operation, wherein CPU/GPU cores (i.e., simulated "neurons") read a memory (i.e., a simulated "synapse") and perform weight update processing operations, then write the updated weights back to memory. (i.e. the training is continued and the weights are further updated)).
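A hedged sketch of the threshold-gated behavior mapped above (the gating policy and names are illustrative, not from Gokmen): training continues, and weights keep updating, only while the error magnitude is outside the predetermined error threshold; otherwise the update is bypassed.

```python
import numpy as np

# Illustrative threshold gate: update only when the error magnitude
# exceeds the predetermined threshold; otherwise bypass the update.
def maybe_update(W, dW, error, threshold):
    if np.max(np.abs(error)) <= threshold:
        return W, False                  # within threshold: bypass update
    return W + dW, True                  # otherwise: trigger the update

W = np.zeros((2, 2))
dW = 0.1 * np.ones((2, 2))
W, updated = maybe_update(W, dW, error=np.array([0.4, -0.6]), threshold=0.5)
```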
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Gokmen in view of I. Yeo, S. -g. Gi, J. -g. Kim and B. -g. Lee, "A CMOS-based Resistive Crossbar Array with Pulsed Neural Network for Deep Learning Accelerator," 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hsinchu, Taiwan, 2019, pp. 34-37, doi: 10.1109/AICAS.2019.8771576 (“Yeo”).
Regarding claim 5, Gokmen teaches the system of claim 1.
Gokmen does not explicitly teach wherein the input circuitry module comprises pseudo resistors.
However Yeo teaches wherein the input circuitry module comprises pseudo resistors (Yeo page 35 Fig. 2,
Page 35 II. CMOS-Based RCE and Pulsed Neural Network System, A. Resistive Computing Element and Update Engine para 1-3
In general, active resistors such as Fig. 3(a) are often used to realize a programmable resistor [6]. However, in this scheme, one of the transistors operating in the triode region moves to a sub-threshold region (decreasing the G) or a saturation region (increasing the G) depending on the voltage difference between VA and VB. This critically impacts on VMM computation, because unexpected conductance values are presented depending on amplitude of read voltage. Even if a pulse rate (or width) modulated signal is used, a mismatch exists between G(+VAB) and G(-VAB) at positive and negative conductance matrix.
The RCE cell which is robust to amplitude of read voltage as shown in Fig 3(b). In the design, the PMOS transistor P1 and P2 are connected in series to make programmable resistor, and the bulks of P1 and P2 are connected to the source node to minimize the threshold voltage variation [pseudo resistors]. Also, N1, N2 and N3, N4 constitute a source follower (SF), respectively. The SF, e.g., voltage buffer, sense the voltages at node A and node B by using N2 and N4, and cross-coupled outputs directly drive voltage to the gate terminals of P1 and P2. As a result of which, the P1 and P2 stay in the triode and saturation regions for +VAB, respectively, and the other case of -VAB the condition of transistor is vice versa. Hence, directions of conductance deviation for read voltage polarity have been identical.
For the conductance update, the UE, where it consists of a row scanner, a programmable bias generator (PBG) [7], and a column-parallel charge pump (CP) as shown in Fig. 3(c), is used. When a row of RCE cells is selected from a row scanner, a conductance updating signal which is form of electric charge is generated from CP and stored in the gate of N1 and N3. After then, the stored charge is converted to the voltage (VCON) by gate capacitor. The source current of the CP is mirrored from the bias current (Ib) of the PBG and Ib scaled by the external 4-bit digital control word like as (15/16)Im,(14/16)Im,…,(1/16)Im. Where Im denotes the master current that comes from off-chip component.).
Gokmen and Yeo are both considered analogous art to the claimed invention in the field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gokmen to incorporate the teachings of Yeo and to have incorporated the use of a CMOS-based resistive computing element (RCE). Doing so would hardware-accelerate a DNN model using a crossbar array of memristive devices for training and classification while improving energy-efficiency (Yeo page 34 I. Introduction Para 1 line 1-5, In order to address those issues, a hardware accelerator for a DNN model has been developed using various technologies [1-4]. In [1-2], crossbar array of memristive devices provide immense acceleration for DNN training and classification with high energy-efficiency.
Para 3 line 1-12. A CMOS-based resistive computing element (RCE) crossbar array, which solves the aforementioned issues, is presented as an alternative to the existing memristive device as shown in Fig. 1(a). The RCE satisfies the hardware constraints [5], which illustrated in Fig. 1(b), such as dynamic ranges of conductance, I-V nonlinearity, and on/off ratio. In order to evaluate the feasibility and functionality of the RCE, SPICE and behavioral simulations for pulsed neural networks employing the RCE have been rigorously performed. The proposed RCE-based pulsed neural network system improves energy-efficiency by up to 27.5x, compared to state-of-the-art memristive-based accelerator [2].).
Claims 10 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Gokmen in view of Strachan et al. (US 2020/0073755 A1) (“Strachan”).
Regarding claim 10 and analogous claim 15, Gokmen teaches the system of claim 9 and analogous 14.
Gokmen does not explicitly teach wherein the error threshold value is adjustable by the weight update circuitry module.
However, Strachan teaches wherein the error threshold value is adjustable by the weight update circuitry module (Strachan Para 0023, In examples, an output error array 23 may be determined in connection with the result array 21, where the output error array 23 represents the forward propagation of error values that are introduced through implementation of the data flow. The output error array 23 can be evaluated to determine those nodes or cells for which the output error value is significant. The output error array 23 can be subjected to a backward propagation process which correlates the output error array to the error arrays 32, 34, 36, 38 of each of the respective layers. Each cell or node of the output error array 23 which is deemed significant can be correlated to the respective cell(s) or node(s) of one or more of the error arrays, with the value of the significant cells or nodes providing a basis for setting the threshold error value for the respective cells or nodes of the individual error arrays 32, 34, 36, 38. The calibration step can be repeated over time, to tune the value of the error thresholds, and to populate error thresholds for individual cells or nodes of the respective error arrays 32, 34, 36, 38. [error threshold value is adjustable by the weight update circuitry module]).
Gokmen and Strachan are both considered analogous art to the claimed invention in the field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gokmen to incorporate the teachings of Strachan and have an error threshold value that is adjustable. Doing so would allow the error arrays to be evaluated to determine the significance of each cell or node and would provide a basis for setting the threshold error value for the respective cells or nodes of the individual error arrays (Strachan Para 0023, In examples, an output error array 23 may be determined in connection with the result array 21, where the output error array 23 represents the forward propagation of error values that are introduced through implementation of the data flow. The output error array 23 can be evaluated to determine those nodes or cells for which the output error value is significant. The output error array 23 can be subjected to a backward propagation process which correlates the output error array to the error arrays 32, 34, 36, 38 of each of the respective layers. Each cell or node of the output error array 23 which is deemed significant can be correlated to the respective cell(s) or node(s) of one or more of the error arrays, with the value of the significant cells or nodes providing a basis for setting the threshold error value for the respective cells or nodes of the individual error arrays 32, 34, 36, 38. The calibration step can be repeated over time, to tune the value of the error thresholds, and to populate error thresholds for individual cells or nodes of the respective error arrays 32, 34, 36, 38.).
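A loose, hypothetical sketch of the calibration Strachan's para 0023 describes: the output error array is propagated backward layer by layer, and cells whose correlated error is significant seed per-cell thresholds that repeated calibration passes can tune. All names, the significance cutoff, and the tuning rule are illustrative assumptions, not Strachan's implementation.

```python
import numpy as np

# Illustrative threshold calibration: backward-correlate the output error
# to each layer and set per-cell thresholds from the significant cells.
def calibrate_thresholds(layers, output_error, significance=0.1, old=None):
    thresholds, err = [], output_error
    for W in reversed(layers):
        err = W @ err                          # correlate error to layer
        t = np.where(np.abs(err) > significance, np.abs(err), np.inf)
        thresholds.append(t)                   # inf: cell never triggers
    thresholds = thresholds[::-1]
    if old is not None:                        # tune over repeated passes
        thresholds = [0.5 * (a + b) for a, b in zip(old, thresholds)]
    return thresholds

rng = np.random.default_rng(5)
layers = [rng.random((4, 3)), rng.random((3, 2))]
th = calibrate_thresholds(layers, output_error=np.array([0.2, -0.4]))
```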
Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Le Gallo-Bourdeau et al. (US11386319B2) – teaches in Figure 5 training and storing weights to arrays of the synaptic layer and updating them until convergence.
T. Morie, T. Matsuura, M. Nagata and A. Iwata, "A multinanodot floating-gate MOSFET circuit for spiking neuron models," in IEEE Transactions on Nanotechnology, vol. 2, no. 3, pp. 158-164, Sept. 2003, doi: 10.1109/TNANO.2003.817221 – teaches a MOSFET Circuit for Spiking Neuron Model (see Fig. 3 in page 160).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALFREDO CAMPOS whose telephone number is (571)272-4504. The examiner can normally be reached 7:00 am - 4:00 pm, M - F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALFREDO CAMPOS/Examiner, Art Unit 2129
/MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129