DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see Remarks, filed 01/20/2026, have been fully considered:
Regarding the 35 U.S.C. 101 non-statutory subject matter rejection and the claim objection: based on the amendments, remarks and reconsideration, the 101 non-statutory subject matter rejection and the claim objection have been withdrawn.
Applicant's arguments filed with respect to the prior art rejections have been fully considered, but they are moot. Applicant has amended the claims to recite new combinations of limitations. Please see below for the new grounds of rejection, necessitated by amendment.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 5-10, 12-16 and 18-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hasan et al. (“Memristor Crossbar Based Low Cost Classifiers and Their Applications”, ©2014 IEEE) in view of Ge et al. (US 20210266000 A1) and further in view of Nguyen et al. (“Memristor-CMOS Hybrid Neuron Circuit with Nonideal-Effect Correction Related to Parasitic Resistance for Binary-Memristor-Crossbar Neural Networks”, Micromachines 2021, 12, 791).
Regarding claim 1.
Hasan teaches a computer method for preparing a trained crossbar array of a neural network (see page 76, section 3, “memristor crossbar based neural network implementation”, also see page 77, “Fig. 3. Schematic of memristor crossbar circuit for training 16 four input min-terms.”),
comprising: feeding an input portion of a predetermined truth table into a computer simulation of a crossbar array (see page 79, table 2, and figure 10, “We have utilized two polynomial classifiers for training two non-linearly separable two input functions (xor and xnor). Table II shows the truth table of the two input xor and xnor functions along with the encodings used for inputs and outputs for our implementations. The schematic of the circuit is shown in Fig. 9 and Fig. 10 shows the training curve obtained from a MATLAB-SPICE simulation.”, i.e. the training curve obtained from the MATLAB-SPICE simulation is the training curve for the two-input functions (xor and xnor) implemented in the memristor crossbar circuit);
generating analog output values for the input portion of the truth table based on simulated weights (see page 78, section A, “2) Apply an input pattern x to the crossbar circuit and evaluate the neuron outputs”, i.e. the memristor crossbar array computes in the analog domain using the analog synaptic weight values in a weighted-sum calculation; the input pattern x is applied to the crossbar circuit and the neuron output is evaluated);
calculating a loss value from each of the analog output values and expected values for an output portion of the truth table (see page 78, section A, step 3 [equation image: media_image1.png], i.e. δj, the error between the target output (Dj) and the inverter output (Fj), is calculated for each neuron j);
adjusting the simulated weights based on the calculated loss values (see page 78, section A, steps 4 and 5 [equation image: media_image2.png], i.e. Δwj,i = η × sign(δj) × sign(xj,i) is determined (η is the learning rate), indicating how the conductance of each memristor is to be changed; write pulses with widths proportional to Δwj,i are then applied to the crossbar to update the memristor conductances, Δwj,i being the change in the synaptic weight value; steps 2 to 5 are repeated for each input pattern x until convergence);
and refeeding the input portion of the predetermined truth table into the computer simulation and recalculating the output values using the adjusted simulated weights until the analog output values produce the expected values for the output portion of the truth table (see page 78, section A, steps 2 to 5, repeated until convergence [equation image: media_image3.png]).
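The cited training procedure (Hasan, page 78, section A, steps 2 to 5) can be illustrated in software. The sketch below is not the reference's MATLAB-SPICE implementation; the ±1 encoding, the polynomial x1·x2 feature, and the learning rate are illustrative assumptions consistent with Hasan's polynomial-classifier description of the xor training:

```python
import numpy as np

# Truth table for XOR with +/-1 encoding (cf. Hasan's Table II); the
# polynomial x1*x2 feature makes XOR separable (polynomial classifier).
patterns = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
features = np.array([[1.0, x1, x2, x1 * x2] for x1, x2 in patterns])
targets = np.array([-x1 * x2 for x1, x2 in patterns], dtype=float)  # XOR

w = np.zeros(4)   # simulated memristor weights
eta = 0.5         # learning rate (write-pulse scale), assumed value

def neuron_out(f, w):
    # Step 2: apply input pattern, evaluate neuron output (hard threshold)
    return 1.0 if f @ w >= 0 else -1.0

for epoch in range(100):
    converged = True
    for f, d in zip(features, targets):
        y = neuron_out(f, w)
        delta = d - y                        # Step 3: error vs. target output
        if delta != 0:
            converged = False
            # Steps 4-5: delta_w = eta * sign(delta) * sign(x), applied as pulse
            w += eta * np.sign(delta) * np.sign(f)
    if converged:                            # refeed truth table until it matches
        break

print([neuron_out(f, w) for f in features])  # → [-1.0, 1.0, 1.0, -1.0]
```

The sign-based update mirrors the cited rule Δwj,i = η·sign(δj)·sign(xj,i), and the outer loop refeeds the truth table until every output matches the expected output column.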
Hasan does not specifically teach generating analog output values for the input portion of the truth table based on simulated weights by applying a rectified linear unit to summed output values of memory devices of the crossbar array, and a predefined margin of error.
Ge teaches generating analog output values for the input portion of the truth table based on simulated weights by applying a rectified linear unit to summed output values of memory devices of the crossbar array (see ¶ abstract, “Technologies relating to analog-to-analog quantizers with an intrinsic Rectified Linear Unit (ReLU) function designed for in-memory computing”, also see ¶ 37, “As shown in FIG. 1, the crossbar array circuit 101 includes one or more of bit lines (e.g., a bit line 111), one or more word lines (e.g., a word line 113), and one or more cross-point devices (e.g., a 1T1R cell 115) connected between the bit lines and the word lines. The crossbar array circuit 101 may be implemented to perform in-memory computations and may therefore be also referred to as an In-Memory Computing crossbar (IMC crossbar).”, also see ¶ 44, “The analog input signal may then be computed or programmed via the first IMC crossbar 3031 to produce an analog signal.”, also ¶ 34, “in neural network applications, a ReLU function is needed to generate and transmit signal from one crossbar array circuit to the next crossbar array circuit (e.g., from one layer of a neural network to the next layer of the neural network). By using analog-to-analog quantizers with buffers, a ReLU function is intrinsically implemented without additional hardware, because a voltage is set and bounded between Vref and Gnd.”).
Both Hasan and Ge pertain to the problem of crossbar-array neural networks and are thus analogous art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Hasan and Ge to teach the above limitations. The motivation for doing so would be that “in neural network applications, a ReLU function is needed to generate and transmit signal from one crossbar array circuit to the next crossbar array circuit (e.g., from one layer of a neural network to the next layer of the neural network). By using analog-to-analog quantizers with buffers, a ReLU function is intrinsically implemented without additional hardware, because a voltage is set and bounded between Vref and Gnd.” (see Ge ¶ 34).
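Ge's intrinsically implemented ReLU (¶ 34) amounts to bounding each summed column output between ground and Vref. A minimal sketch, assuming a simple voltage-times-conductance crossbar model; the function name and all values are illustrative, not from the reference:

```python
import numpy as np

def crossbar_relu(v_in, G, v_ref=1.0):
    """Sum each output line's contributions (v_in @ G), then clamp the
    result between 0 and v_ref, mimicking a quantizer whose output is
    bounded between Gnd and Vref (intrinsic ReLU)."""
    summed = v_in @ G                    # analog weighted sum per output line
    return np.clip(summed, 0.0, v_ref)   # lower bound at Gnd acts as ReLU

v_in = np.array([0.3, -0.2, 0.5])        # example input voltages
G = np.array([[ 0.8, -0.5],
              [ 0.4, -0.9],
              [-0.6,  0.2]])             # simulated conductance weights
print(crossbar_relu(v_in, G))            # first column sum is negative -> 0
```

The clamp replaces any separate activation hardware: the negative first-column sum is cut to zero, while the positive second-column sum passes through unchanged.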
Hasan and Ge do not specifically teach a predefined margin of error.
Nguyen teaches a predefined margin of error (see page 15, “the memristor crossbar can perform VMM operation based on the analog current-voltage relationship of memristors.”, also see page 17, “The proposed correction circuit was verified to be able to restore both the source voltage and the output voltage degradation due to the nonideal effects. For the source voltage, the average percentage error of the uncompensated crossbar is as large as 36.7%. If the correction circuit is used, the percentage error in the source voltage can be reduced from 36.7% to 7.5%. For the output voltage, the average percentage error of the uncompensated crossbar is as large as 65.2%. The correction circuit can improve the percentage error in the output voltage from 65.2% to 8.6%. Almost the percentage error can be reduced to ~1/7 if the correction circuit is used.”).
Hasan, Ge and Nguyen pertain to the problem of memristor-crossbar neural networks and are thus analogous art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Hasan, Ge and Nguyen to teach the above limitations. The motivation for doing so would be that “a memristor-CMOS hybrid neuron circuit is proposed for compensating the parasitic-resistance-related nonideal effects during not the training phase but the inference one, where the complicated adaptive training is not needed. Moreover, unlike the previous linear correction method performed by the external hardware, the proposed correction circuit can be included in the memristor crossbar to minimize the power and hardware overheads for compensating the nonideal effects. The proposed correction circuit has been verified to be able to restore the degradation of source and output voltages in the nonideal crossbar. For the source voltage, the average percentage error of the uncompensated crossbar is as large as 36.7%. If the correction circuit is used, the percentage error in the source voltage can be reduced from 36.7% to 7.5%. For the output voltage, the average percentage error of the uncompensated crossbar is as large as 65.2%. The correction circuit can improve the percentage error in the output voltage from 65.2% to 8.6%. Almost the percentage error can be reduced to ~1/7 if the correction circuit is used.” (see Nguyen abstract).
Regarding claim 2.
Hasan, Ge and Nguyen teach the computer method of claim 1,
Nguyen further teaches wherein each simulated analog output value includes an error that is less than 49% of the drain voltage (Vdd) (see page 15, “the memristor crossbar can perform VMM operation based on the analog current-voltage relationship of memristors.”, also see page 17, “The proposed correction circuit was verified to be able to restore both the source voltage and the output voltage degradation due to the nonideal effects. For the source voltage, the average percentage error of the uncompensated crossbar is as large as 36.7%. If the correction circuit is used, the percentage error in the source voltage can be reduced from 36.7% to 7.5%. For the output voltage, the average percentage error of the uncompensated crossbar is as large as 65.2%. The correction circuit can improve the percentage error in the output voltage from 65.2% to 8.6%. Almost the percentage error can be reduced to ~1/7 if the correction circuit is used.”).
The motivation utilized in the combination of claim 1, supra, applies equally to claim 2.
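The claim-2 limitation (error less than 49% of the drain voltage Vdd) reduces to a simple threshold test. A hypothetical sketch; the function name, the Vdd value, and the sample voltages are assumptions, with Nguyen's reported 8.6% and 65.2% average output-voltage errors used only as example error magnitudes:

```python
def within_margin(v_out, v_expected, vdd, margin=0.49):
    """True if the analog output's error stays below the stated
    fraction of the drain voltage Vdd (49% in claim 2)."""
    return abs(v_out - v_expected) < margin * vdd

vdd = 1.0  # assumed supply voltage
print(within_margin(0.914, 1.0, vdd))   # 8.6% error  -> True
print(within_margin(0.348, 1.0, vdd))   # 65.2% error -> False
```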
Regarding claim 3.
Hasan, Ge and Nguyen teach the computer method of claim 2,
Hasan further teaches wherein the predefined margin of error is 0% (see page 79, table 2, and figure 10, “We have utilized two polynomial classifiers for training two non-linearly separable two input functions (xor and xnor). Table II shows the truth table of the two input xor and xnor functions along with the encodings used for inputs and outputs for our implementations. The schematic of the circuit is shown in Fig. 9 and Fig. 10 shows the training curve obtained from a MATLAB-SPICE simulation.”, i.e. the training curve in figure 10 shows the error converging to a predefined margin of error of 0%).
Regarding claim 5.
Hasan, Ge and Nguyen teach the computer method of claim 4,
Hasan further teaches further comprising programming one or more crossbar arrays with the adjusted simulated weights, wherein the programmed one or more crossbar arrays mimic a field programmable gate array (FPGA) (see page 75, “Memristor crossbar based Boolean function implementations could potentially replace SRAM based lookup tables in FPGA (Field programmable Gate Array).”, also see page 76, “Each neural block contains a memristor crossbar array and CMOS learning cells to program the memristive weights”).
Regarding claim 6.
Hasan, Ge and Nguyen teach the computer method of claim 5,
Hasan further teaches further comprising resetting the weights of the one or more crossbar arrays (see page 78, section A, “Initialize the memristors with high random resistances.”, i.e. the claim does not specify when the resetting occurs; therefore, initializing the memristors before each training run resets the learned states, including previously learned weights).
Regarding claim 7.
Hasan, Ge and Nguyen teach the computer method of claim 6,
Hasan further teaches further comprising reprogramming the one or more crossbar arrays with different weights to mimic a different field programmable gate array (FPGA) (see page 75, “Memristor crossbar based Boolean function implementations could potentially replace SRAM based lookup tables in FPGA (Field programmable Gate Array).”, also see page 76, “Each neural block contains a memristor crossbar array and CMOS learning cells to program the memristive weights”, also see page 79, figure 8, network for learning the two input xor and xnor functions).
Claims 8-10 and 12-13 recite a computer program product to perform the methods recited in claims 1-3 and 5-7. Therefore the rejections of claims 1-3 and 5-7 above apply equally here.
Claim 14 recites a computer system to perform the method recited in claim 1. Therefore the rejection of claim 1 above applies equally here.
Regarding claim 15.
Hasan, Ge and Nguyen teach the computer system of claim 14,
Hasan further teaches further comprising a digital logic truth table stored in the computer memory as a training dataset (see page 79, table 2, and figure 10, “We have utilized two polynomial classifiers for training two non-linearly separable two input functions (xor and xnor). Table II shows the truth table of the two input xor and xnor functions along with the encodings used for inputs and outputs for our implementations. The schematic of the circuit is shown in Fig. 9 and Fig. 10 shows the training curve obtained from a MATLAB-SPICE simulation.”).
Claims 16 and 18 recite a computer system to perform the methods recited in claims 2 and 3. Therefore the rejections of claims 2 and 3 above apply equally here.
Regarding claim 19.
Hasan, Ge and Nguyen teach the computer system of claim 18,
Hasan further teaches further comprising one or more crossbar arrays and an inverter at each output of the one or more crossbar arrays that produces a digital one or zero output from a noisy input signal (see page 76, “Each input signal and its complemented signal (same magnitude but opposite polarity) are applied to a column of memristors and at the end of the column a CMOS inverter is connected to evaluate the neuron output. Two memristors, one connected to the input signal and another connected to a corresponding inverted signal, represent a single synaptic weight of positive or negative value. If the conductance of the memristor connected with the uninverted input signal is greater than the conductance of the memristor connected with the corresponding inverted signal then that pair of memristors represent positive weight and otherwise they represent a negative weight.”).
Regarding claim 20.
Hasan, Ge and Nguyen teach the computer system of claim 19,
Hasan further teaches wherein the one or more crossbar arrays are configured to be reset and reprogrammed by applying a voltage pulse (see page 75, “Just as chemical pulses alter synaptic weights in brain tissue, voltage pulses can be applied to memristors to alter their conductivity”, also see page 78, “5) Apply write pulses to the crossbar with pulse widths proportional to Δwj,i to update the memristor conductances.”).
Claim(s) 4, 11 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hasan et al. (“Memristor Crossbar Based Low Cost Classifiers and Their Applications”, ©2014 IEEE) in view of Ge et al. (US 20210266000 A1), in view of Nguyen et al. (“Memristor-CMOS Hybrid Neuron Circuit with Nonideal-Effect Correction Related to Parasitic Resistance for Binary-Memristor-Crossbar Neural Networks”, Micromachines 2021, 12, 791), and further in view of Soudry et al. (“Memristor-Based Multilayer Neural Networks With Online Gradient Descent Training”, IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 26, NO. 10, OCTOBER 2015).
Regarding claim 4.
Hasan, Ge and Nguyen teach the computer method of claim 3,
Hasan, Ge and Nguyen do not teach the additional limitation of claim 4.
Soudry teaches wherein the loss value is calculated using a mean square error (MSE) loss function (see page 2409, “the second part is focused on a simple example of the adaline algorithm—a linear SNN trained using mean square error (MSE).”, also see page 2410 [equation image: media_image4.png]).
Hasan, Ge, Nguyen and Soudry pertain to the problem of memristor-crossbar neural networks and are thus analogous art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Hasan, Ge, Nguyen and Soudry to teach the above limitations. The motivation for doing so would be that “a method for performing these update operations simultaneously (incremental outer products) using memristor-based arrays is proposed. The method is based on the fact that, approximately, given a voltage pulse, the conductivity of a memristor will increment proportionally to the pulse duration multiplied by the pulse magnitude if the increment is sufficiently small. The proposed method uses a synaptic circuit composed of a small number of components per synapse: one memristor and two CMOS transistors. This circuit is expected to consume between 2% and 8% of the area and static power of previous CMOS-only hardware alternatives. Such a circuit can compactly implement hardware MNNs trainable by scalable algorithms based on online gradient descent (e.g., backpropagation). The utility and robustness of the proposed memristor-based circuit are demonstrated on standard supervised learning tasks” (see Soudry abstract).
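Soudry's cited adaline example trains a linear neuron by online gradient descent on the MSE loss. A minimal numpy sketch under assumed toy data; the learning rate, data, and random seed are illustrative, not taken from the reference:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2,))   # simulated memristive weights
eta = 0.1                              # learning rate, assumed value

# toy noiseless data: targets are a fixed linear map of the inputs
X = rng.normal(size=(200, 2))
W_true = np.array([0.7, -0.3])
D = X @ W_true

for x, d in zip(X, D):
    y = x @ W                  # linear neuron output
    W += eta * (d - y) * x     # online gradient step on the MSE loss

mse = np.mean((X @ W - D) ** 2)   # mean square error after training
print(mse)                        # small: weights converge toward W_true
```

Each update is the negative gradient of the per-sample squared error (d − y)², which is the incremental outer-product form that Soudry maps onto memristor conductance updates.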
Claim 11 recites a computer program product to perform the method recited in claim 4. Therefore the rejection of claim 4 above applies equally here.
Claim 17 recites a computer system to perform the method recited in claim 4. Therefore the rejection of claim 4 above applies equally here.
Related prior art:
Wang et al. (US 20210326114 A1) teaches a method adapted for a processor to perform MAC operations on a memory. In the method, a format of binary data of weights is transformed from a floating-point format into a quantized format by truncating at least a portion of the fraction bits of the binary data and calculating complements of the remaining bits, and the transformed binary data is programmed into cells of the memory. A tuning procedure is performed by iteratively inputting binary data of input signals into the memory, integrating outputs of the memory, and adjusting the weights programmed to the cells based on the integrated outputs.
Kvatinsky et al. (US 20150256178 A1) teaches a pure memristive logic gate, wherein the pure memristive logic gate consists essentially of at least one input memristive device and an output memristive device that is coupled to and differs from the at least one input memristive device; wherein the pure memristive logic gate is controlled by a single control voltage.
Wu et al. (US 20150170025 A1) teaches iterative training of memristor crossbar arrays for neural networks by applying voltages corresponding to selected training patterns. Error is detected and measured as a function of the actual response to the training patterns versus the expected response to the training pattern.
Kiani et al. (“A fully hardware-based memristive multilayer neural network”, Sci. Adv. 7, eabj4801 (2021), 24 November 2021) teaches a compact multi-channel rectified linear unit (ReLU) using off-the-shelf analog components. “We further built a two-layer fully hardware-based perceptron with 64 ReLUs as the hidden neurons that connect two 128 × 64 memristive crossbar arrays…Our two-layer perceptron is composed of two memristive crossbar arrays representing the matrices of synaptic weights of each layer and the ReLUs as the activation functions in between (Fig. 1A).”
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IMAD M KASSIM whose telephone number is (571)272-2958. The examiner can normally be reached 10:30AM-5:30PM, M-F (E.S.T.).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley can be reached at (303) 297 - 4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IMAD KASSIM/Primary Examiner, Art Unit 2129