Prosecution Insights
Last updated: April 19, 2026
Application No. 17/049,349

TRAINING METHOD OF NEURAL NETWORK BASED ON MEMRISTOR AND TRAINING DEVICE THEREOF

Final Rejection §103

Filed: Oct 21, 2020
Examiner: SMITH, KEVIN LEE
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: Tsinghua University
OA Round: 6 (Final)
Grant Probability: 37% (At Risk)
OA Rounds: 7-8
Time to Grant: 4y 8m
With Interview: 55%

Examiner Intelligence

Grants only 37% of cases: Career Allow Rate 37% (49 granted / 134 resolved; -18.4% vs TC avg)
Strong +18% interview lift: Interview Lift +18.0% among resolved cases with interview
Typical timeline: Avg Prosecution 4y 8m; 45 currently pending
Career history: 179 Total Applications across all art units

Statute-Specific Performance

§101: 30.7% (-9.3% vs TC avg)
§103: 36.4% (-3.6% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 17.3% (-22.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 134 resolved cases.
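The headline figures above fit together arithmetically. A minimal sketch of how they appear to be derived (assuming the dashboard uses simple ratios and additive deltas, which is not stated anywhere in the report):

```python
# Sketch of how the headline rates above appear to be derived.
# Assumption: rates are simple ratios of the career counts shown.
granted, resolved = 49, 134

career_allow_rate = granted / resolved      # 0.3657... -> displayed as 37%
tc_average = career_allow_rate + 0.184      # "-18.4% vs TC avg" implies TC avg ~55%
with_interview = career_allow_rate + 0.18   # "+18.0% interview lift" -> ~55%

print(f"Career allow rate: {career_allow_rate:.1%}")  # 36.6%, shown as 37%
print(f"Implied TC average: {tc_average:.1%}")        # 55.0%
print(f"With interview: {with_interview:.1%}")        # 54.6%, shown as 55%
```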

Office Action

§103
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Applicant's submission filed on 16 September 2025 [hereinafter Response] has been entered, where:
Claims 1, 4, 5, 13, 16, 20, and 22 have been amended.
Claims 2, 3, 11, 15, and 19 have been cancelled.
Claims 1, 4-10, 12-14, 16-18, and 20-22 are pending.
Claims 1, 4-10, 12-14, 16-18, and 20-22 are rejected.

Claim Rejections - 35 U.S.C. § 103

3. The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

5. This application currently names joint inventors. In considering patentability of the claims, the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention.

6. Claims 1, 6-10, 12-14, 16-18, 21, and 22 are rejected under 35 U.S.C. § 103 as being unpatentable over Zhang et al., "Memristive Quantized Neural Networks: A Novel Approach to Accelerate Deep Learning On-Chip," IEEE (May 2019) [hereinafter Zhang] in view of Merkel et al., "Comparison of Off-chip Training Methods for Neuromemristive Systems," IEEE (2015) [hereinafter Merkel] and US Published Application 201700761116 to Chen et al. [hereinafter Chen].

Regarding claims 1 and 13, Zhang teaches [a] training method for a neural network based on memristors (Zhang, right column of p. 1876, "I. Introduction," first partial paragraph, teaches [a] robust [memristive-quantized neural network (M-QNN)] approach . . . to accelerate and compress the training process of CNN models via the quantization of synaptic weights and kernels (that is, a training method for a neural network based on memristors)), wherein the neural network comprises a plurality of neuron layers connected one by one and weight parameters between the plurality of neuron layers (Zhang, right column of p. 1884, "B. MQ-MNN," first paragraph, teaches in [evaluating] the effectiveness of the presented [memristive-quantized multilayer neural network (MQ-MNN)] (there are two layers of memristive crossbar arrays in the hardware), an MQ-MNN composed of 30 input neurons, 10 hidden neurons, and 4 output neurons (that is, a plurality of neuron layers . . . and weight parameters between the plurality of neuron layers); Zhang, left column of p. 1882, "C. Image Recognition," first partial paragraph, teaches MQ-CNNs are fully connected to an MQ-NN for classification computations (192 inputs × 10 outputs) (that is, "fully connected" is a plurality of neuron layers connected one by one); Zhang, left column of p. 1880, "A. Image Processing," first partial paragraph, teaches [t]he binary memristors suffer less from the asymmetric conductance (or abrupt conductance decreasing), since sufficient pulses are used to update the memristive binary synaptic weights (that is, "memristive binary synaptic weights" are weight parameters between the plurality of neuron layers)) of claim 1, and [a] training device for a neural network based on memristors (Zhang, right column of p. 1876, "I. Introduction," first partial paragraph, teaches "[a] robust [memristive-quantized neural network (M-QNN)] approach . . . to accelerate and compress the training process of CNN models via the quantization of synaptic weights and kernels [(that is, a training device for a neural network based on memristors)]") of claim 13, and the training method comprises:

[during an off-chip training process,] training the weight parameters of the neural network to obtain weight parameters after being trained (Zhang, right column of p. 1876, "I. Introduction," first partial paragraph, teaches [a] robust [M-QNN] approach . . . to accelerate and compress the training process of CNN models via the quantization of synaptic weights and kernels (that is, training the weight parameters of the neural network to obtain weight parameters after being trained)), and programming a memristor array based on the weight parameters after being trained to write the weight parameters after being trained into the memristor array (Zhang, Fig. 1 & caption, teaches a binary memristive array (that is, memristor array) for neural networks: [image: Zhang Fig. 1]; Zhang, right column of p. 1877, "A. MQ-SNN," first paragraph, teaches [the] memristive neurocomputing system receives inputs V_O^(l-1) ∈ R^M and outputs V_O^l ∈ R^N. The size of the memristive crossbar in the hardware is M × N. Let l denote the layer of the memristive crossbar array in the hardware, L denote the total number of layers, and 1 denote the first layer. Memristive switches (MSs) are utilized to control the training and testing processes in memristive circuits; Zhang, left column of p. 1877, "II. Memristive Models," first paragraph, teaches that [d]uring the SET process (V(t) = VSET > VP > 0), the memristive memory cell is programmed to the [low resistance state (LRS)] (R(t) = RON) (that is, the "SET process" is programming a memristor array); Zhang, left column of p. 1878, "B. MQ-MNN," first paragraph, teaches MQ-SNNs can be expanded to MQ-MNNs. . . . A modified [back propagation (BP)] algorithm is applied as follows. . . . 2) Randomly apply the programming voltage VW to update the synaptic weights (that is, to write the weight parameters). Read out and record all random values of the initial memristances (R(t)) or conductances (G(t)) and synaptic weights (W_ji^l) (that is, programming . . . based on the weight parameters after being trained to write the weight parameters after being trained into the memristor array)); and

[during an on-chip training process,] updating a critical layer or several critical layers of the weight parameters of the neural network (Zhang, left column of p. 1876, "I. Introduction," second paragraph, teaches [t]he update accuracy of the memristive synaptic weights in training or learning processes (that is, a critical layer or several critical layers); Zhang, right column of p. 1877, "III. Memristive Quantized Neural Networks," first paragraph, teaches "M-QNNs are proposed at three different levels: 1) memristive-quantized single layer neural networks (MQ-SNNs)" (that is, "a single layer" is a last layer)) by adjusting conductance values of at least part of memristors of the memristor array to adjust the critical layer or several critical layers of the weight parameters after being trained (Zhang, left column of p. 1878, "A. MQ-SNN," first partial paragraph, teaches [a] modified feasible [back propagation (BP)] algorithm to reduce the error in [binary activation function] (6) is [equation (10)], where η is the learning rate. If K discrete iterations of inputs are processed in an MQ-SNN, where k = 1, 2, . . . , K, then during the kth iteration, (10) can be rewritten as [equation], where W_ji^l = R_on(G_s^n − G_ji^l) ∈ {−1, +1}, and G_ji (G_ji = 1/R_ji) is the memristive conductance at the ith row and the jth column (that is, conductance values of at least part of memristors of the memristor array); Zhang, right column of p. 1878, "B. MQ-MNN," first partial paragraph, recites [a modified BP algorithm is applied as follows] . . . 6) Determine ΔW_ji^l to ensure that the required memristive conductance is updated (that is, by adjusting conductance values of at least part of memristors of the memristor array to adjust the critical layer or several critical layers of the weight parameters after being trained); [Examiner notes the broadest reasonable interpretation of the limitation "updating only a last layer or several last layers" is updating a layer or layers of the neural network, which is consistent with the Specification. (MPEP § 2111). Accordingly, the broadest reasonable interpretation of this limitation covers the teachings of Zhang, which sets out an MQ-SNN, which is a memristive-quantized single layer neural network, which is a "single layer" or a "layer," and is thus "only a last layer." (see Zhang, right column of p. 1877, "III. Memristive Quantized Neural Networks," first paragraph, teaching "M-QNNs are proposed at three different levels: 1) memristive-quantized single layer neural networks (MQ-SNNs)")]);

* * * wherein [during the off-chip training process], training the weight parameters of the neural network to obtain the weight parameters after being trained (Zhang, right column of p. 1878, "B. MQ-MNN," first paragraph, is "6) . . . The value of [synaptic weights] ΔW_ji is obtained in the training process and the corresponding [conductances] ΔG_ji . . . is calculated with field-programmable gate array in a high precision method [(that is, training the weight parameters of the neural network to obtain the weight parameters after being trained)]"), and programming the memristor array based on the weight parameters after being trained to write the weight parameters after being trained into the memristor array (Zhang, left column of p. 1878, "B. MQ-MNN," first paragraph, teaches "2) Randomly apply the programming voltage VW to update the synaptic weights [(that is, programming the memristor array based on the weight parameters)]. Read out and record [(that is, "record" is to write)] all random values of the initial memristances (R(t)) or conductances (G(t)) and synaptic weights (W_ji^l) [(that is, to write the weight parameters after being trained into the memristor array)]"; see above Zhang, Fig. 1, regarding the memristor array), comprises:

during [the off-chip training process], according to a value range of conductance values of respective memristors in the memristor array (Zhang, right column of p. 1878, "B. MQ-MNN," first paragraph, teaches "6) Determine [a difference of synaptic weights] ΔW_ji^l to ensure that the required memristive conductance is updated [(that is, ΔW_ji^l represents a value range of conductance values, which is according to a value range of conductance values of respective memristors in the memristor array)]"), directly obtaining quantized weight parameters of the neural network (Zhang, right column of p. 1876, "I. Introduction," first partial paragraph, teaches highlights of this paper can be summarized as follows. 1) A robust M-QNN approach is proposed to accelerate and compress the training process of CNN models via the quantization of synaptic weights and kernels (that is, directly obtaining quantized weight parameters of the neural network); [Examiner notes that in the limitation "according to a value range of conductance values of respective memristors in the memristor array," the plain meaning of the claim term "according to" is that of a general attribution to "a value range of conductance values of respective memristors" in obtaining quantized weight parameters. The broadest reasonable interpretation of the claim term "according to a value range of conductance values of respective memristors in the memristor array" is that there is a value range of conductance values upon which there is "directly obtaining quantized weight parameters," which is not inconsistent with the Applicant's disclosure, (MPEP § 2111), and covers the teachings of Zhang]), and writing the quantized weight parameters into the memristor array (Zhang, right column of p. 1878, "B. MQ-MNN," first partial paragraph, teaches 6) . . . [t]he required time of voltage pulse for memristive synaptic weight adjustment is obtained with the lookup table (LUT) and the quantized values are stored (that is, "stored" is writing the quantized weight parameters into the memristor array) in the hardware with a quantized method using [equation (9), where W^l is a matrix of weights in layer l, W is a Z-bit synaptic weight, and W_z^b ∈ {−1, +1}]), wherein the weight parameters after being trained are the quantized weight parameters (Zhang, left column of p. 1876, "I. Introduction," first full paragraph, teaches "Neural networks can be quantized by reducing the bits of the synaptic weights using k-means scalar quantization [(that is, a "quantized neural network" is the weight parameters after being trained are the quantized weight parameters)]"); or during [the off-chip training process], training the weight parameters of the neural network to obtain the weight parameters after being trained, performing a quantization operation on the weight parameters after being trained based on a value range of conductance values of respective memristors in the memristor array to obtain quantized weight parameters, and writing the quantized weight parameters into the memristor array.

Though Zhang teaches methods supporting hardware-friendly algorithms in software for [deep neural networks] for a single-layer neural network, Zhang, however, does not explicitly teach that the software includes "an off-chip training process" in addition to the on-chip training process that it does teach. But Merkel teaches an off-chip training process and an on-chip training process for a neuromemristive system [Examiner annotations in dashed-line text boxes]: [image: Merkel Fig. 1] "An ideal model of the network is trained off-chip [(that is, off-chip training process)] using a software training algorithm (e.g. backpropagation, resilient backpropagation, Levenberg-Marquardt, genetic algorithms, etc.). Then, data from the trained model are used to train the on-chip [neuromemristive system (NMS)] [(that is, on-chip training process)]." (Merkel, right column of p. 99, "I. Introduction," first partial paragraph, and Fig. 1). Zhang and Merkel are from the same or similar field of endeavor. Zhang teaches on-chip training implemented in memristive multi-layer neural networks. Merkel teaches off-chip training and on-chip training methods for neuromemristive systems: weight programming and feature training. Thus, it would have been obvious to a person having ordinary skill in the art to modify Zhang's teaching of on-chip training of memristive arrays with the off-chip and on-chip training of Merkel. The motivation to do so is because "[t]he goal is to develop a robust training method such that the on-chip NMS and the ideal model respond identically to input data." (Merkel, right column of p. 99, "I. Introduction," first partial paragraph).

Though Zhang and Merkel teach on-chip and off-chip training of a model, the combination of Zhang and Merkel, however, does not explicitly teach – * * * wherein a data set used in the on-chip training process is a subset of a data set used in the off-chip training process; * * * But Chen teaches – * * * wherein a data set used in the on-chip training process is a subset of a data set used in the off-chip training process (Chen ¶ 0034 teaches "post-silicon training can be performed off-chip and off-line by monitoring signals during chip operation and routing the monitored signals off-chip to collect trace information which may be used to train a new SVM model [(that is, a data set used in the off-chip training process)]. The new SVM model may be loaded on-chip to replace the old model in a secure firmware update fashion. In other embodiments, post-silicon training can be performed on-chip by routing monitored signals to on-chip memory in batches [(that is, a data set used in the on-chip training process is a subset)]. Using a training engine that is executed as an application software on-chip, the monitored signals from on-chip memory are retrieved and applied as training data to update the SVM model, thereby enabling the SOC 500 to update the SVM model without requiring external firmware updates"); * * *

Zhang, Merkel, and Chen are from the same or similar field of endeavor. Zhang teaches on-chip training implemented in memristive multi-layer neural networks. Merkel teaches off-chip training and on-chip training methods for neuromemristive systems: weight programming and feature training. Chen teaches applying extracted verification data as a training dataset of feature vectors to a learning engine to build an SVM model on an on-chip and off-chip basis. Thus, it would have been obvious to a person having ordinary skill in the art to modify the combination of Zhang and Merkel, teaching on-chip and off-chip training of memristive arrays, with the on-chip training using data set batches of Chen. The motivation to do so is because "[u]sing a training engine that is executed as an application software on-chip, the monitored signals from on-chip memory are retrieved and applied as training data to update the SVM model, thereby enabling the SOC 500 to update the SVM model without requiring external firmware updates. With either off-chip or on-chip model updates, the product manufacturer can further refine the SVM model based on the trace generated by running their software." (Chen ¶ 0034).
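For orientation, the combined off-chip/on-chip flow that the rejection of claims 1 and 13 maps onto Zhang, Merkel, and Chen can be sketched in a few lines of Python/NumPy. This is an illustration of the general scheme only (train in software, quantize to the array's conductance range, program the array, then fine-tune only the last layer on-chip on a subset of the off-chip data); the function names, the linear quantization rule, and the conductance constants are assumptions, not code from any cited reference. The 30-10-4 layer sizes echo the MQ-MNN quoted from Zhang above.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_to_conductance_range(w, g_min=1e-6, g_max=1e-4, levels=4):
    """Map trained weights onto `levels` discrete conductance values within
    the memristors' value range [g_min, g_max] (uniform rule assumed)."""
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / (levels - 1)
    idx = np.round((w - w_min) / step)                  # nearest quantization level
    return g_min + idx * (g_max - g_min) / (levels - 1)

# Off-chip: train the full network in software (training loop elided), then
# quantize each layer's weights and "program" them into simulated arrays.
weights = [rng.standard_normal((30, 10)), rng.standard_normal((10, 4))]
arrays = [quantize_to_conductance_range(w) for w in weights]

# On-chip: update only the last layer's conductances, using a subset of the
# off-chip data set (per the claim, the on-chip set is a subset of the off-chip set).
full_set = rng.standard_normal((100, 30))
on_chip_subset = full_set[:10]   # subset of the off-chip data
# ... forward/reverse calculation on `on_chip_subset` would adjust arrays[-1] only.
```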
Regarding claim 6, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 1, as described above in detail. Zhang teaches – wherein during the on-chip training process, updating the critical layer or several critical layers of the weight parameters of the neural network by adjusting the conductance values of the at least part of memristors of the memristor array to adjust the critical layer or several critical layers of the weight parameters after being trained, comprises: training the memristor array through a forward calculation operation and a reverse calculation operation (Zhang, left column of p. 1879, "C. MQ-CNN," first & second paragraphs, teaches [s]ince CNNs are more effective in solving image-recognition problems, compared with traditional MNNs, MQ-CNNs are proposed to accelerate computation and compress memory during the learning or training processes (that is, training the memristor array). Memristive synaptic weights W and kernels K are utilized in MQ-CNNs. The memristive arrays can be extended to memristive deep CNNs (MD-CNNs). . . . Compared with the processes of forward and backward propagations, higher precision is required for the parameters during the updates (that is, training . . . through a forward calculation operation and a reverse calculation operation)); and applying a forward voltage or a reverse voltage to the at least part of memristors of the memristor array based on a result of the forward calculation operation (Zhang, left column of p. 1879, "B. MQ-MNN," second paragraph, teaches [s]mall changes ΔW in the parameters W with higher precision accumulate and a few bits of memory bandwidth are spared during the [forward propagation (FP)] (that is, the forward calculation operation)) and a result of the reverse calculation operation (Zhang, left column of p. 1879, "B. MQ-MNN," first paragraph, teaches [a] modified [back propagation] algorithm (that is, the reverse calculation operation)) to update the conductance values of the at least part of memristors of the memristor array (Zhang, left column of p. 1877, "II. Memristive Models," second paragraph, teaches synaptic devices with multilevel states can use robust weight updating rules (that is, to update the conductance values of the at least part of memristors of the memristor array) for some simple neural networks. If only two different memristive states are required, the memristive model can be simplified to a binary model [equation], where V(t) is applied to the memristors with sufficient time; and VN and VP are, respectively, the negative and positive threshold voltages (that is, applying a forward voltage or a reverse voltage). During the SET process (V(t) = VSET > VP > 0), the memristive memory cell is programmed to the [low resistance state (LRS)] (R(t) = RON). On the contrary, the RESET process (V(t) = VRES < VN < 0) ultimately returns the memristive cell to the [high resistance state (HRS)] (R(t) = ROFF). Note that if VN ≤ V(t) ≤ VP, R(t) remains the same (R(t) = RON or ROFF)).

Regarding claim 7, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 6, as described above in detail. Zhang teaches – wherein the reverse calculation operation is performed only on the at least part of memristors of the memristor array (Zhang, Fig. 4 and caption, teaches a [p]roposed method to solve the sneak path issue in memristive crossbar arrays: [image: Zhang Fig. 4]; Zhang, left column of p. 1880, "B. Sneak Path Issue," first paragraph, teaches that [p]rotect voltages are applied to memristive crossbar (1M) arrays on different rows and columns without transistors in the architecture (that is, "the protect voltages" are so that the reverse calculation operation is performed only on the at least part of the memristors of the memristor array)).

Regarding claim 8, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 6, as described above in detail. Zhang teaches – wherein the memristor array comprises memristors arranged in an array with a plurality of rows and a plurality of columns, and training the memristor array through the forward calculation operation and the reverse calculation operation comprises: performing the forward calculation operation and the reverse calculation operation on the memristors (Zhang, left column of p. 1879, "C. MQ-CNN," first & second paragraphs, teaches [c]ompared with the processes of forward and backward propagations, higher precision is required for the parameters during the updates (that is, training . . . through a forward calculation operation and a reverse calculation operation)), which are arranged in the plurality of rows and the plurality of columns, of the memristor array row by row or column by column or in parallel as a whole (Zhang, Fig. 1, teaches a memristive array (Examiner annotations in dashed-line text boxes): [image: Zhang Fig. 1]; Zhang, right column of p. 1878, "A. MQ-SNN," first paragraph, teaches [t]he size of the memristive crossbar in the hardware is M × N (that is, performing the forward calculation operation and the reverse calculation operation on the memristors, which are arranged in the plurality of rows and the plurality of columns, of the memristor array row by row or column by column or in parallel as a whole)).

Regarding claim 9, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 6, as described above in detail. Zhang teaches – wherein weight parameters corresponding to the at least part of memristors of the memristor array are updated row by row or column by column (Zhang, Fig. 4 & caption, teaches updating a memristive crossbar array (Examiner annotations in dashed-line text boxes): [image: Zhang Fig. 4]; Zhang, left column of p. 1880, "B. Sneak Path Issue," last partial paragraph, teaches [t]ake a 3 × 4 memristive array as an example (see Fig. 4). Assume only selected memristors M22 and M23 change and the unselected memristors remain the same (that is, updated row by row)).

Regarding claim 10, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 6, as described above in detail. Zhang teaches – wherein the forward calculation operation and the reverse calculation operation use only part of training set data to train the memristor array (Zhang, Fig. 6 & caption, teaches [i]mages of MNIST handwritten digits (with a training set of 60,000 examples and a testing set of 10,000 examples); Zhang, right column of p. 1882, "V. Experiments," first paragraph, teaches [w]hen ten different samples in the MNIST are selected for training, the advantage of the 2-bit quantized SNNs is not obvious. The training speed of the 2-bit quantized SNNs is greatly increased compared with 64-bit (8 B) SNNs, when dealing with more complex tasks, such as 100 samples (that is, only part of training set data) in the MNIST images (that is, the forward calculation operation and the reverse calculation operation use only part of training set data to train the memristor array)).

Regarding claim 12, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 1, as described above in detail. Zhang teaches – further comprising: by the memristor array, outputting an output result of the neural network based on the weight parameters that are updated (Zhang, Fig. 1, teaches a memristor array (Examiner annotations in dashed-line text boxes): [image: Zhang Fig. 1]; Zhang, left column of p. 1877, "A. MQ-SNN," first paragraph, teaches assume that a memristive neurocomputing system receives input V_O^(l-1) ∈ R^M and outputs V_O^l ∈ R^N; Zhang, left column of p. 1878, "B. MQ-MNN," first paragraph, teaches [a] modified BP algorithm is applied as follows. . . . 2) Randomly apply the programming voltage VW to update the synaptic weights. Read out and record all random values of the initial memristances (R(t)) or conductances (G(t)) and synaptic weights (W_ji^l). 3) Apply the input image voltages to the M-NN circuits and evaluate the values of hidden and output neurons. 4) Determine the error ΔV^(L-1) by processing the error between the output V_O^L and the target output V_T^L of the neurons at the output layer (that is, by the memristor array, outputting an output result of the neural network based on the weight parameters that are updated)).
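The binary memristive model quoted from Zhang in the claim 6 rejection reduces to a simple threshold rule: a voltage above V_P SETs the cell to the low-resistance state R_ON, a voltage below V_N RESETs it to the high-resistance state R_OFF, and anything in between leaves the state unchanged. A minimal sketch of that rule (the resistance and threshold constants are illustrative assumptions, not values from Zhang):

```python
R_ON, R_OFF = 1e3, 1e6   # low/high resistance states in ohms (illustrative)
V_P, V_N = 1.0, -1.0     # positive/negative threshold voltages (assumed)

def apply_pulse(r, v):
    """Binary memristive model per the quoted Zhang passage:
    SET (v > V_P) programs the LRS, RESET (v < V_N) programs the HRS,
    and V_N <= v <= V_P leaves the resistance unchanged."""
    if v > V_P:
        return R_ON    # SET process: R(t) = R_ON
    if v < V_N:
        return R_OFF   # RESET process: R(t) = R_OFF
    return r           # within thresholds: state retained

assert apply_pulse(R_OFF, 1.5) == R_ON   # forward pulse SETs the cell
assert apply_pulse(R_ON, -1.5) == R_OFF  # reverse pulse RESETs the cell
assert apply_pulse(R_ON, 0.5) == R_ON    # sub-threshold read leaves it alone
```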
Regarding claim 14, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 13, as described above in detail. Zhang teaches – wherein the off-chip training unit comprises an input unit (Zhang, right column of p. 1877, "A. MQ-SNN," first paragraph, teaches memristive neurocomputing system receives inputs V_O^(l-1) ∈ R^M (that is, an input unit)) and a read-write unit (Zhang, right column of p. 1877, "A. MQ-SNN," first paragraph, teaches memristive neurocomputing system outputs V_O^l ∈ R^N (that is, a read-write unit)), and [the on-chip training unit] comprises a calculation unit (Zhang, right column of p. 1878, "B. MQ-MNN," second paragraph, teaches [t]he circuit implementation (that is, on-chip training unit) of the BP algorithm for the MNNs (that is, a calculation unit)), an update unit, and an output unit (Zhang, Fig. 1, teaches a memristive array [(that is, on-chip training unit)] for neural networks (Examiner annotations in dashed-line text boxes): [image: Zhang Fig. 1]. In Fig. 1, resistances Rs and Rf serve as an update unit, and differential amps A_l are an output unit); the input unit is configured to input the weight parameters after being trained (Zhang, right column of p. 1878, "B. MQ-MNN," step 6), teaches [t]he required time of voltage pulse for memristive synaptic weight adjustment (that is, being trained) is obtained with the lookup table (LUT) (that is, input the weight parameters after being trained) and the quantized values are stored in the hardware with a quantized method (that is, the input unit is configured to input the weight parameters after being trained)); the read-write unit is configured to write the weight parameters after being trained into the memristor array (Zhang, right column of p. 1879, "A. Image Processing," second paragraph, teaches [t]o read the value stored in a memristor, a positive read-voltage pulse with a small magnitude (0 < VR = VH < VP) and appropriate duration is used (that is, the read-write unit is configured to write the weight parameters after being trained into the memristor array)); the calculation unit is configured to train the memristor array through a forward calculation operation and a reverse calculation operation (Zhang, left column of p. 1879, "C. MQ-CNN," first & second paragraphs, teaches [s]ince CNNs are more effective in solving image-recognition problems, compared with traditional MNNs, MQ-CNNs are proposed to accelerate computation and compress memory during the learning or training processes (that is, training the memristor array). Memristive synaptic weights W and kernels K are utilized in MQ-CNNs. The memristive arrays can be extended to memristive deep CNNs (MD-CNNs). . . . Compared with the processes of forward and backward propagations, higher precision is required for the parameters during the updates (that is, training . . . through a forward calculation operation and a reverse calculation operation)); the update unit is configured to apply a forward voltage or a reverse voltage to the at least part of memristors of the memristor array based on a result of the forward calculation operation (Zhang, left column of p. 1879, "B. MQ-MNN," second paragraph, teaches [s]mall changes ΔW in the parameters W with higher precision accumulate and a few bits of memory bandwidth are spared during the [forward propagation (FP)] (that is, the forward calculation operation)) and a result of the reverse calculation operation (Zhang, left column of p. 1879, "B. MQ-MNN," first paragraph, teaches [a] modified [back propagation] algorithm (that is, the reverse calculation operation)) to update weight parameters corresponding to the at least part of memristors of the memristor array (Zhang, left column of p. 1877, "II. Memristive Models," second paragraph, teaches synaptic devices with multilevel states can use robust weight updating rules (that is, to update the conductance values of the at least part of memristors of the memristor array) for some simple neural networks. If only two different memristive states are required, the memristive model can be simplified to a binary model [equation], where V(t) is applied to the memristors with sufficient time; and VN and VP are, respectively, the negative and positive threshold voltages (that is, applying a forward voltage or a reverse voltage). During the SET process (V(t) = VSET > VP > 0), the memristive memory cell is programmed to the [low resistance state (LRS)] (R(t) = RON). On the contrary, the RESET process (V(t) = VRES < VN < 0) ultimately returns the memristive cell to the [high resistance state (HRS)] (R(t) = ROFF). Note that if VN ≤ V(t) ≤ VP, R(t) remains the same (R(t) = RON or ROFF)); and the output unit is configured to calculate an output result of the neural network based on the weight parameters that are updated (Zhang, Fig. 1, teaches a memristor array (Examiner annotations in dashed-line text boxes): [image: Zhang Fig. 1]; Zhang, left column of p. 1877, "A. MQ-SNN," first paragraph, teaches assume that a memristive neurocomputing system receives input V_O^(l-1) ∈ R^M and outputs V_O^l ∈ R^N; Zhang, left column of p. 1878, "B. MQ-MNN," first paragraph, teaches [a] modified BP algorithm is applied as follows. . . . 2) Randomly apply the programming voltage VW to update the synaptic weights. Read out and record all random values of the initial memristances (R(t)) or conductances (G(t)) and synaptic weights (W_ji^l). 3) Apply the input image voltages to the M-NN circuits and evaluate the values of hidden and output neurons. 4) Determine the error ΔV^(L-1) by processing the error between the output V_O^L and the target output V_T^L of the neurons at the output layer (that is, the output unit is configured to calculate an output result of the neural network based on the weight parameters that are updated)).

Regarding claim 16, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 14, as described above in detail. Zhang teaches – wherein the calculation unit is configured to perform the reverse calculation operation only on at least part of memristors of the memristor array (Zhang, Fig. 4 and caption, teaches a [p]roposed method to solve the sneak path issue in memristive crossbar arrays: [image: Zhang Fig. 4]; Zhang, left column of p. 1880, "B. Sneak Path Issue," first paragraph, teaches that [p]rotect voltages are applied to memristive crossbar (1M) arrays on different rows and columns without transistors in the architecture (that is, "the protect voltages" are so that the reverse calculation operation is performed only on the at least part of the memristors of the memristor array)).
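The unit mapping in the claim 14 rejection rests on the standard crossbar computation quoted from Zhang: input voltages V_O^(l-1) drive the rows of an M × N array, each column current is the conductance-weighted sum of those voltages, and the differential amplifiers A_l convert the result to output voltages V_O^l. A schematic NumPy sketch of that forward pass follows; encoding signed weights as a pair of conductances is one common convention, assumed here rather than taken from Zhang, and all constants are illustrative.

```python
import numpy as np

def crossbar_forward(v_in, g_pos, g_neg):
    """Forward pass of an M x N memristive crossbar: column currents are
    conductance-weighted sums of the row voltages (Ohm's and Kirchhoff's
    laws); a differential pair of conductances encodes signed weights."""
    i_pos = v_in @ g_pos   # column currents from the "positive" array
    i_neg = v_in @ g_neg   # column currents from the "negative" array
    return i_pos - i_neg   # differential output (up to the amplifier gain)

rng = np.random.default_rng(1)
v_in = rng.uniform(0.0, 0.2, size=30)            # input voltages, V_O^(l-1) in R^M
g_pos = rng.uniform(1e-6, 1e-4, size=(30, 10))   # conductances in siemens
g_neg = rng.uniform(1e-6, 1e-4, size=(30, 10))
print(crossbar_forward(v_in, g_pos, g_neg))      # one value per output column (N = 10)
```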
Regarding claim 17, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 14, as described above in detail. Zhang teaches – wherein the memristor array comprises memristors arranged in an array with a plurality of rows and a plurality of columns (Zhang, Fig. 1, teaches a memristive array (Examiner annotations in dashed-line text boxes): [image: Zhang Fig. 1]; Zhang, right column of p. 1878, "A. MQ-SNN," first paragraph, teaches [t]he size of the memristive crossbar in the hardware is M × N (that is, memristors arranged in an array with a plurality of rows and a plurality of columns)), the calculation unit is configured to perform the forward calculation operation and the reverse calculation operation on the memristors (Zhang, left column of p. 1879, "C. MQ-CNN," first & second paragraphs, teaches [c]ompared with the processes of forward and backward propagations, higher precision is required for the parameters during the updates (that is, training . . . through a forward calculation operation and a reverse calculation operation)), which are arranged in the plurality of rows and the plurality of columns, of the memristor array row by row or column by column or in parallel as a whole (Zhang, Fig. 1, teaches a memristive array (Examiner annotations in dashed-line text boxes): [image: Zhang Fig. 1]; Zhang, right column of p. 1878, "A. MQ-SNN," first paragraph, teaches [t]he size of the memristive crossbar in the hardware is M × N (that is, performing the forward calculation operation and the reverse calculation operation on the memristors, which are arranged in the plurality of rows and the plurality of columns, of the memristor array row by row or column by column or in parallel as a whole)).

Regarding claim 18, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 14, as described above in detail. Zhang teaches – wherein the update unit is configured to update the weight parameters corresponding to the at least part of memristors of the memristor array row by row or column by column (Zhang, Fig. 4 & caption, teaches updating a memristive crossbar array (Examiner annotations in dashed-line text boxes): [image: Zhang Fig. 4]; Zhang, left column of p. 1880, "B. Sneak Path Issue," last partial paragraph, teaches [t]ake a 3 × 4 memristive array as an example (see Fig. 4). Assume only selected memristors M22 and M23 change and the unselected memristors remain the same (that is, updated row by row)).

Regarding claim 21, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 1, as described above in detail. Zhang teaches – wherein during the on-chip training process, updating the critical layer or several critical layers of the weight parameters of the neural network, comprises: updating a last layer or last several layers of the weight parameters in the neural network (Zhang, left column of p. 1876, "I. Introduction," second paragraph, teaches [t]he update accuracy of the memristive synaptic weights in training or learning processes (that is, updating only the weight parameters of the neural network); Zhang, right column of p. 1877, "III. Memristive Quantized Neural Networks," first paragraph, teaches "M-QNNs are proposed at three different levels: 1) memristive-quantized single layer neural networks (MQ-SNNs)" (that is, "a single layer" is a last layer); [Examiner notes the broadest reasonable interpretation of the limitation "updating only a last layer or several last layers" is updating a layer or layers of the neural network, which is not inconsistent with the Specification. (MPEP § 2111). Accordingly, the broadest reasonable interpretation of this limitation covers the teachings of Zhang, which sets out an MQ-SNN, which is a memristive-quantized single layer neural network, which is a "single layer" or a "layer," and is thus "a last layer." (see Zhang, right column of p. 1877, "III. Memristive Quantized Neural Networks," first paragraph, teaching "M-QNNs are proposed at three different levels: 1) memristive-quantized single layer neural networks (MQ-SNNs)")]).

Regarding claim 22, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 13, as described above in detail. Zhang teaches – wherein the on-chip training circuit is further configured to update a last layer or last several layers of the weight parameters in the neural network (Zhang, left column of p. 1876, "I. Introduction," second paragraph, teaches [t]he update accuracy of the memristive synaptic weights in training or learning processes (that is, configured to update a last layer or several layers of the weight parameters of the neural network); Zhang, right column of p. 1877, "III. Memristive Quantized Neural Networks," first paragraph, teaches "M-QNNs are proposed at three different levels: 1) memristive-quantized single layer neural networks (MQ-SNNs)" (that is, "a single layer" is a last layer); [Examiner notes the broadest reasonable interpretation of the limitation "updating a last layer or several last layers" is updating a layer or layers of the neural network, which is not inconsistent with the Specification. (MPEP § 2111). Accordingly, the broadest reasonable interpretation of this limitation covers the teachings of Zhang, which sets out an MQ-SNN, which is a memristive-quantized single layer neural network, which is a "single layer" or a "layer," and is thus "a last layer." (see Zhang, right column of p. 1877, "III. Memristive Quantized Neural Networks," first paragraph, teaching "M-QNNs are proposed at three different levels: 1) memristive-quantized single layer neural networks (MQ-SNNs)")]).

7. Claim 4 is rejected under 35 U.S.C. § 103 as being unpatentable over Zhang et al., "Memristive Quantized Neural Networks: A Novel Approach to Accelerate Deep Learning On-Chip," IEEE (May 2019) [hereinafter Zhang] in view of Merkel et al., "Comparison of Off-chip Training Methods for Neuromemristive Systems," IEEE (2015) [hereinafter Merkel], US Published Application 201700761116 to Chen et al. [hereinafter Chen], and Yan et al., "CELIA: A Device and Architecture Co-Design Framework for STT-MRAM-Based Deep Learning Acceleration," ICS (2018) [hereinafter Yan].

Regarding claim 4, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 1, as described above in detail. Though Zhang, Merkel, and Chen teach a quantized neural network, the combination of Zhang, Merkel, and Chen, however, does not explicitly teach – wherein the quantization operation comprises uniform quantization and non-uniform quantization.
But Yan teaches – wherein the quantization operation comprises uniform quantization (Yan, Fig. 5, teaches uniform data quantization: [image: Yan Fig. 5]) and non-uniform quantization (Yan, Fig. 8, teaches a non-uniform quantization: [image: Yan Fig. 8]; Yan, left column of p. 150, "1. Introduction," second paragraph, teaches non-uniform data quantization for synaptic weights; explaining, Yan, left column of p. 152, "3. Motivation and Overview," third paragraph, teaches [the] observation that the uniform quantization points used by the fixed-point number representation are not equally important. Intuitively, a quantization point that is close to zero has little impact on the model accuracy because of its small value; a quantization point that is far from zero also shows limited impact since only a very small amount of weights are quantized to it. In other words, the most important quantization points are not uniformly distributed. Consequently, we will propose a non-uniform data quantization scheme that assigns more quantization points to the weight range that is more important to the model inference, thus better characterizing the original NN model and minimizing the model accuracy loss due to the reduced bit width).

Zhang, Merkel, Chen, and Yan are from the same or similar field of endeavor. Zhang teaches a novel approach to accelerate on-chip learning systems using memristive quantized neural networks (M-QNNs). Merkel teaches two off-chip training methods for neuromemristive systems: weight programming and feature training. Chen teaches applying extracted verification data as a training dataset of feature vectors to a learning engine to build an SVM model on an on-chip and off-chip basis. Yan teaches that using emerging non-volatile memory (NVM)'s unique characteristics, including the crossbar array structure and gray-scale cell resistances, to perform neural network (NN) computation is a well-studied approach to accelerating deep learning tasks. Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the invention to modify the combination of Zhang, Merkel, and Chen pertaining to memristive quantized neural networks with the quantization of synaptic weights of Yan. The motivation to do so is to significantly mitigate model accuracy loss due to reduced data precision in a cohesive manner, constructing a comprehensive [spin-transfer torque magnetic RAM (STT-MRAM)] accelerator system for fast [neural network] computation with high energy efficiency and low cost. (Yan, left column of p. 150, "1. Introduction," second paragraph).
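Yan's point, as quoted in the claim 4 rejection above, is that uniform quantization spends levels where weights matter little, while a non-uniform scheme concentrates levels on the more important weight range. A minimal sketch contrasting the two; the quantile-based placement is only one possible way to make the grid non-uniform (Yan's actual scheme may differ), and the data are synthetic:

```python
import numpy as np

def quantize(w, points):
    """Snap each weight to its nearest quantization point."""
    points = np.asarray(points)
    return points[np.argmin(np.abs(w[:, None] - points[None, :]), axis=1)]

rng = np.random.default_rng(2)
w = rng.standard_normal(10_000) * 0.5   # synthetic "trained" weights

uniform_pts = np.linspace(w.min(), w.max(), 8)   # uniform: equally spaced levels
# Non-uniform: more points where the weight distribution is dense
# (quantile placement is an assumed heuristic, not Yan's exact method).
nonuniform_pts = np.quantile(w, np.linspace(0.05, 0.95, 8))

err_u = np.mean((w - quantize(w, uniform_pts)) ** 2)
err_n = np.mean((w - quantize(w, nonuniform_pts)) ** 2)
print(f"uniform MSE {err_u:.5f} vs non-uniform MSE {err_n:.5f}")
```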
8. Claim 5 is rejected under 35 U.S.C. § 103 as being unpatentable over Zhang et al., "Memristive Quantized Neural Networks: A Novel Approach to Accelerate Deep Learning On-Chip," IEEE (May 2019) [hereinafter Zhang] in view of Merkel et al., "Comparison of Off-chip Training Methods for Neuromemristive Systems," IEEE (2015) [hereinafter Merkel], US Published Application 201700761116 to Chen et al. [hereinafter Chen], and US Published Application 20200327406 to Piveteau et al. [hereinafter Piveteau].

Regarding claim 5, the combination of Zhang, Merkel, and Chen teaches all of the limitations of claim 1, as described above in detail. Zhang teaches – wherein writing the quantized weight parameters into the memristor array, comprises: acquiring a target interval of the conductance value of the memristor array based on the quantized weight parameters (Zhang, Fig. 2, teaches memristive quantized weights (Examiner annotations in dashed-line text boxes): [image: Zhang Fig. 2]; Zhang, right column of p. 1880, "C. Image Recognition," first paragraph, teaches [o]nce the number of bits (n) is determined, the computation of W ≈ sW^b = s Σ_(j=1)^n a_j W_j^b (see Fig. 2) can be achieved by the proposed circuits (see Fig. 3 [proposed binary memristive crossbar array for M-QNNs])); * * * if yes, applying a reverse pulse (Zhang, right column of p. 1880, "B. Sneak Path Issue," first paragraph, teaches [t]he RESET voltage VRES and protect voltages V'_pro2 are applied (that is, applying a reverse pulse), respectively, to the selected row and unselected rows (that is, "the RESET voltage," applied to protect when the conductance state is within the target interval, is if yes, applying a reverse pulse)); and if not, applying a forward pulse (Zhang, left column of p. 1880, "B. Sneak Path Issue," last partial paragraph, teaches [a]ssume only selected memristors M22 and M23 change and the unselected memristors remain the same. If ΔW22 > 0 and ΔW23 > 0, selected memristors M22 and M23 decrease. SET voltage VSET (that is, applying a forward pulse) and protect voltages Vpro2 (that is, applying a reverse pulse) are applied to the selected row and unselected rows, respectively (that is, "the SET voltage," applied to write or store when the conductance state is outside the target interval, is if not, applying a forward pulse)); and * * *

Though Zhang, Merkel, and Chen teach SET and RESET pulses in a memristive array, the combination of Zhang, Merkel, and Chen, however, does not explicitly teach – wherein writing the quantized weight parameters into the memristor array, comprises: acquiring a target interval of the conductance value of the memristor array based on the quantized weight parameters; judging whether conductance values of respective memristors of the memristor array are within the target interval or not; if not, judging whether the conductance values of the respective memristors of the memristor array exceeds the target interval, * * * if yes, writing the quantized weight parameters into the memristor array.

But Piveteau teaches – wherein writing the quantized weight parameters into the memristor array, comprises: acquiring a target interval (Piveteau ¶ 0054 teaches "[w]eights may be constrained to a range -c to c (that is, target interval) to accommodate the limited conductance range of the devices") of the conductance value of the memristor array (Piveteau ¶ 0058 teaches [p]articular weight values or ranges of values may be mapped to respective conductance values GT, e.g. by dividing the weight range -c to c for a layer into a number of sub-ranges corresponding to a number of programmed conductance states GT (that is, "sub-ranges" of "respective conductance values" is acquiring a target interval of the conductance value of the memristor array); Piveteau ¶ 0063 teaches memristive arrays of the inference apparatus are then programmed accordingly to store the weight-sets for each layer) based on the quantized weight parameters (Piveteau ¶ 0064 teaches a set of quantized weight values w_1, w_2, . . . , w_i, . . . , w_n ∈ [-c, c] may be defined in apparatus 1 [of Fig. 1]. These quantized weight values correspond to respective programmed conductance states G_1, G_2, . . . , G_n with associated conductance-error distributions P_ΔG(G_i) (that is, based on the quantized weight parameters)); judging whether conductance values of respective memristors of the memristor array are within the target interval or not; if not, judging whether the conductance values of the respective memristors of the memristor array exceeds the target interval (Piveteau ¶ 0049 teaches [t]he conductance state, and hence stored weight w_ij, can be varied in operation by application of programming signals to a device; Piveteau ¶ 0050 teaches ANN weights can be encoded as programmed conductance states of memristive devices in various ways, e.g. by mapping particular weight values, or ranges of values, to particular programmed states defined by target conductance values or ranges of conductance values of a device. . . . [S]toring digital weights in memristive device arrays is an imprecise process due to the loss of digital precision and conductance errors arising from various causes including write (programming) and read stochasticity. If G is the conductance of a memristive device (that is, "memristive device" is respective memristors of the memristor array), when programming the device to a target state with conductance GT, there is a conductance error of ΔG, i.e. a subsequent measurement of device conductance will retrieve the value GT + ΔG (that is, the "conductance error" is judging whether the conductance values of respective memristors of the memristor array are within the target interval or not and, if not, judging whether the conductance values of the respective memristors of the memristor array exceeds the target interval); [Examiner notes that the plain meaning of "judging" with respect to a "target interval" is determinatively judging either that the conductance values are within the target interval or that they exceed the target interval]), * * * if yes, writing the quantized weight parameters into the memristor array (Piveteau ¶ 0049 teaches [t]he conductance state, and hence stored weight w_ij, can be varied in operation by application of programming signals to a device (that is, "programming signals" is writing the quantized weight parameters into the memristor array)).

Zhang, Merkel, Chen, and Piveteau are from the same or similar field of endeavor. Zhang teaches a novel approach to accelerate on-chip learning systems using memristive quantized neural networks (M-QNNs). Merkel teaches two off-chip training methods for neuromemristive systems: weight programming and feature training. Chen teaches applying extracted verification data as a training dataset of feature vectors to a learning engine to build an SVM model on an on-chip and off-chip basis. Piveteau teaches training weights of such networks for network implementations in which the weights are stored as programmed conductance states of memristive devices. Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant's invention to modify the combination of Zhang, Merkel, and Chen pertaining to memristive quantized neural networks with the storing of quantized weights by dividing the weight range -c to c for a layer into a number of sub-ranges corresponding to a number of programmed conductance states GT of Piveteau. The motivation for doing so is to efficiently train artificial neural network weights with digital precision while accommodating conductance variations for the memristive devices in an inference apparatus. (Piveteau ¶ 0063).

9. Claim 20 is rejected under 35 U.S.C. § 103 as being unpatentable over Zhang et al., "Memristive Quantized Neural Networks: A Novel Approach to Accelerate Deep Learning On-Chip," IEEE (May 2019) [hereinafter Zhang] in view of Merkel et al., "Comparison of Off-chip Training Methods for Neuromemristive Systems," IEEE (2015) [hereinafter Merkel], US Published Application 201700761116 to Chen et al. [hereinafter Chen], Yan et al., "CELIA: A Device and Architecture Co-Design Framework for STT-MRAM-Based Deep Learning Acceleration," ICS (2018) [hereinafter Yan], and US Published Application 20200327406 to Piveteau et al. [hereinafter Piveteau].

Regarding claim 20, the combination of Zhang, Merkel, Chen, and Yan teaches all of the limitations of claim 4, as described above in detail. Zhang teaches – wherein writing the quantized weight parameters into the memristor array, comprises: acquiring a target interval of the conductance value of the memristor array based on the quantized weight parameters (Zhang, Fig. 2, teaches memristive quantized weights (Examiner annotations in dashed-line text boxes): [image: Zhang Fig. 2]; Zhang, right column of p. 1880, "C. Image Recognition," first paragraph, teaches [o]nce the number of bits (n) is determined, the computation of W ≈ sW^b = s Σ_(j=1)^n a_j W_j^b (see Fig. 2) can be achieved by the proposed circuits (see Fig. 3 [proposed binary memristive crossbar array for M-QNNs])); * * * if yes, applying a reverse pulse (Zhang, right column of p. 1880, "B. Sneak Path Issue," first paragraph, teaches [t]he RESET voltage VRES and protect voltages V'_pro2 are applied (that is, applying a reverse pulse), respectively, to the selected row and unselected rows (that is, "the RESET voltage," applied to protect when the conductance state is within the target interval, is if yes, applying a reverse pulse)); and if not, applying a forward pulse (Zhang, left column of p. 1880, "B. Sneak Path Issue," last partial paragraph, teaches [a]ssume only selected memristors M22 and M23 change and the unselected memristors remain the same. If ΔW22 > 0 and ΔW23 > 0, selected memristors M22 and M23 decrease. SET voltage VSET (that is, applying a forward pulse) and protect voltages Vpro2 (that is, applying a reverse pulse) are applied to the selected row and unselected rows, respectively (that is, "the SET voltage," applied to write or store when the conductance state is outside the target interval, is if not, applying a forward pulse)); and * * *

Though Zhang, Merkel, Chen, and Yan teach SET and RESET pulses in a memristive array for storing memristor conductances in a training method, the combination of Zhang, Merkel, Chen, and Yan, however, does not explicitly teach – wherein writing the quantized weight parameters into the memristor array, comprises: acquiring a target interval of the conductance value of the memristor array based on the quantized weight parameters; judging whether conductance values of respective memristors of the memristor array are within the target interval or not; if not, judging whether the conductance values of the respective memristors of the memristor array exceeds the target interval, * * * if yes, writing the quantized weight parameters into the memristor array.

But Piveteau teaches – wherein writing the quantized weight parameters into the memristor array, comprises: acquiring a target interval (Piveteau ¶ 0054 teaches "[w]eights may be constrained to a range -c to c (that is, target interval) to accommodate the limited conductance range of the devices") of the conductance value of the memristor array (Piveteau ¶ 0058 teaches [p]articular weight values or ranges of values may be mapped to respective conductance values GT, e.g. by dividing the weight range -c to c for a layer into a number of sub-ranges corresponding to a number of programmed conductance states GT (that is, "sub-ranges" of "respective conductance values" is acquiring a target interval of the conductance value of the memristor array); Piveteau ¶ 0063 teaches memristive arrays of the inference apparatus are then programmed accordingly to store the weight-sets for each layer) based on the quantized weight parameters (Piveteau ¶ 0064 teaches a set of quantized weight values w_1, w_2, . . . , w_i, . . . , w_n ∈ [-c, c] may be defined in apparatus 1 [of Fig. 1]. These quantized weight values correspond to respective programmed conductance states G_1, G_2, . . . , G_n with associated conductance-error distributions P_ΔG(G_i) (that is, based on the quantized weight parameters)); judging whether conductance values of respective memristors of the memristor array are within the target interval or not; if not, judging whether the conductance values of the respective memristors of the memristor array exceeds the target interval (Piveteau ¶ 0049 teaches [t]he conductance state, and hence stored weight w_ij, can be varied in operation by application of programming signals to a device; Piveteau ¶ 0050 teaches ANN weights can be encoded as programmed conductance states of memristive devices in various ways, e.g. by mapping particular weight values, or ranges of values, to particular programmed states defined by target conductance values or ranges of conductance values of a device. . . . [S]toring digital weights in memristive device arrays is an imprecise process due to the loss of digital precision and conductance errors arising from various causes including write (programming) and read stochasticity. If G is the conductance of a memristive device (that is, "memristive device" is respective memristors of the memristor array), when programming the device to a target state with conductance GT, there is a conductance error of ΔG, i.e. a subsequent measurement of device conductance will retrieve the value GT + ΔG (that is, the "conductance error" is judging whether the conductance values of respective memristors of the memristor array are within the target interval or not and, if not, judging whether the conductance values of the respective memristors of the memristor array exceeds the target interval); [Examiner notes that the plain meaning of "judging" with respect to a "target interval" is determinatively judging either that the conductance values are within the target interval or that they exceed the target interval]), * * * if yes, writing the quantized weight parameters into the memristor array (Piveteau ¶ 0049 teaches [t]he conductance state, and hence stored weight w_ij, can be varied in operation by application of programming signals to a device (that is, "programming signals" is writing the quantized weight parameters into the memristor array)).

Zhang, Merkel, Yan, Chen, and Piveteau are from the same or similar field of endeavor. Zhang teaches a novel approach to accelerate on-chip learning systems using memristive quantized neural networks (M-QNNs). Merkel teaches two off-chip training methods for neuromemristive systems: weight programming and feature training. Yan teaches that using emerging non-volatile memory (NVM)'s unique characteristics, including the crossbar array structure and gray-scale cell resistances, to perform neural network (NN) computation is a well-studied approach to accelerating deep learning tasks. Chen teaches applying extracted verification data as a training dataset of feature vectors to a learning engine to build an SVM model on an on-chip and off-chip basis. Piveteau teaches training weights of such networks for network implementations in which the weights are stored as programmed conductance states of memristive devices. Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant's invention to modify the combination of Zhang, Merkel, Chen, and Yan pertaining to memristive quantized neural networks with the storing of quantized weights by dividing the weight range -c to c for a layer into a number of sub-ranges corresponding to a number of programmed conductance states GT of Piveteau. The motivation for doing so is to efficiently train artificial neural network weights with digital precision while accommodating conductance variations for the memristive devices in an inference apparatus. (Piveteau ¶ 0063).
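Claims 5 and 20, as mapped above onto Zhang and Piveteau, describe a write-verify loop: read a cell, judge whether its conductance lies in the target interval derived from the quantized weight, and if not, nudge it with a reverse (RESET) pulse when it exceeds the interval or a forward (SET) pulse when it falls below, then re-check. A minimal sketch of such a loop follows; the fixed pulse step and the toy device response are assumptions for illustration, not taken from either reference.

```python
def program_to_interval(read, pulse, g_low, g_high, max_pulses=100):
    """Write-verify programming: judge whether the cell conductance is within
    the target interval [g_low, g_high]; if it exceeds the interval apply a
    reverse (RESET) pulse, if it falls below apply a forward (SET) pulse."""
    for _ in range(max_pulses):
        g = read()
        if g_low <= g <= g_high:
            return g                  # within target interval: write complete
        pulse(forward=(g < g_low))    # forward pulse raises g, reverse lowers it
    raise RuntimeError("cell did not converge to the target interval")

# Toy device model (assumed): each pulse moves conductance by a fixed step.
state = {"g": 5e-5}
read = lambda: state["g"]
def pulse(forward):
    state["g"] += 2e-6 if forward else -2e-6

print(program_to_interval(read, pulse, 1.0e-5, 1.2e-5))  # converges via RESET pulses
```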
Zhang, Merkel, Yan, Chen, and Piveteau are from the same or similar field of endeavor. Zhang teaches a novel approach to accelerating on-chip learning systems using memristive quantized neural networks (M-QNNs). Merkel teaches two off-chip training methods for neuromemristive systems: weight programming and feature training. Yan teaches that using emerging non-volatile memory (NVM)’s unique characteristics, including the crossbar array structure and gray-scale cell resistances, to perform neural network (NN) computation is a well-studied approach to accelerating deep learning tasks. Chen teaches applying extracted verification data as a training dataset of feature vectors to a learning engine to build an SVM model on an on-chip and off-chip basis. Piveteau teaches training the weights of such networks for network implementations in which the weights are stored as programmed conductance states of memristive devices.

Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify the combination of Zhang, Merkel, Chen, and Yan pertaining to memristive quantized neural networks with the storing of quantized weights by dividing the weight range -c to c for a layer into a number of sub-ranges corresponding to a number of programmed conductance states GT of Piveteau. The motivation for doing so is to efficiently train artificial neural network weights with digital precision while accommodating conductance variations for the memristive devices in an inference apparatus. (Piveteau ¶ 0063).

Response to Arguments

10. Examiner has fully considered Applicant’s arguments, and responds below accordingly.

11. Applicant submits that “the cited references of Zhang, Yan, Piveteau, and Merkel fail to teach or suggest the . . . features as recited in the amended claim 1.” (Response at p. 9).

Examiner’s Response: Examiner respectfully submits that the argument sets out limitations that are not tethered to the instant claims, and that the broadest reasonable interpretation of the claim terms covers the teachings of the cited prior art. Amended claim 1 recites, inter alia:

* * * wherein a data set used in the on-chip training process is a subset of a data set used in the off-chip training process; wherein during the off-chip training process, training the weight parameters of the neural network to obtain the weight parameters after being trained, and programming the memristor array based on the weight parameters after being trained to write the weight parameters after being trained into the memristor array, comprises: during the off-chip training process, according to a value range of conductance values of respective memristors in the memristor array, directly obtaining quantized weight parameters of the neural network, and writing the quantized weight parameters into the memristor array, wherein the weight parameters after being trained are the quantized weight parameters; or during the off-chip training process, training the weight parameters of the neural network to obtain the weight parameters after being trained, performing a quantization operation on the weight parameters after being trained based on a value range of conductance values of respective memristors in the memristor array to obtain quantized weight parameters, and writing the quantized weight parameters into the memristor array. (claim 1, lines 12-26 (emphasis added by Applicant)).
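For the reader's orientation, the two off-chip alternatives recited in the limitation quoted above, together with the requirement that on-chip training use a subset of the off-chip data set, reduce to the following pseudocode-level sketch. The helper names (train_fn, quantize_fn, write_fn, update_fn) are hypothetical stand-ins, and the sketch tracks the claim language rather than any cited reference's implementation.

```python
def off_chip_training(train_fn, quantize_fn, write_fn, data, direct=False):
    """Sketch of the two off-chip alternatives in amended claim 1.
    train_fn, quantize_fn, and write_fn are hypothetical stand-ins for a
    software trainer, a conductance-range-aware quantizer, and the step
    of writing weights into the memristor array."""
    if direct:
        # Alternative 1: directly obtain quantized weight parameters
        # according to the conductance value range of the array.
        weights = train_fn(data, quantized=True)
    else:
        # Alternative 2: train full-precision weights, then quantize them
        # based on the conductance value range.
        weights = quantize_fn(train_fn(data, quantized=False))
    write_fn(weights)  # write the quantized weight parameters into the array
    return weights


def on_chip_training(update_fn, off_chip_data, fraction=0.1):
    """On-chip training uses a subset of the off-chip data set (claim 1,
    lines 12-13); the 10% fraction here is an arbitrary example."""
    subset = off_chip_data[: max(1, int(len(off_chip_data) * fraction))]
    return update_fn(subset)
```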
Examiner agrees that the cited references of Zhang, Yan, Piveteau, and Merkel do not teach the limitation “wherein a data set used in the on-chip training process is a subset of a data set used in the off-chip training process.” (claim 1, lines 12-13). In this regard, Examiner relies upon the teachings of Chen to teach this feature. Chen relates to a runtime classifier hardware circuit, in which a “post-silicon training data set” is used to create a better model, and the training may be performed in on-chip memory in batches. (see Chen ¶ 0034).

With regard to the limitation of “according to a value range of conductance values of respective memristors in the memristor array, directly obtaining quantized weight parameters of the neural network,” (claim 1, lines 18-20 (emphasis added by Examiner)), Examiner respectfully submits that the cited prior art of Zhang teaches this feature, as set out above in detail. In the claim limitation of “according to a value range of conductance values of respective memristors in the memristor array,” the plain meaning of the claim term “according to” is that of a general attribution to “a value range of conductance values” in obtaining quantized weight parameters. The broadest reasonable interpretation of the claim term “according to a value range of conductance values of respective memristors in the memristor array” is that there is a value range of conductance values upon which there is “directly obtaining quantized weight parameters,” which is not inconsistent with the Applicant’s disclosure (MPEP § 2111; see, e.g., Specification ¶ 0070 (“value range of the conductance values”)), and covers the teachings of Zhang. In other words, though “obtaining quantized weight parameters” may be attributable to “a value range,” there is no language in the claim that clarifies what the attribution may be, because the array elements each have a synaptic weight value and thus at least inherently have “a value range.” For example, Zhang is relied upon as teaching “6) Determine [a difference of synaptic weights] ΔW_ji^l to ensure that the required memristive conductance is updated” (that is, ΔW_ji^l represents a value range of conductance values, and is covered by the Applicant’s claim language). (Zhang, right column of p. 1878, “B. MQ-MNN,” first paragraph). Moreover, Zhang does teach synaptic weights reduced to a binarized value having a value range of {-1, +1}, which is a “quantized weight value.” (see Zhang, left column of p. 1879, “C. MQ-CNN,” first paragraph). Still further, the underlying synaptic weights with respect to a “binarized value [synaptic weight] Wb” of Zhang have a respective “value range.”

Applicant argues “[t]he Action argued that the term ‘binary’ is equivalent to ‘constraint of a conductance state of the memristor array’ in the pending application. Applicant respectfully disagrees.” (Response at p. 9). Examiner respectfully points out that the claim term “a value range” is not so limited as submitted by Applicant. Accordingly, the limitations argued by Applicant are not tethered to the instant claims, and the broadest reasonable interpretation of the claim terms covers the teachings of the cited prior art.
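The binarization that the Examiner points to in Zhang, a scale factor times a weight matrix constrained to the value range {-1, +1}, amounts to a one-level instance of the W ≈ s W^b quantization. The sketch below is a toy illustration only; choosing s as the mean absolute weight is an assumption borrowed from the binarized-network literature, not necessarily Zhang's exact scale.

```python
import numpy as np

def binarize(W):
    """One-level instance of Zhang-style quantization, W ~ s * W_b, with
    W_b constrained to the value range {-1, +1}. The choice of s as the
    mean absolute weight is an assumption made for illustration."""
    s = np.abs(W).mean()
    W_b = np.where(W >= 0, 1.0, -1.0)
    return s, W_b

W = np.array([[0.4, -0.7], [0.1, -0.2]])
s, W_b = binarize(W)  # s * W_b approximates W using only binary states
```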
The rejection clearly sets forth which claim limitations are taught by each of the prior art references, and why it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to combine their teachings, and Applicant has not explained why the cited prior art references cannot be combined in the manner set forth in the rejection.

12. Applicant argues “[t]he Action argued that the term ‘binary’ is equivalent to ‘constraint of a conductance state of the memristor array’ in the pending application. Applicant respectfully disagrees. In pending application, the constraint of the conductance state of the memristor array refer to the constraint on the value range of the conductance value of each memristor in the memristor array. The quantized weight parameters are obtained based on the value range of the conductance values of the respective memristors in the memristor array during the off-chip training process. Zhang merely discloses that memristors are much easier to realize binary state switches, and Zhang does not disclose or suggest the value range of the conductance values of the memristors.” (Response at p. 9).

Examiner’s Response: Examiner respectfully points out that the claims do not recite a “constraint of a conductance state of the memristor array” as argued by Applicant. The claim simply sets out, inter alia, “according to a value range of conductance values of respective memristors in the memristor array, directly obtaining quantized weight parameters.” Accordingly, the claim is not so limited as argued by Applicant. The plain meaning of the term “quantize” is converting a continuous value into a discrete value, and the broadest reasonable interpretation of the claim term “quantized weight parameters” covers the teachings of Zhang relating to a discrete value embodied as a discrete “binary” value. Accordingly, the claims are not as limited as argued by Applicant, and under the broadest reasonable interpretation are covered by the teachings of Zhang.

13. Applicant argues that “Zhang does not involve the off-chip training process, let alone the specific process during the off-chip training process.” (Response at p. 11). Also, Applicant argues that “[a]lthough Merkel discloses a combined of the on-chip training process and the off-chip training process, Merkel also at least do not disclose or suggest the above-quoted features as recited in the amended claim 1.” (Response at p. 12).

Examiner’s Response: Examiner acknowledges that though Zhang teaches accelerating an on-chip learning system, Zhang does not explicitly teach an off-chip learning system. In this regard, the teachings of Merkel are relied upon as teaching “off-chip training” and “on-chip training,” as set out above in detail. For example, Merkel teaches the use of, and reasons for implementing, on-chip training and off-chip training:

Training neuromemristive systems presents a formidable challenge due to several CMOS and memristor process variations. On-chip training is most effective at overcoming these, but it is severely limited to very basic training algorithms. On the other hand, off-chip training has the advantage of flexible software training algorithms. A high-level overview of off-chip training for an NMS is illustrated in Figure 1 [of a high-level depiction of off-chip training for a neuromemristive system].
An ideal model of the network is trained off-chip using a software training algorithm (e.g. backpropagation, resilient backpropagation, Levenberg-Marquardt, genetic algorithms, etc.). Then, data from the trained model are used to train the on-chip NMS. (Merkel, left column of p. 99, “I. Introduction,” second paragraph).

Accordingly, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify the on-chip machine learning of Zhang with the on-chip and off-chip training architecture of Merkel. Also, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. Where a rejection of a claim is based on two or more references, a reply that is limited to what a subset of the applied references teaches or fails to teach, or that fails to address the combined teaching of the applied references, may be considered an argument that attacks the reference(s) individually, as is the case here with the cited prior art of Merkel. (MPEP § 2145.IV). Moreover, the rejection clearly sets forth which claim limitations are taught by each of the prior art references and why it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to combine their teachings, and Applicant has not explained why the cited prior art references cannot be combined in the manner set forth in the rejection.

Conclusion

14. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

15. The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure: Zyarah et al., “Ziksa: On-Chip Learning Accelerator with Memristor Crossbars for Multilevel Neural Networks,” Rochester Institute of Technology (2017), teaches an on-chip learning accelerator, known as Ziksa, that is integrated with the memristor crossbars. US Published Application 20170017879 to Kataeva et al. teaches a neural network implemented as a memristive neuromorphic circuit that includes a neuron circuit and a memristive device connected to the neuron circuit. In accordance with a training rule, a desired conductance change for the memristive device is computed based on the sensed input voltage and the sensed error voltage. Then a training voltage is applied to the memristive device.

16. Any inquiry concerning this communication or earlier communications from the Examiner should be directed to KEVIN L. SMITH, whose telephone number is (571) 272-5964. Normally, the Examiner is available Monday-Thursday, 0730-1730.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, KAKALI CHAKI, can be reached on 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.L.S./
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Oct 21, 2020
Application Filed
Oct 02, 2023
Non-Final Rejection — §103
Dec 29, 2023
Response Filed
Jan 27, 2024
Final Rejection — §103
May 26, 2024
Request for Continued Examination
Jun 04, 2024
Response after Non-Final Action
Jul 26, 2024
Non-Final Rejection — §103
Nov 01, 2024
Response Filed
Feb 13, 2025
Final Rejection — §103
May 20, 2025
Request for Continued Examination
May 23, 2025
Response after Non-Final Action
Jun 13, 2025
Non-Final Rejection — §103
Sep 16, 2025
Response Filed
Jan 08, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591815
METHOD AND SYSTEM FOR UPDATING MACHINE LEARNING BASED CLASSIFIERS FOR RECONFIGURABLE SENSORS
2y 5m to grant Granted Mar 31, 2026
Patent 12585917
REINFORCEMENT LEARNING USING ADVANTAGE ESTIMATES
2y 5m to grant Granted Mar 24, 2026
Patent 12547759
PRIVACY PRESERVING MACHINE LEARNING MODEL TRAINING
2y 5m to grant Granted Feb 10, 2026
Patent 12530613
SYSTEMS AND METHODS FOR PERFORMING QUANTUM EVOLUTION IN QUANTUM COMPUTATION
2y 5m to grant Granted Jan 20, 2026
Patent 12518214
DISTRIBUTED MACHINE LEARNING SYSTEMS INCLUDING GENERATION OF SYNTHETIC DATA
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
37%
Grant Probability
55%
With Interview (+18.0%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
