Prosecution Insights
Last updated: April 19, 2026
Application No. 16/373,447

MEMORY EFFICIENT NEURAL NETWORKS

Final Rejection — §103, §112, §DP
Filed
Apr 02, 2019
Examiner
SMITH, BRIAN M
Art Unit
2122
Tech Center
2100 — Computer Architecture & Software
Assignee
Nvidia Corporation
OA Round
4 (Final)
52%
Grant Probability
Moderate
5-6
OA Rounds
4y 3m
To Grant
89%
With Interview

Examiner Intelligence

Grants 52% of resolved cases
52%
Career Allow Rate
129 granted / 246 resolved
-2.6% vs TC avg
Strong +37% interview lift
+37.0%
Interview Lift
with vs. without interview, among resolved cases with interview
Typical timeline
4y 3m
Avg Prosecution
34 currently pending
Career history
280
Total Applications
across all art units

Statute-Specific Performance

§101
24.4%
-15.6% vs TC avg
§103
37.1%
-2.9% vs TC avg
§102
12.9%
-27.1% vs TC avg
§112
19.7%
-20.3% vs TC avg
Compared against a Tech Center average estimate • Based on career data from 246 resolved cases

Office Action

§103 §112 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Amendments

This action is in response to amendments filed October 30th, 2025, in which Claims 1, 10, 20, 24, and 28 have been amended. No claims have been added or cancelled. The amendments have been entered, and Claims 1-28 are currently pending.

Claim Objections

Claim 6 is objected to because of the following informalities: a typographical error appears to have been introduced in the current set of amendments, and the word "by" appears to have been inadvertently omitted from the phrase "the weights of the first precision are replaced by: …". Appropriate correction is required.

Priority

Applicant claims priority to provisional application 62/730,508, filed September 12th, 2018. However, the claims as currently amended do not have written description support in the provisional application. Specifically, the provisional application does not disclose "to update the weights of the second precision while performing a second plurality of forward-backward passes without changing the second precision of the weights during the second plurality of forward-backward passes."

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-28 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Specifically, each of independent Claims 1, 10, 20, and 24 recites the limitations "to replace the updated weights of the first precision with weights of a second precision" and then "to update the weights of the second precision while performing a second plurality of forward-backward passes … without changing the second precision of the weights during the second plurality of forward-backward passes." This feature does not appear to be supported by or described in the application's disclosure, nor has the applicant indicated any supporting location for this feature.
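For readers less familiar with mixed-precision training, the disputed limitation can be pictured as a two-phase loop like the minimal sketch below. This is an illustration of one literal reading only, assuming a PyTorch-style setup; the quantize_to helper, bit width, and pass counts are hypothetical and are not taken from the application, the provisional, or the cited art.

```python
# Illustrative sketch only: a two-phase loop matching the claim language as the
# examiner characterizes it. Names (quantize_to, N_FIRST, N_SECOND) are hypothetical.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)                      # stand-in for "one or more neural networks"
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def quantize_to(t: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform quantizer: map t onto a grid with 2**bits levels (illustrative)."""
    qmax = 2 ** (bits - 1) - 1
    scale = t.abs().max() / qmax + 1e-12
    return (t / scale).round().clamp(-qmax - 1, qmax) * scale

N_FIRST, N_SECOND, BITS = 100, 100, 8
for step in range(N_FIRST):                   # first plurality of forward-backward passes:
    x, y = torch.randn(8, 16), torch.randn(8, 4)
    opt.zero_grad()
    loss_fn(model(x), y).backward()           # weights stay at the first (fp32) precision
    opt.step()

with torch.no_grad():                         # replace the updated first-precision weights
    for p in model.parameters():              # with weights of a second (here 8-bit-valued) precision
        p.copy_(quantize_to(p, BITS))

for step in range(N_SECOND):                  # second plurality of passes: the disputed feature is
    x, y = torch.randn(8, 16), torch.randn(8, 4)   # updating these weights without re-quantizing
    opt.zero_grad()                                # or otherwise changing their precision
    loss_fn(model(x), y).backward()
    opt.step()                                # plain SGD step; no further quantization applied
```

Under this literal reading, the second phase applies ordinary gradient updates directly to the already-quantized weights without regenerating them from a higher-precision copy, which is the behavior the examiner states is not described in the provisional application or the specification.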
The closest support located in the specification includes [0048], which discloses a second number of forward-backward passes following a quantization of the weights, but discloses that "additional quantization of the weights from the floating point values to the values that are represented using fewer bits" takes place, not an update of the weights of the second precision without changing the second precision of the weights. Similarly, [0052] discloses fine-tuning weights based on floating point updates (i.e., failing to teach not changing the second precision of the weights in the step of updating them; see the final sentence of paragraph [0052]). All of the description appears either to generate lower precision weights from floating point weights (which does not support the amended limitations) or to freeze the lower precision weights; no updating of the second precision weights, without changing their precision, appears to occur after the weights of the first precision have been replaced with the weights of the second precision.

Further, the limitations of Claim 28, which recite to quantize first from a first precision to a second precision, and then from a second precision to a third precision, during rounds of forward-backward passes, do not appear to have support in the specification or claims as originally filed.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-9, 11-19, 21-23, and 25-28 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 3, 11, 21, and 25 each recite a first number of forward-backward passes before performing a first quantization of the weights and a second number of forward-backward passes before performing a second quantization of the weights from the first precision to the second precision. The independent claims upon which these claims depend have already recited a first plurality of forward-backward passes (after which a quantization occurs) and a second plurality of forward-backward passes (after which an update to the second-precision weights occurs). It is unclear whether these first number of passes and the first plurality of passes are intended to be the same action or are required to be different actions of forward-backward passes, because "replace the updated weights of the first precision" (independent claims) and "performing a first quantization of weights from the first precision to the second precision" (Claims 3, 11, 21, and 25) have overlapping scope. The same is true of the second number and the second plurality of forward-backward passes.
However, a direct reading of the claim scope would indicate that to update the weights of the second precision without changing the second precision of the weights after replacing the weights of the first precision with weights of a second precision (independent claims) does not overlap with the scope of performing a second quantization of the weights from the first precision to the second precision. It is thus unclear whether two, three, or four distinct sets of forward-backward passes are required to occur by the claim language of Claims 3, 11, 21, and 25.

Further, Claims 3, 11, 21, and 25 each recite "the weights" multiple times (i.e., the phrase "after the weights are updated"). This limitation is indefinite because the independent claims have introduced "weights of a first precision," "updated weights of a first precision," "weights of a second precision," and "to update the weights of the second precision," and it is unclear to which of the weights each instance of the phrase "the weights" in the dependent claims refers.

Further, Claims 6, 14, and 28 each recite "the weights." This limitation is indefinite for the same reason: it is unclear to which of the weights introduced in the independent claims each instance of the phrase "the weights" refers.

Dependent claims of these claims are rejected for inheriting the indefiniteness of a parent claim without curing the indefiniteness.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 5, 28; 10, 11, 13, 19; 20, 21, 23; and 24, 25, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over McKinstry et al., "Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference."

Regarding Claim 1, McKinstry teaches a processor, comprising: one or more circuits (McKinstry, pg. 4, 2nd column, 1st paragraph, "Software was implemented using PyTorch" denotes that the invention is performed on a computer) to: cause one or more neural networks to update weights of a first precision while performing a first plurality of forward-backward passes during training of the one or more neural networks without changing the first precision of the weights during the first plurality of forward-backward passes (McKinstry, Abstract, "starting with pretrained fp32 precision baseline networks" where pretraining does not inherently teach a first plurality of forward-backward passes, but it would have been obvious to one of ordinary skill in the art to pretrain using backpropagation or stochastic gradient descent, as these forward-backward techniques are used for further training by McKinstry, see pg. 3, 2nd column, last paragraph, "SGD is used … as usual"); upon completion of the first plurality of forward-backward passes (after the network has been pretrained) cause the one or more neural networks to replace the updated weights of the first precision with weights of a second precision, wherein the second precision is different from the first precision (McKinstry, pg. 3, 2nd column, 3rd paragraph, "We start with pre-trained, high precision networks from the PyTorch model zoo, quantize, and then fine tune" where "quantize" denotes replace the updated weights with weights of a second precision, see 4th paragraph, "The quantizer we use throughout this paper is parameterized by the precision (in number of bits) b"); and cause the one or more neural networks to update the weights of the second precision while performing a second plurality of forward-backward passes during the training of the one or more neural networks without changing the second precision of the weights during the second plurality of forward-backward passes (McKinstry, pg. 3, 2nd column, 3rd paragraph, "quantize, and then fine-tune" where "fine-tune" includes forward-backward passes, see pg. 4, 1st column, 4th paragraph, "Training: To train such a quantized network, we use the typical procedure of keeping a floating point copy of the weights which are updated with the gradients as in normal SGD, and quantize weights and activations in the forward pass").

Regarding Claim 2, McKinstry teaches the processor of Claim 1 (and thus the rejection of Claim 1 is incorporated). McKinstry further teaches to: perform one or more activation functions in the one or more neural networks by applying the weights of the second precision to activation inputs that have been converted from the first precision to the second precision (McKinstry, pg. 4, 1st column, 4th paragraph, "quantize weights and activations in the forward pass" & pg. 3, 2nd column, 2nd paragraph, "networks with both weights and activations constrained to be either 4 bit, or 8-bit fixed-point integers").

Regarding Claim 3, McKinstry teaches the processor of Claim 1 (and thus the rejection of Claim 1 is incorporated). McKinstry further teaches wherein the weights of the first precision are replaced by: performing a first quantization of the weights from the first precision to the second precision after the weights are updated using a first number of forward-backward passes of training the one or more neural networks (McKinstry, pg. 4, 1st column, 4th paragraph, "Training: To train such a quantized network, we use the typical procedure of keeping a floating point copy of the weights which are updated with the gradients as in normal SGD, and quantize weights and activations in the forward pass"); and performing a second quantization of the weights from the first precision to the second precision after the weights are updated using a second number of forward-backward passes of training the one or more neural networks following the first quantization of the weights (McKinstry, pg. 4, 1st column, 5th-6th paragraphs, "For fine-tuning pretrained 8-bit networks … we find that we need only a single additional epoch of training … 4-bit networks … requires training for 110 additional epochs" where each additional epoch is a second number of forward-backward passes following the first quantization, and where each forward pass performs a quantization of the weights, see pg. 4, 1st column, 4th paragraph, "quantize weights and activations in the forward pass").

Regarding Claim 5, McKinstry teaches the processor of Claim 3 (and thus the rejection of Claim 3 is incorporated). McKinstry further teaches wherein the second number of forward-backward passes is determined based, at least in part, on a frequency hyperparameter associated with training the one or more neural networks (McKinstry, pg. 4, 2nd column, 1st paragraph, "for 110 additional epochs" & last paragraph, "We explored sensitivity to shortening fine-tuning by repeating the experiment for 30, 60, and 110 epochs").

Regarding Claim 28, McKinstry teaches the processor of Claim 1 (and thus the rejection of Claim 1 is incorporated). McKinstry further teaches to: upon completion of the second plurality of forward-backward passes, cause the one or more neural networks to replace the updated weights of the second precision with weights of a third precision, wherein the third precision is different from the second precision; and cause the one or more neural networks to update the weights of the third precision while performing a third plurality of forward-backward passes during training of the one or more neural networks without changing the third precision of the weights during the third plurality of forward-backward passes (McKinstry, pg. 4, 1st column, 2nd-to-last paragraph, "To train such a quantized network we use the typical procedure of keeping a floating point copy of the weights which are updated with the gradient" where copying the quantized weights to full precision denotes replace the updated weights of the second precision/4- or 8-bit with weights of a third/full precision, wherein the third precision is different from the second precision, and "which are updated with the gradient" denotes updating the weights of the third precision).
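To make the mapping concrete, the "keep a floating point copy of the weights … and quantize weights and activations in the forward pass" procedure the rejection quotes from McKinstry is, in outline, a standard quantization-aware fine-tuning loop. The sketch below is a generic illustration assuming PyTorch, not code from McKinstry or the application; the quantize helper, bit width, and loss are hypothetical placeholders.

```python
# Illustrative sketch (not code from McKinstry or the application): generic
# quantization-aware fine-tuning in which an fp32 "master" copy of the weights
# is updated by SGD while a b-bit quantized version is used in the forward pass.
import torch

def quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Uniform symmetric quantizer parameterized by the precision b, in bits (illustrative)."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax + 1e-12
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

w = torch.randn(4, 16, requires_grad=True)        # fp32 master copy of the weights
opt = torch.optim.SGD([w], lr=1e-2)
BITS = 4                                          # second precision, e.g. 4-bit values

for step in range(200):                           # fine-tuning forward-backward passes ("epochs")
    x = torch.randn(8, 16)
    # Straight-through estimator: forward uses quantized weights, gradient flows back to w.
    w_q = w + (quantize(w, BITS) - w).detach()
    loss = (x @ w_q.t()).pow(2).mean()            # placeholder loss
    opt.zero_grad()
    loss.backward()
    opt.step()                                    # gradient update lands on the fp32 master copy
```

Note that in this common procedure the update is applied to the fp32 copy and the low-precision weights are regenerated by re-quantizing on every pass; that is the same distinction the §112(a) discussion above draws against the amended "update … without changing the second precision" language.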
Regarding Claim 10, McKinstry teaches a method, comprising: training one or more neural networks (McKinstry, Abstract, "starting with pre-trained fp32 precision baseline networks and fine-tuning"), wherein training the one or more neural networks includes: updating weight parameters of a first precision while performing a plurality of forward-backward passes during training of the one or more neural networks without changing the first precision of the weight parameters during the first plurality of forward-backward passes (McKinstry, Abstract, "starting with pretrained fp32 precision baseline networks" where pretraining does not inherently teach a first plurality of forward-backward passes, but it would have been obvious to one of ordinary skill in the art to pretrain using backpropagation or stochastic gradient descent, as these forward-backward techniques are used for further training by McKinstry, see pg. 3, 2nd column, last paragraph, "SGD is used … as usual"); upon completion of the first plurality of forward-backward passes (after the network has been pretrained) replacing the updated weight parameters of the first precision with weight parameters of a second precision, wherein the second precision is different from the first precision (McKinstry, pg. 3, 2nd column, 3rd paragraph, "We start with pre-trained, high precision networks from the PyTorch model zoo, quantize, and then fine tune" where "quantize" denotes replace the updated weights with weights of a second precision, see 4th paragraph, "The quantizer we use throughout this paper is parameterized by the precision (in number of bits) b"); and updating the weight parameters of the second precision while performing a second plurality of forward-backward passes during the training of the one or more neural networks without changing the second precision of the weight parameters during the second plurality of forward-backward passes (McKinstry, pg. 3, 2nd column, 3rd paragraph, "quantize, and then fine-tune" where "fine-tune" includes forward-backward passes, see pg. 4, 1st column, 4th paragraph, "Training: To train such a quantized network, we use the typical procedure of keeping a floating point copy of the weights which are updated with the gradients as in normal SGD, and quantize weights and activations in the forward pass").

Regarding Claim 11, McKinstry teaches the method of Claim 10 (and thus the rejection of Claim 10 is incorporated). McKinstry further teaches wherein replacing the weight parameters of the first precision comprises: performing a first quantization of the weight parameters from the first precision to the second precision after the weights are updated using a first number of forward-backward passes of training one or more neural networks (McKinstry, pg. 4, 1st column, 4th paragraph, "Training: To train such a quantized network, we use the typical procedure of keeping a floating point copy of the weights which are updated with the gradients as in normal SGD, and quantize weights and activations in the forward pass"); and performing a second quantization of the weight parameters from the first precision to the second precision after the weight parameters are updated using a second number of forward-backward passes of training the one or more neural networks following the first quantization of weight parameters (McKinstry, pg. 4, 1st column, 5th-6th paragraphs, "For fine-tuning pretrained 8-bit networks … we find that we need only a single additional epoch of training … 4-bit networks … requires training for 110 additional epochs" where each additional epoch is a second number of forward-backward passes following the first quantization, and where each forward pass performs a quantization of the weights, see pg. 4, 1st column, 4th paragraph, "quantize weights and activations in the forward pass").

Regarding Claim 13, McKinstry teaches the method of Claim 11 (and thus the rejection of Claim 11 is incorporated). McKinstry further teaches determining the second number of forward-backward passes based, at least in part, on a frequency hyperparameter associated with training the one or more neural networks (McKinstry, pg. 4, 2nd column, 1st paragraph, "for 110 additional epochs" & last paragraph, "We explored sensitivity to shortening fine-tuning by repeating the experiment for 30, 60, and 110 epochs").

Regarding Claim 19, McKinstry teaches the method of Claim 10 (and thus the rejection of Claim 10 is incorporated). McKinstry further teaches wherein the weight parameters are associated with a fully connected layer in the one or more neural networks (McKinstry, pg. 3, 2nd-to-last paragraph, "in the last, fully-connected layer").

Claims 20, 21, and 23 recite a system comprising: one or more computers including one or more processors to perform precisely the methods of Claims 10, 11, and 13, respectively. As McKinstry performs their method on a computer (McKinstry, pg. 4, 2nd column, 1st paragraph, "Software was implemented using PyTorch"), Claims 20, 21, and 23 are rejected for reasons set forth in the rejections of Claims 10, 11, and 13, respectively.

Similarly, Claims 24, 25, and 27 recite a non-transitory computer readable storage medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to perform precisely the methods of Claims 10, 11, and 13, respectively. As McKinstry performs their method on a computer (McKinstry, pg. 4, 2nd column, 1st paragraph, "Software was implemented using PyTorch", and where a non-transitory computer readable storage medium is inherent in computer implementation of McKinstry's method), Claims 24, 25, and 27 are rejected for reasons set forth in the rejections of Claims 10, 11, and 13, respectively.

Claims 4, 12, 22, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over McKinstry, in view of Bhuiyan, "How do I know when to stop training a neural network".

Regarding Claim 4, McKinstry teaches the processor of Claim 3 (and thus the rejection of Claim 3 is incorporated). As McKinstry is silent on details of pre-training of the network, which were identified in the rejection of Claim 3 with the first number of forward-backward passes, McKinstry does not teach wherein the first number of forward-backward passes is determined based, at least in part, on an offset hyperparameter associated with training the one or more neural networks. However, in the context of training a neural network, Bhuiyan teaches this limitation (Bhuiyan, pg. 1, "A neural network is stopped training when the error … is below some threshold value or the number of iterations or epochs is above some threshold value" where these "threshold values" are offset hyperparameters).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to pre-train the neural network using such hyperparameters. The motivation to do so is to know when to stop training the network (as asked by the questioner in the prior art document).

Regarding Claim 12, McKinstry teaches the method of Claim 11 (and thus the rejection of Claim 11 is incorporated). As McKinstry is silent on details of pre-training of the network, which were identified in the rejection of Claim 11 with the first number of forward-backward passes, McKinstry does not teach determining the first number of forward-backward passes based, at least in part, on an offset hyperparameter associated with training the one or more neural networks. However, in the context of training a neural network, Bhuiyan teaches this limitation (Bhuiyan, pg. 1, "A neural network is stopped training when the error … is below some threshold value or the number of iterations or epochs is above some threshold value" where these "threshold values" are offset hyperparameters). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to pre-train the neural network using such hyperparameters. The motivation to do so is to know when to stop training the network (as asked by the questioner in the prior art document).

Claim 22 recites a system comprising: one or more computers including one or more processors to perform precisely the method of Claim 12. As McKinstry performs their method on a computer (McKinstry, pg. 4, 2nd column, 1st paragraph, "Software was implemented using PyTorch"), Claim 22 is rejected for reasons set forth in the rejection of Claim 12.

Similarly, Claim 26 recites a non-transitory computer readable storage medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to perform precisely the method of Claim 12. As McKinstry performs their method on a computer (McKinstry, pg. 4, 2nd column, 1st paragraph, "Software was implemented using PyTorch", where a non-transitory computer readable storage medium is inherent in computer implementation), Claim 26 is rejected for reasons set forth in the rejection of Claim 12.

Claims 6-9 and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over McKinstry, in view of Alakuijala, US PG Pub 2021/0027195.

Regarding Claim 6, McKinstry teaches the processor of Claim 1 (and thus the rejection of Claim 1 is incorporated). McKinstry does not teach, but Alakuijala, also in the art of neural network quantization, teaches wherein the weights of the first precision are replaced by: freezing a first portion of the weights in a first one or more layers of the one or more neural networks; and modifying a second portion of the weights in a second one or more layers of the one or more neural networks (Alakuijala, [0031], "training for quantization can include determining weights to freeze during training to improve error … adding rules into learning to freeze layers"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to freeze a portion of weights while training others, as does Alakuijala, in the quantization method of McKinstry. The motivation to do so is "to improve error" (Alakuijala, [0031]).
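The layer-freezing idea attributed to Alakuijala can likewise be pictured with a short, generic sketch (assuming PyTorch; the toy architecture, sizes, and loop are hypothetical and not taken from either reference): the weights of a first set of layers are frozen while only a second set of layers continues to be updated.

```python
# Illustrative sketch (not Alakuijala's implementation): freeze the weights of a
# first set of layers while a second set of layers continues to be trained,
# the kind of "rules into learning to freeze layers" the rejection points to.
import torch
import torch.nn as nn

model = nn.Sequential(                 # hypothetical small network
    nn.Conv2d(3, 8, 3, padding=1),     # "first one or more layers"
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 8 * 8, 10),          # "second one or more layers"
)

# Freeze the first portion of the weights (conv + batch norm); keep training the rest.
for layer in model[:3]:
    for p in layer.parameters():
        p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.SGD(trainable, lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    x = torch.randn(4, 3, 8, 8)
    y = torch.randint(0, 10, (4,))
    opt.zero_grad()
    loss_fn(model(x), y).backward()    # gradients reach only the unfrozen linear layer
    opt.step()
```

Freezing in this sense only stops gradient updates to the learnable parameters; batch normalization running statistics would still update unless those layers were also placed in eval mode.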
Regarding Claim 7, the McKinstry/Alakuijala combination of Claim 6 teaches the processor of Claim 6 (and thus the rejection of Claim 6 is incorporated). McKinstry further teaches wherein an output of the first one or more layers is quantized (McKinstry, pg. 4, 1st column, 4th paragraph, "quantize weights and activations in the forward pass" & pg. 3, 2nd column, 2nd paragraph, "networks with both weights and activations constrained to be either 4 bit, or 8-bit fixed-point integers"), and thus in the combination as described in the rejection of Claim 6, this occurs prior to modifying the second portion of weights in the second one or more layers (Alakuijala, Fig. 3, elements 306/308/318).

Regarding Claim 8, the McKinstry/Alakuijala combination of Claim 6 teaches the processor of Claim 6 (and thus the rejection of Claim 6 is incorporated). The combination as described in the rejection of Claim 6 further teaches after the second portion of the weights is modified, freezing the second portion of the weights in the second one or more layers of the one or more neural networks, and modifying a third portion of the weights in a third one or more layers of the one or more neural networks following the second one or more layers (Alakuijala, Fig. 3, elements 306/308/318 and the back-arrow, as well as [0031], "in some models, the lower layers may typically remain unchanged whereas higher layers exhibit more change … this can be made explicit during a training phase, such as by adding rules into learning to freeze layers").

Regarding Claim 9, the McKinstry/Alakuijala combination of Claim 6 teaches the processor of Claim 6 (and thus the rejection of Claim 6 is incorporated). The combination as described in the rejection of Claim 6 further teaches wherein modifying the second portion of the weights comprises: updating the second portion of the weights based, at least in part, on an output of the first one or more layers; and converting the second portion of the weights from the first precision to the second precision (Alakuijala, Fig. 3, elements 306/308/318).

Regarding Claim 14, McKinstry teaches the method of Claim 10 (and thus the rejection of Claim 10 is incorporated). McKinstry does not teach, but Alakuijala, also in the art of neural network quantization, teaches wherein the weight parameters of the first precision are replaced by: freezing a first portion of the weight parameters in a first one or more layers of the one or more neural networks; and modifying a second portion of the weight parameters in a second one or more layers of the one or more neural networks (Alakuijala, [0031], "training for quantization can include determining weights to freeze during training to improve error … adding rules into learning to freeze layers"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to freeze a portion of weights while training others, as does Alakuijala, in the quantization method of McKinstry. The motivation to do so is "to improve error" (Alakuijala, [0031]).

Regarding Claim 15, the McKinstry/Alakuijala combination of Claim 14 teaches the method of Claim 14 (and thus the rejection of Claim 14 is incorporated). McKinstry further teaches wherein an output of the first one or more layers is quantized (McKinstry, pg. 4, 1st column, 4th paragraph, "quantize weights and activations in the forward pass" & pg. 3, 2nd column, 2nd paragraph, "networks with both weights and activations constrained to be either 4 bit, or 8-bit fixed-point integers"), and thus in the combination as described in the rejection of Claim 14, this occurs prior to modifying the second portion of weight parameters in the second one or more layers (Alakuijala, Fig. 3, elements 306/308/318).

Regarding Claim 16, the McKinstry/Alakuijala combination of Claim 14 teaches the method of Claim 14 (and thus the rejection of Claim 14 is incorporated). The combination as described in the rejection of Claim 14 further teaches after the second portion of the weight parameters is modified, freezing the second portion of the weight parameters in the second one or more layers of the one or more neural networks, and modifying a third portion of the weight parameters in a third one or more layers of the one or more neural networks following the second one or more layers (Alakuijala, Fig. 3, elements 306/308/318 and the back-arrow, as well as [0031], "in some models, the lower layers may typically remain unchanged whereas higher layers exhibit more change … this can be made explicit during a training phase, such as by adding rules into learning to freeze layers").

Regarding Claim 17, the McKinstry/Alakuijala combination of Claim 14 teaches the method of Claim 14 (and thus the rejection of Claim 14 is incorporated). The combination as described in the rejection of Claim 14 further teaches wherein modifying the second portion of the weight parameters comprises: updating the second portion of the weight parameters based, at least in part, on an output of the first one or more layers; and converting the second portion of the weight parameters from the first precision to the second precision (Alakuijala, Fig. 3, elements 306/308/318).

Regarding Claim 18, the McKinstry/Alakuijala combination of Claim 14 teaches the method of Claim 14 (and thus the rejection of Claim 14 is incorporated). McKinstry further teaches wherein the first one or more layers of the one or more neural networks comprise a convolutional layer, a batch normalization layer, and an activation layer (McKinstry, pg. 2, Fig. 1 lists the network architectures, including convolutional layers, pg. 1, 1st column, last paragraph, "deep convolutional networks"; batch normalization, pg. 3, 2nd column, 1st paragraph, "batch-normalization"; and activation layers, pg. 3, 2nd column, 2nd-to-last paragraph, "ReLU activation layers").

Response to Arguments

Applicant's arguments filed October 30th, 2025 have been fully considered but are not fully persuasive. Applicant's arguments with respect to the 35 U.S.C. 101, 35 U.S.C. 112(b), and Double Patenting rejections of the previous office action have been considered, and due to the claim amendments those rejections have been withdrawn. Applicant's arguments with respect to the prior art rejections of the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Wang, "Classification Accuracy Improvement for Neuromorphic Computing Systems with One-level Precision Synapses" and Yao, US PG Pub 2022/0129759, each teach different techniques of freezing some weights while updating others during neural network quantization.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN M SMITH whose telephone number is (469) 295-9104. The examiner can normally be reached Monday - Friday, 8:00am - 4pm Pacific. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRIAN M SMITH/
Primary Examiner, Art Unit 2122

Prosecution Timeline

Apr 02, 2019
Application Filed
Mar 01, 2022
Non-Final Rejection — §103, §112, §DP
Jun 03, 2022
Applicant Interview (Telephonic)
Jun 03, 2022
Examiner Interview Summary
Aug 09, 2022
Response Filed
Aug 25, 2022
Final Rejection — §103, §112, §DP
Oct 26, 2022
Examiner Interview Summary
Oct 26, 2022
Applicant Interview (Telephonic)
Mar 01, 2023
Notice of Allowance
Jul 03, 2023
Response after Non-Final Action
Jul 10, 2023
Response after Non-Final Action
Aug 31, 2023
Response after Non-Final Action
Nov 08, 2023
Response after Non-Final Action
Nov 09, 2023
Response after Non-Final Action
Nov 13, 2023
Response after Non-Final Action
Nov 13, 2023
Response after Non-Final Action
May 24, 2024
Response after Non-Final Action
Jul 29, 2024
Request for Continued Examination
Jul 30, 2024
Response after Non-Final Action
May 27, 2025
Applicant Interview (Telephonic)
May 28, 2025
Non-Final Rejection — §103, §112, §DP
Oct 30, 2025
Response Filed
Feb 03, 2026
Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596936
PREDICTIVE DATA ANALYSIS TECHNIQUES USING GRAPH-BASED CODE RECOMMENDATION MACHINE LEARNING MODELS
2y 5m to grant Granted Apr 07, 2026
Patent 12585985
RECOGNITION SYSTEM, MODEL PROCESSING APPARATUS, MODEL PROCESSING METHOD, AND RECORDING MEDIUM FOR INTEGRATING MODELS IN RECOGNITION PROCESSING
2y 5m to grant Granted Mar 24, 2026
Patent 12555025
METHOD AND SYSTEM FOR INTEGRATING FIELD PROGRAMMABLE ANALOG ARRAY WITH ARTIFICIAL INTELLIGENCE
2y 5m to grant Granted Feb 17, 2026
Patent 12518198
System and Method for Ascertaining Data Labeling Accuracy in Supervised Learning Systems
2y 5m to grant Granted Jan 06, 2026
Patent 12488068
PERFORMANCE-ADAPTIVE SAMPLING STRATEGY TOWARDS FAST AND ACCURATE GRAPH NEURAL NETWORKS
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
52%
Grant Probability
89%
With Interview (+37.0%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 246 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month