Prosecution Insights
Last updated: April 19, 2026
Application No. 17/709,796

DYNAMIC COMPENSATION OF ANALOG CIRCUITRY IMPAIRMENTS IN NEURAL NETWORKS

Final Rejection: §101, §102, §103

Filed: Mar 31, 2022
Examiner: LEWIS, MATTHEW LEE
Art Unit: 2144
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Intel Corporation
OA Round: 2 (Final)

Grant Probability: 0% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 3m
Grant Probability with Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 3 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift, with vs. without interview, across resolved cases with interview)
Avg Prosecution (typical timeline): 3y 3m
Career History: 33 total applications across all art units; 30 currently pending

Statute-Specific Performance

§101: 33.9% (-6.1% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 3 resolved cases.

Office Action

Rejections: §101, §102, §103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Amendments

This action is in response to amendments filed September 16, 2025, in which Claims 1, 7, 11, 16, and 22 have been amended. No claims have been added or cancelled. The amendments have been entered, and Claims 1-25 are currently pending.

Response to Arguments

Regarding the applicant's traversal of the 35 U.S.C. 101 rejections of the previous office action, the applicant's arguments filed September 16, 2025 have been fully considered and are unpersuasive. Applicant has cited [0016] of the specification, asserting that the claimed invention "provides a specific, novel, and advantageous technique for compensation of analog circuitry impairments". The flaw in this assertion is that the improvements cited in [0016] appear to be directed primarily to the abstract limitations of claim 1 cited in the previous office action, except that physical analog circuitry is used to perform the methods. As pointed out in the prior office action, using a computer as a tool to perform an abstract idea does not provide significantly more than the abstract idea (MPEP 2106.05(f)). This is further shown in the following quote (MPEP 2106): "However, it is important to keep in mind that an improvement in the abstract idea itself (e.g. a recited fundamental economic concept) is not an improvement in technology. For example, in Trading Technologies Int'l v. IBG, 921 F.3d 1084, 1093-94, 2019 USPQ2d 138290 (Fed. Cir.
2019), the court determined that the claimed user interface simply provided a trader with more information to facilitate market trades, which improved the business process of market trading but did not improve computers or technology."

Further, applicant asserts that claim 1 is not directed to any mental process, citing MPEP 2106 as evidence that if any limitation of the claim cannot be practically performed in the human mind, the claim does not recite a mental process, and further citing "initiating a model of an analog circuit" and "using analog circuitry to perform MAC operations" as limitations that cannot be performed in the human mind. The examiner respectfully draws the applicant's attention to MPEP 2106.04(d), Prong Two: "Prong Two asks does the claim recite additional elements that integrate the judicial exception into a practical application? In Prong Two, examiners evaluate whether the claim as a whole integrates the exception into a practical application of that exception. If the additional elements in the claim integrate the recited exception into a practical application of the exception, then the claim is not directed to the judicial exception (Step 2A: NO) and thus is eligible at Pathway B…" This passage explains that if any limitation in the claim recites an abstract idea (such as a mental process), the additional elements of the claim must integrate that idea into a practical application; the mere presence of additional limitations which are not abstract ideas does not automatically preclude the claim as a whole from reciting that abstract idea.

Further, applicant asserts that claim 1 does not fall into any other category of abstract idea because the claimed method of classifying images with a neural network is not directed to any mathematical concepts or any method of organizing human activity.
The examiner respectfully asserts that several limitations have not been shown not to recite mental processes, as found in the previous office action. As discussed in the previous interview, several of these limitations, in view of the specification, may also recite mathematical concepts, such as the MAC (multiply-accumulate) operations, as one example. It was, and is, advised by the examiner to focus on providing evidence that the additional limitations integrate the abstract limitations into a practical application, or to provide further detailed amendments that may help to do so. As such, the 35 U.S.C. 101 rejections of claims 1-25 are maintained.

Regarding the applicant's traversal of the 35 U.S.C. 112 rejections of the previous office action, the applicant's arguments filed September 16, 2025 have been fully considered and are persuasive. Claims 7 and 22 were amended accordingly and have overcome all previously cited 112 rejections. As such, these rejections are withdrawn.

Regarding the applicant's traversal of the 35 U.S.C. 102/103 rejections of the previous office action, the applicant's arguments filed September 16, 2025 have been fully considered and are unpersuasive. Applicant asserts that the examiner indicated that claim 1 appears to be patentably distinguishable from JANTSCHER, and further asserts that JANTSCHER "is silent" regarding some of the limitations, including "updating the analog neural network with the compensation coefficient; and after updating the analog neural network, providing an input signal to the analog neural network, the analog neural network configured to perform, by using the analog circuitry, second MAC operations based on the input signal, the compensation coefficient, and the set of weights".
The examiner respectfully submits that these limitations are indeed taught by JANTSCHER, as addressed in the previous action, and detailed as follows:

JANTSCHER teaches "updating the analog neural network with the compensation coefficient" (Figure 2): the updated weights (compensation coefficient) from 210 are seen going into the analog neural network at 212.

Further, JANTSCHER teaches "after updating the analog neural network, providing an input signal to the analog neural network" (Figure 2): 214 shows another input signal being sent into the analog neural network after its previous update from 210.

Further, JANTSCHER teaches "the analog neural network configured to perform, by using the analog circuitry, second MAC operations based on the input signal, the compensation coefficient, and the set of weights" (Figure 2): 216 shows the second generated output of the analog neural network performing the same operations for output as previously cited for the first output in 206, with the multiply-accumulate functionality further explained in ([0065-0067] "To compute a neuron output for a given neuron input, each neuron in each layer of an analog neural network may perform calculation based on the following equation (without device-mismatch induced errors):

    y_j^(l) = f( Σ_k w_(j,k)^(l) · x_k^(l-1) )        (Equation 1)

where j denotes the neuron index, l is the index of the layer, and k is the index of the neuron input. y_j^(l) is the output produced by a current neuron j of layer l. f is a non-linear activation function. For example, f can be a linear function such as f(x) = x, but the result of the linear function is limited to a maximum of +1 and a minimum of -1. That means, when the value of f(x) is greater than or equal to 1, the value of f(x) is set to +1. When the value of f(x) is less than -1, the value of f(x) is set to -1. Therefore, f is a non-linear function. w_(j,k)^(l) is the weight between neuron k and current neuron j. x_k^(l-1) is the input coming from neuron k of previous layer l-1.

During production of analog neural networks, device-mismatch effects resulting from fabrication tolerance may influence the calculation in Equation 1 and therefore errors may be introduced. As a result, the transfer of trained weights (e.g., weight vectors included in weight data 118) from a trained digital neural network to an analog neural network may cause a significant loss of accuracy of the analog neural network. Below is an example equation for computing a neuron output given a neuron input when errors are introduced in an analog neural network chip. This equation can apply to every neuron of every layer within the analog neural network:

    y_j^(l) = e( f( e( Σ_k w_(j,k)^(l) · e( x_k^(l-1) ) ) ) )        (Equation 2)

where e(·) is an error function that represents both linear and non-linear error functions and is applied to each term of Equation (1). For example, the innermost error term represents an input offset error that afflicts the neuron input x_k^(l-1) of the neuron; the middle error term is a multiplicative sum error afflicting the input of the activation function f of the neuron; and the outermost error term is an activation function offset error afflicting an output of the activation function of the neuron.")

It is also noted that unlike 206, 216, as shown in Figure 2, will be influenced by the original input 204, the compensation coefficient calculated in 210, and the set of weights from 202. Therefore, the examiner respectfully submits that, as cited and detailed in the previous office action, JANTSCHER does in fact anticipate the limitations as claimed.
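For concreteness, the neuron computation and clipped-linear activation described in the quoted passage ([0065-0067]) can be sketched as follows. This is an illustrative Python sketch, not code from JANTSCHER or the application; the error parameters (`input_offset`, `sum_gain`, `act_offset`) are assumed stand-ins for the error functions the reference describes only abstractly.

```python
import numpy as np

def activation(x):
    # f(x) = x clipped to [-1, +1], per JANTSCHER [0065]:
    # values >= 1 map to +1, values < -1 map to -1.
    return np.clip(x, -1.0, 1.0)

def neuron_output(weights, inputs):
    # Equation (1): error-free MAC followed by the activation,
    # y_j = f(sum_k w_jk * x_k).
    return activation(np.dot(weights, inputs))

def neuron_output_with_errors(weights, inputs,
                              input_offset=0.0, sum_gain=1.0, act_offset=0.0):
    # Equation (2) sketch: an offset error on each input, a multiplicative
    # error on the summed input of f, and an offset error on the output
    # of f. The three error parameters are illustrative assumptions.
    inputs = np.asarray(inputs, dtype=float)
    pre_activation = sum_gain * np.dot(weights, inputs + input_offset)
    return activation(pre_activation) + act_offset
```

With all error parameters at their defaults, the two functions agree, mirroring the reference's point that the error-afflicted equation reduces to Equation (1) when no device-mismatch errors are present.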
The examiner also respectfully notes, as printed in the interview summary filed on September 19, 2025 ("Examiner advised amending claim 1 with more details clarifying the process"), that it was discussed that the claim as worded is anticipated by JANTSCHER, despite the attorney citing the specification during the interview to point out distinguishable differences. Claim 1 was only slightly amended, however, and as such these details have not made it into the claims. As such, the 102/103 rejections of the previous office action are maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.

Regarding claim 1, in Step 1 of the 101 analysis set forth in MPEP 2106, the claim recites "A method". A method is one of the four statutory categories of invention.

In Step 2A, Prong One of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:

"perform… first multiply-accumulate (MAC) operations based on a set of weights and the training signal" (A person can mentally evaluate a set of weights and the training sample, and make a judgement to perform MAC operations based on that (MPEP 2106).)
"computing a compensation coefficient based on the output signal and a reference signal, the output signal comprising a classification of the training signal, the reference signal comprising a ground-truth classification of the training signal" (A person can mentally evaluate the output signal and the reference signal, and make a judgement to compute a compensation coefficient based on that (MPEP 2106).)

"updating the analog neural network with the compensation coefficient" (A person can mentally evaluate the analog neural network and the compensation coefficient, and make a judgement to update the neural network with the coefficient (MPEP 2106).)

"perform… second MAC operations based on the input signal, the compensation coefficient, and the set of weights" (A person can mentally evaluate the input signal, a compensation coefficient, and a set of weights, and make a judgement to perform MAC operations based on that (MPEP 2106).)

If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim "recites" an abstract idea.

In Step 2A, Prong Two of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:

"providing a training signal to an analog neural network" (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)

"wherein the analog neural network is configured to…" (In Step 2A, Prong Two and Step 2B, merely using the neural network constitutes mere instructions to apply the exception using a generic computer (MPEP 2106.05(g)).)

"…by using an analog circuitry…" (In Step 2A, Prong Two, this recites using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).)
"…and to generate an output signal" (Adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)).)

"after updating the analog neural network, providing an input signal to the analog neural network" (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)

Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is "directed" to an abstract idea.

In Step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, additional elements (v) and (viii) recite insignificant extra-solution activities. Further, elements (v), (viii), and (ix) recite steps of receiving/transmitting data via a network, which has been determined by the courts to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Additional element (vi) recites merely applying the exception using a generic computer, which is not indicative of significantly more. Additional element (vii) recites use of a computer as a tool to perform the abstract idea, which is not indicative of significantly more.

Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 2, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1.
Further, claim 2 recites the following additional mental process:

"wherein the second MAC operations comprise: multiplying the compensation coefficient with a weight in the set of weights" (A person can mentally evaluate the compensation coefficient and a weight within the set of weights and make a judgement to multiply them (MPEP 2106).)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 3, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 3 recites the following additional mental process:

"determining a value of the compensation coefficient by minimizing the error signal" (A person can mentally evaluate the error signal and make a judgement to minimize it to determine a value for the compensation coefficient (MPEP 2106).)

Further, claim 3 recites "wherein computing the compensation coefficient comprises: generating an error signal by comparing the output signal with the reference signal" (In Step 2A, Prong Two, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 4, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1.
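The error-signal minimization recited in claim 3, and iterated from a previously computed coefficient in claim 5, can be illustrated with a minimal sketch. The multiplicative output model (`coeff * raw_output`) and the gradient step are assumptions for illustration, not the application's disclosed method.

```python
def update_compensation(prev_coeff, raw_output, reference, lr=0.1, steps=200):
    # Generate an error signal by comparing the (compensated) output with
    # the reference signal, then iterate on the previously computed
    # coefficient until the squared error is approximately minimized.
    coeff = prev_coeff
    for _ in range(steps):
        error = coeff * raw_output - reference   # error signal
        coeff -= lr * 2.0 * error * raw_output   # gradient step on error**2
    return coeff
```

Starting from a previous coefficient of 0.0, a raw output of 2.0 and a reference of 1.0 converge to a coefficient near 0.5, at which point the error signal is minimized.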
Further, claim 4 recites the following additional mental process:

"updating the analog neural network with the compensation coefficient comprises replacing the previously computed compensation coefficient with the compensation coefficient" (A person can mentally evaluate the analog neural network's previously computed coefficient and make a judgement to replace it with the newly calculated coefficient (MPEP 2106).)

Further, claim 4 recites "wherein the first MAC operations are further based on a previously computed compensation coefficient" (In Step 2A, Prong Two, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 5, it is dependent upon claim 4, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 4. Further, claim 5 recites the following additional mental process:

"determining a value of the compensation coefficient by updating a value of the previously computed compensation coefficient till the error signal is minimized" (A person can mentally evaluate the error signal and make a judgement to minimize it to determine a value for the compensation coefficient (MPEP 2106).)

Further, claim 5 recites "wherein computing the compensation coefficient comprises: generating an error signal by comparing the output signal with the reference signal" (In Step 2A, Prong Two, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 6, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 6 recites "forming a signal package, the signal package including the training signal and the input signal, wherein the training signal is a preamble of the signal package" (In Step 2A, Prong Two, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 7, it is dependent upon claim 6, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 6. Further, claim 7 recites the following additional mental process:

"identifying an impairment of the analog circuitry" (A person can mentally evaluate the analog circuitry and make a judgement to identify an impairment (MPEP 2106).)

Further, claim 7 recites "forming the signal package…" and "periodically forming the signal package at a predetermined frequency" (In Step 2A, Prong Two, these recite generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 8, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 1. Further, claim 8 recites "wherein the reference signal comprises a plurality of ground-truth classifications that includes the ground-truth classification, the output signal comprises a plurality of classifications that includes the classification, and each ground-truth classification in the reference signal corresponds to a same category as a classification in the output signal" (In Step 2A, Prong Two, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
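Claims 6 and 7 describe forming a signal package in which the training signal is a preamble preceding the input signal, formed periodically at a predetermined frequency. A minimal sketch follows; the names and framing are illustrative assumptions, not the application's disclosed packet format.

```python
from dataclasses import dataclass

@dataclass
class SignalPackage:
    preamble: list   # training signal (claim 6: the preamble of the package)
    payload: list    # input signal

def form_package(training_signal, input_signal):
    # Claim 6: the package includes the training signal and the input
    # signal, with the training signal placed first as the preamble.
    return SignalPackage(list(training_signal), list(input_signal))

def periodic_packages(input_stream, period, training_signal):
    # Claim 7 sketch: form a package periodically, here once every
    # `period` input samples (the "predetermined frequency").
    return [form_package(training_signal, input_stream[i:i + period])
            for i in range(0, len(input_stream), period)]
```

Each package then lets the receiver recompute the compensation coefficient from the known preamble before classifying the payload.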
Regarding claim 9, it is dependent upon claim 8, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 8. Further, claim 9 recites "wherein the training signal comprises a plurality of batches, each batch includes a plurality of training samples, each training sample in a batch corresponds to a different ground-truth classification of the plurality of ground-truth classifications" (In Step 2A, Prong Two, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claim 10, it is dependent upon claim 8, and thereby incorporates the limitations of, and corresponding analysis applied to, claim 8. Further, claim 10 recites "wherein the training signal comprises a plurality of subsets, each subset includes one or more training samples that correspond to a same ground-truth classification of the plurality of ground-truth classifications" (In Step 2A, Prong Two, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In Step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 11, in Step 1 of the 101 analysis set forth in MPEP 2106, the claim recites "One or more non-transitory computer-readable media storing instructions executable to perform operations". Non-transitory computer-readable media is within one of the four statutory categories of invention. Further, claim 11 recites similar limitations to claim 1 and is rejected under the same rationale, with the following additional limitation:

"One or more non-transitory computer-readable media storing instructions executable to perform operations" (In Step 2A, Prong Two, this recites using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). In Step 2B, using a computer as a tool to perform an abstract idea is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claims 12-15, they are dependent upon claim 11 and thereby incorporate the limitations of, and corresponding analysis applied to, claim 11. Further, claims 12-15 recite similar additional limitations to claims 2-4 and 8, respectively, and are rejected under the same rationale.

Regarding claim 16, in Step 1 of the 101 analysis set forth in MPEP 2106, the claim recites "An apparatus". An apparatus is one of the four statutory categories of invention. Further, claim 16 recites similar limitations to claim 1 and is rejected under the same rationale, with the following additional limitation:

"An apparatus, comprising: a computer processor for executing computer program instructions; and one or more non-transitory computer-readable media storing computer program instructions executable by the computer processor to perform operations" (In Step 2A, Prong Two, this recites using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)). In Step 2B, using a computer as a tool to perform an abstract idea is not indicative of significantly more.)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Regarding claims 17-25, they are dependent upon claim 16 and thereby incorporate the limitations of, and corresponding analysis applied to, claim 16. Further, claims 17-25 recite similar additional limitations to claims 2-10, respectively, and are rejected under the same rationale.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 8, 11-20, & 23 are rejected under 35 U.S.C. 102(a)(2) as being clearly anticipated by Jantscher, et al.,
US Application No.: US 2022/0309331 A1, filed on June 16, 2020 (hereafter, JANTSCHER).

Regarding claim 1, JANTSCHER teaches "providing a training signal to an analog neural network" (Figure 2).

[Figure 2 of JANTSCHER reproduced here]

In the above figure, (204) shows the inputs being fed into the analog neural network, which can further be shown to be a signal in ([0113, sentence 4] "Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by data processing apparatus.")

Further, JANTSCHER teaches "wherein the analog neural network is configured to perform, by using an analog circuitry, first multiply-accumulate (MAC) operations based on a set of weights and the training signal and to generate an output signal" (Figure 2). In the above figure, at 206, we see the output (output signal) of the analog neural network, and at 202, we see the weights for calibration (a set of weights) being used in addition to the previously cited input signal (204). Further, we can see the details of this step, including the multiply-accumulate operations, in ([0065-0067] "To compute a neuron output for a given neuron input, each neuron in each layer of an analog neural network may perform calculation based on the following equation (without device-mismatch induced errors):

$y_j^l = f\left(\sum_k w_{j,k}^l \cdot x_k^{l-1}\right)$   (1)

[equations and symbol notation reconstructed from the surrounding description; the original equation images did not reproduce] where j denotes the neuron index, l is the index of the layer, and k is the index of the neuron input. $y_j^l$ is the output produced by a current neuron j of layer l. f is a non-linear activation function. For example, f can be a linear function such as f(x) = x, but the result of the linear function is limited to a maximum of +1 and a minimum of -1. That means, when the value of f(x) is greater than or equal to +1, the value of f(x) is set to +1. When the value of f(x) is less than -1, the value of f(x) is set to -1. Therefore, f is a non-linear function. $w_{j,k}^l$ is the weight between neuron k and current neuron j. $x_k^{l-1}$ is the input coming from neuron k of previous layer l-1. During production of analog neural networks, device-mismatch effects resulting from fabrication tolerance may influence the calculation in Equation 1 and therefore errors may be introduced. As a result, the transfer of trained weights (e.g., weight vectors included in weight data 118) from a trained digital neural network to an analog neural network may cause a significant loss of accuracy of the analog neural network. Below is an example equation for computing a neuron output given a neuron input when errors are introduced in an analog neural network chip. This equation can apply to every neuron of every layer within the analog neural network.

$y_j^l = \mathrm{err}_f\left(f\left(\mathrm{err}_\Sigma\left(\sum_k w_{j,k}^l \cdot \mathrm{err}_{in}\left(x_k^{l-1}\right)\right)\right)\right)$   (2)

where $\mathrm{err}(\cdot)$ is an error function that represents both linear and non-linear error functions and is applied to each term of Equation (1). For example, $\mathrm{err}_{in}(x_k^{l-1})$ represents an input offset error that afflicts the neuron input $x_k^{l-1}$ of the neuron. $\mathrm{err}_\Sigma(\cdot)$ is a multiplicative sum error afflicting the input of the activation function f of the neuron. $\mathrm{err}_f(\cdot)$ is an activation function offset error afflicting an output of the activation function of the neuron.")

Further, JANTSCHER teaches "computing a compensation coefficient based on the output signal and a reference signal, the output signal comprising a classification of the training signal, the reference signal comprising a ground-truth classification of the training signal" (Figure 2). The updated weights at 210 (a compensation coefficient) are computed based on the previously generated output signal, as shown in 206, and the expected outputs in 208 (a reference signal). Further, the output signal comprises a classification of the training signal, as previously cited above and shown in 204, and the reference signal comprises a ground-truth classification of the training signal, as is shown in ([0043] "In order for a digital neural network to learn to perform a machine learning task, a large number of pre-classified training examples are needed to train the digital neural network. Each training example includes a training input and a respective ground-truth output for the training input. Each training input is processed by the neural network to generate a respective output. The output generated by the neural network is then compared to the respective ground-truth output of the training input. During training, the values of weights (or parameters) of the neural network are adjusted such that the outputs generated by the neural network get closer to the ground-truth outputs. More specifically, the weights of the neural network can be adjusted to optimize an objective function computed based on the training data (e.g., to minimize a loss function that represents a discrepancy between an output of the model and a ground-truth output).
This training procedure is repeated multiple times for all pre-classified training examples until one or more criteria are satisfied, for example, until the digital neural network has achieved a desired level of accuracy.")

Further, JANTSCHER teaches "updating the analog neural network with the compensation coefficient" (Figure 2). The updated weights (compensation coefficient) from 210 are seen going into the analog neural network at 212.

Further, JANTSCHER teaches "after updating the analog neural network, providing an input signal to the analog neural network" (Figure 2). In the above figure, 214 shows another input signal being sent into the analog neural network after its previous update from 210.

Further, JANTSCHER teaches "the analog neural network configured to perform, by using the analog circuitry, second MAC operations based on the input signal, the compensation coefficient, and the set of weights" (Figure 2). In the above figure, 216 shows the second generated output of the analog neural network performing the same operations for output as previously cited for the first output in 206, and further explained to show the multiply-accumulate functionality in [0065-0067], quoted in full in the discussion of the first MAC operations above. It is also noted that unlike 206, 216, as shown in Figure 2, will be influenced by the original input 204, the compensation coefficient calculated in 210, and the set of weights from 202.

Regarding claim 2, JANTSCHER teaches the limitations of claim 1.
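For reference, the MAC-and-activation computation of Equation (1), as described in the quoted [0065-0067], can be sketched as follows. This is a minimal illustrative sketch; the function names and numeric values are hypothetical, not taken from JANTSCHER.

```python
import numpy as np

def clipped_linear(x):
    """The activation f described in [0065]: f(x) = x,
    saturated to the range [-1, +1] (hence non-linear)."""
    return np.clip(x, -1.0, 1.0)

def neuron_output(weights, inputs):
    """Multiply-accumulate (MAC) neuron computation of Equation (1):
    y_j = f(sum_k w_jk * x_k), without device-mismatch errors."""
    return clipped_linear(np.dot(weights, inputs))

# Toy example (hypothetical values): one neuron with three inputs.
w = np.array([0.5, -0.25, 1.0])   # weights w_jk
x = np.array([0.8, 0.4, 0.9])     # inputs x_k from the previous layer
y = neuron_output(w, x)           # MAC result 1.2, clipped to 1.0
```

The clipping step is what makes an otherwise linear f count as a non-linear activation in the reference's sense.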
Further, JANTSCHER teaches "wherein the second MAC operations comprise: multiplying the compensation coefficient with a weight in the set of weights" ([0065-0067], quoted in full in the discussion of claim 1 above). It is clear in this citation that the multiply-accumulate operations used for the second MAC operations cited above will use the weights and the input from 210, which includes the compensation coefficient, as cited above.

Regarding claim 3, JANTSCHER teaches the limitations of claim 1. Further, JANTSCHER teaches "wherein computing the compensation coefficient comprises: generating an error signal by comparing the output signal with the reference signal" (Figure 3).

[Figure 3 of JANTSCHER reproduced here]

Previously cited step 210 (calculation of the compensation coefficient) is shown above and further clarifies "Calculate Error & Adjust weights of Analog NN Chip", which is further detailed in ([0079-0084] "The system processes the set of test outputs and the set of expected outputs to adjust the current weights (i.e., input weights received from the trained digital neural network) of the analog neural network (step 210). After adjusting the current weights, the system obtains a set of updated weights for the analog neural network. The process for adjusting the current weights of the analog neural network is described in more detail below with reference to FIG. 3 and FIG. 4.
The system loads the set of updated weights to the analog neural network (step 212). This step concludes the compensation phase 205. To execute the validation phase 215, the system receives a predefined set of validation inputs for validation (step 214). The system then processes the set of validation inputs using the analog neural network (that has the updated weights) to generate a set of validation outputs (step 216). That is, the system causes the analog neural network to perform a forward execution given the set of validation inputs in order to generate a set of validation outputs. The system then receives a set of expected validation outputs from the trained digital neural network (step 218). The set of expected validation outputs is obtained by processing the set of validation inputs using the trained digital neural network. In some cases, the system may receive the set of expected validation outputs before performing step 216. The system compares the set of validation outputs and the set of expected validation outputs to determine whether the analog neural network with the updated weights operates within the predetermined accuracy level (step 220). If the analog neural network having the updated weights operates within the predetermined accuracy level, the system determines that the analog neural network chip that implements the analog neural network is acceptable and ready for use (step 222). If the analog neural network having the updated weights does not operate within the predetermined accuracy level (i.e., the fabrication tolerance cannot be compensated), the system determines that the analog neural network chip that implements the analog neural network is not acceptable and cannot be used (step 224). In this case, the chip may be discarded during production test and later destroyed.")

Further, JANTSCHER teaches "determining a value of the compensation coefficient by minimizing the error signal" ([0043], quoted in full in the discussion of claim 1 above: the weights of the neural network are adjusted to minimize a loss function that represents a discrepancy between an output of the model and a ground-truth output).

Regarding claim 4, JANTSCHER teaches the limitations of claim 1. Further, JANTSCHER teaches "wherein the first MAC operations are further based on a previously computed compensation coefficient" (Figure 2) ([0071] "FIG.
2 is a block diagram illustrating a general process 200 for (i) compensating errors due to fabrication tolerance in a physical analog neural network chip by adjusting weights of an analog neural network implemented in the analog neural network chip (compensation phase 205), and/or (ii) validating the analog neural network chip having the analog neural network with the adjusted weights (validation phase 215). In some implementations, the general process 200 may include both the compensation phase 205 and a validation phase 215. In some other implementations, the general process may include only the compensation phase 205. The compensation phase 205 is executed to estimate the adjusted weights that minimize the errors shown in Equation 2.") As this citation illustrates, this process is meant to be run repeatedly, with the entire process 200 applied to its own result, meaning that the compensation coefficient determined in one cycle would be used in the first MAC operations of the next cycle.

Further, JANTSCHER teaches "updating the analog neural network with the compensation coefficient comprises replacing the previously computed compensation coefficient with the compensation coefficient" ([0079-0082], quoted in full in the discussion of claim 3 above: the system adjusts the current weights to obtain a set of updated weights (step 210) and loads the set of updated weights to the analog neural network (step 212)).

Regarding claim 5, JANTSCHER teaches the limitations of claim 4. Further, JANTSCHER teaches "wherein computing the compensation coefficient comprises: generating an error signal by comparing the output signal with the reference signal" (Figure 3). Previously cited step 210 (calculation of the compensation coefficient) is shown in Figure 3 above and further clarifies "Calculate Error & Adjust weights of Analog NN Chip", which is further detailed in [0079-0084], quoted in full in the discussion of claim 3 above. Further, JANTSCHER teaches "determining a value of the compensation coefficient by updating a value of the previously computed compensation coefficient till the error signal is minimized" ([0043], quoted in full in the discussion of claim 1 above: during training, the weights are repeatedly adjusted to minimize a loss function that represents a discrepancy between an output of the model and a ground-truth output, until one or more criteria are satisfied).

Regarding claim 8, JANTSCHER teaches the limitations of claim 1. Further, JANTSCHER teaches "wherein the reference signal comprises a plurality of ground-truth classifications that includes the ground-truth classification" ([0043], quoted in full in the discussion of claim 1 above: each of the large number of pre-classified training examples includes a training input and a respective ground-truth output for that training input). And further, ([0048] "In particular, the techniques described herein include processing test inputs using the analog neural network loaded with digital weights to generate test outputs, and processing the test outputs and expected outputs (i.e., outputs generated by the trained digital neural network given test inputs) to generate updated weights for the analog neural network in the chip.")

Further, JANTSCHER teaches "the output signal comprises a plurality of classifications that includes the classification" ([0074] "Depending on the task, the analog neural network can be configured to receive any kind of digital data input and to generate any kind of score, classification, or regression output based on the input. For example, if the inputs to the analog neural network are images or features that have been extracted from images, the output generated by the analog neural network for a given image may be scores for each of a set of object categories, with each score representing an estimated likelihood that the image contains an image of an object belonging to the category.") and further, ([0040, sentence 3 onward] "The final output includes outputs generated by neurons of the output layer, where the output of each neuron may represent one of a set of classes (or categories) that the input data could be assigned to. The neuron that has an output with the highest value may signal a result (e.g., a classification result, a regression result, etc.)
achieved by the neural network for the given input data.")

Further, JANTSCHER teaches "each ground-truth classification in the reference signal corresponds to a same category as a classification in the output signal" ([0043], quoted in full in the discussion of claim 1 above: each output generated by the neural network is compared to the respective ground-truth output of the corresponding training input).

Regarding claim 11, JANTSCHER teaches "One or more non-transitory computer-readable media storing instructions executable to perform operations" ([0113] "Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by data processing apparatus.") Further, claim 11 recites similar additional limitations as claim 1, and is rejected under the same rationale.

Regarding claims 12-15, JANTSCHER teaches the limitations of claim 11. Further, claims 12-15 recite similar limitations as claims 2-4 and 8, respectively, and are rejected under the same rationale.

Regarding claim 16, JANTSCHER teaches "An apparatus, comprising: a computer processor for executing computer program instructions; and one or more non-transitory computer-readable media storing computer program instructions executable by the computer processor to perform operations" ([0113], quoted in full in the discussion of claim 11 above, and [0114] "The term "data processing apparatus" refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.") Further, claim 16 recites similar additional limitations as claim 1, and is rejected under the same rationale.

Regarding claims 17-20 and 23, JANTSCHER teaches the limitations of claim 16. Further, claims 17-20 and 23 recite similar additional limitations as claims 2-5 and 8, respectively, and are rejected under the same rationale.
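The compensation phase (steps 204-212) and validation phase (steps 214-224) that JANTSCHER's Figure 2 and the quoted [0079-0084] describe can be summarized in the following sketch. The callables, data shapes, and the accuracy threshold are hypothetical stand-ins, not APIs from the reference.

```python
import numpy as np

def compensate_and_validate(analog_forward, update_weights, weights,
                            test_inputs, expected_outputs,
                            val_inputs, expected_val_outputs,
                            accuracy_threshold=0.95):
    # Compensation phase 205: run the test inputs through the analog
    # network (step 206), adjust the weights against the expected
    # outputs (step 210), and load the updated weights (step 212).
    test_outputs = analog_forward(weights, test_inputs)
    weights = update_weights(weights, test_outputs, expected_outputs)

    # Validation phase 215: process the validation inputs with the
    # updated weights (step 216) and compare against the expected
    # validation outputs (steps 218-220).
    val_outputs = analog_forward(weights, val_inputs)
    accuracy = np.mean(np.argmax(val_outputs, axis=1)
                       == np.argmax(expected_val_outputs, axis=1))
    # Accept the chip if it operates within the accuracy level,
    # otherwise reject it (steps 222/224).
    return weights, bool(accuracy >= accuracy_threshold)
```

The return value pairs the updated weight set with the accept/reject decision; a production-test harness would discard chips for which the second element is False.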
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 6 & 21 are rejected under 35 U.S.C. 103 as being unpatentable over JANTSCHER, as applied to the claims above, and further in view of Ninkovic, V. et al., "Preamble-Based Packet Detection in Wi-Fi: A Deep Learning Approach," available at https://arxiv.org/pdf/2009.05740 on September 12, 2020 (hereafter, NINKOVIC).

Regarding claim 6, JANTSCHER teaches the limitations of claim 1. JANTSCHER fails to explicitly teach "forming a signal package, the signal package including the training signal and the input signal, wherein the training signal is a preamble of the signal package." However, NINKOVIC, analogous art concerning preamble-based packet detection with deep learning, does teach this ([Abstract] "Distinctive feature of majority of LBT-based systems is that the transmitters use preambles that precede the data to allow the receivers to acquire initial signal detection and synchronization. The first digital processing step at the receiver applied over the incoming discrete-time complex-baseband samples after analog-to-digital conversion is the packet detection step, i.e., the detection of the initial samples of each of the frames arriving within the incoming stream. Since the preambles usually contain repetitions of training symbols with good correlation properties, conventional digital receivers apply correlation-based methods for packet detection. Following the recent interest in data-based deep learning (DL) methods for physical layer signal processing, in this paper, we challenge the conventional methods with DL-based approach for Wi-Fi packet detection.
Using one-dimensional Convolutional Neural Networks (1D-CNN), we present a detailed complexity vs performance analysis and comparison between conventional and DL-based WiFi packet detection approaches.") This citation shows that a signal package is formed with training symbols in the preamble, followed by the detected data frames (input signal). It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of JANTSCHER with the teachings of NINKOVIC because both references aim for the optimal use and training of machine learning models and neural networks. One of ordinary skill in the art would be motivated to do so because, as pointed out in NINKOVIC's introduction, "Using one-dimensional Convolutional Neural Networks (1D-CNN) whose excellent performance for sequence detection is demonstrated in [8], [11], we perform fine-grained evaluation and comparison of 1D-CNN architectures of different parameters, against the conventional correlation-based packet detector. Our results demonstrate that 1D-CNN architectures may outperform conventional methods, both in the performance and computational complexity, while maintaining robustness at low signal-to-noise ratio (SNR)."

Regarding claim 21, JANTSCHER teaches the limitations of claim 16. Further, claim 21 recites similar additional limitations as claim 6 and is rejected under the same rationale.

Claims 7 & 22 are rejected under 35 U.S.C. 103 as being unpatentable over JANTSCHER in view of NINKOVIC, as applied to the claims above, and further in view of Le Gallo-Bourdeau et al., US Patent No. US 11,386,319 B2, filed on March 14, 2019 (hereafter, GALLO).

Regarding claim 7, JANTSCHER in view of NINKOVIC teaches the limitations of claim 6.
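The conventional correlation-based packet detection that NINKOVIC contrasts with its 1D-CNN approach can be sketched as follows: slide the known preamble over the incoming sample stream and flag offsets whose normalized correlation exceeds a threshold. The function name, threshold value, and signal values are illustrative assumptions, not taken from the reference.

```python
import numpy as np

def detect_preamble(samples, preamble, threshold=0.9):
    """Correlation-based packet detection: return the sample offsets at
    which the incoming stream correlates strongly with the known
    preamble, i.e. candidate starts of frames."""
    n = len(preamble)
    hits = []
    for start in range(len(samples) - n + 1):
        window = samples[start:start + n]
        corr = np.abs(np.vdot(preamble, window))   # complex-safe correlation
        norm = np.linalg.norm(preamble) * np.linalg.norm(window)
        if norm > 0 and corr / norm >= threshold:  # normalized to [0, 1]
            hits.append(start)
    return hits

# Toy stream: a known alternating preamble embedded at offset 3.
preamble = np.array([1.0, -1.0, 1.0, -1.0])
samples = np.zeros(10)
samples[3:7] = preamble
offsets = detect_preamble(samples, preamble)
```

A DL-based detector of the kind NINKOVIC studies would replace the sliding correlation with a trained 1D-CNN classifier over the same sample windows.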
JANTSCHER in view of NINKOVIC fails to explicitly teach forming the signal package “…after identifying an impairment of the analog circuitry; or periodically forming the signal package at a predetermined frequency.” However, analogous art of a training process for neural networks, GALLO, does teach “periodically…” updating weights “…at a predetermined frequency” ([Abstract] “The method also includes, in a weight-update calculation operation, calculating updates to respective weights stored in each of the P1 arrays in dependence on signals propagated by the neuron layers.”) and ([Page 10, Col. 2, lines 17-20] “The method further comprises periodically programming the memristive devices storing each weight w in all of the P1 arrays to update the stored weight in dependence on the accumulation value for that weight.”) The cited passages teach that signals are processed periodically in order to trigger weight updates. Given that NINKOVIC taught that a signal package is formed for use in updates, combining GALLO with JANTSCHER in view of NINKOVIC would result in the claimed invention.

It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of JANTSCHER in view of NINKOVIC with the teachings of GALLO because both references explore optimal training methods for neural networks. One of ordinary skill in the art would be motivated to do so because, as GALLO points out in Col. 8, lines 19-24, “weight-update programming can be performed after a desired number of training iterations, e.g. after processing a batch of training examples. This offers an exceptionally efficient training operation using memristive device arrays for synaptic layer implementation.”

Regarding claim 22, JANTSCHER teaches the limitations of claim 16.
Further, claim 22 recites similar additional limitations as claim 7 and is rejected under the same rationale.

Claims 9-10 & 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over JANTSCHER, as applied to the claims above, and further in view of Hajaj, M. et al., “Batch Normalization and the impact of batch structure on the behavior of deep convolution networks,” available at https://arxiv.org/pdf/1802.07590 on February 21, 2018 (hereafter, HAJAJ).

Regarding claim 9, JANTSCHER teaches the limitations of claim 8. JANTSCHER fails to explicitly teach “wherein the training signal comprises a plurality of batches, each batch includes a plurality of training samples, each training sample in a batch corresponds to a different ground-truth classification of the plurality of ground-truth classifications.” However, analogous art about batch normalization in the context of training neural networks, HAJAJ, does teach “wherein the training signal comprises a plurality of batches, each batch includes a plurality of training samples” ([Page 3, Col. 1, paragraph 1] “In the first part of the experiment, both training and inference were performed in the standard way, where training batches were created randomly, and inference were carried out on individual images (plurality of training samples within the batch) using a fixed set of means and variances computed using the entire training data after training was completed. In the second part of the experiment, training was done using balanced batches, and inference was carried out twice: First, inference was done in the standard way, where images (plurality of samples) are tested individually using fixed means and variances. Second, to measure the effect of batch structure on the performance of the network, test images were arranged the same way as training images; test images were arranged as balanced batches, and the means and variances of the current test batch itself were used in the inference process.
Table (2) shows the results for both experiments. It is clear that balancing the training batches doesn’t change the results, if the inference process is carried out in a standard way. However, if the performance of the network trained using balanced batches is also tested using balanced test batches, the error rate is reduced by about 80% for both network models. The error rate was almost eliminated for the non-trivial CIFAR10.”) Further, HAJAJ teaches “each training sample in a batch corresponds to a different ground-truth classification of the plurality of ground-truth classifications” ([Page 2, 2.2 Balance Batches, paragraph 2, Training] “instead of being constructed randomly, training batches are created to be balanced, and to contain a single instance from each class. If training images were shuffled to prevent an image from always appearing in the same batch (to improve performance), then the shuffling subroutine needs to be changed to always generate balanced batches.”)

It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of JANTSCHER with the teachings of HAJAJ because both references examine efficient training strategies for neural networks. One of ordinary skill in the art would be motivated to do so because, as HAJAJ points out on page 3, column 1, sentence 5, “if the performance of the network trained using balanced batches is also tested using balanced test batches, the error rate is reduced by about 80% for both network models.”

Regarding claim 10, JANTSCHER teaches the limitations of claim 8.
JANTSCHER fails to explicitly teach “wherein the training signal comprises a plurality of subsets, each subset includes one or more training samples that correspond to a same ground-truth classification of the plurality of ground-truth classifications.” However, analogous art about batch normalization in the context of training neural networks, HAJAJ, does teach “wherein the training signal comprises a plurality of subsets” ([Page 1, Col. 2, Paragraph 1] “Balanced batches are batches that have a single instance from each class with size equal to the number of classes. If the network is trained only on balanced batches, then in addition to learning how to classify single images, BN will allow the network to learn an extra logic based on the structure of balanced batches. Because it was only exposed to balanced batches in the training phase, the network will learn an association mechanism between the images in the batch through the shared means and variances of BN to always expect balanced batches. If the performance of the network is measured in the standard way on single test images, then this association mechanism based on batch structure cannot be noticed. In order to measure it, the network needs to be tested on balanced test batches using each batch’s own means and variances. The practical difficulty here is balancing the test batches, which requires the labels of the test images.”) If the training signal is separated into batches and each batch contains one instance of each class, this creates a plurality of “subsets” across the batches within the signal, one for each class.

Further, HAJAJ teaches “each subset includes one or more training samples that correspond to a same ground-truth classification of the plurality of ground-truth classifications” ([Page 2, 2.2 Balance Batches, paragraph 2, Training] “instead of being constructed randomly, training batches are created to be balanced, and to contain a single instance from each class.
If training images were shuffled to prevent an image from always appearing in the same batch (to improve performance), then the shuffling subroutine needs to be changed to always generate balanced batches.”) If each signal forms multiple batches, and each batch contains one instance of each class, that is equivalent to subsets across the batches that each share the same ground-truth classification.

It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of JANTSCHER with the teachings of HAJAJ because both references examine efficient training strategies for neural networks. One of ordinary skill in the art would be motivated to do so because, as HAJAJ points out on page 3, column 1, sentence 5, “if the performance of the network trained using balanced batches is also tested using balanced test batches, the error rate is reduced by about 80% for both network models.”

Regarding claims 24-25, JANTSCHER teaches the limitations of claim 23. Further, claims 24-25 recite similar additional limitations as claims 9-10 and are rejected under the same rationale.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW LEE LEWIS, whose telephone number is (571) 272-1906. The examiner can normally be reached Monday 9:30AM - 3:30PM and Tuesday - Friday 9:30AM - 6PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Matthew Lee Lewis/
Examiner, Art Unit 2144

/TAMARA T KYLE/
Supervisory Patent Examiner, Art Unit 2144

Prosecution Timeline

Mar 31, 2022: Application Filed
May 16, 2022: Response after Non-Final Action
Jun 12, 2025: Non-Final Rejection (§101, §102, §103)
Sep 03, 2025: Interview Requested
Sep 15, 2025: Applicant Interview (Telephonic)
Sep 15, 2025: Examiner Interview Summary
Sep 16, 2025: Response Filed
Nov 28, 2025: Final Rejection (§101, §102, §103)
Mar 29, 2026: Interview Requested


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
