Prosecution Insights
Last updated: April 19, 2026
Application No. 17/976,084

HARDWARE ARTIFICIAL NEURAL NETWORK (ANN) ANALOG CIRCUIT, AND METHOD OF USING THEREOF

Final Rejection (§103, §112)
Filed: Oct 28, 2022
Examiner: BAKER, EZRA JAMES
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Technion Research & Development Foundation Limited
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 4y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% of resolved cases (7 granted / 14 resolved; -5.0% vs TC avg)
Interview Lift: +77.8% among resolved cases with interview (strong)
Avg Prosecution: 4y 3m typical timeline; 33 applications currently pending
Total Applications: 47 across all art units (career history)

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)
Tech Center averages are estimates; based on career data from 14 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims
The present application is being examined under the claims filed 12/07/2025. Claims 1 and 4-20 are pending.

Response to Amendment
This Office Action is in response to Applicant’s communication filed 12/07/2025, responding to the Office Action mailed 08/12/2025. The Applicant’s remarks and any amendments to the claims or specification have been considered, with the results that follow.

Response to Arguments

Regarding Objections and Informalities
The objections to the claims have been overcome by the amendments.

Regarding 35 U.S.C. 112(b)
In Remarks page 8, Argument 1 (Examiner summarizes Applicant’s argument): Applicant argues that the claims have been amended to obviate the 112(b) rejections.
Examiner’s response to Argument 1: Examiner agrees that the previous rejections of claims 1, 3, and 4 have been overcome. However, the rejection of claim 12 remains, and the claim amendments raise new issues that require further 112(b) rejections. For example, the deficiencies of previously filed claim 3 are now present in the independent claims.

Regarding 35 U.S.C. 101
In Remarks pages 8-12, Argument 2 (Examiner summarizes Applicant’s arguments): Applicant argues that the claims as currently amended are not directed to a mental process because they instead recite a novel analog circuit architecture that performs calculations.
Examiner’s response to Argument 2: Applicant’s amendments and arguments are found convincing. In particular, the claims as amended now include specific details about how the analog circuit performs mathematics as a result of physical processes, rather than generically reciting “analog circuits” to perform mathematics with no details about the functionality of the circuits (as previously claimed).
Although analog circuits do differ from digital circuits, merely reciting a generic analog circuit could refer to many ordinary, off-the-shelf computer components. MPEP 2106.05(f)(2) recites: “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more.” And MPEP 2106.05(f)(1) recites: “In contrast, other cases have found that additional elements are more than ‘apply it’ or are not ‘mere instructions’ when the claim recites a technological solution to a technological problem.” In the instant application, the deciding factor of eligibility is the technological solution provided by the particular analog circuits arranged in the particular manner currently recited, as opposed to the generic computer components previously recited. Accordingly, the rejections under 35 U.S.C. 101 are withdrawn.

Regarding Art Rejections
In Remarks pages 13-15, Argument 3 (Examiner summarizes Applicant’s arguments): Applicant argues that Wang does not teach translinear analog circuits, but instead uses a different, digital mechanism to perform the calculations. Applicant further argues that Wang does not disclose a way to achieve fractional or non-integer exponents, and that the architecture implemented by Wang would result in much higher power consumption than the analog circuits utilized by the instant application.
Examiner’s response to Argument 3: Examiner acknowledges that Wang does not teach all limitations of the independent claims as amended. The rejections under 35 U.S.C. 102 are withdrawn accordingly.
Examiner notes that the claims do not recite any limitations relating to fractional or non-integer exponents or to power consumption, and the Specification is not to be read into the claims. Wang additionally provides that circuits may be digital or analog (see the 103 rejections below). Furthermore, Examiner maintains that Wang does teach the cited portions of the claims in the absence of any convincing arguments, which is reflected in the newly issued rejection under 35 U.S.C. 103.

In Remarks page 15, Argument 4 (Examiner summarizes Applicant’s argument): Applicant argues that Boahen and Minch cannot cure the deficiencies of Wang. Applicant argues that the circuits of Boahen do not implement the exponentiation or multiplication circuits that multiply exponentiation signals together.
Examiner’s response to Argument 4: Examiner disagrees. Wang clearly teaches the structure of performing exponentials, then multiplying the resulting exponentials together using circuits. Though Wang does not teach the particular translinear circuits as claimed, it would be obvious to use the analog translinear circuits explicitly taught by Boahen to perform multiplication and power terms (Boahen column 4 line 43): “The Translinear Principle can be used to synthesize a wide variety of circuits to perform both linear and non-linear operations on the current inputs, including products, quotients, and power terms with fixed exponents”. Additionally, Boahen explicitly states that these circuits are useful for neural circuits (Boahen Abstract): “These circuits are useful for implementing synthetic neural systems such as associative memories and silicon retinas, such as winner-takes-all and pyramidal neuron circuits and the outer-plexiform layer of a retina”.
A person having ordinary skill in the art would understand that the winner-takes-all and pyramidal-type neuron circuits disclosed are merely examples, and that the translinear calculation circuits would provide benefits if implemented with the structure provided by Wang. Applicant selectively chooses the portions of Boahen which differ from the claimed invention while ignoring substantial similarities and compelling motivation to combine with Wang. Moreover, Applicant may not attack individual references when the combination is relied upon for a rejection. While Boahen does not appear to use its translinear circuits to exponentiate inputs and then multiply the exponentials together, Wang is relied upon to teach this portion of the claim.

In Remarks page 16, Argument 5 (Examiner summarizes Applicant’s argument): Applicant argues that Wang and Boahen are fundamentally different because Wang teaches a digital counter approach while Boahen uses incompatible translinear circuits for entirely different purposes.
Examiner’s response to Argument 5: Examiner disagrees. Both Wang and Boahen are directed to implementing neurons using computer hardware. Further still, Wang and Boahen both use circuits to perform mathematical functions (namely multiplication and exponentiation), but merely go about these operations in technologically different manners. Boahen discusses at length the benefits of using analog technologies to implement hardware neural networks, for example (column 2 line 4): “Subthreshold complementary MOS (CMOS) technology has long been recognized as the technology of choice for implementing micropower digital and analog LSI circuits. It offers the same advantages for the implementation of synthetic neural systems: high integration density, low power dissipation, and useful parasitic bipolar devices.” That is, Boahen encourages using subthreshold CMOS technology, as opposed to purely digital technology, for neural circuits.
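For background, the subthreshold exponential behavior that both the claims and Boahen rely on can be sketched numerically. This is a minimal illustration only, not part of the prosecution record: the I0, n, and VT values are assumed typical figures, and the ideal exponential law here ignores second-order device effects.

```python
import math

I0 = 1e-15           # assumed subthreshold pre-exponential current (A)
n, VT = 1.5, 0.0257  # assumed slope factor and thermal voltage (V)

def subthreshold_current(vgs):
    """Ideal exponential I-V law of a MOS transistor in subthreshold."""
    return I0 * math.exp(vgs / (n * VT))

def gate_voltage(i):
    """Inverse relation: gate voltage that produces drain current i."""
    return n * VT * math.log(i / I0)

# Translinear power term: scaling the log-domain (gate) voltage by a
# fixed exponent w exponentiates the current, i.e. I_out ∝ I_in ** w.
w = 2.0
i_in = 1e-9
i_out = subthreshold_current(w * gate_voltage(i_in))

# Check against the ideal power law (up to the I0 normalization):
assert abs(i_out - I0 * (i_in / I0) ** w) / i_out < 1e-9
```

This log-then-scale-then-exponentiate structure is the sense in which translinear circuits realize “power terms with fixed exponents” purely through device physics.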
Given Boahen’s disclosure, it would have been exceedingly obvious to a person having ordinary skill in the art to modify a particular neuron circuit formerly implemented using digital technology to instead use superior analog technology for the wealth of benefits it would provide, as was known before the effective filing date of the present invention.

In Remarks page 17, Argument 6 (Examiner summarizes Applicant’s arguments): Applicant argues that limitations which Daniel does not teach were amended in from dependent claims, and that the rejections under 35 U.S.C. 102 should therefore be withdrawn.
Examiner’s response to Argument 6: Examiner agrees that Daniel alone does not teach the entirety of claim 15 as currently amended. The 35 U.S.C. 102 rejections are withdrawn accordingly. However, new rejections under 35 U.S.C. 103 are issued as necessitated by amendment.

In Remarks pages 17-18, Argument 7 (Examiner summarizes Applicant’s arguments): Applicant argues that Daniel focuses primarily on biological/genetic circuits in living cells, not on the specific hardware implementation recited in claim 15.
Examiner’s response to Argument 7: Examiner notes that although Daniel does focus heavily on biological circuits, that does not preclude it from being useful in a neural network environment. Hardware neural networks are often inspired by, directly simulate, or are used in combination with biological neurons. In fact, Daniel mentions (paragraph [0180]): “In some embodiments, the present invention merges many new and innovative ideas from neuroscience, systems biology and electrical engineering, to offer a novel framework for collective computational intelligent abilities, as detailed in table 1 below.” Therefore, biological neurons and computer hardware are not mutually exclusive, and it is entirely reasonable to use Daniel’s disclosure of a system involving genetic circuits and hardware circuits as a foundation and blueprint for a hardware-based neural network system.
The 35 U.S.C. 103 rejection below uses Daniel in this way, augmenting it with the analog computer hardware circuits taught by other references as required by the claims.

In Remarks page 20, Argument 8 (Examiner summarizes Applicant’s arguments): Applicant argues that Minch is not analogous to Wang and the instant application because it is purely related to circuit design and implementing mathematics with analog circuits. Applicant argues that there is no teaching or suggestion that the circuits would be useful for neural applications.
Examiner’s response to Argument 8: Examiner disagrees. Minch explicitly mentions hardware neurons as a basis for the circuits (page 20 column 2 section II): “Inspired originally by Shibata and Ohmi’s neuron MOS concept [6], we recently introduced a new translinear circuit primitive, called the multiple-input translinear element (MITE) [7], [8].” Moreover, it is acknowledged by other cited references, such as Daniel (see paragraph 165) and Boahen (see column 2 line 4), that CMOS technology is applicable to neuron circuits. Therefore, it is well known and documented in the art that CMOS circuit technology and neuron circuits are highly related. Even if Minch did not mention neurons anywhere (it does), it would still be implicit that the circuits are applicable and related to neural networks because (1) it is known in the art that they are related, (2) the authors of the cited research papers study neural networks with translinear circuits (see page 28), and (3) the functions the circuits provide overlap with those used by hardware neurons; therefore there is no need to explicitly mention neural networks when this application is obvious to anyone having ordinary skill in the art.

In Remarks page 20, Argument 9 (Examiner summarizes Applicant’s arguments): Applicant argues that Minch’s circuits are fundamentally different from the claimed invention.
Minch uses voltage-in/current-out and current-in/voltage-out stages, but does not teach the structure of separate exponentiation circuits and a multiplication circuit.
Examiner’s response to Argument 9: In response to Applicant’s arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

In Remarks page 20, Argument 10 (Examiner summarizes Applicant’s arguments): Applicant argues that Minch does not suggest that the weight hardware elements are adjustable as claimed.
Examiner’s response to Argument 10: Examiner disagrees. To further explain the rejection, Examiner points to other portions of the cited references which explain how the weights are adjustable. Page 20 column 2 shows the portion mapped by Examiner to teach the weight elements. [Annotated excerpt image omitted.] Minch goes on to describe that these weight elements are implemented using hardware and are adjustable (page 20 column 2, last paragraph): “For each of these FGMOS MITEs, the weights (i.e., w1, w2, … wK [*Examiner notes: the weights as previously mapped by the examiner]) are equal to the input capacitive divider ratios. The amount of floating-gate charge sets an electronically adjustable, nonvolatile multiplicative scale factor on the MITE’s output current (i.e., λ) that we can use to build adaptive systems or to compensate for device mismatch.” Thus Minch clearly describes that the weights as mapped by Examiner are in fact adjustable and could be used in an “adaptive system” such as a hardware neural network.

In Remarks pages 20-21, Argument 11 (Examiner summarizes Applicant’s arguments): Applicant argues that the dependent claims are allowable by virtue of the claims from which they depend.
Examiner’s response to Argument 11: Examiner disagrees, since the independent claims and the other claims argued are not deemed patentable.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Regarding Claims 1, 11, and 15
Claim 1 recites the limitation "tuned to operate in the transistor subthreshold region in which the transistor exhibits exponential current-voltage characteristics, thus producing said exponentiation signal in a translinear work mode" in line 7 of the claim. Claims 11 and 15 recite similar limitations and are rejected for the same reasons.
First, there is insufficient antecedent basis for this limitation in the claim; in particular, there is insufficient antecedent basis for “the transistor subthreshold region”. For purposes of examination, the examiner interprets the limitation as though it said “tuned to operate in a transistor subthreshold region”.
Second, it is not clear which transistor of the one or more transistors exhibits the characteristics.
For purposes of examination, the examiner interprets the limitation as though it says “tuned to operate in the transistor subthreshold region in which the one or more transistors exhibit[[s]] exponential current-voltage characteristics, thus producing said exponentiation signal in a translinear work mode”.

Regarding Claim 12
Claim 12 recites the limitation "according to the Michaelis-Menten function" in line 2 of the claim. There is insufficient antecedent basis for this limitation in the claim; in particular, there is insufficient antecedent basis for the term “the Michaelis-Menten function”. For purposes of examination, the examiner interprets the limitation as though it said "according to a Michaelis-Menten function". The examiner suggests amending the claim with this language.

Claims 4-10 depend from claim 1, claims 12-14 depend from claim 11, and claims 16-20 depend from claim 15; they are therefore similarly rejected for including the deficiencies of claims 1, 11, and 15, respectively.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (Patent No. US 6151594 A), herein referred to as Wang, in view of Boahen et al. (US 5206541 A), herein referred to as Boahen.
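For reference, the Michaelis-Menten function discussed in the §112 rejection of claim 12 above is the standard saturating rate law v = Vmax·S / (Km + S). A minimal numeric sketch follows; it is illustrative only, and the Vmax and Km values are assumed, not taken from the record.

```python
def michaelis_menten(s, v_max, k_m):
    """Michaelis-Menten rate law: v = v_max * s / (k_m + s).
    The rate saturates toward v_max as the substrate concentration s grows."""
    return v_max * s / (k_m + s)

# At s == k_m the rate is exactly half of v_max:
print(michaelis_menten(2.0, v_max=10.0, k_m=2.0))  # 5.0
```

In the claimed circuits, an activation of this shape would be realized by the analog hardware rather than computed numerically; the sketch only fixes the function's form.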
Regarding Claim 1

Wang teaches:

A hardware Artificial Neural Network (ANN) analog circuit comprising one or more interconnected node circuits: (column 1 line 36) “A neuron circuit (or processing element) is the fundamental building block of a neural network. A neuron circuit has multiple inputs and one output. The structure of a conventional neuron circuit often includes a multiplier circuit, a summing circuit, a circuit for performing a non-linear function (such as a binary threshold or sigmoid function), and circuitry functioning as synapses or weighted input connections”

wherein at least one node circuit comprises: two or more first […] analog circuits, each configured to receive a respective input signal, and produce an exponentiation signal representing a calculation of exponentiation of the respective input signal, by a predetermined respective exponent value [*Examiner notes: The broadest reasonable interpretation of the limitation includes multiple analog circuits configured together to produce a plurality of exponentiation signals]: (column 4 line 56) “Counter/latch 20 [*Examiner notes: circuit configured to produce an exponentiation signal] serves to hold the input data for a desired number of CLK cycles in order to produce the desired gating function.”; (column 4 line 61) “As explained above regarding FIG. 4, inputs x1, x2, . . . , xn are gated by respective gating functions g1, g2, . . . , gn to produce gated inputs having exponential powers. For example, if gi = 2, then the gated input corresponding to input xi is xi^2.”; (column 7 line 52) “For example, the neuron circuit of the present invention could be implemented in analog technology or by a combination of analog and digital technologies.”; Figure 5 [*Examiner notes: annotated Figure 5 image omitted].

a second […] analog circuit, configured to produce a multiplication signal, representing a product of the exponentiation signals of the two or more first translinear analog circuits: (column 6 line 15) “With reference to FIGS. 4 and 8, the gating function applicable to the inputs xi of the neuron circuit may be expressed by the following: (a) if the gating function gi is 0, pass 1 to the multiplier circuit 22 (refer to box 60 of FIG. 8); (b) if the gating function gi is 1, pass the input xi to the multiplier circuit 22 (refer to box 62); and if the gating function is greater than 1, pass the input xi raised to the gi power to the multiplier circuit 22 (refer to box 64). The neuron circuit of the embodiment shown in FIG. 4 thus generates an output of the form W x1^g1 x2^g2 . . . xn^gn [*Examiner notes: product of exponentiation signals].”; (column 7 line 52) “For example, the neuron circuit of the present invention could be implemented in analog technology or by a combination of analog and digital technologies.”; Figure 5 [*Examiner notes: annotated Figure 5 image omitted].

and a third analog circuit, configured to output an activation signal, based on said multiplication signal: (column 5 line 26) “When gi = 0, multiplier 22 stops multiplying, and the output of multiplier 22, appearing at the output latch 38, represents the output (OUT) of the neuron circuit [*Examiner notes: output activation signal based on multiplication signal].”; (column 7 line 52) “For example, the neuron circuit of the present invention could be implemented in analog technology or by a combination of analog and digital technologies.”; Figure 5 [*Examiner notes: annotated Figure 5 image omitted].

Wang does not explicitly teach:

wherein at least one node circuit comprises: two or more first translinear analog circuits wherein each of the two or more first translinear analog circuits comprises one or more transistors, tuned to operate in the transistor subthreshold region in which the transistor exhibits exponential current-voltage characteristics, thus producing said exponentiation signal in a translinear work mode; a second translinear analog circuit, configured to produce a multiplication signal.

However, Boahen teaches:

wherein at least one node circuit comprises: two or more first translinear analog circuits wherein each of the two or more first translinear analog circuits comprises one or more transistors, tuned to operate in the transistor subthreshold region in which the transistor exhibits exponential current-voltage characteristics, thus producing said exponentiation signal in a translinear work mode: (column 4 line 57) “Translinear circuits (see, e.g., U.S.
Pat. No. 3,582,689), traditionally built using bipolar transistors, are a computationally powerful subclass of CM circuits. A translinear circuit is defined as one whose operation depends on a linear relationship between transconductance and channel current in the active device. Such a circuit relies on KVL and on the exponential dependence of drain current on the gate voltage in the MOS transistor (Boltzmann's Law)”; (Boahen column 4 line 43) “The Translinear Principle can be used to synthesize a wide variety of circuits to perform both linear and non-linear operations on the current inputs, including products, quotients, and power terms with fixed exponents [*Examiner notes: predetermined exponent value]”

a second translinear analog circuit, configured to produce a multiplication signal: (column 9 line 14) “If the two transistors forming the bidirectional junction are fabricated in an isolated well, the well voltage can be used to modulate the interaction between neurons in a multiplicative fashion by applying an analog signal to the well.”; (column 9 line 34) “The bidirectional junction communication scheme is by itself also a two-input, two-output translinear circuit.”

Wang, Boahen, and the instant application are analogous because they are all directed to machine learning.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neural network circuit of Wang by using the translinear circuits of Boahen to implement multiplicative and exponential functions because (Boahen Abstract) “These circuits are useful for implementing synthetic neural systems such as associative memories and silicon retinas, such as winner-takes-all and pyramidal neuron circuits and the outer-plexiform layer of a retina” and (Boahen column 4 line 43) “The Translinear Principle can be used to synthesize a wide variety of circuits to perform both linear and non-linear operations on the current inputs, including products, quotients, and power terms with fixed exponents”.

Regarding Claim 4

Wang in view of Boahen teaches: The ANN analog circuit of claim 1 (see rejection of claim 1).

Boahen further teaches: wherein the second translinear analog circuit comprises one or more transistors, tuned to operate in a transistor subthreshold region, thus producing said multiplication signal in a translinear work mode: (Abstract) “A two transistor current-controlled current conveyor (C4) circuit is provided which exploits the translinear properties of the MOS transistor in subthreshold and uses unidirectional current signals.”; (column 4 line 43) “The Translinear Principle can be used to synthesize a wide variety of circuits to perform both linear and non-linear operations on the current inputs, including products, quotients, and power terms with fixed exponents”

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to combine Wang and Boahen for the same reasons given in claim 1 above.

Claims 5-6 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Boahen, and further in view of Daniel et al. (PGPUB No. US 20200143255 A1), herein referred to as Daniel.
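The gating behavior the examiner cites from Wang (column 6 line 15), producing an output of the form W·x1^g1·x2^g2·…·xn^gn, can be summarized with a short numeric sketch. This is illustrative only and not part of the record; the weight W and the gating exponents used below are assumed example values.

```python
def gated_input(x, g):
    """Wang's gating rule (col. 6 line 15): pass 1 to the multiplier
    if g == 0, x if g == 1, and x raised to the g power if g > 1."""
    if g == 0:
        return 1.0
    return x ** g

def neuron_output(W, xs, gs):
    """Neuron output of the form W * x1**g1 * x2**g2 * ... * xn**gn."""
    out = W
    for x, g in zip(xs, gs):
        out *= gated_input(x, g)
    return out

# Example: W = 0.5, inputs (2, 3, 4), gating exponents (2, 1, 0)
# -> 0.5 * 2**2 * 3 * 1 = 6.0
print(neuron_output(0.5, [2.0, 3.0, 4.0], [2, 1, 0]))  # 6.0
```

The combination with Boahen concerns how, not whether, each factor xi^gi and the running product are formed: in Wang the gating is realized with counter/latch and multiplier hardware, while in Boahen equivalent products and power terms arise from translinear circuits.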
Regarding Claim 5

Wang in view of Boahen teaches: The ANN analog circuit of claim 1 (see rejection of claim 1).

Wang in view of Boahen does not explicitly teach: wherein the one or more node circuits are interconnected such that the output activation signal of at least one first node circuit serves as an input signal of at least one second node circuit.

However, Daniel teaches: wherein the one or more node circuits are interconnected such that the output activation signal of at least one first node circuit serves as an input signal of at least one second node circuit: (paragraph [0182]) “With reference to FIG. 9B, in some embodiments the present invention provides for a three-layer neural network comprising an input layer having N inputs, an intermediate layer having M neural cells, and an output layer having a single neural cell. In FIG. 9B, I1, I2, . . . IN represent inputs, H1, H2, . . . HM represent interim layers neural cells, and out is the output layer neural cell.”

Wang, Boahen, Daniel, and the instant application are analogous because they are all directed to machine learning.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neural network circuits of Wang in view of Boahen with the interconnected nodes of Daniel because (Daniel paragraph [0068]) “The interactions between non-linear functions (nodes) through the analog connections (weights) lead to global behavior of the network, which cannot be observed only by the node elements.”

Regarding Claim 6

Wang in view of Boahen teaches: The ANN analog circuit of claim 1 (see rejection of claim 1).

Wang in view of Boahen does not explicitly teach: further comprising: an input layer of node circuits, adapted to receive an input vector comprising one or more input signals; and an output layer of node circuits, adapted to emit an output signal based on the activation signal of the output layer node circuits.

However, Daniel teaches: further comprising: an input layer of node circuits, adapted to receive an input vector comprising one or more input signals; and an output layer of node circuits, adapted to emit an output signal based on the activation signal of the output layer node circuits: (paragraph [0182]) “With reference to FIG. 9B, in some embodiments the present invention provides for a three-layer neural network comprising an input layer having N inputs, an intermediate layer having M neural cells, and an output layer having a single neural cell. In FIG. 9B, I1, I2, . . . IN represent inputs, H1, H2, . . . HM represent interim layers neural cells, and out is the output layer neural cell.”; (paragraph [0183]) “In this configuration, a perceptron-based network may use the logistic activation function”

wherein the ANN circuit is trained such that the output signal represents a classification of the one or more input signals: (paragraph [0189]) “Both networks were run over a training set comprising 342 samples, with a verification set of 114 samples.
However, the classification by perceptgene would need to be tagged by [1,10] rather than [0,1], to enable performing a logarithmic function. Accordingly, the activation function for perceptgene may be modified as follows”

Wang, Boahen, Daniel, and the instant application are analogous because they are all directed to machine learning. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neural network circuits of Wang in view of Boahen with the interconnected nodes of Daniel because (Daniel paragraph [0068]) “The interactions between non-linear functions (nodes) through the analog connections (weights) lead to global behavior of the network, which cannot be observed only by the node elements.”

Regarding Claim 11

Wang teaches:

A node analog hardware circuit comprising: two or more […] exponentiation analog circuits, each configured to receive a respective input signal, and produce an exponentiation signal representing a calculation of exponentiation of the respective input signal, by a predetermined, respective exponent value [*Examiner notes: The broadest reasonable interpretation of the limitation includes multiple analog circuits configured together to produce a plurality of exponentiation signals]: (column 4 line 56) “Counter/latch 20 [*Examiner notes: circuit configured to produce an exponentiation signal] serves to hold the input data for a desired number of CLK cycles in order to produce the desired gating function.”; (column 4 line 61) “As explained above regarding FIG. 4, inputs x1, x2, . . . , xn are gated by respective gating functions g1, g2, . . . , gn to produce gated inputs having exponential powers. For example, if gi = 2, then the gated input corresponding to input xi is xi^2.”; (column 7 line 52) “For example, the neuron circuit of the present invention could be implemented in analog technology or by a combination of analog and digital technologies.”; Figure 5 [*Examiner notes: annotated Figure 5 image omitted].

a […] multiplication analog circuit, configured to produce a multiplication signal, representing a product of the exponentiation signals of the two or more exponentiation analog circuits: (column 6 line 15) “With reference to FIGS. 4 and 8, the gating function applicable to the inputs xi of the neuron circuit may be expressed by the following: (a) if the gating function gi is 0, pass 1 to the multiplier circuit 22 (refer to box 60 of FIG. 8); (b) if the gating function gi is 1, pass the input xi to the multiplier circuit 22 (refer to box 62); and if the gating function is greater than 1, pass the input xi raised to the gi power to the multiplier circuit 22 (refer to box 64). The neuron circuit of the embodiment shown in FIG. 4 thus generates an output of the form W x1^g1 x2^g2 . . . xn^gn [*Examiner notes: product of exponentiation signals].”; (column 7 line 52) “For example, the neuron circuit of the present invention could be implemented in analog technology or by a combination of analog and digital technologies.”; Figure 5 [*Examiner notes: annotated Figure 5 image omitted].

and an activation analog circuit, configured to output an activation signal based on said multiplication signal: (column 5 line 26) “When gi = 0, multiplier 22 stops multiplying, and the output of multiplier 22, appearing at the output latch 38, represents the output (OUT) of the neuron circuit [*Examiner notes: output activation signal based on multiplication signal].”; (column 7 line 52) “For example, the neuron circuit of the present invention could be implemented in analog technology or by a combination of analog and digital technologies.”; Figure 5 [*Examiner notes: annotated Figure 5 image omitted].

Wang does not explicitly teach:

translinear exponentiation analog circuits wherein each of the two or more exponentiation circuits comprises one or more transistors, tuned to operate in the transistor subthreshold region in which the transistor exhibits exponential current-voltage characteristics, thus producing said exponentiation signal in a translinear work mode; a translinear multiplication analog circuit; wherein said exponentiation analog circuits and multiplication analog circuit are configured to work in a translinear work mode; and wherein said activation signal represents a predicted value of a product of a biochemical process.

However, Boahen teaches:

translinear exponentiation analog circuits wherein each of the two or more exponentiation circuits comprises one or more transistors, tuned to operate in the transistor subthreshold region in which the transistor exhibits exponential current-voltage characteristics, thus producing said exponentiation signal in a
translinear work mode; (column 4 line 57) “Translinear circuits (see, e.g., U.S. Pat. No. 3,582,689), traditionally built using bipolar transistors, are a computationally powerful subclass of CM circuits. A translinear circuit is defined as one whose operation depends on a linear relationship between transconductance and channel current in the active device. Such a circuit relies on KVL and on the exponential dependence of drain current on the gate voltage in the MOS transistor (Boltzmann's Law)”; (Boahen column 4 line 43) “The Translinear Principle can be used to synthesize a wide variety of circuits to perform both linear and non-linear operations on the current inputs, including products, quotients, and power terms with fixed exponents [*Examiner notes: predetermined exponent value]”

wherein said exponentiation analog circuits and multiplication analog circuit are configured to work in a translinear work mode, (Abstract) “A two transistor current-controlled current conveyor (C4) circuit is provided which exploits the translinear properties of the MOS transistor in subthreshold and uses unidirectional current signals. […] These circuits are useful for implementing synthetic neural systems such as associative memories and silicon retinas, such as winner-takes-all and pyramidal neuron circuits and the outer-plexiform layer of a retina.”

a translinear multiplication analog circuit (column 9 line 14) “If the two transistors forming the bidirectional junction are fabricated in an isolated well, the well voltage can be used to modulate the interaction between neurons in a multiplicative fashion by applying an analog signal to the well.”; (column 9 line 34) “The bidirectional junction communication scheme is by itself also a two-input, two-output translinear circuit.”

Wang, Boahen, and the instant application are analogous because they are all directed to machine learning.
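The exponential current-voltage characteristic that Boahen's translinear circuits exploit can be illustrated numerically. The sketch below uses a simplified weak-inversion model (the constants `I0`, `KAPPA`, and `UT` are illustrative assumptions of this sketch, not values from the cited references); it shows that equal gate-voltage steps multiply the subthreshold drain current by a constant ratio, the property that translinear loops turn into multiplication and exponentiation:

```python
import math

# Illustrative constants (assumed values, not from the cited references)
I0 = 1e-9        # pre-exponential scale current, amperes
KAPPA = 0.7      # gate coupling coefficient
UT = 0.025       # thermal voltage kT/q at room temperature, volts

def subthreshold_drain_current(vgs: float) -> float:
    """Saturation drain current of a MOS transistor in weak inversion:
    exponential in the gate-source voltage (Boltzmann's Law)."""
    return I0 * math.exp(KAPPA * vgs / UT)

# Equal voltage steps scale the current by the same ratio -- the
# exponential characteristic that translinear circuits exploit.
ratio1 = subthreshold_drain_current(0.20) / subthreshold_drain_current(0.10)
ratio2 = subthreshold_drain_current(0.30) / subthreshold_drain_current(0.20)
assert abs(ratio1 - ratio2) < 1e-6 * ratio1
```

Because current is exponential in voltage, voltages add where currents multiply, which is the basis of the translinear principle invoked in this rejection.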
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neural network circuit of Wang by using the translinear circuits of Boahen because (Boahen Abstract) “These circuits are useful for implementing synthetic neural systems such as associative memories and silicon retinas, such as winner-takes-all and pyramidal neuron circuits and the outer-plexiform layer of a retina” and (Boahen column 4 line 43) “The Translinear Principle can be used to synthesize a wide variety of circuits to perform both linear and non-linear operations on the current inputs, including products, quotients, and power terms with fixed exponents”

And Daniel teaches: and wherein said activation signal represents a predicted value of a product of a biochemical process (paragraph [0158]) “Thus, trans-linear circuits in subthreshold MOS transistors can be utilized to mimic the behavior of biochemical reactions and genetic circuits.”

Wang, Boahen, Daniel, and the instant application are analogous because they are all directed to machine learning. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neural network circuit of Wang using the biochemical process representation as taught by Daniel because (Daniel paragraph [0158]) “Moreover, the stochastic behavior of biochemical reactions and switching memristors is similar.
These analogies suggest that one can efficiently mimic large-scale genetic-processing systems in biological networks on a hybrid memristor-analog-digital electronic chip.”

Regarding Claim 12

Wang in view of Boahen and Daniel teaches: The node analog hardware circuit of claim 11 (see rejection of claim 11)

And Daniel further teaches: wherein said activation signal represents a predicted value of a product of the biochemical process, according to the Michaelis-Menten function (paragraph [0069]) “Accordingly, an interaction between proteins and DNA that controls the promoter activity (Pr, comprising a region of DNA that initiates transcription of a particular gene) can be viewed either as a node or as an activation function which, in turn, can be simply described by a Michaelis-Menten model.”

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to further modify the neuron circuits of Wang in view of Boahen and Daniel with the Michaelis-Menten function representation as taught by Daniel because (Daniel paragraph [0069]) “Accordingly, an interaction between proteins and DNA that controls the promoter activity (Pr, comprising a region of DNA that initiates transcription of a particular gene) can be viewed either as a node or as an activation function which, in turn, can be simply described by a Michaelis-Menten model.”

Regarding Claim 13

Wang in view of Boahen and Daniel teaches: The node analog hardware circuit of claim 11 (see rejection of claim 11)

And Daniel further teaches: wherein at least one input signal is a concentration parameter value, representing a concentration of a protein involved in the biochemical process.
(paragraph [0005]) “Additionally, both systems use naturally graded signals for computation (post-synaptic potential in neurons or translation of mRNA to protein concentration in cell biology).”

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to further modify the neuron circuits of Wang in view of Boahen and Daniel with the concentration value of Daniel because (Daniel paragraph [0173]) “Biological and physiological signals often have a log-linear input-output transfer function. Therefore, ADC that can directly convert the measured analog signal in a logarithm domain to a digital signal will improve the performance of biomedical devices.”

Regarding Claim 14

Wang in view of Boahen and Daniel teaches: The node analog hardware circuit of claim 11 (see rejection of claim 11)

And Daniel further teaches: wherein at least one exponent value represents a hill coefficient of a protein involved in the biochemical process (paragraph [0141]) “In the three models, cooperativity often increases the affinity for binding of the other subunits and is modeled by a power law function (x^n, where n is known as the Hill coefficient, which denotes the effective number of identical units that interact).”

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to further modify the neuron circuits of Wang in view of Boahen and Daniel with the hill coefficient of Daniel because (Daniel paragraph [0163]) “The main advantage of using memristors to implement cooperativity weight (or Hill coefficient) lies in the control of the Hill coefficient values online, and in this case, intelligent and adaptive electronics can be trained by machine learning algorithms.
Moreover, because the Hill coefficient is set by the ratio of the resistors, high resistance can be used, and in this case a small amount of power will be dissipated on the resistors.”

Regarding Claim 15

Daniel teaches: A method of implementing an Artificial Intelligence (AI) function, the method comprising: providing a network analog hardware circuit, comprising a plurality of interconnected analog hardware node circuits (paragraph [0005]) “Furthermore, both system types are composed of similar complex networks topologies (e.g. feed-forward, negative and positive feedbacks) and highly interconnected nodes.”; (paragraph [0086]) “The disclosed approach may be configured for providing a novel computational framework that inherently exists in synthetic gene networks and is implemented by translinear analog circuits and memristor devices to build adaptive systems with emergent collective parallel computational abilities in electronics and living cells.”

said plurality comprising at least (a) an input layer of node circuits, adapted to receive an input vector comprising one or more input signals, and (b) an output layer of node circuits, adapted to emit an output signal (paragraph [0182]) “With reference to FIG. 9B, in some embodiments the present invention provides for a three-layer neural network comprising an input layer having N inputs, an intermediate layer having M neural cells, and an output layer having a single neural cell. In FIG. 9B, I1, I2, . . . IN represent inputs, H1, H2, . . . HM represent interim layers neural cells, and out is the output layer neural cell.”

and training the network analog hardware circuit such that the output signal represents application of the AI function on the one or more input signals. (paragraph [0189]) “Both networks were run over a training set comprising 342 samples, with a verification set of 114 samples.
However, the classification by perceptgene would need to be tagged by [1,10] rather than [0,1], to enable performing a logarithmic function. Accordingly, the activation function for perceptgene may be modified as follows”

Daniel does not explicitly teach: wherein one or more node circuits of the plurality of node circuits comprises: two or more translinear, exponentiation analog circuits, each (i) configured to receive a respective input signal, and (ii) comprises one or more transistors tuned to operate in the transistor subthreshold region in which the transistor exhibits exponential current-voltage characteristics, thus producing an exponentiation signal representing a calculation of exponentiation of the respective input signal by a predetermined respective exponent value

However, Wang teaches: wherein one or more node circuits of the plurality of node circuits comprises: two or more […], exponentiation analog circuits, each (i) configured to receive a respective input signal, [*Examiner notes: The broadest reasonable interpretation of the limitation includes multiple analog circuits configured together to produce a plurality of exponentiation signals]; (column 4 line 56) “Counter/latch 20 [*Examiner notes: circuit configured to produce an exponentiation signal] serves to hold the input data for a desired number of CLK cycles in order to produce the desired gating function.”; (column 4 line 61) “As explained above regarding FIG. 4, inputs x1, x2, . . . , xn are gated by respective gating functions g1, g2, . . . , gn to produce gated inputs having exponential powers.
For example, if gi = 2, then the gated input corresponding to input xi is xi^2.”; (column 7 line 52) “For example, the neuron circuit of the present invention could be implemented in analog technology or by a combination of analog and digital technologies.”; Figure 5; [*Examiner notes: See figure 5 annotated below]

[Annotated Figure 5 image: media_image2.png]

Daniel, Wang, and the instant application are analogous because they are all directed to machine learning. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neural network circuitry of Daniel with the circuits of Wang because (Wang column 42 line 39) “Thus it will be appreciated that a neural network comprising artificial neurons in accordance with the present invention performs with vastly more accurate results, at a vastly improved reduction in computational time, and with a vast reduction in the cost and complexity of its implementation, whether on a semiconductor chip or in a computer program. Thus it is one advantage of the present invention to provide a neuron circuit which comprises a minimum of circuit elements so that a neural network may be built comprising a very large number of such neuron circuits, resulting in a product which is commercially competitive due to its high level of functionality and low cost of manufacture.”

Boahen teaches: translinear exponential circuits and (ii) comprises one or more transistors tuned to operate in the transistor subthreshold region in which the transistor exhibits exponential current-voltage characteristics, thus producing an exponentiation signal representing a calculation of exponentiation of the respective input signal by a predetermined respective exponent value (column 4 line 57) “Translinear circuits (see, e.g., U.S. Pat. No. 3,582,689), traditionally built using bipolar transistors, are a computationally powerful subclass of CM circuits.
A translinear circuit is defined as one whose operation depends on a linear relationship between transconductance and channel current in the active device. Such a circuit relies on KVL and on the exponential dependence of drain current on the gate voltage in the MOS transistor (Boltzmann's Law)”; (Boahen column 4 line 43) “The Translinear Principle can be used to synthesize a wide variety of circuits to perform both linear and non-linear operations on the current inputs, including products, quotients, and power terms with fixed exponents [*Examiner notes: predetermined exponent value]”

Daniel, Wang, Boahen, and the instant application are analogous because they are all directed to machine learning. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neural network circuit of Daniel in view of Wang by using the translinear circuits of Boahen to implement multiplicative and exponential functions because (Boahen Abstract) “These circuits are useful for implementing synthetic neural systems such as associative memories and silicon retinas, such as winner-takes-all and pyramidal neuron circuits and the outer-plexiform layer of a retina” and (Boahen column 4 line 43) “The Translinear Principle can be used to synthesize a wide variety of circuits to perform both linear and non-linear operations on the current inputs, including products, quotients, and power terms with fixed exponents”

Regarding Claim 16

Daniel in view of Wang and Boahen teaches: The method of claim 15 (see rejection of claim 15)

Wang further teaches: wherein one or more node circuits of the plurality of node circuits further comprises a multiplication analog circuit, configured to produce a multiplication signal, representing a product of the exponentiation signals of the two or more exponentiation analog circuits, (column 6 line 15) “With reference to FIGS.
4 and 8, the gating function applicable to the inputs xi of the neuron circuit may be expressed by the following: (a) if the gating function gi is 0, pass 1 to the multiplier circuit 22 (refer to box 60 of FIG. 8); (b) if the gating function gi is 1, pass the input xi to the multiplier circuit 22 (refer to box 62); and if the gating function is greater than 1, pass the input xi raised to the gi power to the multiplier circuit 22 (refer to box 64). The neuron circuit of the embodiment shown in FIG. 4 thus generates an output of the form W x1^g1 x2^g2 . . . xn^gn [*Examiner notes: product of exponentiation signals].”; (column 7 line 52) “For example, the neuron circuit of the present invention could be implemented in analog technology or by a combination of analog and digital technologies.”; Figure 5; [*Examiner notes: See figure 5 annotated below]

[Annotated Figure 5 image: media_image3.png]

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to combine Daniel and Boahen with Wang for the same reasons given in claim 15 above.

And Boahen further teaches: wherein said exponentiation analog circuits and multiplication analog circuit are configured to work in a translinear work mode (Abstract) “A two transistor current-controlled current conveyor (C4) circuit is provided which exploits the translinear properties of the MOS transistor in subthreshold and uses unidirectional current signals. […] These circuits are useful for implementing synthetic neural systems such as associative memories and silicon retinas, such as winner-takes-all and pyramidal neuron circuits and the outer-plexiform layer of a retina.”

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to combine Daniel and Wang with Boahen for the same reasons given in claim 15 above.
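As a numeric cross-check of the claim 16 combination, the sketch below (plain Python standing in for the analog current-mode hardware; the function names are this sketch's own) computes Wang's neuron output W x1^g1 . . . xn^gn two ways: once as an explicit product of powers, and once the translinear way, by summing scaled logarithms and exponentiating once:

```python
import math

def neuron_output(weight, inputs, exponents):
    """Wang-style gated neuron output W * x1^g1 * ... * xn^gn,
    computed the translinear way: sum scaled logs, then exponentiate."""
    log_sum = sum(g * math.log(x) for x, g in zip(inputs, exponents))
    return weight * math.exp(log_sum)

def neuron_output_direct(weight, inputs, exponents):
    """Reference computation using explicit powers and a running product."""
    out = weight
    for x, g in zip(inputs, exponents):
        out *= x ** g
    return out

xs, gs = [2.0, 3.0, 1.5], [2, 1, 0]    # g = 0 passes 1 to the multiplier
a = neuron_output(10.0, xs, gs)
b = neuron_output_direct(10.0, xs, gs)
assert abs(a - b) < 1e-9 * abs(b)      # both give 10 * 2^2 * 3^1 * 1 = 120
```

Summing logs and exponentiating once is exactly what stacked translinear loops do with voltages and currents, which is why the Wang/Boahen combination maps onto the claimed exponentiation-then-multiplication structure.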
Regarding Claim 17

Daniel in view of Wang and Boahen teaches: The method of claim 16 (see rejection of claim 16)

And Wang further teaches: wherein one or more node circuits of the plurality of node circuits comprise an activation analog circuit, configured to emit an activation signal based on said multiplication signal, and wherein the output signal comprises the activation signal of the output layer node circuits (column 5 line 26) “When gi = 0, multiplier 22 stops multiplying, and the output of multiplier 22, appearing at the output latch 38, represents the output (OUT) of the neuron circuit [*Examiner notes: output activation signal based on multiplication signal].”; (column 7 line 52) “For example, the neuron circuit of the present invention could be implemented in analog technology or by a combination of analog and digital technologies.”; Figure 5; [*Examiner notes: See figure 5 annotated below]

[Annotated Figure 5 image: media_image4.png]

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to combine Daniel and Boahen with Wang for the same reasons given in claim 16 above.

Regarding Claim 18

Daniel in view of Wang and Boahen teaches: The method of claim 17 (see rejection of claim 17)

Daniel further teaches: wherein at least one input of a first node analog hardware circuit comprises a weighted function of one or more activation signals output by one or more respective second node analog hardware circuits (paragraph [0182]) “With reference to FIG. 9B, in some embodiments the present invention provides for a three-layer neural network comprising an input layer having N inputs, an intermediate layer having M neural cells, and an output layer having a single neural cell. In FIG. 9B, I1, I2, . . . IN represent inputs, H1, H2, . . .
HM represent interim layers neural cells, and out is the output layer neural cell.”; Figure 9B

[Figure 9B image: media_image5.png]

Regarding Claim 19

Daniel in view of Wang and Boahen teaches: The method of claim 17 (see rejection of claim 17)

Daniel further teaches: wherein the AI function comprises prediction of an outcome of a biochemical process, (paragraph [0158]) “Thus, trans-linear circuits in subthreshold MOS transistors can be utilized to mimic the behavior of biochemical reactions and genetic circuits.”

and wherein at least one input signal is a concentration parameter value, representing a concentration of a protein involved in the biochemical process, (paragraph [0005]) “Additionally, both systems use naturally graded signals for computation (post-synaptic potential in neurons or translation of mRNA to protein concentration in cell biology).”

and wherein at least one exponent value represents a hill coefficient of a protein involved in the biochemical process. (paragraph [0141]) “In the three models, cooperativity often increases the affinity for binding of the other subunits and is modeled by a power law function (x^n, where n is known as the Hill coefficient, which denotes the effective number of identical units that interact).”

Regarding Claim 20

Daniel in view of Wang and Boahen teaches: The method of claim 19 (see rejection of claim 19)

Daniel further teaches: wherein at least one activation signal of the plurality of node analog hardware circuits comprises a prediction value, representing a predicted outcome of the biochemical process. (paragraph [0158]) “Thus, trans-linear circuits in subthreshold MOS transistors can be utilized to mimic the behavior of biochemical reactions and genetic circuits.”

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Boahen, and further in view of NPL reference Minch, “Multiple-Input Translinear Element Networks,” herein referred to as Minch.
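The Michaelis-Menten and Hill models that Daniel supplies for claims 12-14 and 19-20 are easy to state concretely. The sketch below uses the standard enzyme-kinetics forms (the parameter values are illustrative, not taken from Daniel): the Hill function x^n / (k^n + x^n) reduces to the Michaelis-Menten equation when the Hill coefficient n = 1, and a larger n sharpens the switch-like response around the half-saturation point:

```python
def hill_activation(x, vmax, k, n):
    """Hill-type activation: predicted product of a biochemical process
    for concentration x, with Hill coefficient n (cooperativity)."""
    return vmax * x ** n / (k ** n + x ** n)

def michaelis_menten(x, vmax, km):
    """Michaelis-Menten kinetics: the n = 1 special case of the Hill model."""
    return vmax * x / (km + x)

# With n = 1 the two models agree at every concentration.
for x in (0.1, 1.0, 10.0):
    assert abs(hill_activation(x, 2.0, 0.5, 1) - michaelis_menten(x, 2.0, 0.5)) < 1e-12

# At x = k the output is half of vmax regardless of the Hill coefficient.
assert abs(hill_activation(0.5, 2.0, 0.5, 4) - 1.0) < 1e-12
```

This is the sense in which an input signal (a concentration) and an exponent value (a Hill coefficient) determine the activation signal in the claimed node circuits.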
Regarding Claim 7

Wang in view of Boahen teaches: The ANN analog circuit of claim 1 (see rejection of claim 1)

Wang does not explicitly teach: wherein each of the two or more first analog circuits comprise an adjustable weight hardware element, determining the exponent value

However, Minch teaches: wherein the two or more first analog circuits comprise an adjustable weight hardware element, determining the exponent value (page 21 column 2 section II) “Inspired originally by Shibata and Ohmi's neuron MOS concept [6], we recently introduced a new translinear circuit primitive, called the multiple-input translinear element (MITE) [7], [8]. Such an element produces an output current I that is exponential in a weighted sum of its K input voltages, V1, . . . , VK, given by”

Wang, Minch, and the instant application are analogous because they are all directed to machine learning. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neural network circuit of Wang with the adjustable weight hardware element taught by Minch because (Minch page 21 abstract) “We describe a new class of translinear circuits that accurately embody product-of-power-law relationships in the current signal domain. We call such circuits multiple-input translinear element (MITE) networks. A MITE is a circuit element, which we defined recently, that produces an output current that is exponential in a weighted sum of its input voltages. We describe intuitively the basic operation of MITE networks and provide a systematic matrix technique for analyzing the nonlinear relationships implemented by any given circuit”

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Boahen, Minch, and Daniel.
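Minch's MITE, an output current exponential in a weighted sum of its input voltages, is what turns adjustable weights into adjustable exponents. The sketch below is an idealized numeric model (the constants `UT` and `IS` and the helper `log_encode` are assumptions of this sketch, not Minch's circuit): when each input voltage encodes the logarithm of an input current, the weighted-sum exponential collapses into a product of power laws whose exponents are the weights:

```python
import math

UT = 0.025   # thermal voltage, volts (assumed room temperature)
IS = 1e-9    # scale current, amperes (illustrative)

def mite_output(voltages, weights):
    """Idealized MITE: I_out = Is * exp(sum_k w_k * V_k / Ut)."""
    return IS * math.exp(sum(w * v for v, w in zip(voltages, weights)) / UT)

def log_encode(current):
    """Encode a current as a voltage, V = Ut * ln(I / Is)."""
    return UT * math.log(current / IS)

# Feed log-encoded currents through the MITE: the output equals the
# product of power laws Is * prod_k (I_k / Is)^(w_k).
currents = [2e-9, 5e-9]
weights = [2.0, 0.5]             # exponents set by the weight elements
out = mite_output([log_encode(i) for i in currents], weights)
expected = IS * (currents[0] / IS) ** 2.0 * (currents[1] / IS) ** 0.5
assert abs(out - expected) < 1e-9 * expected
```

Changing a weight changes an exponent, which is the mapping the examiner relies on in reading the adjustable weight hardware element onto the claimed exponent value.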
Regarding Claim 8

Wang in view of Boahen and Minch teaches: The ANN analog circuit of claim 7 (see rejection of claim 7)

Wang in view of Boahen and Minch does not explicitly teach: further comprising a training module, configured to train the ANN by: receiving an input vector, comprising one or more input signals of the two or more first analog circuits; receiving supervisory data, corresponding to the input vector, wherein said supervisory data represents a desired value of one or more activation signals, in response to the input vector; and adjusting the weight hardware element of at least one first analog circuit, based on the input vector and the supervisory data

However, Daniel teaches: further comprising a training module, configured to train the ANN by: receiving an input vector, comprising one or more input signals of the two or more first analog circuits; receiving supervisory data, corresponding to the input vector, wherein said supervisory data represents a desired value of one or more activation signals, in response to the input vector; and adjusting the weight hardware element of at least one first analog circuit, based on the input vector and the supervisory data. (paragraph [0142]) “In summary, a perceptron is a binary classifier that makes its decision based on a linear predictor function combining a set of weights with the input vector.”; (paragraph [0167]) “An innovative learning algorithm can be developed based on a perceptgene abstract model.
Based on this, SMNs and AMNs can build adaptive biological systems with supervised evolutionary abilities [*Examiner notes: supervisory data], as well as artificial, intelligent ultra-low power, bioinspired translinear electronic circuits for a new era of robust big data computing […] In some embodiments, the present learning algorithm is based on two features: (i) computation of the perceptgene depends on the cooperativity weights and therefore, by adjusting their values [*Examiner notes: adjusting the weight hardware element] (Hill coefficient), a wide range of possible target output can be obtained for specific inputs, and (ii) the training rule minimizes the output error, using its gradient descent in a log-linear domain. The perceptgene-based learning algorithm is shown in FIG. 5A and is given by the following equations:”; (paragraph [0168]) “Equation (8) calculates the error between the desired data (YD) and the actual perceptgene output (y)”

Wang, Boahen, Minch, Daniel, and the instant application are analogous because they are all directed to machine learning. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neural network circuits of Wang in view of Boahen and Minch with the training of Daniel because (Daniel paragraph [0167]) “(ii) the training rule minimizes the output error, using its gradient descent in a log-linear domain.”

Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Boahen, Minch, Daniel, and further in view of Mallinson (PGPUB no. US 20200302280 A1).
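Daniel's training rule, gradient descent on the output error in a log-linear domain with Hill coefficients as the adjustable weights, can be sketched loosely as follows. This is an illustration of the idea, not a transcription of Daniel's Equation (8); the learning rate, step count, and function names are assumptions of this sketch:

```python
import math

def perceptgene_output(xs, ws):
    """Product-of-powers node: y = prod_i x_i^(w_i)."""
    return math.exp(sum(w * math.log(x) for x, w in zip(xs, ws)))

def train_log_domain(xs, y_desired, ws, lr=0.1, steps=200):
    """Gradient descent on the log-domain error e = ln(y_desired) - ln(y).
    Since ln(y) = sum_i w_i * ln(x_i), the gradient of ln(y) with respect
    to each weight w_i is simply ln(x_i)."""
    for _ in range(steps):
        e = math.log(y_desired) - math.log(perceptgene_output(xs, ws))
        ws = [w + lr * e * math.log(x) for x, w in zip(xs, ws)]
    return ws

xs = [2.0, 3.0]
y_desired = perceptgene_output(xs, [1.5, 0.5])   # target from hidden exponents
ws = train_log_domain(xs, y_desired, ws=[1.0, 1.0])
assert abs(perceptgene_output(xs, ws) - y_desired) < 1e-6
```

Adjusting the exponents (cooperativity weights) until the log-domain error vanishes corresponds to the claimed training module adjusting the weight hardware elements based on the input vector and the supervisory data.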
Regarding Claim 9

Wang in view of Boahen, Minch and Daniel teaches: The ANN analog circuit of claim 8 (see rejection of claim 8)

Wang in view of Boahen, Minch and Daniel does not explicitly teach: wherein at least one first analog circuit comprises an adjustable bias hardware element, determining a bias value of the first analog circuit, and wherein the training module is further configured to adjust the bias hardware element of at least one first analog circuit, based on the input vector and the supervisory data.

However, Mallinson teaches: wherein at least one first analog circuit comprises an adjustable bias hardware element, determining a bias value of the first analog circuit, and wherein the training module is further configured to adjust the bias hardware element of at least one first analog circuit, based on the input vector and the supervisory data. (paragraph [0052]) “While weighting elements [*Examiner notes: bias hardware element] of fixed impedance may be used in a weighting circuit, in the illustrated neuron 800 the impedance elements R1, R2 and R3 are adjustable impedances [*Examiner notes: adjust the hardware element] that provide weights to the values of inputs A1 to A3. In one embodiment, the values of the impedance elements R1 to R3 may be programmed by signals on the W1, W2 and W3 busses respectively. (For any number Ai of inputs, there will typically be an equal number Ri of impedances and Wi of control busses.)”; (paragraph [0056]) “In the present approach the offset C is implementable with a fixed input and variable weight [*Examiner notes: bias]. For example, in circuit 800, if the input signal A3 is tied low, i.e., at −1, then Equation 1 becomes: Y=F(A1*W1+A2*W2−W3)”

Wang, Boahen, Minch, Daniel, Mallinson, and the instant application are analogous because they are all directed to machine learning.
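Mallinson's offset trick, a fixed input whose adjustable weight becomes the bias term, can be mirrored in a few lines. In the sketch below (plain Python with an assumed step activation; the resistor-programming busses are abstracted away), tying the third input at −1 turns its weight W3 into the bias of Y = F(A1*W1 + A2*W2 − W3):

```python
def neuron(inputs, weights, activation):
    """Weighted-sum neuron: Y = F(sum_i A_i * W_i)."""
    return activation(sum(a * w for a, w in zip(inputs, weights)))

step = lambda s: 1 if s >= 0 else 0   # assumed activation for this sketch

# Tie the third input low (-1): its adjustable weight W3 becomes a bias,
# so Y = F(A1*W1 + A2*W2 - W3), matching Mallinson's Equation 1 rewrite.
a1, a2 = 0.4, 0.7
w1, w2, w3 = 1.0, 2.0, 1.5
with_fixed_input = neuron([a1, a2, -1], [w1, w2, w3], step)
with_explicit_bias = step(a1 * w1 + a2 * w2 - w3)
assert with_fixed_input == with_explicit_bias
```

Because the bias is just another programmable weight, the same adjustment mechanism the training module uses for weights serves as the claimed adjustable bias hardware element.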
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neuron circuits of Wang in view of Boahen, Minch, and Daniel with the adjustable impedance taught by Mallinson because (Mallinson abstract) “The use of a hybrid delta modulator of the present approach provides a simpler solution and better performance than many prior art neurons.”

Regarding Claim 10

Wang in view of Boahen, Minch and Daniel teaches: The ANN analog circuit of claim 8 (see rejection of claim 8)

Wang in view of Boahen, Minch and Daniel does not explicitly teach: wherein adjusting the weight hardware element comprises adjusting an impedance of the weight hardware element, so as to redetermine the exponent value of the relevant first analog circuit.

However, Mallinson teaches: wherein adjusting the weight hardware element comprises adjusting an impedance of the weight hardware element, so as to redetermine the exponent value of the relevant first analog circuit. (paragraph [0052]) “While weighting elements [*Examiner notes: weight hardware element] of fixed impedance may be used in a weighting circuit, in the illustrated neuron 800 the impedance elements R1, R2 and R3 are adjustable impedances [*Examiner notes: adjust the hardware element] that provide weights to the values of inputs A1 to A3. In one embodiment, the values of the impedance elements R1 to R3 may be programmed by signals on the W1, W2 and W3 busses respectively. (For any number Ai of inputs, there will typically be an equal number Ri of impedances and Wi of control busses.)”; [*Examiner notes: The combination of the exponential weighting taught by Minch with the adjustable impedance weight elements of Mallinson teaches on the limitation]

Wang, Boahen, Minch, Daniel, Mallinson, and the instant application are analogous because they are all directed to machine learning.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the present invention to modify the neuron circuits of Wang in view of Boahen, Minch and Daniel with the adjustable impedance taught by Mallinson because (Mallinson abstract) “The use of a hybrid delta modulator of the present approach provides a simpler solution and better performance than many prior art neurons.”

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ezra J Baker whose telephone number is (703)756-1087. The examiner can normally be reached Monday - Friday 10:00 am - 8:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /E.J.B./Examiner, Art Unit 2126 /DAVID YI/Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

Oct 28, 2022
Application Filed
Aug 07, 2025
Non-Final Rejection — §103, §112
Dec 07, 2025
Response Filed
Feb 17, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585964
EXHAUSTIVE LEARNING TECHNIQUES FOR MACHINE LEARNING ALGORITHMS
2y 5m to grant Granted Mar 24, 2026
Patent 12579477
FEATURE SELECTION USING FEEDBACK-ASSISTED OPTIMIZATION MODELS
2y 5m to grant Granted Mar 17, 2026
Patent 12505379
COMPUTER-READABLE RECORDING MEDIUM STORING MACHINE LEARNING PROGRAM, MACHINE LEARNING METHOD, AND INFORMATION PROCESSING DEVICE OF IMPROVING PERFORMANCE OF LEARNING SKIP IN TRAINING MACHINE LEARNING MODEL
2y 5m to grant Granted Dec 23, 2025
Patent 12373674
CODING OF AN EVENT IN AN ANALOG DATA FLOW WITH A FIRST EVENT DETECTION SPIKE AND A SECOND DELAYED SPIKE
2y 5m to grant Granted Jul 29, 2025
Based on this examiner's 4 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+77.8%)
4y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
