DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is responsive to communication(s): original application filed on 07/20/2023. Claims 1-20 are pending. Claims 1, 8, and 15 are independent.

Drawings

Figure 1 should be designated by a legend such as --Prior Art-- because only that which is old is illustrated (see ¶ [0009]: "Figure 1 is a block diagram of an artificial neuron in a traditional artificial neural network"). See MPEP § 608.02(g). Corrected drawings in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. The replacement sheet(s) should be labeled "Replacement Sheet" in the page header (as per 37 CFR 1.84(c)) so as not to obstruct any portion of the drawing figures. If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: "110" in FIG. 1 and "208" in FIG. 2. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application.
Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: "306" in ¶ [0026]. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification

The disclosure is objected to because of the following informalities: in ¶ [0060], "Comparator circuit 606 is implemented as an op-amp with the non-inverting input (+) coupled to the first voltage (Vaf) and the inverting input (-) coupled to the second voltage first voltage (Vbf) …" appears to be "Comparator circuit 606 is implemented as an op-amp with the non-inverting input (+) coupled to the first voltage (Vaf) and the inverting input (-) coupled to the second voltage (Vbf) …". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.— The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2 and 9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 2 and 9 recite the limitation "...
wherein the output node of the max-pooling neuron is coupled to the first input train in response to a spiking rate of the first input train being greater than a spiking rate of the second input train, and wherein the output node of the max-pooling neuron is coupled to the second input train in response to a spiking rate of the second input train being greater than a spiking rate of the first input train" in lines 1-5, which renders these claims indefinite because it is unclear whether the two instances of "a spiking rate of the first input train" are the same or different and whether the two instances of "a spiking rate of the second input train" are the same or different. Clarification is required.

Allowable Subject Matter

Claims 1, 3-8, and 10-20 are allowed. Claims 2 and 9 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action.
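For illustration only (claims 2 and 9 recite circuit behavior, not software), the rate-based routing recited in those claims can be sketched in Python; the function name, spike-train representation, and tie-breaking behavior are assumptions, since the claims do not specify them:

```python
def max_pool_select(first_train, second_train, window):
    """Route whichever input spike train has the higher spiking rate.

    Spike trains are modeled as lists of spike timestamps; the rate is
    estimated by counting spikes inside a (start, end) window. This is a
    hypothetical software analogy to the claimed switch behavior.
    """
    start, end = window
    rate_first = sum(1 for t in first_train if start <= t < end)
    rate_second = sum(1 for t in second_train if start <= t < end)
    # Claims 2 and 9 only define the two strict-inequality cases;
    # the tie case here is an arbitrary choice.
    return first_train if rate_first > rate_second else second_train
```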
The following is a statement of reasons for the indication of allowable subject matter: In regard to independent Claims 1, 8, and 15, the prior art of record, either singularly or in combination, does not teach or suggest the combination of claimed elements including "a max-pooling neuron, comprising: a first integrator circuit configured to filter a first input train from a first neuron of a previous layer and generate a first filtered input train; a second integrator circuit configured to filter a second input train from a second neuron of the previous layer and generate a second filtered input train; a comparator circuit configured to amplify a difference between the first filtered input train and the second filtered input train and generate an amplified differential signal; a Schmitt trigger circuit configured to generate a binary output signal at an output terminal of the Schmitt trigger circuit based on the amplified differential signal; and a pair of switches comprising a first switch and a second switch, the pair of switches having a common first terminal coupled to an output node of the max-pooling neuron, the pair of switches having a common control terminal coupled to the output terminal of the Schmitt trigger circuit, a second terminal of the first switch coupled to the first input train, and a second terminal of the second switch coupled to the second input train", "a max-pooling layer comprising a max-pooling neuron, the max-pooling layer comprising: a first integrator circuit configured to filter a first input train from a first neuron of a previous layer and generate a first filtered input train; a second integrator circuit configured to filter a second input train from a second neuron of the previous layer and generate a second filtered input train; a comparator circuit configured to amplify a difference between the first filtered input train and the second filtered input train and generate an amplified differential signal; a Schmitt trigger
circuit configured to generate a binary output signal based on the amplified differential signal; and a pair of switches comprising a first switch and a second switch, the pair of switches having a common first terminal coupled to an output node of the max-pooling neuron, the pair of switches having a common control terminal configured to receive the binary output signal of the Schmitt trigger circuit, a second terminal of the first switch coupled to the first input train, and a second terminal of the second switch coupled to the second input train", or "a neural network, comprising: a first layer having a first neuron and a second neuron, the first neuron configured to provide a first input train, the second neuron configured to provide a second input train; and a max-pooling layer having a max-pooling neuron, the max-pooling neuron comprising: a first integrator circuit configured to filter the first input train and generate a first filtered input train, a second integrator circuit configured to filter the second input train and generate a second filtered input train, a comparator circuit configured to amplify a difference between the first filtered input train and the second filtered input train and generate an amplified differential signal, a Schmitt trigger circuit configured to generate a binary output signal based on the amplified differential signal, and a pair of switches comprising a first switch and a second switch, the pair of switches having a common first terminal coupled to an output node of the max-pooling neuron, the pair of switches having a common control terminal configured to receive the binary output signal of the Schmitt trigger circuit, a second terminal of the first switch coupled to the first input train, and a second terminal of the second switch coupled to the second input train" when interpreted as a whole. Taylor et al.
("CMOS Implementation of Spiking Equilibrium Propagation for Real-Time Learning", 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Jun. 13-15, 2022, pp. 283-286) discloses in Abstract of Page 283 that (1) equilibrium propagation (EqProp) and its adaptations for spiking neural networks (SNN) are presented as biologically plausible alternatives to back-propagation (BP) which describe a potential low-energy means of learning complex tasks in neuromorphic hardware; (2) these algorithms are conducive to extremely efficient analog computing approaches, but a detailed analog circuit implementation and architectural outline have not yet been presented; (3) furthermore, current theoretical analog designs of EqProp have not addressed synapse circuit-level implementations capable of simultaneous sensing and weight updates for real-time learning; and (4) designed and simulated a circuit-level implementation of a spiking EqProp neuron and synapse in CMOS 65 nm technology capable of concurrent inference and weight updates for real-time learning.
Taylor further discloses in Section I of Page 283 that (1) a recent learning framework called Equilibrium Propagation (EqProp) avoids the computational expense of BP through more biologically plausible local update rules; (2) the EqProp algorithm, used to train arbitrary symmetric recurrent neural networks, implements both inference and learning with simple neuronal dynamics in a single network, thereby removing the need for the nonlocal storage and operations required by BP; (3) furthermore, EqProp has been shown to approximate the gradients of recurrent neural networks (RNNs) trained with back-propagation-through-time (BPTT); (4) while the algorithm shows extreme promise in making online learning architectures more efficient, there is currently a lack of circuit-level hardware implementations; (5) analog memristor-based implementations have been suggested, and a theoretical design for a spiking version of EqProp has also been presented; (6) however, specific peripheral circuitry for computing weight updates and modulating conductance has not yet been proposed; (7) demonstrate a circuit design of an EqProp-based neuron, explore a candidate synaptic array technology for real-time, concurrent inference and weight updates, and propose algorithm simplifications for robust weight update; and (8) simulate a pair of neuron circuits connected by a synaptic cell to validate functionality of the design.
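As a non-authoritative sketch of the two-phase EqProp update that Taylor builds on (a weight change proportional to the difference of neuron-rate products measured at the nudge and free fixed points), assuming hypothetical names and a simple lr/β scaling:

```python
import numpy as np

def eqprop_weight_update(rho_free, rho_nudge, beta, lr=0.1):
    """EqProp-style update: dW_ij ~ (rho_i*rho_j)_nudge - (rho_i*rho_j)_free.

    rho_free / rho_nudge: firing-rate vectors at the free and nudge
    fixed points. Illustrative only; not Taylor's circuit implementation.
    """
    corr_free = np.outer(rho_free, rho_free)     # rate products, free phase
    corr_nudge = np.outer(rho_nudge, rho_nudge)  # rate products, nudge phase
    return (lr / beta) * (corr_nudge - corr_free)
```

Note that when the nudge phase leaves the rates unchanged (zero prediction error), the update is zero, matching the fixed-point intuition in the summary above.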
Taylor also discloses in Section II of Pages 283-284 that (1) EqProp was first proposed as a biologically plausible alternative to BP which is considered to be biologically implausible, because it requires a dedicated computational circuit external to the neurons and synapses as well as separate training and inference phases that require buffering of activations; (2) EqProp mitigates the first disadvantage by utilizing dynamics of an RNN to approximate a gradient for learning model parameters; (3) the first phase of EqProp holds the input neurons at a constant activation (spike rate) and allows the network to relax to a fixed-point state of activations (i.e. the “free” phase); (4) at the end of this phase, the network is nudged from its fixed-point by adding the prediction error of each output neuron’s activation, multiplied by a nonzero “nudge” factor β, to its respective activation (i.e. the “nudge” phase); (5) the network relaxes to a new and nearby fixed-point with a lower prediction error, and the change in fixed-point activations for a pair of neurons is used to update their mutual synapse; (6) this originally required buffering activations during the second phase as in BP, but it was later found that synaptic parameters could be updated continually after reaching the first fixed-point for each time-step until reaching the second fixed-point; (7) this removes the need to buffer activations and enables concurrent inference and learning; (8) biological plausibility was further improved by outlining EqProp for continuous time with unidirectional synapses; (9) EqProp can also be used to train SNNs; (10) EqSpike is local in space, which eliminates the need to buffer error gradients and activations, and local in time, which allows synapses to be directly updated through neural events (i.e. 
action potentials, or spikes); (11) EqProp originally used a rate-based formulation where activations representing analog spike rates evolved smoothly over time; (12) the EqProp learning rule, for a network of leaky-integrate-and-fire (LIF) neurons described by a Hopfield-like energy function, is as follows: ΔW_ij ∼ (ρ_i ρ_j)_n − (ρ_i ρ_j)_f, where ρ_i and ρ_j represent the rates of neurons i and j, respectively, and the product ρ_i ρ_j is measured at equilibrium for both the nudge (n) and free (f) phases, respectively; (13) additionally, W_ij represents the synaptic weight that connects neuron i to neuron j; (14) this learning rule can be adapted to the case when synapses are continuously updated during the nudge phase: dW_ij/dt ∼ ρ_i ρ̇_j + ρ_j ρ̇_i; (15) therefore, the new learning rule can be described as follows: each time neuron i spikes, the weight should be updated by a quantity proportional to the derivative of the rate of neuron j (ρ̇_j), and vice-versa; (16) to determine the spike rate derivative for a neuron, ρ̇, EqSpike performs leaky-integration of the emitted spike train, delays the signal, and subtracts it from the non-delayed signal; (17) lastly, a lowpass filter (LPF) smooths out the difference; (18) as the implementation is analog and continuous in time, pass the output spike train through a two-stage LPF to represent the analog spike rate and find the gradient with a differential amplifier circuit; (19) assume unidirectional synapses to simplify synaptic circuits and enhance biological plausibility, and therefore, a synapse only updates when its presynaptic neuron spikes, and with a value proportional to the derivative of its postsynaptic neuron's spike rate; (20) in an array of synapses from layer i to layer j, a synapse updates whenever its i neuron spikes, proportionally to the derivative of the rate of its j neuron; (21) furthermore, an array of synapses from layer j to layer i would also exist; (22) finally, to reduce the
nonlinearity of synaptic updates in the circuit due to nonlinear transistor behavior, constrain the spike rate gradients to be ternary: the derivative of a neuron's spike rate is positive, negative, or neutral, only; (23) a fully-software implementation of EqSpike in PyTorch with ternary gradients was compared to the results for full-precision gradients; (24) ternary gradients can be supported with little or no change in accuracy for shallow networks; and (25) while another work has trained a binary neural network with EqProp using ternary gradients, these gradients update a full-precision momentum that flips its binary synapse above a threshold. Taylor further teaches in Section III with FIG. 1 of Pages 284-285 that (1) use of CMOS synapses is motivated by the need for a durable technology that can be repeatedly written without performance loss; (2) while memristor technology has shown promise for energy efficient inference, its low endurance is concerning for learning applications in which weights must change often; (3) furthermore, real-time, concurrent inference and weight updates require a synaptic cell which can update conductance without ceasing sensing functions; (4) the designs therefore were developed with 65 nm CMOS technology to fit these criteria; (5) the neuron circuit is comprised of: the integrate-and-fire circuit (IFC), spike rate and gradient computing, and bias generation, shown in Figure 1(a)-(d); (6) the IFC (Figure 1(a)) receives current from a synaptic array or current source and integrates it on a small capacitor; (7) once the capacitor voltage reaches a threshold, a Schmitt trigger switches high, activating an NMOS to discharge the capacitor; (8) the trigger's output remains high until the capacitor is discharged, then switches low; (9) the rapid switching of the trigger generates voltage spikes that can be driven to the next stage of the circuit; (10) an analog representation of the spike rate's gradient is needed to calculate the
weight updates for its synapses; (11) utilize a derivative circuit (an RC feedback circuit on a differential amplifier) on the analog spike rate signal; (12) first, the output spikes of the IFC are leaky-integrated with a two-stage LPF formed from two diode-connected NMOSs in series, with capacitors from each diode's output to ground; diodes allow the capacitor voltages to increase rapidly with each spike and decay slowly between spikes; (13) a single-stage amplifier isolates the resulting analog spike rate from the next stage of gradient computation; (14) this circuit is shown in Figure 1(b); (15) next, the analog spike rate connects to the negative input of a differential amplifier through a capacitor; (16) the positive input is tied to V_DD, and the output is connected through a feedback resistor to the negative input; (17) the output signal is proportional to the derivative of the spike rate; (18) to support ternary gradients, measured the offset of the derivative circuit and use two comparators with thresholds 20 mV above (V_th1) and below (V_th2) the offset to detect positive and negative gradients, respectively; (19) a gradient of +1 is represented when both comparators are high, 0 when only one is high, and -1 when both are low; (20) these components are shown in Figure 1(c); (21) the synaptic circuit is based on the resistive processing unit (RPU); (22) cell conductance can be updated in real-time without disrupting sensing function; (23) a single synapse is shown in Figure 1(g); (24) the presynaptic and postsynaptic terminals of the synaptic transistor are connected to their respective neurons, with one driving spikes (V_in(t)) and the other sensing current (I_out(t)); (25) V_bp(t) and V_bn(t) are controlled by the bias circuit of the postsynaptic neuron, indicating the gradient of the spike rate; (26) the update signal (V_ΔW(t)) is driven by the output spikes of the presynaptic neuron; (27) while this signal is identical to V_in(t), separate inverters ensure adequate drive; (28) the conductance of the synaptic transistor is controlled by the voltage on the capacitor attached to its gate; (29) as the capacitor voltage decreases, the conductance increases and vice-versa; (30) the capacitor is charged by a PMOS connecting it to V_DD or discharged by an NMOS connecting it to GND, depending on the V_bp(t), V_bn(t), and V_ΔW(t) signals; (31) when V_bp(t) = V_DD and V_bn(t) > GND, the NMOS conducts whenever V_ΔW(t) spikes; (32) when V_bn(t) = GND and V_bp(t) < V_DD, then the PMOS conducts; (33) when V_bn(t) = GND and V_bp(t) = V_DD, neither the PMOS nor NMOS conducts, holding the capacitor voltage steady; (34) an example waveform of weight updates is shown in Figure 1(h); (35) the bias circuit is shown in Figure 1(d) and its internal mechanisms in Figure 1(e); (36) when a high V_grad is asserted, P2 ties V_bp to V_DD; (37) N2 is off, but the current source through N1 pulls the gate/drain voltage over GND; (38) the same is true in the opposite direction, with the current through P1 pulling V_bp below V_DD; (39) to achieve three stable states for positive, negative, or neutral gradients, the widths of P2 and N2 are increased; (40) an operating region of V_grad appears (around V_grad = 450 mV) in which V_bp ≈ V_DD and V_bn ≈ GND; (41) this turns off the inverters connected to the capacitor to prevent (dis)charging; (42) a PMOS and NMOS are attached from V_DD to GND and V_grad is attached to their connecting drains; (43) the lower threshold comparator (V_th2) drives the PMOS, and the higher threshold comparator (V_th1) drives the NMOS; (44) a positive gradient asserts both comparators high, activating the PMOS only and connecting V_grad to V_DD; (45) a negative gradient asserts both comparators low, activating the NMOS only and connecting V_grad to GND; (46) when only the lower-threshold comparator is asserted high for a neutral gradient, both the
PMOS and NMOS are off; (47) this creates a voltage divider between V_DD and GND, setting V_grad ≈ 450 mV; and (48) the bias circuit behavior is shown in Figure 1(f). LORRAIN et al. (US 2020/0210807 A1, pub. date: 07/02/2020) discloses in ABSTRACT and ¶¶ [0020]-[0045] that (1) propose a computer based on a spiking neural network, the network comprising layers of neurons, the inputs and outputs of each neuron being coded by spikes, the input spikes being received in sequence at the input of a neuron, each neuron of the network comprising a receptive field comprising at least one synapse, wherein each synapse is associated with a synapse address; (2) the computer is configured so as to compute, for each layer of neurons, the output value of each neuron in response to at least one input spike; (3) the network further comprises at least one maximum pooling layer ("MaxPooling" layer), each pooling layer comprising maximum pooling neurons, each maximum pooling neuron being able to deliver an output spike in response to the reception of an input spike on the most active synapse of its receptive field (i.e.
synapse of the receptive field of the maximum pooling neuron having the highest frequency); (4) the computer comprises a device for activating the neurons of the maximum pooling layer; (5) in response to an input spike received by a neuron of the maximum pooling layer, the device is configured so as to receive the address of the synapse associated with the received input spike, called activated synapse address; (6) the device comprises an address comparator configured so as to compare the address of the activated synapse with a set of reference addresses, wherein each reference address is associated with a hardness value and with a pooling neuron; and (7) the device activates a neuron of the maximum pooling layer if the address of the activated synapse is equal to one of the reference addresses and the hardness value associated with this reference address has the highest value from among the hardness values associated with the other reference addresses of the set. LORRAIN further discloses in ¶¶ [0229]-[0236] with FIGS. 12-13 that (1) FIG.
12 shows the device for triggering a neuron of a maximum pooling layer (MAX-pooling) 10 in the form of a digital circuit; (2) the circuit comprises the reference address memory 30, the memory storing the hardness value 32, and the memory storing the initialization value 33; (3) the circuit 10 comprises a counter block 70 configured so as to select the penalty value -b, reward value +a, or the initialization value INIT and update the hardness value D on the basis of the comparison between the activated synapse address @S and the reference address @Smax and the sign of the hardness value D; (4) the block 70 comprises a multiplexer 700 initializing the hardness value D and a multiplexer 701 configured so as to increment or decrement the hardness value D; (5) the block 70 further comprises a comparator 702 for comparing the hardness value D with zero; (6) the comparison between the activated synapse address @S and the reference address @Smax is performed using a comparator block 71 comprising a comparator 710; (7) the results of the comparator block 71 make it possible to activate the counter block 70 in order to update the hardness D; (8) AND OR logic gates (between blocks 70 and 71) make it possible to update the output spike (spike), the signal stop_out on the basis of the results of the comparisons performed by blocks 70, 71, the input signals go and/or stop_in; (9) FIG. 13 is an analog circuit showing one exemplary implementation of the method for triggering a neuron of a maximum pooling layer (MAX-Pooling); (10) FIG. 13 is functionally similar to FIG. 12, with a hardness value test and update block 70 and a comparison block 71 for comparing the activated synapse address @S and the reference address @Smax; (11) however, in FIG. 13, these blocks are implemented in analog form (analog signals); (12) in particular, the penalty/benefit values a and b are implemented using resistors 802 and 803, whereas the memories 30, 31 storing @Smax, D are implemented by capacitors.
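As a rough behavioral approximation (not LORRAIN's actual circuit, and with hypothetical names and unit reward/penalty values), the counter mechanism summarized above — compare the activated synapse address @S with the stored @Smax, reward or penalize the hardness value D, and test D against zero — can be modeled as:

```python
def update_max_pool_unit(state, activated_addr, reward=1, penalty=1):
    """One update step of a hardness-counter max-pooling unit.

    state holds 's_max' (stored most-active synapse address @Smax) and
    'hardness' (activity counter D). A matching spike rewards D and fires
    the unit; a mismatch penalizes D, and when D reaches zero the unit
    adopts the new address as the most active synapse.
    """
    if activated_addr == state["s_max"]:
        state["hardness"] += reward      # reward value +a
        return True                      # output spike: most-active synapse hit
    state["hardness"] -= penalty         # penalty value -b
    if state["hardness"] <= 0:
        state["s_max"] = activated_addr  # adopt new most-active synapse
        state["hardness"] = reward       # re-initialize the counter
    return False
```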
LORRAIN also discloses in ¶¶ [0247]-[0255] with FIGS. 15-16 that (1) FIG. 15 is a depiction, in the form of a digital circuit, of the device 10 for triggering maximum pooling neurons MP; (2) the device 10 comprises P maximum pooling units 1, each associated with a reference address @Smax_k, a hardness value D_k, and with an initialization value INIT_k; (3) in response to receiving an input spike on a synapse of the receptive field of the neuron MP, each maximum pooling computation unit 1 thus receives the address of the activated synapse @S at input; (4) each maximum pooling computation unit 1 further receives the control parameters stop_in and go described above at input; (5) each maximum pooling computation unit 1 delivers an output spike value Spike_i = Spike[i] and the control parameter stop_out described above; (6) the device 10 comprises the maximization block 15, denoted "Max D", configured so as to determine the hardness value D_1 from among all of the hardness values stored in memory that has the maximum value, and determine the address of this hardness value @MaxD = D_1; (7) the device 10 also comprises a multiplexer 16 implementing the output spike computation block, controlled by the control signal @MaxD, that is configured so as to select the value of the output spike Spike[@MaxD] that corresponds to the address @MaxD from among the P spike values that are delivered at output by the P maximum pooling units; (8) the hardness values D are used by the output spike computation block 16 to select the output spike to be considered, Spike_out, that is to say the spike corresponding to the address of the maximum hardness value from among the hardness values D_1, D_2, D_3 as determined by the maximum hardness value address computation block 15, denoted "Max_D"; (9) FIG. 16 is a depiction, in the form of a digital circuit, of a unit 1 for triggering a neuron MP, denoted BM; (10) the unit BM is configured so as to determine the address @Smax, the input and output stop parameters stop_in and stop_out and the general activation input go that make it possible to prevent two different blocks BM from storing the same address @Smax; (11) propose a novel maximum pooling neuron model for computing, through approximation, the response of a neuron of a maximum pooling layer to an input spike; (12) based on an incremental/decremental counter 32 and a memory 31 for storing the value of the most active synapse @Smax, compute the response of a neuron of a maximum pooling layer to an input spike with an optimized number of hardware resources, compatible with the resources conventionally allocated to a neuron IF, while at the same time maintaining good performance; (13) while conventional solutions require N*C bits in memory, the proposed embodiments require only log2(N)+C bits (N denoting the number of synapses and therefore the number of elements of the pool and C denoting the accuracy of the activity counter in bits); and (14) only 2 comparison operations per stimulus are useful for each stored reference value (comparison of @S and @Smax and comparison of D with zero). Asghar et al. ("Current Multiplier Based Synapse and Neuron Circuits for Compact SNN Chip", 2021 IEEE International Symposium on Circuits and Systems (ISCAS), May 22-28, 2021, pp.
1-4) discloses in ABSTRACT of Page 1 that (1) Spiking Neural Networks having biologically plausible architecture are considered to be more suitable for energy efficient hardware implementation; (2) when it comes to realizing the hardware implementation of a large-scale neural network for mobile applications, area and power consumption constraints become more critical; (3) optimizing a spiking neural network, consisting of neuron and synapse circuits, for area and power is essential; (4) present a more optimized version of synapse and neuron circuits; (5) propose an analog CMOS implementation of a current multiplier charge injector-based synapse and neuron circuit; (6) the synapse circuit modulates the input spike rates by a trained weight value and injects an equivalent current; (7) the neuron circuit integrates the injected synaptic current and evokes an output digital spike event; (8) the circuit implementation is done using a 65nm process design kit; (9) the proposed circuit implementation exhibits all the temporal characteristics of spiking neural networks; (10) the circuit implementation has been optimized for area and power consumption and therefore can be easily constituted into a large-scale spiking neural network; and (11) furthermore, the compact circuit implementation can benefit from high resolution with very little increase in area and power.
Asghar further discloses in Section I of Pages 1-2 that (1) among different ANNs, Spiking Neural Networks (SNN) demonstrate energy efficiency advantages over the von Neumann architecture; (2) based on biological neural system structures, SNNs can simultaneously perform massively parallel computations on chip just like a human brain, where a huge number of neurons interconnected with synapses send and receive neural signals containing certain information; (3) SNNs perform low power operations because the processing elements are placed close to the memory and processing is done only when there is a spike event; (4) in SNNs, when the membrane potential of each neuron reaches a certain threshold potential, then the neuron generates an output spike train; (5) the spike trains are transmitted to interconnected neurons via synapses; (6) the spike propagation mechanism of SNNs allows for high computational speed, high energy efficiency, and capturing the temporal dynamics of the neuronal membrane; (7) to realize the advantages of SNNs on a chip, it is crucial to optimize the neuron and synapse circuits for low power and compact area; (8) realizing neural networks as a System On-chip (SoC) necessitates an enormous number of transistors acting as processing and communication elements; (9) by virtue of the scalability of CMOS, compact realization of neural networks with enormous integration can be achieved; and (10) present an analog CMOS based implementation of synapse and neuron circuits optimized for power and area using a 65nm process design kit.
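The integrate-to-threshold-and-spike behavior described above corresponds to the standard leaky integrate-and-fire (LIF) model; a minimal discrete-time sketch, with illustrative (not Asghar's) parameter values, is:

```python
def lif_step(v_mem, i_syn, dt=1e-3, tau=20e-3, r_leak=1.0, v_th=1.0):
    """One Euler step of a leaky integrate-and-fire membrane.

    Integrates dV/dt = (-V + R*I) / tau; when the membrane potential
    reaches the threshold, the neuron spikes and the potential resets.
    All parameter values here are assumptions for illustration.
    """
    v_mem += dt * (-v_mem + r_leak * i_syn) / tau
    if v_mem >= v_th:
        return 0.0, True    # spike event, membrane reset
    return v_mem, False
```

With zero input current the membrane simply leaks toward zero, which mirrors the resistor (leakage path) in parallel with the membrane capacitor described in the reference.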
Asghar also discloses in Section II with FIGS. 1-2 of Page 2 that (1) SNNs perform training and inference operations using a network of synapse and neuron models; (2) SNNs operate using discrete events called spikes that occur at discrete time intervals, instead of continuous values as in ANNs; (3) the occurrence of spikes resembles biological processes of information delivery; (4) each neuron receives a spike train from neurons in the previous layer using synapses; (5) Fig. 1 illustrates a neuron with a number of input synapses that receive input spike trains from pre-synaptic neurons; (6) these input spikes are modulated with the weight (strength) of the respective synapse; (7) the equivalent charge from all the synapses is then accumulated at the neuron's membrane potential; (8) when this potential reaches a predefined threshold potential, the neuron evokes an output spike; (9) for the SNN, choose the Leaky Integrate-and-Fire (LIF) neuronal model, which is known as an effective way of implementing a biological neuron's computational features with a simple integration circuit; (10) the LIF model, as illustrated in Fig. 2, represents the neuron as a parallel connection of a resistor (leakage path) and a capacitor (CMEM) mimicking the neuronal membrane; (11) the RC combination can be defined by Kirchhoff's current law, where VMEM is the potential across the neuronal membrane; (12) multiple current sources serve as the input synaptic current accumulating charge at the CMEM; and (13) upon reaching a predefined threshold, the comparator generates an output spiking event and resets the VMEM. Asghar further teaches in Section III with FIGS. 
3-6 of Pages 2-3 that (1) to realize the hardware implementation of a compact large-scale SNN, it is required to design compact elements; (2) the neuronal model based upon LIF comprises synapse circuits and neuron circuits; (3) as developing a large-scale SNN requires integration of a multitude of synapse circuits connected to neuron circuits, designing compact synapse and neuron circuits becomes crucial for maximum integration and low-power operation; (4) keeping in view the aforementioned constraints, design compact synapse and neuron circuits which replicate most of the neuromorphic characteristics; (5) the synapse circuit shown in Fig. 3 is a binary-weighted “current multiplier charge injector” (CMCI); (6) each synapse consists of excitatory and inhibitory synapses; (7) the excitatory synapse is implemented by a binary-weighted NMOS network while the inhibitory synapse is implemented by a binary-weighted PMOS network; (8) the weight of each synapse determines the amount of injected current upon receiving a spike event in each synapse branch; (9) each bit of weight collocated with each branch of the synapse circuit allows a binary-weighted current through the respective branch; (10) the current in the left half of the circuit is mirrored by 1× and the current in the right half of the circuit is mirrored by 4×; (11) thus, all four branches of the synapse circuit inject (binary-weighted) different amounts of current; (12) the injected current of all branches is accumulated on the membrane potential capacitor; (13) the proposed synapse circuit is implemented for 5-bit weight values; (14) the four LSBs are applied to the four branches of the CMCI while the one MSB is used to switch between excitatory and inhibitory behaviors; (15) the folded current mirror architecture provides smaller area and less power consumption, and can be easily scaled up for high resolution without much increase in area; (16) the weight values are stored in flip-flops collocated with the synapse 
circuit; (17) the layout along with area measurements of the synapse is shown in Fig. 4; (18) the neuron circuit shown in Fig. 5 implements the LIF neuronal model; (19) the capacitive VMEM is charged by the accumulation of incoming synaptic currents; (20) the decision of when to generate an output spike and when not to is made by a comparing circuit; (21) for this, a Schmitt trigger has been realized as a comparator, which compares the VMEM with the predefined threshold voltage; (22) when the VMEM exceeds the threshold, the Schmitt trigger fires an output spike and the feedback path resets the membrane potential to the initial potential (VREST); (23) the Schmitt trigger is an important constituent of the neuron circuit and hence requires careful design; (24) it comes with the advantages of a low transistor count, high sensitivity, and low power consumption; (25) the output buffers provide reshaping of the spike and isolation for driving next-layer neurons; (26) the membrane capacitor is realized by a memcapacitor (10fF) and the leaky resistor is implemented through an NMOS transistor; and (27) the layout of the neuron circuit along with area measurement is shown in Fig. 6. Krestinskaya et al. ("Neuromemristive Circuits for Edge Computing: A Review", IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, VOL. 31, NO. 1, JANUARY 2020, pp. 4-23) discloses in ABSTRACT of Page 4 that (1) provide a review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices; and (2) discuss why neuromorphic architectures are useful for edge devices and show the advantages, drawbacks, and open problems in the field of neuromemristive circuits for edge computing. 
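The 5-bit CMCI weighting described above (four LSBs gating binary-weighted current branches, with the MSB switching between excitatory and inhibitory behavior) can be sketched numerically. The unit branch current and the MSB sign convention below are illustrative assumptions, not values or conventions taken from Asghar's circuit.

```python
# Sketch of the 5-bit current-multiplier charge-injector (CMCI) weighting:
# the four LSBs gate binary-weighted current branches (1x, 2x, 4x, 8x) and
# the MSB selects excitatory versus inhibitory injection. I_UNIT and the
# MSB sign convention are hypothetical, for illustration only.

I_UNIT = 1.0  # assumed current of the smallest branch (arbitrary units)

def cmci_current(weight_5bit):
    """Map a 5-bit synapse weight to a signed injected current."""
    if not 0 <= weight_5bit <= 0b11111:
        raise ValueError("weight must fit in 5 bits")
    magnitude = weight_5bit & 0b1111          # four LSBs -> branch current sum
    excitatory = not (weight_5bit & 0b10000)  # MSB clear -> excitatory (assumed)
    return I_UNIT * magnitude * (1 if excitatory else -1)
```

Each set LSB contributes its binary-weighted share to the injected current, so the four branches together span 0 to 15 unit currents, and flipping the MSB reverses the sign of the injection without adding branches, mirroring how the NMOS/PMOS networks split excitatory and inhibitory behavior.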
Krestinskaya further discloses in Section I of Page 4 that (1) the correlation between neuromorphic memristive architectures and edge computing trends is illustrated; (2) discuss the different set of neuromorphic architectures for edge computing that can be integrated directly into the edge devices; (3) illustrate the most recent approaches to implementing neuron cells and synaptic connections and show the correlation with biological concepts; (4) present a clear overview of various neuromorphic architectures, such as different types of neural networks, hierarchical temporal memory (HTM), long short-term memory (LSTM), learning architectures, and circuits for memory-based computing and storage; (5) discuss the advantages and primary challenges in the simulation and implementation of such architectures; and (6) present the main drawbacks and challenges that should be addressed in existing neuromorphic architectures to use them in edge computing applications. Krestinskaya also discloses in Section III with FIGS. 3-5 of Pages 6-10 that (1) focus on the memristive models of neuron cells and synaptic connections that can be adapted and scaled for edge computing applications; (2) neuromorphic circuits and architectures attempt to mimic different types of biological neural networks responsible for information processing in the human brain; (3) the biological neuron architecture is shown in Fig. 
3(a), wherein a biological neuron consists of the soma (cell body) with many dendrites that serve as connections to the other neurons and carry the information; (4) the axon (output of the neuron) collects the information from all the dendrites and transmits it to the other neurons; (5) the transmission of a signal from one neuron to another happens through the synapses; (6) synapses can either reinforce or inhibit the transmitted signals; (7) the neuron fires (generates the output response) if the information that is collected in the axon exceeds a particular threshold, and the equivalent structural and mathematical representation of a biological neuron is shown in Fig. 3(b) and Fig. 3(c); (8) the neuron models can be divided into two categories: a) simple threshold logic-based linear neuron models, where the neuron is presented as a most straightforward linear computing unit, and b) dendritic threshold nonlinear neuron models, which have more complex computing units and are inspired by recent works; (9) the simplest threshold logic-based linear neuron models are the McCulloch–Pitts neuron model and Rosenblatt's perceptron; (10) Fig. 3(b) and eqn. (1) show the threshold logic-based linear neuron model, wherein the synapses are represented as weighted connections, the parameters wj represent the weights of the synapses, and yo is the neuron output; (11) the central concept of this model is that the weighted summation of the inputs xj being higher than the threshold θ determines the neuron firing; (12) in the dendritic threshold nonlinear neuron model, the dendrites of the neuron can be nonlinear; (13) each dendritic unit in the neuron consists of various subunits (dendritic branches), and neurons are represented as complex computing units; (14) Fig. 3(c) and eqn. 
(3) show the structure of the nonlinear dendritic neuron model, wherein a single dendrite can have multiple inputs and a specific threshold function; (15) compared to the threshold linear neuron model, like the perceptron, which fails to compute particular functions, the threshold nonlinear neuron can compute linearly non-separable functions; (16) the volatility principle in human brain-inspired architectures is also important; (17) it is of importance not only to remember important data but also to forget the unnecessary information; (18) an HTM neuron emulates this process; it is a particular case of the dendritic threshold nonlinear neuron model recently proposed to mimic the functionality of pyramidal neurons in the human neocortex; (19) the HTM neuron is shown in Fig. 3(d), wherein the neuron cell has three different inputs: feedforward, feedback, and contextual inputs; (20) the feedforward input corresponds to the synapses of the proximal soma, known as proximal dendritic connections; (21) the feedback inputs correspond to apical connections learned from the previous inputs, and the contextual inputs correspond to distal connections that connect different cells; (22) most of the implementations of the neuron models propose to use a memristor as a synapse; (23) the least complex representation of the synapse in memristive architectures is a single-memristor (1M) structure; (24) the 1M synapses in a memristive crossbar array are shown in Fig. 
4(a), wherein the 1M structure is more efficient in terms of on-chip area and power consumption; (25) recent works attempt to use 1M synapses for neural networks to avoid additional CMOS elements in the architectures; however, neuromorphic circuits with 1M synapses usually require additional control circuits and suffer from sneak path problems; (26) moreover, the update process of the memristor values in such structures requires complex switch circuits, which disconnect the memristors from presynaptic and postsynaptic neurons and connect the input signals used for memristor programming; (27) also, such configurations do not allow obtaining negative synaptic weights, and additional circuits must be involved to obtain negative weights in neural networks; (28) the alternative to 1M synapses is the synapse with two memristors (2M) shown in Fig. 4(b), wherein this architecture doubles the size of the crossbar and requires complex postsynaptic neurons; however, it allows implementing negative synaptic weights; (29) in the 2M structure, the weight of the synapse is represented as Wij = Gij+ − Gij−, where Gij+ is an effective conductance of a memristor; (30) the two-memristor one-resistor (2M1R) synapse is shown in Fig. 4(c), wherein the research work in [53] proposes a modified dynamic synapse for spiking neural networks (SNNs) based on two memristors and a resistor adjusted for TaOx devices, which includes temporal transformations and a static weight and helps to realize the spiking behavior in large-scale simulations; (31) the memristive synapses with transistors are also popular because the transistor is used as a switch, especially for read and update cycles; (32) the synapse with one transistor and one memristor (1T1M) is shown in Fig. 4(d) [47], wherein this architecture is one of the possible solutions for sneak path problems; (33) the synapse with two transistors and one memristor (2T1M) is illustrated in Fig. 
4(e); (34) while the 1T1M architecture is used to control memristor switching, program the memristor within a crossbar, and eliminate sneak path problems, 2T1M also allows control of the sign of the memristor, as it is connected to two inputs: the original and the inverted input signal; (35) the enabling signal e controls the switching of the CMOS transistors; (36) the transistors control the current flowing through the memristor and the voltage across the memristor, wherein the parameter e represents the enable signal; (37) if e = 0, the state variable of the memristor does not change; (38) if e = VDD or e = −VDD, the current flows either through the nMOS transistor or the pMOS transistor, respectively; (39) the enable signal is used to control the direction of the current and to update the memristor value, which also allows achieving negative and positive signs of the memristor weight; (40) in this circuit, it is important to ensure that the transistor is in a linear state, and the drawback of such a circuit is the size of the synapse, which is appropriate for small-scale problems and can be a critical issue for large-scale edge computing systems; (41) the other type of synaptic weight implementation is a bridge arrangement; (42) the memristor-bridge synapse with four memristors (4M) shown in Fig. 
4(f) was tested in various neural network architectures and applications; (43) the circuit consists of four memristors that form a Wheatstone bridge-like circuit and is able to represent zero, positive, and negative synaptic weights; (44) to increase the resistance of M2 and M3 and decrease the resistance of M1 and M4, a positive pulse should be applied as an input, and vice versa; (45) the weight is positive if (M2/M1) > (M4/M3); (46) a negative weight can be formed as (M2/M1) < (M4/M3); (47) a zero weight is formed as (M2/M1) = (M4/M3); (48) this ensures the implementation of positive and negative weights and allows changing the weight sign, which depends on the direction of the current; (49) the earliest neuron cell models are based on capacitors that emulate the membrane of a biological neuron and integrate current; (50) one of the basic and first neuron models is the integrate-and-fire (I and F) neuron model, wherein a single membrane capacitance sums the currents flowing into the neuron from all the synapses and the membrane resistance causes the leakage of the membrane current; (51) however, due to the large on-chip area and power consumption, such neurons are not applicable for large-scale circuits and edge devices, where the power consumption is limited; (52) the modified I and F neuron used for neural network implementation in large-scale architectures is shown in Fig. 
5(a), wherein the neuron circuit consists of a current integration part with capacitor Cu, a spike-generation Schmitt trigger circuit, a reset circuit, and a control circuit for current input range and injection; (53) when the voltage is applied to the terminals of transistors M1 and M2, the input current Iin is injected into the leaky integration part of the neuron through the current mirror, and this current is integrated and leaked through M3; (54) then, the Schmitt trigger generates a spike, and the neuron is reset using M4; (55) the firing threshold of the neuron is determined by the Schmitt trigger circuit; (56) in one of the recent works, the I and F effect was achieved by a neuron based on a single diffusive memristive device, illustrated in Fig. 5(b); (57) the diffusive memristor exhibits a capacitive effect and a temporal behavior due to the doping of Ag nanoclusters between two electrodes of memristive material; (58) in the application of such a memristor as a neuron, it integrates the presynaptic signals, and when the memristor threshold is reached, the diffusive memristor changes its state and its resistance decreases, causing a spike; (59) the delay of a spike depends on the internal material properties and the Ag doping in the diffusive memristor; (60) most artificial neural network (ANN) implementations use neuron structures based on summing amplifiers and comparators; (61) this model is usually used to represent the threshold logic-based linear neuron model, and in most cases this structure is used for postsynaptic neurons, while presynaptic neurons have various configurations depending on the application of the architectures or are not even shown in several research works; e.g., different variations of such neurons are shown in Fig. 5(c)–(f); (62) Fig. 
5(c) represents the conventional summing and thresholding neuron configuration, wherein the summing amplifier sums the input currents and outputs the equivalent voltage, and the comparator outputs a spike or pulse (depending on the configuration of the circuit) when the amplifier output is above the threshold; (63) Fig. 5(d) shows a similar configuration of the output neuron, with the summing amplifier combining the outputs from the negative and positive memristive arrays and a comparator circuit; (64) t