Prosecution Insights
Last updated: April 17, 2026
Application No. 18/313,036

Analog Neural Network and Method for Advanced Process Node Integration

Non-Final OA: §101, §102, §112
Filed: May 05, 2023
Examiner: FEITL, LEAH M
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 25% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 4y 2m
With Interview: 32%

Examiner Intelligence

Career Allow Rate: 25% (21 granted / 84 resolved; -30.0% vs TC avg)
Interview Lift: +7.0% on resolved cases with an interview (moderate lift)
Avg Prosecution: 4y 2m (typical timeline)
Career History: 118 total applications across all art units; 34 currently pending
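The headline figures above are simple ratios over the examiner's resolved cases. A minimal Python sketch of that arithmetic (the function names and the additive treatment of the interview lift are assumptions for illustration, not the dashboard's actual model):

```python
# Hypothetical helpers reproducing the dashboard's headline examiner stats.
# The additive interview "lift" is an assumption about how the tool
# combines its numbers, not a documented formula.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: grants as a share of resolved cases."""
    return granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Apply the reported interview lift as a simple additive bump."""
    return base_rate + lift

rate = allow_rate(21, 84)                    # 21 granted / 84 resolved
print(f"{rate:.0%}")                         # 25%
print(f"{with_interview(rate, 0.07):.0%}")   # 32%
```

With this examiner's numbers, 21/84 gives the 25% career allow rate, and adding the 7-point interview lift yields the 32% with-interview figure shown above.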

Statute-Specific Performance

§101: 30.8% (-9.2% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 7.1% (-32.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 84 resolved cases
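Each delta above is just the statute-specific rate minus the Tech Center average. A short sketch of that subtraction (the 40% average is inferred from the deltas shown here, since every statute's rate and delta imply the same baseline; it is not a published figure):

```python
# Reproduce the per-statute deltas: overcome rate minus the Tech Center
# average estimate. TC_AVG = 0.40 is inferred from the chart's own
# numbers, not from published USPTO data.

TC_AVG = 0.40  # implied baseline: every rate minus its delta equals 40%

statute_rates = {"101": 0.308, "103": 0.456, "102": 0.071, "112": 0.138}

for statute, rate in statute_rates.items():
    delta = rate - TC_AVG
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```

Running this prints the same four lines as the chart, e.g. `§101: 30.8% (-9.2% vs TC avg)`.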

Office Action

§101 §102 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/05/2023 was filed before the mailing date of the first office action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because claim 16 is directed to “a semiconductor device” and neither the claims nor the specification define a “semiconductor device” as reciting sufficient hardware structure that would explicitly exclude transitory signals (see MPEP 2106.03). Therefore, claim 16 does not fall within at least one of the four categories of patent eligible subject matter and is considered non-statutory matter directed to software per se.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-6, 8-9, and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation “a steering circuit coupled to an output of the synapse module”; however, the metes and bounds of a “steering circuit” are not clear. For purposes of examination, Examiner is interpreting that a steering circuit provides a multiplexing feature between synapses and processing elements as described in paragraph [0040] of Applicant’s specification. Claims 8 and 17 recite a similar limitation to claim 1 and are rejected for the same reasons. Dependent claims 2-6 and 9 are also rejected because they fail to correct the deficiencies of the claims on which they depend.

Claim 9 is additionally rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 9 recites the limitation “wherein each of the processing elements can reuse the synapses of the synapse module through the steering circuit in a time interleaved operation”; however, the metes and bounds of a “time interleaved operation” are not clear. For purposes of examination, Examiner is interpreting that the synapses can be reused over time.

Claim Rejections - 35 USC § 102

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Arima et al (US 5293457, herein Arima).

Regarding claim 1, Arima teaches a neural network (col. 14 lines 3-5 recite “An object of the present invention is to provide an improved semiconductor neural network which can be easily integrated at a high degree”), comprising: a synapse module including a plurality of synapses (col. 15 lines 9-18 recite “Each synapse representing circuit includes a synapse load representing circuit representing a synapse load which indicates the connection strength between an axon signal line and a dendrite signal line, a learning control circuit connected to first and second axon signals, which are different from each other, for processing the first and second axon signals in accordance with the prescribed learning rules and outputting synapse load variation information for supplying the same to the synapse load representing circuit”. Col. 22 lines 50-56 recite “FIG. 10 shows the layout of a semiconductor neural network according to an embodiment of the present invention on a semiconductor chip. Referring to FIG. 10, the semiconductor neural network includes a synapse representing part 1 for performing arithmetic processing in correspondence to synapses for respective neurons” (i.e., a synapse of a plurality of synapses in a neural network)); a steering circuit coupled to an output of the synapse module (Examiner’s Note: this limitation is interpreted in light of the 112(b) rejection such that a steering circuit provides a multiplexing feature between synapses and processing elements as described in paragraph [0040] of Applicant’s specification. Given this interpretation, at least column 34 lines 38-41 teach a multiplexing component); and a plurality of processing elements coupled to an output of the steering circuit, wherein each of the processing elements share the synapses of the synapse module through the steering circuit (fig. 27 and col. 16 lines 55-58 recite “The synapse load change means includes a charge pump circuit for increasing/decreasing the amount of charges stored in the synapse load value storage means in response to a pulsing signal from the learning control circuit. This charge pump circuit includes a series body of first and second diodes implemented by one insulated gate field effect transistor, and a capacitor for capacitively coupling the pulsing signal from the learning control circuit to the series body of the diodes” (i.e., the synapse circuit includes processing elements comprising at least capacitors)).

Regarding claim 2, Arima teaches the neural network of claim 1, wherein a first synapse of the plurality of synapses includes: a first transistor conducting a selectable current; a second transistor coupled to a node and conducting the selectable current (col. 7 lines 20-24 recite “FIG. 4 shows the structure of each synapse load part (resistive element). The synapse load part includes four transistor groups TR1, TR2, TR3 and TR4, in order to provide positive coupling (excitatory coupling) and negative coupling (inhibitory coupling)”. Fig. 9 and col. 10 lines 19-21 recite “The constant current circuit 210' includes n-channel MOS transistors NT10 and NT11 and p-channel MOS transistors PT11 and PT12” (i.e., the synapse circuit includes transistors capable of conducting a current)); a first switching circuit coupled between the node and a first output of the first synapse; a second switching circuit coupled between the node and a second output of the first synapse (col. 34 lines 59-61 recite “Referring to FIG. 28, a synapse polarity converting circuit part includes a gate bias applying circuit 220 and gate bias switching circuits 221 to 224” (i.e., the synapse includes switching circuits)); and a logic circuit controlling the first switching circuit and second switching circuit (fig. 49 and col. 52 lines 60-66 recite “the circuit 616 may be formed by buffers outputting complementary signal pairs from the control signals CT1 and CT2, AND gates provided in correspondence to the inputs A, B and C for receiving two of four buffer output signal lines, and switches which have an on or off state controlled by the outputs of the AND gates for selecting corresponding inputs” (i.e., control logic for controlling the switching circuits)).

Regarding claim 3, Arima teaches the neural network of claim 2, wherein the selectable current is set by a threshold of the first transistor (col. 10 lines 19-21 recite “The constant current circuit 210' includes n-channel MOS transistors NT10 and NT11 and p-channel MOS transistors PT11 and PT12”. Col. 11 lines 19-23 recite “the output state of the neuron unit i is determined by connecting a plurality of synapse representing circuits in parallel with the signal lines 211a and 211b and comparing the current flowing on the signal line 212' with the threshold value θi” (i.e., a current is set by a threshold associated with the transistors in the synapse circuit)).
Regarding claim 4, Arima teaches the neural network of claim 1, wherein a first processing element of the plurality of processing elements receives a current from a first output of a first synapse of the plurality of synapses (col. 16 lines 38-54 recite “A self-organized synapse representing circuit according to another aspect of the present invention includes a learning control circuit which receives a first axon signal Si and a second axon signal Sj and outputs a pulsing signal representing an amount of a synapse load variation in accordance with predetermined learning rules, synapse load value storage means including a first capacitor for storing an excitatory synapse load value and a second capacitor for storing an inhibitory synapse load value, means for changing the synapse load value stored in the synapse load value storage means in response to the pulsing signal from the learning control circuit, and means for transferring a received axon signal onto a corresponding dendrite signal line in the form of a current signal with a weight represented by the stored load value” (i.e., a capacitor processing element receives the current from the synapse)).

Regarding claim 5, Arima teaches the neural network of claim 4, wherein the first processing element includes a capacitor receiving the current (col. 16 lines 55-58 recite “The synapse load change means includes a charge pump circuit for increasing/decreasing the amount of charges stored in the synapse load value storage means in response to a pulsing signal from the learning control circuit. This charge pump circuit includes a series body of first and second diodes implemented by one insulated gate field effect transistor, and a capacitor for capacitively coupling the pulsing signal from the learning control circuit to the series body of the diodes” (i.e., at least one capacitor receives the current)).
Regarding claim 6, Arima teaches the neural network of claim 4, further including a polarity inversion circuit coupled for receiving the current and reversing flow direction of the current (fig. 12, fig. 32 and col. 38 line 64 – col. 39 line 4 recite “Although the p-channel MOS transistors are employed in the current decoder and the constant current transistors, the conductivity type of the transistors can be changed by changing the employed voltage polarity. In other words, n-channel MOS transistors may alternatively be employed. Further, it is also possible to change the conductivity type of the MOS transistors forming the synapse polarity converting circuit by changing the employed voltage polarity” (i.e., a polarity reversing component of the circuit)).

Claim 7 is a method claim and its limitation is included in claim 1. The only difference is that claim 7 requires a method (col. 29 recites “A method of forming a chip for representing synapse loads of a diagonal part and a semiconductor chip for representing synapse loads of a non-diagonal part is now described”). Therefore, claim 7 is rejected for the same reasons as claim 1.

Regarding claim 8, Arima teaches the method of claim 7, further including providing a steering circuit coupled to an output of the synapse module and an input of the processing elements (Examiner’s Note: this limitation is interpreted in light of the 112(b) rejection such that a steering circuit provides a multiplexing feature between synapses and processing elements as described in paragraph [0040] of Applicant’s specification. Given this interpretation, at least column 34 lines 38-41 teach a multiplexing component).
Regarding claim 9, Arima teaches the method of claim 8, wherein each of the processing elements can reuse the synapses of the synapse module through the steering circuit in a time interleaved operation (Examiner’s Note: this limitation is interpreted in light of the 112(b) rejection such that the synapses can be reused over time. Given this interpretation, at least col. 42 and col. 64 lines 58-62 of Arima teach wherein the synapses in the neural network can be reused over time).

Regarding claim 10, Arima teaches the method of claim 7, wherein activation outputs of the plurality of processing elements are selectively digitally coupled to a subsequent layer of inputs of the synapse module (col. 25 lines 53-63 recite “Fig. 16 illustrates the hierarchical structure of this hierarchical neural network, in which buffers 1’, 2’, and 3’ derive signals transferred to an input layer and the signal lines 220 indicate signal lines of an intermediate layer (hidden layer), while the signal lines 240 indicate data output lines from an output layer. Referring to Fig. 15, therefore, 1’, 2’, and 3’ denote input layer neurons, 4’ and 5’ denote neurons of the hidden layer, and 6’, 7’, and 8’ denote neurons of the output layer” (i.e., outputs of the processing elements of a given layer are connected to a subsequent layer of the neural network)).

Claims 11-15 are method claims whose limitations are included in claims 2-6, respectively. Claims 11-15 are therefore rejected for the same reasons as claims 2-6, respectively.
Claim 16 is a semiconductor device claim and its limitation is included in claim 1. The only difference is that claim 16 requires a semiconductor device (col. 14 lines 3-5 recite “An object of the present invention is to provide an improved semiconductor neural network which can be easily integrated at a high degree”). Therefore, claim 16 is rejected for the same reasons as claim 1.

Claim 17 is a semiconductor device claim and its limitation is included in claim 8; it is rejected for the same reasons as claim 8. Claims 18-22 are semiconductor device claims whose limitations are included in claims 2-6, respectively, and are rejected for the same reasons as claims 2-6, respectively.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20200356848 A1 (Lesso et al) teaches computing circuitry for analogue neuromorphic computing. US 20230369329 A1 (Kurokawa et al) teaches a semiconductor device for converting digital data to be implemented by an analog neural network. US 20220366211 A1 (Ma et al) teaches a device for in-memory computation of layers of an analog neural network, wherein the current activation scales from one layer in analog domain using a current-steering circuit.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEAH M FEITL whose telephone number is (571) 272-8350. The examiner can normally be reached on M-F 0900-1700 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached on (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L.M.F./
Examiner, Art Unit 2147

/VIKER A LAMARDO/
Supervisory Patent Examiner, Art Unit 2147
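For readers unfamiliar with the architecture at issue: the examiner's §112(b) interpretation reads the claimed "steering circuit" as a multiplexer that lets several processing elements share one synapse array across time slots. A minimal Python sketch of that interpretation (all names and the round-robin slot assignment are illustrative assumptions, not drawn from either specification):

```python
# Illustrative model of the examiner's claim interpretation: a steering
# circuit multiplexes one shared synapse array among several processing
# elements over time slots. All names here are hypothetical.

def synapse_array(inputs, weights):
    """Shared synapses: weight each input (one multiply per synapse)."""
    return [w * x for w, x in zip(weights, inputs)]

def steer(synapse_outputs, slot, num_elements):
    """Steering circuit: select which processing element receives the
    shared synapse outputs in this time slot (round-robin demux)."""
    return slot % num_elements, synapse_outputs

def run_time_interleaved(batches, weights, num_elements):
    """Each processing element reuses the same synapses in its own slot."""
    accum = [0.0] * num_elements
    for slot, inputs in enumerate(batches):
        pe, outputs = steer(synapse_array(inputs, weights), slot, num_elements)
        accum[pe] += sum(outputs)  # the element sums its routed currents
    return accum

weights = [0.5, -1.0, 2.0]                   # one weight per shared synapse
batches = [[1, 1, 1], [2, 0, 1], [0, 1, 1]]  # one input vector per time slot
print(run_time_interleaved(batches, weights, 3))  # → [1.5, 3.0, 1.0]
```

Each processing element accumulates only the weighted sums routed to it during its own slot, so a single physical set of synapse weights serves all three elements over time, which is the reuse the examiner reads into claim 9.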

Prosecution Timeline

May 05, 2023
Application Filed
Mar 19, 2026
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572720: METHODS AND APPARATUSES FOR RESOURCE-OPTIMIZED FERMIONIC LOCAL SIMULATION ON QUANTUM COMPUTER FOR QUANTUM CHEMISTRY (2y 5m to grant; granted Mar 10, 2026)
Patent 12572723: METHODS AND APPARATUSES FOR RESOURCE-OPTIMIZED FERMIONIC LOCAL SIMULATION ON QUANTUM COMPUTER FOR QUANTUM CHEMISTRY (2y 5m to grant; granted Mar 10, 2026)
Patent 12555023: REINFORCEMENT LEARNING EXPLORATION BY EXPLOITING PAST EXPERIENCES FOR CRITICAL EVENTS (2y 5m to grant; granted Feb 17, 2026)
Patent 12530434: Classifying Data by Manipulating the Quantum States of Qubits (2y 5m to grant; granted Jan 20, 2026)
Patent 12462173: QUANTUM CIRCUIT AND METHODS FOR USE THEREWITH (2y 5m to grant; granted Nov 04, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 25%
With Interview: 32% (+7.0%)
Median Time to Grant: 4y 2m
PTA Risk: Low

Based on 84 resolved cases by this examiner. Grant probability derived from career allow rate.
