Prosecution Insights
Last updated: April 19, 2026
Application No. 18/327,350

NEURO-SYNAPTIC PROCESSING CIRCUITRY

Non-Final OA • §112
Filed: Jun 01, 2023
Examiner: ALLI, KASIM A
Art Unit: 2183
Tech Center: 2100 — Computer Architecture & Software
Assignee: Stichting IMEC Nederland
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 66% (120 granted / 183 resolved; +10.6% vs TC avg; above average)
Interview Lift: +38.3% (resolved cases with interview; strong)
Avg Prosecution: 3y 1m (typical timeline; 22 applications currently pending)
Total Applications: 205 (career history, across all art units)
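The headline numbers above can be reproduced with simple arithmetic. A minimal sketch follows; the 120/183 counts come from the card, while the 60.7% without-interview rate is a back-solved assumption consistent with the stated 99% with-interview figure and +38.3% lift, not a number reported by the source.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Interview lift in percentage points (with-interview minus without)."""
    return rate_with - rate_without

overall = allow_rate(120, 183)      # 120 granted of 183 resolved cases
lift = interview_lift(99.0, 60.7)   # 60.7% is a hypothetical without-interview rate
print(f"Career allow rate: {overall:.1f}%")  # rounds up to the 66% shown above
print(f"Interview lift: +{lift:.1f} pts")
```

Note that the lift is expressed in percentage points, not a relative multiplier: 60.7% + 38.3 pts ≈ 99%.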

Statute-Specific Performance

§101: 3.7% (-36.3% vs TC avg)
§103: 49.4% (+9.4% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 24.2% (-15.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 183 resolved cases
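The Tech Center baseline behind each delta can be recovered by subtracting the delta from the examiner's rate. A short sketch, using only the figures in the card above (interpreting each percentage as the examiner's per-statute allow rate is an assumption):

```python
# Examiner per-statute rates and their deltas vs the Tech Center average,
# as shown in the Statute-Specific Performance card.
examiner_rate = {"101": 3.7, "103": 49.4, "102": 16.8, "112": 24.2}
delta_vs_tc   = {"101": -36.3, "103": 9.4, "102": -23.2, "112": -15.8}

# Baseline = examiner rate minus delta.
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute works out to the same 40.0% baseline
```

That every statute back-solves to the same 40.0% suggests the "Tech Center average estimate" is a single TC-wide baseline rather than a per-statute figure.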

Office Action

§112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 06/01/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Objections

Claims 1 and 6 are objected to because of the following informalities:
- Claim 1, line 3: delete "for" to positively describe that the data memory stores the synaptic weights and neuron states.
- Claim 1, line 7: insert --first-- before "memory port" to clarify that this refers to the first memory port introduced in line 4. Similar clarifications should be made for this term in line 9 and line 21.
- Claim 1, line 19: delete the comma to improve readability.
- Claim 6: insert --configurable-- after "said" in line 4 to clarify that this refers to the configurable data-type introduced in line 2.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 6, the phrase "such as" in line 2 renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d). For purposes of examination, the configurable data type will be interpreted as comprising one of the data types following the phrase.

Claim 7 recites "said sub-portion" in line 4. It is unclear which sub-portion this refers to, as lines 2-3 introduce plural sub-portions. For purposes of examination, this limitation will be interpreted as any one of the introduced sub-portions.

Claim 8 recites "the execution" in line 2. It is unclear whether this refers to the execution of the micro-code kernel by the load buffer providing NPE instructions to the NPEs or to the execution by the NPEs. For purposes of examination, this will be interpreted as referring to the execution by the NPEs.

Claim 10 recites "the output event" in line 3. It is unclear whether this refers to the output events introduced in claim 9 or to one of the one or more output events introduced in line 2. For purposes of examination, the latter interpretation will be taken.

Claim 11 recites "a plurality of neuro-synaptic processing circuitries according to claim 1". This limitation is unclear because claim 1 does not introduce a plurality of neuro-synaptic processing circuitries. For purposes of examination, this limitation will be interpreted as "a plurality of neuro-synaptic processing circuitries each configured according to the neuro-synaptic processing circuitry of claim 1".

Claim 12 recites "a plurality of neuro-synaptic processing circuitries according to claim 7". This limitation is unclear because claim 7 does not introduce a plurality of neuro-synaptic processing circuitries. For purposes of examination, this limitation will be interpreted as "a plurality of neuro-synaptic processing circuitries each configured according to the neuro-synaptic processing circuitry of claim 7".

Claim 13 recites "the NoC" in line 1. There is insufficient antecedent basis for this limitation, as the claim does not introduce a NoC in this chain. For purposes of examination, this claim will be interpreted as depending from claim 12, which introduces a NoC.

Claim 14 recites "the program code" in line 3. It is unclear which program code this refers to, as claim 11 introduces "a plurality of neuro-synaptic processing circuitries according to claim 1", which indicates that there is a program code for each neuro-synaptic processing circuitry. For purposes of examination, this will be interpreted as the program code of any one of the neuro-synaptic processing circuitries.

Claim 14 recites "the synaptic weights and/or neuron states" in lines 3-4. It is unclear which synaptic weights and neuron states this refers to, as claim 11 introduces "a plurality of neuro-synaptic processing circuitries according to claim 1", which indicates that there are synaptic weights and neuron states for each neuro-synaptic processing circuitry. For purposes of examination, this will be interpreted as the synaptic weights and neuron states of any one of the neuro-synaptic processing circuitries.

Claim 14 recites "the data memory" in line 4. It is unclear which data memory this refers to, as claim 11 introduces "a plurality of neuro-synaptic processing circuitries according to claim 1", which indicates that there is a data memory for each neuro-synaptic processing circuitry. For purposes of examination, this will be interpreted as the data memory of any one of the neuro-synaptic processing circuitries.

Claims dependent on a rejected base claim are further rejected based on their dependence.

Allowable Subject Matter

Claims 1-5 are allowed. While no prior art rejection is given for claims 6-14, these claims are currently rejected under 112(b) and are not allowable at this time.

The following is a statement of reasons for the indication of allowable subject matter: The known prior art, taken alone or in combination, was not found to teach, in combination with other limitations in the claim, a loop buffer having a register-based memory, an address calculation unit, and a program counter, wherein the loop buffer receives a micro-code kernel from a GP-CPU and iteratively provides instructions of the micro-code kernel to NPEs for execution, and upon a load or store instruction the loop buffer provides a memory address stored in the loop buffer to a memory port and updates the memory address, as required by claim 1.

The closest prior art of record was found to be:
- US 2007/0113058, which teaches a coprocessor fetching instructions for a loop of SIMD type, placing them in an instruction queue dedicated to SIMD instructions, and executing the loop from there, freeing up the instruction execution logic unit that is not dedicated to SIMD instructions for other activities (see [0022]), and a load queue of the coprocessor that loads data for loop operation and keeps track of the base address, index, offset, and the adder to generate addresses to L2 cache (see [0026]-[0027]).
- US 2019/0311251, which teaches reusing instructions for different neural network layers when they are substantially similar (see [0015]).
- US 2016/0283439, which teaches a SIMD processing module comprising multiple vector processing units that execute an instruction on respective parts of a vector (see Abstract), and that a load instruction may be executed by a vector processing unit which increments the address pointer when the instruction is repeated (see [0004]).
- US 2020/0293867, which teaches a program executed on a host processor that encodes a command stream in a buffer that provides workloads to a parallel processing unit (see [0116]), and exploiting data reuse opportunities across multiple loop levels (see [0028]).
- US 2005/0102659, which teaches a loop buffer holding program loop instructions and a register file holding loop control parameters (see [0007]).

While the known prior art of record was found to generally teach loop buffers and processing elements executing instructions in parallel from a CPU, it was not found to teach a loop buffer that receives a micro-code kernel from a GP-CPU and iteratively provides NPE instructions of the micro-code kernel to neuron processing elements upon instruction of the GP-CPU, where upon a load or store instruction, the loop buffer provides a stored memory address to a memory port and updates the memory address by an address calculation unit of the loop buffer, as specifically described in claim 1.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
- US 2022/0156077 teaches reusing the same operation code in the same storage area for repeated instructions in two instruction sets (see Abstract).
- US 2017/0316312 teaches kernel reuse, wherein a same kernel is kept and repeatedly applied on different data at each convolution layer (see [0030] and Fig. 7A).
- US 2014/0281373 teaches a local queue that repeats a sequence of instructions to a vector execution unit (see Abstract).
- US 2012/0216012 teaches an array of ALUs executing the same or different instructions on the same or different data, where issued instructions may be kept for multiple cycles in order to process an inner loop efficiently (see [0024]-[0025]).
- US 2013/0185516 teaches a hardware prefetcher that recognizes an auto-increment address instruction and prefetches based on a stride determined from an increment field of the instruction (see [0037]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KASIM ALLI, whose telephone number is (571) 270-1476. The examiner can normally be reached Monday - Friday, 9am - 5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jyoti Mehta, can be reached at (571) 270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KASIM ALLI/
Examiner, Art Unit 2183

/JYOTI MEHTA/
Supervisory Patent Examiner, Art Unit 2183

Prosecution Timeline

Jun 01, 2023
Application Filed
Feb 17, 2026
Non-Final Rejection — §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578963
IMPLIED FENCE ON STREAM OPEN
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12541369
EXECUTING PHANTOM LOOPS IN A MICROPROCESSOR
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12536131
VECTOR COMPUTATIONAL UNIT
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12498930
STORE TO LOAD FORWARDING USING HASHES
Granted Dec 16, 2025 (2y 5m to grant)
Patent 12468530
ASSOCIATIVELY INDEXED CIRCULAR BUFFER
Granted Nov 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66% (99% with interview, a +38.3% lift)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 183 resolved cases by this examiner. Grant probability derived from career allow rate.
