Prosecution Insights
Last updated: April 18, 2026
Application No. 18/953,180

NEURAL NETWORK PROCESSOR CAPABLE OF REUSING MEMORY ADDRESS VALUE

Status: Non-Final Office Action (§103)
Filed: Nov 20, 2024
Examiner: METZGER, MICHAEL J
Art Unit: 2183
Tech Center: 2100 — Computer Architecture & Software
Assignee: Deepx Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 90% (435 granted / 482 resolved; +35.2% vs TC avg). Grants above average.
Interview Lift: +8.1% across resolved cases with interview (moderate lift)
Average Prosecution: 2y 8m typical timeline; 27 applications currently pending
Total Applications: 509 across all art units

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)

Deltas are measured against the Tech Center average estimate. Based on career data from 482 resolved cases.
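The per-statute deltas are internally consistent. Assuming each "vs TC avg" figure is the examiner's rate minus the Tech Center average, in percentage points (an assumption about how the dashboard defines its deltas), all four statutes imply the same Tech Center average:

```python
# Implied Tech Center averages from the examiner's statute-specific rates.
# Assumption: delta = examiner rate - TC average, in percentage points.
examiner_rate = {"101": 6.0, "103": 53.6, "102": 14.1, "112": 8.7}
delta_vs_tc = {"101": -34.0, "103": 13.6, "102": -25.9, "112": -31.3}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_average)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Every statute backs out to a 40.0% Tech Center average, which suggests the dashboard compares each per-statute rate against a single TC-wide baseline rather than per-statute baselines.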

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

1. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Objections

2. Claims 1-20 are objected to because of the following informalities:
- In claim 1, the phrase “neural network processing (NPU)” should be replaced with “neural network processing unit (NPU)”.
- In claims 1 and 13, the phrase “processing an neural network model” should be replaced with “processing a neural network model”.
- In claims 2-6, the phrase “The NPU of claim 1,” should be replaced with “The NPU of claim 1, wherein”.
- In claims 14-18, the phrase “The NPU of claim 13,” should be replaced with “The NPU of claim 13, wherein”.
- In claim 19, the phrase “the artificial neural network model” should be replaced with “an artificial neural network model”.
- In claim 20, the phrase “The NPU of claim 19,” should be replaced with “The NPU of claim 19, wherein”.

Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gao et al. (US 2019/0279072, herein Gao) in view of Yu et al. (US 2021/0174177, herein Yu).
Regarding claim 1, Gao teaches a neural network processing (NPU) for processing a neural network model (NN model) comprising: a processing element array configured to process the NN model (Figs. 3, 8, [0041], processor unit; [0119], various neural network units; [0123], multiprocessor embodiments); a memory configured to store at least one data of the NN model processed in the processing element array (Fig. 3, [0041], memory); and a processing control circuit configured to control the processing element array and the memory to use a value corresponding to an output data of a first layer of the NN model as a value corresponding to an input data of a second layer of the NN model ([0119], [0121], control unit 406; [0051], [0074], [0097], output of each layer is used as input to next layer in calculations).

Gao fails to teach wherein the NPU is to reuse a memory address value corresponding to the output data of the first layer as a memory address value corresponding to the input of the second layer.

Yu teaches a neural network processing unit (NPU) configured to reuse a memory address value corresponding to output data of a first layer as a memory address value corresponding to input of a second layer ([0075-0076], reuse output feature map of first layer as input to second layer; [0087], [0095-0096], [0130], [0145], feature map to be reused accessed via address in external or other memory).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Gao and Yu to reuse shared attributes of data, such as a memory address, when using it in multiple neural network layers. While both Gao and Yu disclose neural network operations wherein an output of a first layer is used as input to a second layer, Gao does not explicitly disclose wherein the memory address of the output is necessarily reused by the second layer.
However, one of ordinary skill in the art would understand that the usage of the same set of data by two processing elements or neural network layers would necessarily entail accessing the same location in memory where the data is stored; therefore, reusing the address of such data, as taught by Yu, would be an obvious means to efficiently implement this sharing of the data. As both Gao and Yu disclose neural network processors for processing neural network models, the combination would merely entail a simple substitution of known prior art elements to achieve predictable results, and thus would have been obvious to one of ordinary skill in the art.

Regarding claim 2, the combination of Gao and Yu teaches the NPU of claim 1, wherein the NN model is optimized based on the NN model structure data or a neural network data locality information (Gao Abstract, [0038], [0054], [0113], optimizing model structure).

Regarding claim 3, the combination of Gao and Yu teaches the NPU of claim 1, wherein the NN model is optimized based on at least one of a structure data of the NPU and the structure data of the memory (Gao Abstract, [0038], [0054], [0113], optimizing model structure).

Regarding claim 4, the combination of Gao and Yu teaches the NPU of claim 1, wherein the NN model is optimized so as to satisfy a condition that a deterioration of inference accuracy of the NN model is maintained above a threshold value (Gao [0030], [0089], maintain operation accuracy according to threshold parameter).

Regarding claim 5, the combination of Gao and Yu teaches the NPU of claim 1, wherein the NN model is optimized so that a data size of the NN model becomes less than or equal to a threshold value in a condition of a degradation of inference accuracy is minimized (Gao [0030], [0089], maintain operation accuracy according to threshold parameter).
Regarding claim 6, the combination of Gao and Yu teaches the NPU of claim 1, wherein the NN model is optimized by utilizing at least one of a quantization algorithm, a pruning algorithm, a retraining algorithm, a quantization aware retraining algorithm and a model compression algorithm (Gao [0038], quantization algorithm; [0023], model training).

Regarding claim 7, the combination of Gao and Yu teaches the NPU of claim 1, wherein the processing control circuit is configured to control the processing element array and the memory based on sequence information configured to schedule a processing sequence from an input layer to an output layer of the NN model (Gao [0071-0072], model structure sequences; Yu [0075-0076], reuse output feature map of first layer as input to second layer).

Regarding claim 8, the combination of Gao and Yu teaches the NPU of claim 1, wherein the processing control circuit is configured to control the processing element array and the memory by analyzing predefined operation order information of the NN model (Gao [0049], [0071-0073], defined sequence order for processing model layer).

Regarding claim 9, the combination of Gao and Yu teaches the NPU of claim 1, wherein the processing control circuit is configured to schedule an operation order of the NN model based on a structural data of the NN model or a neural network data locality information (Gao [0049], [0071-0073], defined sequence order for processing model layer; Abstract, [0038], optimizing model structure).

Regarding claim 10, the combination of Gao and Yu teaches the NPU of claim 1, wherein the processing control circuit is configured to access a memory address value where a node data and a weight data of layers of the NN model are stored based on a predefined operation order information of the NN model (Gao [0049], [0071-0073], defined sequence order for processing model layer; Yu [0075-0076], reuse output feature map of first layer as input to second layer).
Regarding claim 11, the combination of Gao and Yu teaches the NPU of claim 1, wherein the processing control circuit is configured to schedule a processing order based on a structural data from an input layer to an output layer of the neural network or a neural network data locality information (Gao [0049], [0071-0073], defined sequence order for processing model layer; Yu [0075-0076], reuse output feature map of first layer as input to second layer).

Regarding claim 12, the combination of Gao and Yu teaches the NPU of claim 1, wherein the processing control circuit is configured to recognize reusable variable values and reusable constant values based on predefined operation order information of the NN model and configured to control to reuse the memory using the reusable variable value and the reusable constant value (Gao [0049], [0071-0073], defined sequence order for model layer; Yu [0075-0076], reuse output feature map of first layer as input to second layer).

Regarding claim 13, Gao teaches a neural network processing unit (NPU) for processing an artificial neural network model (ANN model) comprising: a plurality of processing elements (Figs. 3, 8, [0041], processor unit; [0119], various neural network units; [0123], multiprocessor embodiments); a data storage circuit configured to store at least one data of the ANN model processed in the plurality of processing elements (Fig. 3, [0041], memory); and a NPU control circuit configured to control the data storage circuit to store a value corresponding to an output data of a first layer of the ANN model as a value corresponding to an input data of a second layer of the ANN model ([0119], [0121], control unit 406; [0051], [0074], [0097], output of each layer is used as input to next layer in calculations).

Gao fails to teach wherein the NPU is to reuse a memory address value corresponding to the output data of the first layer as a memory address value corresponding to the input of the second layer.
Yu teaches a neural network processing unit (NPU) configured to reuse a memory address value corresponding to output data of a first layer as a memory address value corresponding to input of a second layer ([0075-0076], reuse output feature map of first layer as input to second layer; [0087], [0095-0096], [0130], [0145], feature map to be reused accessed via address in external or other memory).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Gao and Yu to reuse shared attributes of data, such as a memory address, when using it in multiple neural network layers. While both Gao and Yu disclose neural network operations wherein an output of a first layer is used as input to a second layer, Gao does not explicitly disclose wherein the memory address of the output is necessarily reused by the second layer. However, one of ordinary skill in the art would understand that the usage of the same set of data by two processing elements or neural network layers would necessarily entail accessing the same location in memory where the data is stored; therefore, reusing the address of such data, as taught by Yu, would be an obvious means to efficiently implement this sharing of the data. As both Gao and Yu disclose neural network processors for processing neural network models, the combination would merely entail a simple substitution of known prior art elements to achieve predictable results, and thus would have been obvious to one of ordinary skill in the art.

Claims 14-18 recite an alternate NPU embodiment of the embodiments of claims 2-6. Therefore, the above rejections for claims 2-6 are applicable to claims 14-18, respectively.
Regarding claim 19, Gao teaches a neural network processing unit (NPU) comprising: a processing element array (Figs. 3, 8, [0041], processor unit; [0119], various neural network units; [0123], multiprocessor embodiments); a memory configured to store at least one data of the artificial neural network model processed in the processing element array (Fig. 3, [0041], memory); and a processing control circuit configured to use a value in which an operation value of a first layer of a first scheduling is stored as a value corresponding to an input data of a second layer of a second scheduling, which is a next scheduling of the first scheduling ([0119], [0121], control unit 406; [0051], [0074], [0097], output of each layer is used as input to next layer in calculations), wherein the artificial neural network model is optimized by utilizing at least one of a quantization algorithm, a pruning algorithm, a retraining algorithm, a quantization aware retraining algorithm and a model compression algorithm ([0038], quantization algorithm; [0023], model training).

Gao fails to teach wherein the NPU is to reuse a memory address value corresponding to the output data of the first layer as a memory address value corresponding to the input of the second layer.

Yu teaches a neural network processing unit (NPU) configured to reuse a memory address value corresponding to output data of a first layer as a memory address value corresponding to input of a second layer ([0075-0076], reuse output feature map of first layer as input to second layer; [0087], [0095-0096], [0130], [0145], feature map to be reused accessed via address in external or other memory).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Gao and Yu to reuse shared attributes of data, such as a memory address, when using it in multiple neural network layers.
While both Gao and Yu disclose neural network operations wherein an output of a first layer is used as input to a second layer, Gao does not explicitly disclose wherein the memory address of the output is necessarily reused by the second layer. However, one of ordinary skill in the art would understand that the usage of the same set of data by two processing elements or neural network layers would necessarily entail accessing the same location in memory where the data is stored; therefore, reusing the address of such data, as taught by Yu, would be an obvious means to efficiently implement this sharing of the data. As both Gao and Yu disclose neural network processors for processing neural network models, the combination would merely entail a simple substitution of known prior art elements to achieve predictable results, and thus would have been obvious to one of ordinary skill in the art.

Regarding claim 20, the combination of Gao and Yu teaches the NPU of claim 19, wherein the NN model is optimized based on the NN model structure data or a neural network data locality information, a structure data of the NPU and a structure data of the memory (Gao Abstract, [0038], optimizing model structure).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Phan (US 2020/0293876) discloses a neural network processor for optimizing the accuracy of a model according to the neural network structure. Jung (US 2019/0347550) discloses a neural network processor for optimizing the accuracy of a model without performance degradation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J METZGER, whose telephone number is (571) 272-3105. The examiner can normally be reached Monday-Friday, 8:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta, can be reached at 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL J METZGER/
Primary Examiner, Art Unit 2183
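The mechanism at the center of the §103 rejection, reusing the memory address of a first layer's output as the address of the second layer's input, can be illustrated with a minimal sketch. This is purely illustrative: the class and function names below are the editor's inventions, not taken from the application, Gao, or Yu. The point is simply that the controller hands the second layer a pointer to the buffer the first layer wrote, instead of copying the activation data.

```python
# Illustrative sketch (hypothetical names): a controller reuses the output
# address of layer i as the input address of layer i+1, avoiding a copy.

class Scratchpad:
    """Flat on-chip memory addressed by integer offset."""
    def __init__(self, size):
        self.mem = [0.0] * size

    def write(self, addr, data):
        self.mem[addr:addr + len(data)] = data

    def read(self, addr, size):
        return self.mem[addr:addr + size]

def run_layers(pad, weights, in_addr, out_addr, x):
    """Process layers back to back; the output address of one layer
    becomes the input address of the next (the claimed address reuse)."""
    pad.write(in_addr, x)
    n = len(x)
    for w in weights:  # each w is a list of rows (n_out x n_in)
        vec = pad.read(in_addr, n)
        # ReLU(w @ vec) for one fully connected layer
        act = [max(0.0, sum(r * v for r, v in zip(row, vec))) for row in w]
        n = len(act)
        pad.write(out_addr, act)
        # Address reuse: the next layer reads where this layer wrote.
        in_addr, out_addr = out_addr, in_addr
    return pad.read(in_addr, n)

pad = Scratchpad(64)
identity = [[1.0, 0.0], [0.0, 1.0]]
result = run_layers(pad, [identity, identity], in_addr=0, out_addr=32, x=[2.0, 3.0])
print(result)  # [2.0, 3.0]
```

The single swap of in_addr and out_addr is the entire "reuse": no inter-layer copy is ever issued, which is the efficiency rationale the examiner attributes to the Gao/Yu combination.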

Prosecution Timeline

Nov 20, 2024: Application Filed
Feb 06, 2026: Non-Final Rejection (§103)
Mar 18, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591517: FETCHING VECTOR DATA ELEMENTS WITH PADDING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12578965: Biased Indirect Control Transfer Prediction (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566610: MICROPROCESSOR WITH APPARATUS AND METHOD FOR REPLAYING LOAD INSTRUCTIONS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566607: ROBUST, EFFICIENT MULTIPROCESSOR-COPROCESSOR INTERFACE (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561139: ENCODING AND DECODING VARIABLE LENGTH INSTRUCTIONS (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 98% (+8.1%)
Median Time to Grant: 2y 8m
PTA Risk: Low

Based on 482 resolved cases by this examiner. Grant probability is derived from the career allow rate.
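The headline figures can be reproduced from the dashboard's own inputs, under two assumptions: the allow rate is simply granted divided by resolved, and the interview lift is additive in percentage points (which matches 90% + 8.1% rounding to 98%):

```python
# Reproducing the dashboard's headline projections from its stated inputs.
# Assumptions: allow rate = granted / resolved; interview lift is additive
# in percentage points.
granted, resolved = 435, 482
interview_lift = 8.1  # percentage points

base = round(granted / resolved * 100)          # career allow rate, %
with_interview = round(base + interview_lift)   # allow rate with interview, %
print(base, with_interview)  # 90 98
```

435/482 is 90.2%, so the "90% Grant Probability" card is the rounded career allow rate, and the "98% With Interview" card is that rate plus the 8.1-point lift.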
