Prosecution Insights
Last updated: April 19, 2026
Application No. 17/714,677

HARDWARE NOISE-AWARE TRAINING FOR IMPROVING ACCURACY OF IN-MEMORY COMPUTING-BASED DEEP NEURAL NETWORK HARDWARE

Status: Final Rejection §103
Filed: Apr 06, 2022
Examiner: YAARY, MICHAEL D
Art Unit: 2151
Tech Center: 2100 — Computer Architecture & Software
Assignee: Arizona Board of Regents
OA Round: 2 (Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 87% (872 granted / 1001 resolved), above average, +32.1% vs TC avg
Interview Lift: +8.0% (moderate), comparing resolved cases with and without an interview
Typical Timeline: 3y 2m average prosecution; 18 applications currently pending
Career History: 1019 total applications across all art units

Statute-Specific Performance

§101: 24.5% (-15.5% vs TC avg)
§103: 33.9% (-6.1% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Tech Center average estimates shown for comparison • Based on career data from 1001 resolved cases

Office Action

§103
DETAILED ACTION

1. Claims 1-20 are pending in the application.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

3. Applicant's arguments with respect to claim(s) have been considered but are moot in view of the new grounds of rejection.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kilani et al. (hereafter Kilani) (US Pub. 2023/0229870) in view of Yudanov et al. (hereafter Yudanov) (US Pat. 11,537,861) and Baskin et al. (hereafter Baskin) (US Pub. 2021/0241096). Kilani and Yudanov were cited in the previous office action dated 10/10/2025.
As to claims 1 and 12, Kilani discloses a method for performing hardware noise-aware training for a deep neural network (DNN) (abstract and [0001]), the method comprising: performing software training of the DNN for deployment on in-memory computing (IMC) hardware ([0005] IMC and [0049] training).

7. Kilani does not disclose that the software training comprises injecting hardware noise into a forward pass of the DNN. However, Yudanov discloses that the software training comprises injecting hardware noise into a forward pass of the DNN (column 2, lines 33-65, forward propagation, performing PIM dot product operations).

8. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the teachings of Kilani with the injecting of Yudanov, for the benefit of saving time and/or conserving power by reducing and possibly eliminating external communications (Yudanov, column 2, lines 54-56).

9. The combination of Kilani and Yudanov does not teach or suggest that the injecting is of emulated hardware noise that corresponds to noise in the IMC hardware. However, Baskin discloses injecting emulated hardware noise that corresponds to noise in the IMC hardware ([0044]-[0045], injection of values, using noise injection to emulate the uniform quantizer); thus, in combination with Kilani and Yudanov, Baskin teaches the claimed limitations as currently recited. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the teachings of Kilani and Yudanov with the noise injection as taught by Baskin, for the benefit of reducing the computation time and computation resources required to train a quantized neural network (Baskin, [0045]).

10.
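The limitation at issue in paragraphs 7-9, injecting emulated hardware noise into the forward pass during software training, can be sketched in a few lines. This is an illustrative reconstruction of the general technique only: the additive Gaussian noise model, the shapes, and the `noise_sigma` value are assumptions, not taken from the application or the cited references.

```python
import numpy as np

def noisy_linear_forward(x, w, noise_sigma=0.05, rng=None):
    """One layer's forward pass with emulated IMC hardware noise.

    The ideal dot product is perturbed by additive Gaussian noise
    scaled to the layer's mean output magnitude, so the network learns
    weights that tolerate the analog compute error of the crossbar.
    (Gaussian model and sigma are illustrative assumptions.)
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    y_ideal = x @ w                              # ideal MAC result
    scale = noise_sigma * np.abs(y_ideal).mean() # noise tracks output magnitude
    return y_ideal + rng.normal(0.0, 1.0, size=y_ideal.shape) * scale
```

With `noise_sigma=0.0` the function reduces to the ideal dot product, which is a convenient sanity check when wiring it into a training loop.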
As to claim 2, the combination of Kilani, Yudanov, and Baskin discloses wherein injecting emulated hardware noise comprises emulating a dot-product computation of the IMC hardware (Yudanov, column 11, lines 5-8, emulation).

11. As to claim 3, the combination of Kilani, Yudanov, and Baskin discloses wherein injecting emulated hardware noise further comprises using conditional probability tables to transform partial sums (Yudanov, column 12, lines 31-67, summing circuit).

12. As to claim 4, the combination of Kilani and Yudanov discloses wherein the performing of the software training of the DNN for deployment on the IMC hardware comprises dividing multiply-and-accumulate (MAC) operations of the DNN into a plurality of data blocks based on a parameter of the IMC hardware (Kilani [0048]-[0049]).

13. As to claim 5, the combination of Kilani, Yudanov, and Baskin discloses wherein a size of each data block is equal to a number of rows of an IMC memory array of the IMC hardware (Kilani [0042]-[0045]).

14. As to claim 6, the combination of Kilani, Yudanov, and Baskin discloses wherein the performing of the software training of the DNN for deployment on the IMC hardware further comprises: obtaining a partial sum for each of the plurality of data blocks; and accumulating results of the partial sum for each of the plurality of data blocks into a full sum (Yudanov, column 13, line 54 - column 14, line 38).

15.
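Claims 4-6 as mapped above describe dividing MAC operations into data blocks sized to the IMC array's row count, computing a partial sum per block, and accumulating the partial sums into a full sum. A hedged sketch of that block-wise accumulation (the 64-row array size and all shapes are assumed for illustration):

```python
import numpy as np

def blockwise_mac(x, w, array_rows=64):
    """Compute x @ w as an accumulation of per-block partial sums.

    Each block spans `array_rows` input channels, mirroring the
    (assumed) row count of the IMC memory array: an analog crossbar
    can only sum a limited number of rows in one operation.
    """
    full_sum = np.zeros((x.shape[0], w.shape[1]))
    for start in range(0, w.shape[0], array_rows):
        block = slice(start, start + array_rows)
        full_sum += x[:, block] @ w[block, :]  # one block's partial sum
    return full_sum
```

In exact arithmetic the block-wise result equals the direct matmul; on real IMC hardware each partial sum is where quantization and noise enter, which is why the claims attach the noise model at this granularity.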
As to claims 7-9, the combination of Kilani, Yudanov, and Baskin discloses wherein injecting emulated hardware noise into the forward pass of the DNN comprises performing stochastic quantization of each partial sum; wherein the performing of the software training of the DNN for deployment on IMC hardware comprises using a forward pass through a plurality of convolution layers and at least one fully-connected layer of the DNN and a backward pass through the plurality of convolution layers and the at least one fully-connected layer; and further comprising using a straight-through estimator on the backward pass to correct the software training (Yudanov, column 11, lines 21-31, forward and backward pass).

16. As to claim 10, the combination of Kilani, Yudanov, and Baskin discloses further comprising performing an inference evaluation using a forward pass through the plurality of convolution layers and the at least one fully-connected layer of the DNN (Yudanov, column 18, lines 13-41, inference operation).

17. As to claims 11 and 20, the combination of Kilani, Yudanov, and Baskin discloses performing noise-aware training using a single noise model approximation of the IMC hardware (Yudanov, column 2, lines 33-65).

18. As to claims 13 and 14, the combination of Kilani, Yudanov, and Baskin discloses wherein the IMC hardware comprises resistive IMC hardware (Yudanov, column 30, lines 32-50, resistive elements); and wherein the IMC engine comprises capacitive IMC hardware (Yudanov, column 20, lines 47-67).

19. As to claims 15-17, the claims are rejected for similar reasons as claims 8-10 above.

20. As to claims 18-19, the combination of Kilani, Yudanov, and Baskin discloses wherein during training weights of the plurality of convolution layers and the at least one fully-connected layer are trained to minimize a loss function; and wherein the weights are updated during the backward pass through the plurality of convolution layers and the at least one fully-connected layer.
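Claims 7-9 pair stochastic quantization of each partial sum on the forward pass with a straight-through estimator (STE) on the backward pass. A sketch of both pieces, assuming a uniform quantizer with an illustrative step size (neither the step value nor the rounding scheme comes from the application):

```python
import numpy as np

def stochastic_quantize(partial_sum, step=0.25, rng=None):
    """Stochastically round each partial sum to a multiple of `step`.

    A value between two quantization levels rounds up with probability
    equal to its fractional position between them, so the rounding is
    unbiased in expectation. (`step` is an assumed ADC resolution.)
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    scaled = partial_sum / step
    floor = np.floor(scaled)
    round_up = rng.random(np.shape(partial_sum)) < (scaled - floor)
    return (floor + round_up) * step

def ste_grad(grad_out):
    """Straight-through estimator: on the backward pass, treat the
    non-differentiable quantizer as the identity, so the upstream
    gradient passes through unchanged to the weight update."""
    return grad_out
```

Values already on the quantization grid pass through unchanged, and the STE is what lets ordinary gradient descent train through the otherwise zero-gradient rounding step.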
Conclusion

21. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL D YAARY, whose telephone number is (571) 270-1249. The examiner can normally be reached Mon-Fri 9-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, James Trujillo, can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL D. YAARY/
Primary Examiner, Art Unit 2151

Prosecution Timeline

Apr 06, 2022
Application Filed
Oct 08, 2025
Non-Final Rejection — §103
Nov 13, 2025
Applicant Interview (Telephonic)
Nov 13, 2025
Examiner Interview Summary
Jan 06, 2026
Response Filed
Feb 27, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591537
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND STORAGE MEDIUM
2y 5m to grant · Granted Mar 31, 2026
Patent 12591411
SYSTEM AND METHOD TO ACCELERATE GRAPH FEATURE EXTRACTION
2y 5m to grant · Granted Mar 31, 2026
Patent 12585434
COMPUTING DEVICE AND METHOD
2y 5m to grant · Granted Mar 24, 2026
Patent 12585430
FLOATING-POINT CONVERSION WITH DENORMALIZATION
2y 5m to grant · Granted Mar 24, 2026
Patent 12585725
NON-RECTANGULAR MATRIX COMPUTATIONS AND DATA PATTERN PROCESSING USING TENSOR CORES
2y 5m to grant · Granted Mar 24, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 95% (+8.0%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 1001 resolved cases by this examiner. Grant probability derived from career allow rate.
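The projection figures follow directly from the career counts reported above; a quick arithmetic check (the +8.0% interview lift is taken from the dashboard, not derived here):

```python
# Career counts shown in the examiner intelligence section
granted, resolved = 872, 1001

allow_rate = granted / resolved        # career allow rate
with_interview = allow_rate + 0.080    # dashboard's reported +8.0% interview lift

print(round(allow_rate * 100))         # 87  -> the 87% grant probability
print(round(with_interview * 100))     # 95  -> the 95% with-interview figure
```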
