Prosecution Insights
Last updated: April 19, 2026
Application No. 17/845,543

AUTOMATIC ERROR PREDICTION FOR PROCESSING NODES OF DATA CENTERS USING NEURAL NETWORKS

Status: Non-Final OA (§103)
Filed: Jun 21, 2022
Examiner: GUYTON, PHILIP A
Art Unit: 2113
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 84% (666 granted / 795 resolved); above average, +28.8% vs TC avg
Interview Lift: +8.2% (moderate) for resolved cases with interview
Avg Prosecution: 2y 9m typical timeline; 27 applications currently pending
Total Applications: 822 across all art units

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§103: 39.7% (-0.3% vs TC avg)
§102: 29.9% (-10.1% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Compared against Tech Center average estimates; based on career data from 795 resolved cases.
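The per-statute figures imply the Tech Center baselines they are measured against. A quick sketch, assuming each delta is a simple difference between the examiner's rate and a TC average (the dictionary keys and the subtraction are illustrative; only the percentages come from the table above):

```python
# Examiner's statute-specific rates and deltas vs Tech Center average,
# copied from the table above (all values in percent).
examiner = {"101": 10.8, "103": 39.7, "102": 29.9, "112": 11.4}
delta_vs_tc = {"101": -29.2, "103": -0.3, "102": -10.1, "112": -28.6}

# Implied TC average for each statute: examiner rate minus its quoted delta.
tc_avg = {k: round(examiner[k] - delta_vs_tc[k], 1) for k in examiner}
print(tc_avg)  # each statute works out to 40.0
```

Notably, the implied baseline is 40.0 for every statute, which suggests the deltas are measured against a single TC-wide estimate rather than separate per-statute averages.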

Office Action

§103
NON-FINAL OFFICE ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4-11, and 13-22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Pub. No. 2022/0351744 to Krishnan et al. (hereinafter Krishnan) in view of U.S. Patent Pub. No. 2020/0351171 to Ozonat et al. (hereinafter Ozonat).

Krishnan discloses:

1. A method comprising: receiving first telemetry data corresponding to a first processing device type (paras. [0025]-[0026] - training node 105A receives data from sensor 104A); and computing, using a first machine learning model and based at least in part on the first telemetry data corresponding to one or more first processing devices associated with the first processing device type, one or more error predictions corresponding to the one or more first processing devices (paras. [0025]-[0026] - local ML model 106A predicts anomalies and when maintenance should be performed), wherein one or more parameters of the first machine learning model having been updated from one or more outputs generated using a second machine learning model based at least in part on second telemetry data corresponding to the first processing device type (paras. [0020], [0042] - updated weights sent to edge nodes to update local ML models).

Krishnan does not disclose expressly the second machine learning model being trained using historical telemetry data comprising telemetry data corresponding to a plurality of processing device types that comprises at least the first processing device type and at least one other processing device type. Ozonat teaches the second machine learning model being trained using historical telemetry data comprising telemetry data corresponding to a plurality of processing device types that comprises at least the first processing device type and at least one other processing device type (abstract and paras. [0028], [0036], [0052]).

Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify Krishnan by training using historical data, as taught by Ozonat. A person of ordinary skill in the art would have been motivated to do so in order to use machine learning to design data centers, enhance risk management, and pinpoint failures, as discussed by Ozonat (para. [0017]).

Modified Krishnan discloses:

2. The method of claim 1, further comprising: generating one or more feature sets using the historical telemetry data, wherein the second machine learning model is trained using the one or more feature sets (Ozonat - abstract and paras. [0068]-[0069]) and the first machine learning model is trained using a subset of the one or more feature sets (Krishnan - paras. [0020], [0026]).

4. The method of claim 1, wherein a second processing device type of the at least one other processing device type corresponds to graphics processing units (GPUs), and the first processing device type corresponds to one or more GPUs in a data center (Krishnan - para. [0074] and Ozonat - paras. [0028], [0052]).

5. The method of claim 1, further comprising determining whether to perform a preventative action corresponding to the one or more first processing devices based at least in part on the one or more error predictions (Krishnan - paras. [0025], [0028], [0037]).

6. The method of claim 1, wherein the first machine learning model is smaller in size than the second machine learning model (Krishnan - paras. [0020], [0021]).

7. The method of claim 1, wherein the first processing device type is a subset of the plurality of processing device types (Krishnan - para. [0022] and Ozonat - para. [0035]).

8. The method of claim 1, wherein the one or more first processing devices form a processing cluster of a data center (Ozonat - paras. [0019]-[0020]).

9. The method of claim 1, wherein the first machine learning model is configured with at least one of: one or more fewer layers than the second machine learning model or one or more fewer nodes for at least one layer than the second machine learning model (Ozonat - para. [0060]).

10. A processor comprising processing circuitry to: receive historical telemetry data corresponding to one or more devices of a device type (Krishnan - paras. [0025]-[0026]); generate, based at least in part on an output produced using a first machine learning model trained to generate one or more first error predictions corresponding to the device type, one or more second error predictions using a second machine learning model and corresponding to the device type (Krishnan - paras. [0025]-[0026]), wherein the one or more second error predictions are generated using the second machine learning model further based at least in part on (i) a subset of the historical telemetry data and (ii) a subset of the one or more first error predictions of the first machine learning model, the subset of the one or more first error predictions generated using the first machine learning model based at least in part on the subset of the historical telemetry data (Krishnan - paras. [0020], [0042] and Ozonat - abstract and paras. [0028], [0036], [0052]).

11. The processor of claim 10, wherein the processing circuitry is further to: generate one or more feature sets from the historical telemetry data, wherein one or more parameters of the first machine learning model is updated based at least in part using the one or more feature sets generated from the historical telemetry data (Ozonat - abstract and paras. [0068]-[0069]); and wherein one or more parameters of the second machine learning model is updated based at least in part on a subset of the one or more feature sets generated from the subset of the historical telemetry data (Krishnan - paras. [0020], [0026]).

13. The processor of claim 10, wherein the device type corresponds to one or more of a graphics processing unit (GPU), a data processing unit (DPU), a central processing unit (CPU), or a parallel processing unit (PPU) (Krishnan - para. [0074] and Ozonat - paras. [0028], [0052]).

14. The processor of claim 10, wherein, after the second machine learning model is trained, the second machine learning model generates one or more error predictions corresponding to one or more other devices of the device type, and the one or more error predictions are used to determine whether to perform a preventative action with respect to the one or more other devices (Krishnan - paras. [0025], [0028], [0037]).

15. The processor of claim 10, wherein the second machine learning model is smaller in size than the first machine learning model (Krishnan - paras. [0020], [0021]).

16. The processor of claim 15, wherein the second machine learning model is configured with at least one of: one or more fewer layers than the first machine learning model or one or more fewer nodes for at least one layer than the first machine learning model (Ozonat - para. [0060]).

17. The processor of claim 10, wherein the processing circuitry is further to: update one or more parameters of a third machine learning model to generate one or more third error predictions corresponding to the device type based at least in part on (i) another subset of the historical telemetry data that is associated with the device type and (ii) the one or more first error predictions of the first machine learning model (Krishnan - paras. [0020], [0026], [0031]).

18. The processor of claim 10, wherein the processor is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Krishnan - para. [0020] and Ozonat - abstract).

19. A system comprising: one or more processing units to generate, using one or more machine learning models and based at least in part on telemetry data corresponding to one or more first devices of a device type, one or more error predictions corresponding to the one or more first devices (Krishnan - paras. [0025]-[0026]), the one or more machine learning models being trained, at least in part, by comparing one or more first outputs of the one or more machine learning models to one or more second outputs of one or more trained machine learning models (Krishnan - paras. [0020], [0042] and Ozonat - abstract and paras. [0028], [0036], [0052]), the one or more first outputs and the one or more second outputs generated using a same training telemetry data corresponding to one or more second devices of the device type (Krishnan - paras. [0020], [0026]).

20. The system of claim 19, wherein the one or more processing units are further to determine a preventative action based at least in part on the one or more error predictions (Krishnan - paras. [0025], [0028], [0037]).

21. The system of claim 19, wherein the one or more machine learning models corresponding to the one or more first outputs are smaller in size than the one or more machine learning models corresponding to the one or more second outputs (Krishnan - paras. [0020], [0021]).

22. The system of claim 19, wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources (Krishnan - para. [0020] and Ozonat - abstract).

Allowable Subject Matter

Claims 3 and 12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip Guyton, whose telephone number is (571) 272-3807. The examiner can normally be reached M-F 8:00-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bryce Bonzo, can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHILIP GUYTON/
Primary Examiner, Art Unit 2113
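The rejected independent claim describes a two-model arrangement: a larger "second" model trained centrally on historical telemetry spanning multiple device types, whose outputs are used to update the parameters of a smaller per-device-type "first" model that makes the actual error predictions. A minimal distillation-style sketch of that arrangement, using NumPy; the feature semantics, logistic-regression form, and training routine are illustrative assumptions, not taken from the application or the cited references:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=2000):
    """Fit logistic-regression weights by plain gradient descent.
    y may be hard labels (0/1) or soft probabilities (distillation targets)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Historical telemetry pooled across device types (hypothetical features,
# e.g. temperature / utilization / ECC-error count), with noisy error labels.
X_hist = rng.normal(size=(400, 3))
y_hist = ((X_hist[:, 0] + 0.5 * X_hist[:, 2]
           + 0.3 * rng.normal(size=400)) > 0.8).astype(float)

# "Second" model: larger central model trained on the pooled historical data.
teacher_w = train_logreg(X_hist, y_hist)

# Telemetry for one specific device type (e.g. the GPUs of one cluster).
X_gpu = rng.normal(size=(100, 3))

# "First" model: its parameters are updated from the second model's outputs
# (soft predictions) on that device type's telemetry -- a distillation step.
teacher_out = sigmoid(X_gpu @ teacher_w)
student_w = train_logreg(X_gpu, teacher_out)

# Per-device error predictions from the small local model.
error_pred = sigmoid(X_gpu @ student_w) > 0.5
```

Here the small model recovers the central model's decision boundary from its soft outputs alone, which mirrors the claim language "parameters of the first machine learning model having been updated from one or more outputs generated using a second machine learning model."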

Prosecution Timeline

Jun 21, 2022
Application Filed
Dec 19, 2025
Non-Final Rejection — §103
Mar 06, 2026
Interview Requested
Mar 20, 2026
Applicant Interview (Telephonic)
Mar 20, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596604
SYSTEMS AND METHODS FOR DATA MANAGEMENT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596600
VERIFYING PROCESSING LOGIC OF A GRAPHICS PROCESSING UNIT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12579038
METHOD AND APPARATUS FOR BACKING UP GLOBAL MEMORY
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572404
DETECTING AND RECOVERING FROM TIMEOUTS IN SCALABLE MESH NETWORKS IN PROCESSOR-BASED DEVICES
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12554571
ERROR CAUSE ESTIMATION DEVICE AND ESTIMATION METHOD
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 92% (+8.2%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 795 resolved cases by this examiner. Grant probability derived from career allow rate.
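The headline projections follow directly from the examiner's career record quoted above. A quick check of the arithmetic, assuming the interview lift is simply additive (which matches the figures shown):

```python
granted, resolved = 666, 795
allow_rate = granted / resolved          # career allow rate
interview_lift = 0.082                   # +8.2% lift with interview

print(round(allow_rate * 100))                      # 84 -> the 84% grant probability
print(round((allow_rate + interview_lift) * 100))   # 92 -> the 92% with interview
```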
