Prosecution Insights
Last updated: April 19, 2026
Application No. 18/219,013

APPLICATION PROGRAMMING INTERFACE TO MONITOR SOFTWARE WORKLOADS

Final Rejection — §103, §DP
Filed
Jul 06, 2023
Examiner
CHEN, QING
Art Unit
2191
Tech Center
2100 — Computer Architecture & Software
Assignee
Nvidia Corporation
OA Round
2 (Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Grants 80% — above average

Career Allow Rate: 80% (542 granted / 678 resolved; +24.9% vs TC avg)
Interview Lift: strong, +51.9% on resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 28 currently pending
Career History: 706 total applications, across all art units
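As a sanity check, the headline 80% allow rate follows directly from the granted/resolved counts above; a quick sketch of the arithmetic:

```python
# Verify the headline career allow rate from the raw counts shown above.
granted, resolved = 542, 678
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 79.9%, displayed as 80%
```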

Statute-Specific Performance

§101: 18.1% (-21.9% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 23.1% (-16.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 678 resolved cases.

Office Action

Rejection grounds: §103, §DP
DETAILED ACTION

This Office action is in response to the amendment submitted on November 5, 2025. Claims 1-20 are pending. Claims 1, 5, 6, 8, 12, 13, and 15 are currently amended.

The objection to the title of the invention is withdrawn in view of the Examiner’s reconsideration of the title of the invention. The objection to the abstract is withdrawn in view of the Applicant’s amendments to the abstract.

The provisional non-statutory obviousness-type double patenting rejections of Claims 1, 2, 8, 9, 15, and 16 as being unpatentable over Claims 1, 2, 9, 10, 15, and 16 of co-pending U.S. Application No. 18/219,011 (hereinafter “‘011”) in view of US 2004/0267548 (hereinafter “Jones”) are held in abeyance until allowance of the instant application. The provisional non-statutory obviousness-type double patenting rejections of Claims 1-20 as being unpatentable over Claims 1-20 of co-pending U.S. Application No. 18/219,017 (hereinafter “‘017”) in view of US 2006/0168194 (hereinafter “Lake”) are held in abeyance until allowance of the instant application.

The 35 U.S.C. § 112(b) rejections of Claims 5, 6, 12, and 13 are withdrawn in view of the Applicant’s amendments to the claims. The 35 U.S.C. § 101 rejections of Claims 1-20 are withdrawn in view of the Applicant’s amendments to the claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 5-8, 12-15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2021/0342337 (hereinafter “Lu”) in view of US 2009/0249369 (hereinafter “Itoh”). As per Claim 1, Lu discloses: A processor (paragraph [0102], “[…] the one or more computing devices may include […] one or more hardware processors […].”), comprising: one or more circuits (paragraph [0102], “[…] the one or more computing devices may include […] one or more hardware processors […].”) to: perform a first application programming interface (API), responsive to an API call identifying one or more software workloads to be monitored (paragraph [0544], “A module input mechanism may include custom scripts that can call third-party APIs [perform a first application programming interface (API), responsive to an API call] to pull large volumes of metrics data from distributed computing sources.”; paragraph [0546], “Specifically, the LSDC 2430 can be a centralized process that manages multiple modular inputs that can receive multiple data streams from different sources. 
The LSDC 2430 is a distributed task scheduler that can manage different APIs to coordinate scheduling across multiple collectors for one or more indexers, which can result in significant performance improvements.”; paragraph [0720], “A source of machine data can include, for example, a software application, a module, an operating system, a script, an application programming interface, etc. For example, machine data 5010B may be log data that is produced by the operating system of entity 5004A. In another example, machine data 5010C may be produced by a script that is executing on entity 5004A. In yet another example, machine data 5010A may be about an entity 5004A and produced by a software application 5020A that is hosted by another entity to monitor the performance of the entity 5004A through an application programming interface (API) [identifying one or more software workloads to be monitored].”), to monitor performance of the one or more software workloads identified by the first API call (paragraph [0721], “[…] entity 5004A may be a virtual machine and software application 5020A may be executing outside of the virtual machine (e.g., on a hypervisor or a host operating system) to monitor the performance of the virtual machine via an API [monitor performance of the one or more software workloads identified by the first API call]. 
The API can generate network packet data including performance measurements for the virtual machine, such as, memory utilization, CPU usage, etc.”); and obtain status information for the identified one or more software workloads running on multiple compute nodes connected via one or more networks (paragraph [0103], “[…] one or more client devices 102 are coupled to one or more host devices 106 and a data intake and query system 108 via one or more networks 104 [multiple compute nodes connected via one or more networks].”; paragraph [0465], “[…] the user selects a different metric and the lower-tier DIQS dynamically analyzes (in real-time or in near-real-time) the collected data and dynamically updates the user interface 1844, 1846 to present the user with a visualization of the status of the monitored entities for the new metric according to the alert threshold value.”; paragraph [0544], “The disclosed collections technologies may optionally include the large scale data collector (LSDC) 2430 that supports metrics data. For example, the data intake and query system may include numerous modular input mechanism to stream metrics data from different collectors over one or more computer networks. A module input mechanism may include custom scripts that can call third-party APIs to pull large volumes of metrics data from distributed computing sources [one or more software workloads running on multiple compute nodes connected via one or more networks].”; paragraph [0721], “[…] entity 5004A may be a virtual machine and software application 5020A may be executing outside of the virtual machine (e.g., on a hypervisor or a host operating system) to monitor the performance of the virtual machine via an API. The API can generate network packet data including performance measurements for the virtual machine, such as, memory utilization, CPU usage, etc. [obtain status information for the identified one or more software workloads]”). 
Lu discloses “an API call,” but Lu does not explicitly disclose: an API call including one or more arguments; select a second API; and via the selected second API. However, Itoh discloses: an API call including one or more arguments (paragraph [0070], “[…] the sensor control module 28 is used by calling the client API 25 as an interface provided by the adaptor 26 for use of sensor control, and then by calling the service API 27 of the sensor control module 28 via the adaptor 26 (emphasis added).”; paragraph [0129], “In the client API, the first argument ‘id’ of the ‘getData’ function denotes the device ID of the sensor 30, the second argument ‘info’ thereof denotes the type of the internal sensor in charge of measurement and the sensing interval, the third argument ‘count’ thereof denotes the number of elements of the second argument ‘info’, and the fourth argument ‘cb’ thereof designates the callback object for asynchronous data reception [an API call including one or more arguments].”); select a second API (paragraph [0017], “[…] a step of selecting the first API function for use by the API of the selected first module; a step of acquiring first API function semantic information about the selected first API function, determining second API function semantic information matching the acquired first API function semantic information, and selecting the second API function corresponding to the determined second API function semantic information for use by the selected second module as a target to associate with the selected first API function […] [select a second API].”); and via the selected second API (paragraph [0017], “[…] a step of selecting the first API function for use by the API of the selected first module; a step of acquiring first API function semantic information about the selected first API function, determining second API function semantic information matching the acquired first API function semantic information, and selecting the second API function 
corresponding to the determined second API function semantic information for use by the selected second module as a target to associate with the selected first API function […] [via the select a second API].”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Itoh into the teaching of Lu to include “an API call including one or more arguments; select a second API; and via the selected second API.” The modification would be obvious because one of ordinary skill in the art would be motivated to allow a module to be used by other modules, i.e., service module, makes public an interface to allow the use of its own capabilities, and the remaining modules using the service module, i.e., client modules (Itoh, paragraph [0002]). As per Claim 5, the rejection of Claim 1 is incorporated; and Lu further discloses: wherein the one or more software workloads are performed using a distributed computing system (paragraph [0515], “In the example of FIG. 23, each component can run on one or more nodes (e.g., hosting devices). As used herein, the term host may refer to a computing device, a communication device, a storage device, or any electronic device capable of running a software component.”). As per Claim 6, the rejection of Claim 1 is incorporated; and Lu further discloses: wherein the one or more software workloads are performed using one or more nodes of a distributed computing system (paragraph [0515], “In the example of FIG. 23, each component can run on one or more nodes (e.g., hosting devices). As used herein, the term host may refer to a computing device, a communication device, a storage device, or any electronic device capable of running a software component.”). 
As per Claim 7, the rejection of Claim 1 is incorporated; and Lu further discloses: wherein the first API is to provide one or more output values indicating one or more workload statuses of the one or more software workloads (paragraph [0465], “[…] the user selects a different metric and the lower-tier DIQS dynamically analyzes (in real-time or in near-real-time) the collected data and dynamically updates the user interface 1844, 1846 to present the user with a visualization of the status of the monitored entities for the new metric according to the alert threshold value.”). Lu does not explicitly disclose: the second API. However, Itoh discloses: a second API (paragraph [0017], “[…] a step of selecting the first API function for use by the API of the selected first module; a step of acquiring first API function semantic information about the selected first API function, determining second API function semantic information matching the acquired first API function semantic information, and selecting the second API function corresponding to the determined second API function semantic information for use by the selected second module as a target to associate with the selected first API function […].”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Itoh into the teaching of Lu to include “the second API.” The modification would be obvious because one of ordinary skill in the art would be motivated to allow a module to be used by other modules, i.e., service module, makes public an interface to allow the use of its own capabilities, and the remaining modules using the service module, i.e., client modules (Itoh, paragraph [0002]). Claims 8 and 12-14 are computer system claims corresponding to the processor claims hereinabove (Claims 1 and 5-7, respectively). 
Therefore, Claims 8 and 12-14 are rejected for the same reasons set forth in the rejections of Claims 1 and 5-7, respectively. Claims 15 and 20 are computer-implemented method claims corresponding to the processor claims hereinabove (Claims 1 and 7, respectively). Therefore, Claims 15 and 20 are rejected for the same reasons set forth in the rejections of Claims 1 and 7, respectively. Claims 2-4, 9-11, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Itoh as applied to Claims 1, 8, and 15 above, and further in view of US 2017/0026312 (hereinafter “Hrischuk”). As per Claim 2, the rejection of Claim 1 is incorporated; and the combination of Lu and Itoh does not explicitly disclose: wherein the first API is to receive one or more input values indicating one or more job identifiers of the one or more software workloads. However, Hrischuk discloses: wherein a first API is to receive one or more input values indicating one or more job identifiers of one or more software workloads (paragraph [0096], “Workload change API 281C may receive a resource identifier that identifies a resource and a workload identifier which identifies one or more workloads that may be added or removed from the resource as inputs.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hrischuk into the combined teachings of Lu and Itoh to include “wherein the first API is to receive one or more input values indicating one or more job identifiers of the one or more software workloads.” The modification would be obvious because one of ordinary skill in the art would be motivated to identify a resource and a workload identifier which identifies one or more workloads that may be added or removed from a resource as inputs (Hrischuk, paragraph [0096]). 
As per Claim 3, the rejection of Claim 1 is incorporated; and the combination of Lu and Itoh does not explicitly disclose: wherein the one or more software workloads are to be identified by the first API based, at least in part, on an output value of a third API to perform the one or more software workloads. However, Hrischuk discloses: wherein one or more software workloads are to be identified by a first API based, at least in part, on an output value of a third API to perform the one or more software workloads (paragraph [0091], “[…] the performance manager 121 provides a plurality of application programming interfaces (APIs) 281, for example, a service level API 281A that may be used to execute the machine implemented process of FIG. 4F, a physical headroom API 281B that is used to execute the process of FIG. 4G and a workload change API 281C that executes the process of FIG. 4E, described below in detail. The APIs receive various inputs and provide various outputs that are described below in detail.”; paragraph [0095], “Physical headroom API 281B is configured to provide an output including the computed headroom, a latency value at the optimal point, a confidence factor and a time range over which the headroom value is calculated.”; paragraph [0096], “Workload change API 281C may receive a resource identifier that identifies a resource and a workload identifier which identifies one or more workloads that may be added or removed from the resource as inputs.”). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hrischuk into the combined teachings of Lu and Itoh to include “wherein the one or more software workloads are to be identified by the first API based, at least in part, on an output value of a third API to perform the one or more software workloads.” The modification would be obvious because one of ordinary skill in the art would be motivated to identify a resource and a workload identifier which identifies one or more workloads that may be added or removed from a resource as inputs (Hrischuk, paragraph [0096]). As per Claim 4, the rejection of Claim 1 is incorporated; and the combination of Lu and Itoh does not explicitly disclose: wherein the one or more software workloads are to be identified by the first API based, at least in part, on performing a third API to launch the one or more software workloads. However, Hrischuk discloses: wherein one or more software workloads are to be identified by a first API based, at least in part, on performing a third API to launch the one or more software workloads (paragraph [0091], “[…] the performance manager 121 provides a plurality of application programming interfaces (APIs) 281, for example, a service level API 281A that may be used to execute the machine implemented process of FIG. 4F, a physical headroom API 281B that is used to execute the process of FIG. 4G and a workload change API 281C that executes the process of FIG. 4E, described below in detail. 
The APIs receive various inputs and provide various outputs that are described below in detail.”; paragraph [0095], “Physical headroom API 281B is configured to provide an output including the computed headroom, a latency value at the optimal point, a confidence factor and a time range over which the headroom value is calculated.”; paragraph [0096], “Workload change API 281C may receive a resource identifier that identifies a resource and a workload identifier which identifies one or more workloads that may be added or removed from the resource as inputs.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Hrischuk into the combined teachings of Lu and Itoh to include “wherein the one or more software workloads are to be identified by the first API based, at least in part, on performing a third API to launch the one or more software workloads.” The modification would be obvious because one of ordinary skill in the art would be motivated to identify a resource and a workload identifier which identifies one or more workloads that may be added or removed from a resource as inputs (Hrischuk, paragraph [0096]). Claims 9-11 are computer system claims corresponding to the processor claims hereinabove (Claims 2-4, respectively). Therefore, Claims 9-11 are rejected for the same reasons set forth in the rejections of Claims 2-4, respectively. Claims 16 and 17 are computer-implemented method claims corresponding to the processor claims hereinabove (Claims 1 and 4, respectively). Therefore, Claims 16 and 17 are rejected for the same reasons set forth in the rejections of Claims 1 and 4, respectively. Claims 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Itoh as applied to Claim 15 above, and further in view of US 2021/0150371 (hereinafter “Marder”). 
As per Claim 18, the rejection of Claim 15 is incorporated; and the combination of Lu and Itoh does not explicitly disclose: wherein the one or more software workloads are performed using a deep-learning computing system. However, Marder discloses: wherein one or more software workloads are performed using a deep-learning computing system (paragraph [0023], “As illustrated, a computing apparatus or system 105 is to process a deep learning workload 160 for an [sic] particular client or other agent 150 utilizing computer hardware. The compute [sic] hardware includes a set of processing resources, which may include resources of one or more processors 110 (shown as Proc-1 through Proc-N with processing cores 115), resources of a hardware accelerator, or other processing elements.”; paragraph [0024], “In a particular scenario, the deep learning workload 160 is to be processed according to preferences regarding a set of multiple performance indicators, shown as KPI preferences 165, wherein the set of performance indicators may include, but are not limited to, throughput, latency, core coverage, and power consumption.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Marder into the combined teachings of Lu and Itoh to include “wherein the one or more software workloads are performed using a deep-learning computing system.” The modification would be obvious because one of ordinary skill in the art would be motivated to provide effective mapping of processing hardware for deep learning to have a great impact on the performance of a neural network in processing a particular workload (Marder, paragraph [0015]). As per Claim 19, the rejection of Claim 15 is incorporated; and the combination of Lu and Itoh does not explicitly disclose: wherein the one or more software workloads are performed using one or more nodes of a deep-learning computing system. 
However, Marder discloses: wherein one or more software workloads are performed using one or more nodes of a deep-learning computing system (paragraph [0023], “As illustrated, a computing apparatus or system 105 is to process a deep learning workload 160 for an [sic] particular client or other agent 150 utilizing computer hardware. The compute [sic] hardware includes a set of processing resources, which may include resources of one or more processors 110 (shown as Proc-1 through Proc-N with processing cores 115), resources of a hardware accelerator, or other processing elements.”; paragraph [0024], “In a particular scenario, the deep learning workload 160 is to be processed according to preferences regarding a set of multiple performance indicators, shown as KPI preferences 165, wherein the set of performance indicators may include, but are not limited to, throughput, latency, core coverage, and power consumption.”; paragraph [0057], “As illustrated in FIG. 7A, a neural network 740 includes a collection of connected units or nodes 745, also referred to as artificial neurons. Typically, nodes are arranged in multiple layers.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Marder into the combined teachings of Lu and Itoh to include “wherein the one or more software workloads are performed using one or more nodes of a deep-learning computing system.” The modification would be obvious because one of ordinary skill in the art would be motivated to provide effective mapping of processing hardware for deep learning to have a great impact on the performance of a neural network in processing a particular workload (Marder, paragraph [0015]). 
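The limitations mapped above describe a first API whose call arguments identify one or more workloads to monitor, with a second API selected and invoked to obtain status across compute nodes. A minimal sketch of that pattern follows; every name in it is hypothetical and illustrative only — none comes from the application or from the cited references.

```python
# Hypothetical sketch of the claimed pattern: a first API whose call
# arguments identify workloads, which selects a second API and uses it
# to obtain status across compute nodes. All names are illustrative.

from dataclasses import dataclass

@dataclass
class WorkloadStatus:
    workload_id: str
    node: str
    state: str

def _query_node_agent(workload_id, nodes):
    # Stand-in backend: ask a per-node agent for status.
    return [WorkloadStatus(workload_id, n, "running") for n in nodes]

def _query_scheduler(workload_id, nodes):
    # Stand-in backend: ask a cluster scheduler for status.
    return [WorkloadStatus(workload_id, n, "queued") for n in nodes]

_SECOND_APIS = {"agent": _query_node_agent, "scheduler": _query_scheduler}

def monitor_workloads(workload_ids, nodes, backend="agent"):
    """First API: the arguments identify one or more workloads to
    monitor; a second API is selected and invoked to gather status
    for those workloads across the given compute nodes."""
    second_api = _SECOND_APIS[backend]           # "select a second API"
    statuses = []
    for wid in workload_ids:
        statuses.extend(second_api(wid, nodes))  # "via the selected second API"
    return statuses
```

The dispatch table is just one plausible way to model "selecting" a second API; the claims themselves do not dictate any particular selection mechanism.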
Response to Arguments

Applicant’s arguments with respect to Claims 1, 8, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to the Applicant’s disclosure: US 2005/0021736 (hereinafter “Carusi”) discloses monitoring performance of distributed applications; US 2019/0095478 (hereinafter “Tankersley”) discloses information technology networked entity monitoring with automatic reliability scoring.

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Qing Chen whose telephone number is 571-270-1071. The Examiner can normally be reached on Monday through Friday from 9:00 AM to 5:00 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, the Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Wei Mui, can be reached at 571-272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for more information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO customer service representative, call 800-786-9199 (in USA or Canada) or 571-272-1000. /Qing Chen/ Primary Examiner, Art Unit 2191

Prosecution Timeline

Jul 06, 2023
Application Filed
Aug 01, 2025
Non-Final Rejection — §103, §DP
Nov 05, 2025
Response Filed
Nov 30, 2025
Final Rejection — §103, §DP
Apr 03, 2026
Request for Continued Examination
Apr 08, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591415
INTELLIGENT AND PREDICTIVE MODULES FOR SOFTWARE DEVELOPMENT AND CODING USING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
2y 5m to grant Granted Mar 31, 2026
Patent 12591416
INTELLIGENT AND PREDICTIVE MODULES FOR SOFTWARE DEVELOPMENT AND CODING USING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
2y 5m to grant Granted Mar 31, 2026
Patent 12585460
SOFTWARE OBFUSCATION METHOD USING AN OPAQUE PREDICATE BASED ON MULTIPLYING MIXED BOOLEAN-ARITHMETIC EXPRESSIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12572348
Secure Application Acceleration System and Apparatus
2y 5m to grant Granted Mar 10, 2026
Patent 12572339
ACCELERATE INFERENCE PERFORMANCE ON ARTIFICIAL INTELLIGENCE ACCELERATORS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 99% (+51.9%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate

Based on 678 resolved cases by this examiner. Grant probability derived from career allow rate.
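The projection figures above can be reproduced under one simple assumed model: apply the interview lift multiplicatively to the base grant probability and cap the result at 99%. This formula is a guess at the dashboard's arithmetic, not a documented method.

```python
# Hypothetical reconstruction of the projection arithmetic shown above.
# The multiplicative lift and the 99% cap are assumptions, not a
# documented formula from this tool.

def projected_grant_probability(base_rate, interview_lift, cap=0.99):
    """Apply a relative interview lift to a base grant rate, capped."""
    return min(base_rate * (1.0 + interview_lift), cap)

base = 542 / 678                              # career allow rate, ~0.80
lifted = projected_grant_probability(0.80, 0.519)
# 0.80 * 1.519 = 1.2152, clipped to the assumed 0.99 cap -> 99%
```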
