Prosecution Insights
Last updated: April 19, 2026
Application No. 17/722,233

IDENTIFYING IDLE PROCESSORS USING NON-INTRUSIVE TECHNIQUES

Status: Non-Final OA (§103)
Filed: Apr 15, 2022
Examiner: BRACERO, ANDREW ANGEL
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (5 granted / 5 resolved; +45.0% vs TC avg; above average)
Interview Lift: +0.0% (minimal lift, based on resolved cases with interview)
Typical Timeline: 3y 3m average prosecution
Career History: 31 total applications across all art units, 26 currently pending

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 44.0% (+4.0% vs TC avg)
§102: 9.6% (-30.4% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Rates are compared against a Tech Center average estimate; based on career data from 5 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the amendment and/or remarks filed on 2025-09-17. In the current amendments, claims 1-5, 7-19, and 21-23 are presented for examination in this application (17364125) filed April 15, 2022. Claims 1-5, 7-8, and 12-19 are currently amended. The arguments made against the 35 U.S.C. 101 and 35 U.S.C. 103 rejections have been considered but were not all found to be persuasive.

The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2025-1-14 has been entered.

Response to Arguments

Applicant’s arguments and remarks filed 2025-12-04 have been fully considered. The arguments and remarks regarding the 35 U.S.C. 101 rejections were found to be persuasive. The arguments and remarks regarding the 35 U.S.C. 103 rejections were not found to be persuasive.
The 35 U.S.C. 101 rejections have been overcome and the 35 U.S.C. 103 rejections have been maintained.

35 U.S.C. 103

Applicant asserts: “Shinde is directed to using "machine learning models to predict datacenter behavior at multiple hardware levels of a datacenter." (Shinde, Abstract). Shinde also describes collecting data from out-of-band sensors to collect data from different computing devices in a datacenter. (Shinde, col. 6, lines 28-34). The Office Action asserts that Shinde teaches the bolded features of claim 1 and cites to column 28 lines 34-38 of Shinde, which states "[t]he BMC may be accessed via a local area network or serial bus using Intelligent Platform Management Interface (IPMI) and Simple Network Management Protocol (SNMP) polling, without participation of the device's main processing unit." (Office Action, p. 30). However, the cited portion of Shinde only states that IPMI can be used to access the BMC. Notably, the portion of Shinde cited by the Office Action is Shinde's only mention of IPMI in the entire reference. There are no additional mentions of using the IPMI for any purpose other than accessing the BMC, and there is no discussion of IPMI commands, much less an IPMI command used to collect data from out-of-band sensors, contained in the reference. Rather than teaching the use of IPMI or IPMI commands to collect data, Shinde consistently describes using SNMP polling as the mechanism for collecting data from the sensors. For example, Shinde recites "[c]ollecting data from out-of-band sensor subsystems, e.g., via SNMP polling, is described in further detail below" and "[a]ccording to embodiments, SNMP polling is used to collect data from the out-of-band sensors of a given device." (Shinde, col. 6, lines 33-35 & col. 28, lines 48-50). Applicant respectfully submits that simply stating "[t]he BMC may be accessed via a local area network or serial bus using ...
(IPMI)" without describing how the IPMI would be used to access the BMC, and without describing any specific IPMI commands to be used, does not teach or suggest at least ". .. wherein the service processor uses out-of-band functionality to collect the power consumption data independently from an operating system (OS) of the computing device in response to receiving the one or more commands, and wherein the one or more commands comprise an Intelligent Platform Management Interface (IPMI) command or a power distribution unit (PDU) command;" as recited in amended claim 1. Shinde's reliance on SNMP polling for data collection similarly fails to teach or suggest these features” Examiner’s response: The Examiner respectfully disagrees. While the only explicit mention of an Intelligent Platform Management Interface (IPMI) is mentioned in col 28 lines 35-36, there is a reference to an “interface” in the next paragraph at col 28 lines 41 which can be inferred to represent the IPMI, as there are no other interfaces mentioned between these two references of an interface. In addition, this paragraph mentions that the BMC manages the interface between OS and/or out-of-band subsystem and that the data may be accessed using an instruction set (which can be reasonably interpreted as a set of commands). Shinde does rely on SNMP polling for data collection but SNMP polling is not an exclusive process that has to be separate from an IPMI. Under broadest reasonable interpretation, it is the Examiner’s view that Shinde does teach the limitations argued of the independent claims. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7, 9-19, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Shinde et al. (US11443166B2, hereinafter referred to as Shinde) in view of Varadarajan (US20180232036A1, hereinafter referred to as Varadarajan).
Regarding claim 1 (currently amended):

Shinde teaches a method comprising: issuing, using a processing device, one or more commands to a service processor to collect power consumption data corresponding to a computing device, wherein the service processor uses out-of-band functionality to collect the power consumption data independently from an operating system (OS) of the computing device in response to receiving the one or more commands and wherein the one or more commands comprise an Intelligent Platform Management Interface (IPMI) command or a power distribution unit (PDU) command (see col 28 lines 25-32: “Device sensor data from out-of-band sensors 356 for a given device is collected by a sub-system (“out-of-band subsystem”) that is separate from the device's main processing unit. An out-of-band subsystem comprises a main controller, referred to herein as a baseboard management controller (BMC), that is connected to various components, including sensors and other controllers (“satellite controllers”) distributed among different computer components.”. Also see col 7 lines 49-59: “A server utilization model predicts future server utilization, memory, and network I/O utilization of a given device based on information from out-of-band sensors 356 associated with server device that detect information such as power consumption, temperature, fan speed, etc. Different types of server device utilization (e.g., I/O intensive, CPU intensive, and memory intensive) have different patterns of power consumption by the server, different temperatures of computer elements on the mother board, different fan speeds for different fans present in typical rack-mounted server configuration, etc.”. Also see col 28 lines 34-38: “The BMC may be accessed via a local area network or serial bus using Intelligent Platform Management Interface (IPMI) and Simple Network Management Protocol (SNMP) polling, without participation of the device's main processing unit.”.
Also see col 6 lines 35-47: “SNMP polling is further used to gather any of a variety of network metrics stored at SNMP counters 354, such as packet sizes, throughput counters maintained by switches, etc. Also, a network traffic analysis system 352 (such as IPFIX, sFlow, or Netflow, etc.) provides information from routers and switches servicing computing devices, including information about packet flows to and from the machines, and information about packet headers (when allowed by applicable management policies), while avoiding access to packet payloads themselves. A network management system 350 (such as OpenConfig) provides information about the configuration of networking elements and the relative states of the networking elements.” Also see col 28 lines 39-47:“BMCs can support complex instructions and provide complete out-of-band functionality of a service processor. The BMC manages the interface between operating system and/or hypervisor and the out-of-band subsystem. A BMC may use a dedicated memory to store device sensor data that stores metrics captured by sensors or satellite controllers, such metrics being about temperature, fan speed, and voltage. The sensor data may be accessed using the complex instruction set.”); determining, using the processing device, a set of features from the power consumption data for a first time period by calculating a set of power metrics for the first time period, (see claim 1: “wherein the training data comprises first hardware utilization data for one or more hardware levels of the plurality of hardware levels collected during a first time period and the first one or more predictions of hardware utilization generated using the first trained machine learning model” [Examiner note: i.e., emphasis added. 
Hardware utilization as recited in Shinde includes power utilization as evidenced by col 6 lines 21-32: “Different non-OS sources of hardware utilization information are used in a complementary fashion to provide several types of information about datacenter hardware utilization at the various levels of datacenter hardware to give a more complete picture of hardware utilization at the datacenter scale. FIG. 3 depicts a set of non-OS sources 350-356 utilized by a hierarchy of ML models 300 as described in detail below. Out-of-band sensors 356 comprise out-of-band sensor data collection subsystems, associated with respective devices in datacenter 200, that detect information about the physical state of the associated device, including power utilization, temperature, fan speed, etc.”.)];

classifying, using a machine learning (ML) model and the set of features, whether the computing device is idle or busy in the first time period (see col 15 line 66 to col 16 line 8: “Furthermore, predictions from hierarchy of models 300 may also be used to detect anomalies in datacenter usage at any level of datacenter hardware, including at the switch or server device level. According to an embodiment, a potential anomaly (referred to herein as a “deviation event”) for particular hardware in the datacenter is detected when the actual usage of the hardware differs (e.g., by a threshold percentage) from the predicted utilization generated by any applicable kind of deployed ML model in hierarchy of models 300.”);

receiving, at an application programming interface (API) provided by the processing device, user input specifying the computing device and the first time period (see col 31 lines 49-54: “Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720 and communication interface 718.
In the Internet example, a server 730 might transmit a requested code for an application program through Internet 728, ISP 726, local network 722 and communication interface 718.”. Also see col 30 lines 19-24: “Computer system 700 may be coupled via bus 702 to a display 712, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 714, including alphanumeric and other keys, is coupled to bus 702 for communicating information and command selections to processor 704.”); and outputting, using the API, a response with an indication of the computing device being idle in response to a classification that the computing device is idle (see col 5 lines 1-16: “The hardware utilization predictions provided by the hierarchy of models provide a comprehensive, and accurate, forecast of datacenter usage. Based on datacenter-level network utilization predictions, the datacenter automatically configures the datacenter hardware to avoid any predicted over-utilization of any given section of hardware in the datacenter. For example, when the hierarchy of models predicts that a certain switch in the datacenter will become a hot spot in the next three minutes, the datacenter automatically configures the datacenter hardware to route network traffic away from the switch, thereby preemptively alleviating the potential hot spot. Furthermore, the datacenter-level predictions provide administrators with trends in datacenter usage, which can help with datacenter administration and appropriate hardware provisioning without requiring over-provisioning.”. Also see col 31 lines 49-54: “Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720 and communication interface 718. In the Internet example, a server 730 might transmit a requested code for an application program through Internet 728, ISP 726, local network 722 and communication interface 718.”.). 
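For illustration, the “deviation event” test Shinde describes at col. 16 (flagging a potential anomaly when actual utilization differs from predicted utilization by more than a threshold percentage) can be sketched in Python as follows. This is a hypothetical rendering, not Shinde's implementation: the function name and the reading of the threshold as absolute percentage points (predicted 30% with a 10% threshold gives the 20%-40% acceptable range in Shinde's own example) are illustrative assumptions.

```python
def is_deviation_event(predicted_pct: float, actual_pct: float,
                       threshold_pct: float = 10.0) -> bool:
    """Flag a deviation event when actual utilization falls outside
    predicted +/- threshold, in percentage points, mirroring Shinde's
    example: predicted 30%, threshold 10% -> acceptable range 20%-40%."""
    return abs(actual_pct - predicted_pct) > threshold_pct

# Shinde's example: predicted max utilization 30%, actual 50% -> deviation
assert is_deviation_event(30.0, 50.0) is True
# Within the 20%-40% band -> no deviation event
assert is_deviation_event(30.0, 35.0) is False
```

Under this reading, a reading exactly at the band edge (e.g., 40%) is not flagged; where the boundary falls is not specified by the quoted passage.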
Even though Shinde implicitly teaches classifying, using a machine learning (ML) model and the set of features, whether the computing device is idle or busy in the first time period, because Shinde explains deviations in hardware utilization which account for times where power utilization is idle or busy, Varadarajan explicitly teaches doing so while taking into account the specific idle or busy times (see [0007]: “The intelligent power management device may be configured to determine, using the IPM agent, that the sub-device is active when the IPM agent detects an interaction by any of the plurality of users with the sub-device, or is idle when the IPM agent detects no interaction by any of the plurality of users with the sub-device”. Also see [0038]: “In the embodiments herein, the dependency of the IPM agent on the various power states of the IT device and its component is also considered. Various timeouts may adaptively change in every system that runs the IPM agent, based on data collected using basic machine learning techniques.”. [Examiner’s note: i.e., emphasis added. A person having ordinary skill in the art, applying the broadest reasonable interpretation, could take “active” to mean “busy,” as supported by [0031]: “A device or a device component idleness may refer to lack of significant activity associated with user's interactions directly with the device or interaction that impact device components. A device idle time duration may refer to an unbroken time duration associated with idleness of the device. Idleness may also be referred to as inactivity.”.)].
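The claimed pipeline of reducing power measurements for a time period to a set of power metrics and then classifying the device as idle or busy could be sketched as below. This is a minimal illustrative sketch, not the applicant's or either reference's implementation: the function names, the sample values, and the threshold-based classifier (a stand-in for the claimed ML model) are all assumptions.

```python
from statistics import mean

def power_features(samples_watts: list[float]) -> dict[str, float]:
    """Reduce raw power measurements for one time period to the kinds of
    metrics recited in the claims: max, min, and average consumption."""
    return {
        "max_w": max(samples_watts),
        "min_w": min(samples_watts),
        "avg_w": mean(samples_watts),
    }

def classify_idle(features: dict[str, float],
                  idle_cutoff_w: float = 50.0) -> str:
    """Hypothetical stand-in for the claimed ML classifier: treat the
    device as idle when even its peak draw stays under the cutoff."""
    return "idle" if features["max_w"] < idle_cutoff_w else "busy"

# Out-of-band readings (watts) for one time period, illustrative values
window = [42.0, 41.5, 43.2, 40.8]
feats = power_features(window)
assert classify_idle(feats) == "idle"
```

In the claimed method these features would feed a trained ML model rather than a fixed cutoff; the sketch only shows the shape of the feature set and the idle/busy output.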
Even though Shinde implicitly teaches outputting, using the API, a response with an indication of the computing device being idle in response to a classification that the computing device is idle, because Shinde explains outputting hardware utilization, and therefore power utilization, Varadarajan explicitly teaches outputting an indication of the idle state, evidenced both by turning off a display after a certain time of being idle and by displaying a value of the timeouts which is based on an idle duration table (see [0002]: “Typical examples include turning off a display after inactivity for a certain predefined time interval, or putting a computer to sleep after inactivity for a certain predefined time interval, or turning off a component of a device—like a network port, a sensor, etc. after inactivity for a certain predefined time interval.”. Also see [0008]: “The intelligent power management device may be configured to determine, using the IPM agent and the plurality of records in the interacting devices idle duration table, a plurality of timeouts for the sub-device, wherein a timeout of the plurality of timeouts indicates a value for a timer for the sub-device, and wherein when the timer reaches a predetermined time, a predetermined action of the plurality of actions from the power management policy occurs. The display device may be configured to display a value of the timeouts and the timer for the sub-device.”.).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Shinde and Varadarajan before him or her, to modify the method of claim 1 to include attributes of explicitly classifying a status as idle in order to optimize power savings by learning about patterns of idleness (see [0150]: “An embodiment herein observes the inactivity durations of each user and dynamically changes the timeouts per user to optimize power savings on the device.
A goal of embodiments herein is to maximize power savings by continuously learning about inactivity or idleness pattern of user and accordingly adapting the timeout values without affecting user experience.”).

Shinde does not explicitly teach causing, based on the response with the indication of the computing device being idle, a corrective action to be taken to increase usage of the computing device. Varadarajan, however, analogously teaches causing, based on the response with the indication of the computing device being idle, a corrective action to be taken to increase usage of the computing device (see para [0078]: “Each power manageable sub-device of the device 101 has multiple power states, denoted as P0-PN, where P0 is the OFF state/lowest power state, and PN is the highest power and performance state. The number of power states available is specific to every sub-device and the OS running on the device 101. If a sub-device D is in a state PK, then if performance demand increases, the sub-device D can be put in state P(K+1) which is the adjacent higher power and performance state.”).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Shinde and Varadarajan before him or her, to modify the method of claim 1 to include attributes of causing, based on the response with the indication of the computing device being idle, a corrective action to be taken to increase usage of the computing device in order to optimize power savings (see para [0006]: “wherein the IPM agent is configured to adaptively change, using historic usage data of a plurality of users of the intelligent power management device, the power management actions to optimize a power saving on the plurality of sub-devices for each of the plurality of users.”).

Regarding claim 2:

Shinde in view of Varadarajan teaches the method of claim 1.
Shinde further teaches the power consumption data comprises a plurality of power measurements (see col 7 lines 49-59 “A server utilization model predicts future server utilization, memory, and network I/O utilization of a given device based on information from out-of-band sensors 356 associated with server device that detect information such as power consumption, temperature, fan speed, etc. Different types of server device utilization (e.g., I/O intensive, CPU intensive, and memory intensive) have different patterns of power consumption by the server, different temperatures of computer elements on the mother board, different fan speeds for different fans present in typical rack-mounted server configuration, etc.”), determining the set of features comprises determining, from the plurality of power measurements, the set of power metrics for the first time period, the set of metrics comprising at least one of a maximum power consumption value, a minimum power consumption value, or an average power consumption value by the computing device for the first time period (see col 16 lines 2-8: “According to an embodiment, a potential anomaly (referred to herein as a “deviation event”) for particular hardware in the datacenter is detected when the actual usage of the hardware differs (e.g., by a threshold percentage) from the predicted utilization generated by any applicable kind of deployed ML model in hierarchy of models 300.”. Also see col 16 lines 9-22: “For example, a pre-determined deviation threshold percentage for datacenter 200 is 10%. Deployed ToR switch model 420 predicts that the maximum utilization of the switch will be 30% during a given period of time, and, during that period of time, the actual maximum utilization of switch 220B is 50%. 
Because the actual utilization of switch 220B during the time period is different than the prediction of switch network utilization by more than the pre-determined deviation threshold, ML service 250 automatically detects a deviation event for ToR switch 220B. Given a deviation threshold percentage of 10%, actual maximum network utilization of ToR switch 220B that is outside of the range of 20%-40% during that time would be classified as a deviation event for switch 220B.”. Also see claim 1: “wherein the training data comprises first hardware utilization data for one or more hardware levels of the plurality of hardware levels collected during a first time period and the first one or more predictions of hardware utilization generated using the first trained machine learning model;”) [(Examiner’s note: i.e., emphasis added. Hardware utilization as recited in Shinde includes power utilization as evidenced by col 6 lines 21-32: “Different non-OS sources of hardware utilization information are used in a complementary fashion to provide several types of information about datacenter hardware utilization at the various levels of datacenter hardware to give a more complete picture of hardware utilization at the datacenter scale. FIG. 3 depicts a set of non-OS sources 350-356 utilized by a hierarchy of ML models 300 as described in detail below. 
Out-of-band sensors 356 comprise out-of-band sensor data collection subsystems, associated with respective devices in datacenter 200, that detect information about the physical state of the associated device, including power utilization, temperature, fan speed, etc.”.)], and the set of features comprises the first set of power metrics (see col 9 lines 17-32: “At step 504, using training data, a second machine learning model is trained to predict hardware utilization at a second hardware level of the plurality of hardware levels given hardware utilization features recorded in the training data to produce a second trained machine learning model, where the training data comprises hardware utilization data for one or more hardware levels of the plurality of hardware levels collected during a first time period and the first one or more predictions of hardware utilization generated using the first trained machine-learning model. For example, a machine learning service 250 (FIG. 2) trains a ToR switch ML model to predict ToR switch utilization (i.e., at ToR switch hardware level 262 of the datacenter) based, at least in part, on a set of historical hardware utilization information and predictions made by one or more trained server utilization models.”). [(Examiner’s note: i.e., emphasis added. Hardware utilization as recited in Shinde includes power utilization as evidenced by col 6 lines 21-32: “Different non-OS sources of hardware utilization information are used in a complementary fashion to provide several types of information about datacenter hardware utilization at the various levels of datacenter hardware to give a more complete picture of hardware utilization at the datacenter scale. FIG. 3 depicts a set of non-OS sources 350-356 utilized by a hierarchy of ML models 300 as described in detail below. 
Out-of-band sensors 356 comprise out-of-band sensor data collection subsystems, associated with respective devices in datacenter 200, that detect information about the physical state of the associated device, including power utilization, temperature, fan speed, etc.”.)]

Regarding claims 13 and 17:

Claims 13 and 17 recite analogous limitations to claim 2 and are therefore rejected on the same grounds as claim 2.

Regarding claim 3:

Shinde in view of Varadarajan teaches the method of claim 2. Shinde further teaches wherein the set of features further comprises a device type of the computing device (see col 8 line 65 to col 9 line 7: “At step 502, first one or more predictions of hardware utilization at a first hardware level of a plurality of hardware levels in a system of networked computing devices are generated using a first trained machine learning model. To illustrate in the context of datacenter 200 of FIG. 2 (which includes networked computing devices, such as server devices 212-218), trained server utilization model 416 of FIG. 4 receives data from instantaneous sensor readings of out-of-band sensors 402, which measures physical statistics for the associated server device 216”).

Regarding claims 14 and 18:

Claims 14 and 18 recite analogous limitations to claim 3 and are therefore rejected on the same grounds as claim 3.

Regarding claim 4:

Shinde in view of Varadarajan teaches the method of claim 2. Shinde further teaches wherein the set of features further comprises a central processing unit (CPU) type of a CPU of the computing device and a graphics processing unit (GPU) type of a GPU of the computing device (see col 1 lines 61-64: “Specifically, acquiring utilization information from an operating system requires executing instructions on the host CPU, and those CPU cycles are no longer available to clients.”. 
Also see col 24 lines 24-30: “An ANN is amenable to vectorization for data parallelism, which may exploit vector hardware such as single instruction multiple data (SIMD), such as with a graphical processing unit (GPU). Matrix partitioning may achieve horizontal scaling such as with symmetric multiprocessing (SMP) such as with a multicore central processing unit (CPU) and or multiple coprocessors such as GPUs.”.).

Regarding claim 5:

Shinde in view of Varadarajan teaches the method of claim 1. Shinde further teaches the power consumption data comprises a plurality of power measurements (see col 7 lines 49-59: “A server utilization model predicts future server utilization, memory, and network I/O utilization of a given device based on information from out-of-band sensors 356 associated with server device that detect information such as power consumption, temperature, fan speed, etc. Different types of server device utilization (e.g., I/O intensive, CPU intensive, and memory intensive) have different patterns of power consumption by the server, different temperatures of computer elements on the mother board, different fan speeds for different fans present in typical rack-mounted server configuration, etc.”), determining the set of features comprises determining, from the plurality of power measurements, the set of power metrics for the first time period, the set of power metrics comprising at least one of a first maximum power consumption value, a first minimum power consumption value, and a first average power consumption value by the computing device for the first time period (see col 16 lines 2-8: “According to an embodiment, a potential anomaly (referred to herein as a “deviation event”) for particular hardware in the datacenter is detected when the actual usage of the hardware differs (e.g., by a threshold percentage) from the predicted utilization generated by any applicable kind of deployed ML model in hierarchy of models 300.”. 
Also see col 16 lines 9-22: “For example, a pre-determined deviation threshold percentage for datacenter 200 is 10%. Deployed ToR switch model 420 predicts that the maximum utilization of the switch will be 30% during a given period of time, and, during that period of time, the actual maximum utilization of switch 220B is 50%. Because the actual utilization of switch 220B during the time period is different than the prediction of switch network utilization by more than the pre-determined deviation threshold, ML service 250 automatically detects a deviation event for ToR switch 220B. Given a deviation threshold percentage of 10%, actual maximum network utilization of ToR switch 220B that is outside of the range of 20%-40% during that time would be classified as a deviation event for switch 220B.”. Also see claim 1: “wherein the training data comprises first hardware utilization data for one or more hardware levels of the plurality of hardware levels collected during a first time period and the first one or more predictions of hardware utilization generated using the first trained machine learning model;”) [(Examiner’s note: i.e., emphasis added. Hardware utilization as recited in Shinde includes power utilization as evidenced by col 6 lines 21-32: “Different non-OS sources of hardware utilization information are used in a complementary fashion to provide several types of information about datacenter hardware utilization at the various levels of datacenter hardware to give a more complete picture of hardware utilization at the datacenter scale. FIG. 3 depicts a set of non-OS sources 350-356 utilized by a hierarchy of ML models 300 as described in detail below. 
Out-of-band sensors 356 comprise out-of-band sensor data collection subsystems, associated with respective devices in datacenter 200, that detect information about the physical state of the associated device, including power utilization, temperature, fan speed, etc.”.)], determining, from the plurality of power measurements, a second set of power metrics for a second time period, the second set of power metrics comprising at least one of a second maximum power consumption value, a second minimum power consumption value, or a second average power consumption value by the computing device for the second time period (see col 16 lines 2-8: “According to an embodiment, a potential anomaly (referred to herein as a “deviation event”) for particular hardware in the datacenter is detected when the actual usage of the hardware differs (e.g., by a threshold percentage) from the predicted utilization generated by any applicable kind of deployed ML model in hierarchy of models 300.”. Also see col 9 lines 17-32: “At step 504, using training data, a second machine learning model is trained to predict hardware utilization at a second hardware level of the plurality of hardware levels given hardware utilization features recorded in the training data to produce a second trained machine learning model, where the training data comprises hardware utilization data for one or more hardware levels of the plurality of hardware levels collected during a first time period and the first one or more predictions of hardware utilization generated using the first trained machine-learning model. For example, a machine learning service 250 (FIG. 2) trains a ToR switch ML model to predict ToR switch utilization (i.e., at ToR switch hardware level 262 of the datacenter) based, at least in part, on a set of historical hardware utilization information and predictions made by one or more trained server utilization models.”. 
Also see claim 1: “… based, at least in part, on second hardware utilization data for the one or more hardware levels collected during a second time period subsequent to the first time period, and the second one or more predictions of hardware utilization at the first hardware level, generating a prediction of hardware utilization at the second hardware level using the second trained machine learning model”.) [(Examiner’s note: i.e., emphasis added. Hardware utilization as recited in Shinde includes power utilization as evidenced by col 6 lines 21-32: “Different non-OS sources of hardware utilization information are used in a complementary fashion to provide several types of information about datacenter hardware utilization at the various levels of datacenter hardware to give a more complete picture of hardware utilization at the datacenter scale. FIG. 3 depicts a set of non-OS sources 350-356 utilized by a hierarchy of ML models 300 as described in detail below. Out-of-band sensors 356 comprise out-of-band sensor data collection subsystems, associated with respective devices in datacenter 200, that detect information about the physical state of the associated device, including power utilization, temperature, fan speed, etc.”.)], and aggregating the set of power metrics and the second set of power metrics into the set of features (see claim 8: “a sensor coupled to at least one of the plurality of non-throttled subsystems to obtain an operating signal from the at least one of the non-throttled subsystems at a first time interval, the sensor configured to determine a state of the at least one of the non-throttled subsystems from the operating signal, wherein the state is associated with one of the different power consumption levels, and wherein the sensor is configured to dynamically derive a combined measurement of power consumption of the plurality of non-throttled subsystems at a first time interval based at least in part on the state of the at least one of the 
non-throttled subsystems, wherein the combined measurement of power consumption of the plurality of non-throttled subsystems is used to compute a power usage requirement for the at least one of the non-throttled components at a second time interval after the first time interval to maintain an average power consumption of the data processing system over time that is associated with a history of the power consumption of the data processing system including the first time interval and the power usage requirement under a limit.”). Regarding claims 15 and 19: Claims 15 and 19 recite analogous limitations to claim 5 and are therefore rejected on the same grounds as claim 5. Regarding claim 7: Shinde in view of Varadarajan teaches the method of claim 1. Shinde further teaches wherein the service processor is a baseboard management controller (BMC) located on a circuit board of the computing device (see col 28 lines 25-32: “Device sensor data from out-of-band sensors 356 for a given device is collected by a sub-system (“out-of-band subsystem”) that is separate from the device's main processing unit. An out-of-band subsystem comprises a main controller, referred to herein as a baseboard management controller (BMC), that is connected to various components, including sensors and other controllers (“satellite controllers”) distributed among different computer components.”.) Regarding claim 22: Claim 22 recites analogous limitations to claim 7 and is therefore rejected on the same grounds as claim 7. Regarding claim 9: Shinde in view of Varadarajan teaches the method of claim 1. Shinde further teaches wherein the ML model is trained based on historical power consumption data and ground truth data and is deployed as an object to an endpoint device comprising the processing device (see col 11 lines 24-26: “During an initial training phase, the model is trained based on the labeled training data subset to identify correlations between observed historical data and ground truths.”. 
Also see col 11 lines 41-45: “Trained models that are deployed to perform predictions for particular hardware of datacenter 200 continue to identify patterns of live input data. As such, the trained models become specialized to the hardware for which the models respectively produce predictions.”.) Regarding claim 10: Shinde in view of Varadarajan teaches the method of claim 1. Varadarajan further teaches providing a user interface (UI) dashboard, the UI dashboard presenting the indication (see [0058]: “A user interface may be used to specify timeout values for different timers and associated power management actions that are made available by the OS and also by the IPM agent.”.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Shinde and Varadarajan before him or her, to modify the method of claim 10 to include attributes of having a user interface dashboard presenting indications of idleness in order to optimize power savings (see [0006]: “… wherein a power management policy of the plurality of power management policies comprise power management actions for controlling power consumption of a sub-device among the plurality of sub-devices, wherein the power management policy is received from the remote server over the communication network, and wherein the IPM agent is configured to adaptively change, using historic usage data of a plurality of users of the intelligent power management device, the power management actions to optimize a power saving on the plurality of sub-devices for each of the plurality of users.”.). Regarding claim 11: Shinde in view of Varadarajan teaches the method of claim 1. 
Shinde further teaches wherein the ML model is at least one of a logistics regression model, a k-nearest neighbor model, a random forest classification model, a gradient boost model, or an Extreme Gradient Boost (XGBoost) model (see col 8 lines 3-8: “There are different potential supervised regression algorithms that can be used to train a server utilization models 310 to predict future utilization of server devices, according to embodiments, such as the Random Forest Regression algorithm described in connection with training ML models in further detail below.”) Regarding claim 12 (currently amended): Shinde teaches a processor, comprising (see col 29 lines 42-52: “According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination”): one or more processor units to issue one or more commands to a service processor to collect power consumption data corresponding to a computing device, wherein the service processor uses out-of-band functionality to collect the power consumption data independently from an operating system (OS) of the computing device in response to receiving the one or more commands (see col 28 lines 25-32: “Device sensor data from out-of-band sensors 356 for a given device is collected by a sub-system (“out-of-band subsystem”) that is separate from the device's main processing unit. 
An out-of-band subsystem comprises a main controller, referred to herein as a baseboard management controller (BMC), that is connected to various components, including sensors and other controllers (“satellite controllers”) distributed among different computer components.”. Also see col 7 lines 49-59: “A server utilization model predicts future server utilization, memory, and network I/O utilization of a given device based on information from out-of-band sensors 356 associated with server device that detect information such as power consumption, temperature, fan speed, etc. Different types of server device utilization (e.g., I/O intensive, CPU intensive, and memory intensive) have different patterns of power consumption by the server, different temperatures of computer elements on the mother board, different fan speeds for different fans present in typical rack-mounted server configuration, etc.”. Also see col 28 lines 34-38: “The BMC may be accessed via a local area network or serial bus using Intelligent Platform Management Interface (IPMI) and Simple Network Management Protocol (SNMP) polling, without participation of the device's main processing unit.”. Also see col 6 lines 35-47: “SNMP polling is further used to gather any of a variety of network metrics stored at SNMP counters 354, such as packet sizes, throughput counters maintained by switches, etc. Also, a network traffic analysis system 352 (such as IPFIX, sFlow, or Netflow, etc.) provides information from routers and switches servicing computing devices, including information about packet flows to and from the machines, and information about packet headers (when allowed by applicable management policies), while avoiding access to packet payloads themselves. 
A network management system 350 (such as OpenConfig) provides information about the configuration of networking elements and the relative states of the networking elements.”); determine a set of features from the power consumption data for a first time period by calculating a set of power metrics for the first time period, (see claim 1: “wherein the training data comprises first hardware utilization data for one or more hardware levels of the plurality of hardware levels collected during a first time period and the first one or more predictions of hardware utilization generated using the first trained machine learning model” [Examiner note: i.e., emphasis added. Hardware utilization as recited in Shinde includes power utilization as evidenced by col 6 lines 21-32: “Different non-OS sources of hardware utilization information are used in a complementary fashion to provide several types of information about datacenter hardware utilization at the various levels of datacenter hardware to give a more complete picture of hardware utilization at the datacenter scale. FIG. 3 depicts a set of non-OS sources 350-356 utilized by a hierarchy of ML models 300 as described in detail below. Out-of-band sensors 356 comprise out-of-band sensor data collection subsystems, associated with respective devices in datacenter 200, that detect information about the physical state of the associated device, including power utilization, temperature, fan speed, etc.”.)], classify the computing device as idle or busy in the first time period (see col 15 line 66 to col 16 line 8: “Furthermore, predictions from hierarchy of models 300 may also be used to detect anomalies in datacenter usage at any level of datacenter hardware, including at the switch or server device level. 
According to an embodiment, a potential anomaly (referred to herein as a “deviation event”) for particular hardware in the datacenter is detected when the actual usage of the hardware differs (e.g., by a threshold percentage) from the predicted utilization generated by any applicable kind of deployed ML model in hierarchy of models 300.”), receive, at an application programming interface (API) provided by the processing device, user input specifying the computing device and the first time period (see col 31 lines 49-54: “Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720 and communication interface 718. In the Internet example, a server 730 might transmit a requested code for an application program through Internet 728, ISP 726, local network 722 and communication interface 718.”. Also see col 30 lines 19-24: “Computer system 700 may be coupled via bus 702 to a display 712, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 714, including alphanumeric and other keys, is coupled to bus 702 for communicating information and command selections to processor 704.”); and output, using the API, a response with an indication of the computing device being idle in response to a classification that the computing device is idle (see col 5 lines 1-16: “The hardware utilization predictions provided by the hierarchy of models provide a comprehensive, and accurate, forecast of datacenter usage. Based on datacenter-level network utilization predictions, the datacenter automatically configures the datacenter hardware to avoid any predicted over-utilization of any given section of hardware in the datacenter. 
For example, when the hierarchy of models predicts that a certain switch in the datacenter will become a hot spot in the next three minutes, the datacenter automatically configures the datacenter hardware to route network traffic away from the switch, thereby preemptively alleviating the potential hot spot. Furthermore, the datacenter-level predictions provide administrators with trends in datacenter usage, which can help with datacenter administration and appropriate hardware provisioning without requiring over-provisioning.”. Also see col 31 lines 49-54: “Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720 and communication interface 718. In the Internet example, a server 730 might transmit a requested code for an application program through Internet 728, ISP 726, local network 722 and communication interface 718.”.) and wherein the one or more commands comprise an Intelligent Platform Management Interface (IPMI) command or a power distribution unit (PDU) command (See col 28 lines 34-38: “The BMC may be accessed via a local area network or serial bus using Intelligent Platform Management Interface (IPMI) and Simple Network Management Protocol (SNMP) polling, without participation of the device's main processing unit.”.) Even though Shinde implicitly teaches classify the computing device as idle or busy in the first time period as Shinde explains deviations in hardware utilization which account for times where power utilization is idle or busy, Varadarajan explicitly teaches doing so taking account of the specific idle or busy times (see [0007]: “The intelligent power management device may be configured to determine, using the IPM agent, that the sub-device is active when the IPM agent detects an interaction by any of the plurality of users with the sub-device, or is idle when the IPM agent detects no interaction by any of the plurality of users with the sub-device”. 
Also see [0038]: “In the embodiments herein, the dependency of the IPM agent on the various power states of the IT device and its component is also considered. Various timeouts may adaptively change in every system that runs the IPM agent, based on data collected using basic machine learning techniques.”. [Examiner’s note i.e., emphasis added. Active, taken in its broadest reasonable interpretation by a person having ordinary skill in the art could take “active” to mean “busy” as supported by [0031]: “A device or a device component idleness may refer to lack of significant activity associated with user's interactions directly with the device or interaction that impact device components. A device idle time duration may refer to an unbroken time duration associated with idleness of the device. Idleness may also be referred to as inactivity.”.)]. Even though Shinde implicitly teaches outputting, using the API, a response with indication of the computing device being idle in response to a classification that the computing device is idle as Shinde explains outputting hardware utilization, and therefore power utilization, Varadarajan explicitly teaches outputting an indication of the idle state evidenced both by turning off a display after a certain time of being idle and by displaying a value of the timeouts which is based off of an idle duration table (see [0002]: “Typical examples include turning off a display after inactivity for a certain predefined time interval, or putting a computer to sleep after inactivity for a certain predefined time interval, or turning off a component of a device—like a network port, a sensor, etc. after inactivity for a certain predefined time interval.”. 
Also see [0008]: “The intelligent power management device may be configured to determine, using the IPM agent and the plurality of records in the interacting devices idle duration table, a plurality of timeouts for the sub-device, wherein a timeout of the plurality of timeouts indicates a value for a timer for the sub-device, and wherein when the timer reaches a predetermined time, a predetermined action of the plurality of actions from the power management policy occurs. The display device may be configured to display a value of the timeouts and the timer for the sub-device.”.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Shinde and Varadarajan before him or her, to modify the method of claim 1 to include attributes of explicitly classifying a status as idle in order to optimize power savings by learning about patterns of idleness (see [0150]: “An embodiment herein observes the inactivity durations of each user and dynamically changes the timeouts per user to optimize power savings on the device. A goal of embodiments herein is to maximize power savings by continuously learning about inactivity or idleness pattern of user and accordingly adapting the timeout values without affecting user experience.”). Shinde does not explicitly teach causing, based on the response with the indication of the computing device being idle, a corrective action to be taken to increase usage of the computing device. Varadarajan, however, analogously teaches causing, based on the response with the indication of the computing device being idle, a corrective action to be taken to increase usage of the computing device (see para [0078]: “Each power manageable sub-device of the device 101 has multiple power states, denoted as P0-PN, where P0 is the OFF state/lowest power state, and PN is the highest power and performance state. 
The number of power states available is specific to every sub-device and the OS running on the device 101. If a sub-device D is in a state PK, then if performance demand increases, the sub-device D can be put in state P(K+1) which is the adjacent higher power and performance state. ”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Shinde and Varadarajan before him or her, to modify the method of claim 12 to include attributes of causing, based on the response with the indication of the computing device being idle, a corrective action to be taken to increase usage of the computing device in order to optimize power savings (see para [0006]: “wherein the IPM agent is configured to adaptively change, using historic usage data of a plurality of users of the intelligent power management device, the power management actions to optimize a power saving on the plurality of sub-devices for each of the plurality of users.”). Regarding claim 16 (currently amended): Shinde teaches a system comprising: a memory device, and a processing device coupled to the memory device (see col 29 lines 42-52: “According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. 
The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination”.), wherein the processing device is to : issue one or more commands to a service processor to collect power consumption data corresponding to a computing device, wherein the service processor uses out-of-band functionality to collect the power consumption data independently from an operating system (OS) of the computing device in response to receiving the one or more commands (see col 28 lines 25-32: “Device sensor data from out-of-band sensors 356 for a given device is collected by a sub-system (“out-of-band subsystem”) that is separate from the device's main processing unit. An out-of-band subsystem comprises a main controller, referred to herein as a baseboard management controller (BMC), that is connected to various components, including sensors and other controllers (“satellite controllers”) distributed among different computer components.”. Also see col 7 lines 49-59: “A server utilization model predicts future server utilization, memory, and network I/O utilization of a given device based on information from out-of-band sensors 356 associated with server device that detect information such as power consumption, temperature, fan speed, etc. Different types of server device utilization (e.g., I/O intensive, CPU intensive, and memory intensive) have different patterns of power consumption by the server, different temperatures of computer elements on the mother board, different fan speeds for different fans present in typical rack-mounted server configuration, etc.”. 
Also see col 28 lines 34-38: “The BMC may be accessed via a local area network or serial bus using Intelligent Platform Management Interface (IPMI) and Simple Network Management Protocol (SNMP) polling, without participation of the device's main processing unit.”. Also see col 6 lines 35-47: “SNMP polling is further used to gather any of a variety of network metrics stored at SNMP counters 354, such as packet sizes, throughput counters maintained by switches, etc. Also, a network traffic analysis system 352 (such as IPFIX, sFlow, or Netflow, etc.) provides information from routers and switches servicing computing devices, including information about packet flows to and from the machines, and information about packet headers (when allowed by applicable management policies), while avoiding access to packet payloads themselves. A network management system 350 (such as OpenConfig) provides information about the configuration of networking elements and the relative states of the networking elements.”); determine a set of features from the power consumption data for a first time period by calculating a set of power metrics for the first time period, (see claim 1: “wherein the training data comprises first hardware utilization data for one or more hardware levels of the plurality of hardware levels collected during a first time period and the first one or more predictions of hardware utilization generated using the first trained machine learning model” [Examiner note: i.e., emphasis added. Hardware utilization as recited in Shinde includes power utilization as evidenced by col 6 lines 21-32: “Different non-OS sources of hardware utilization information are used in a complementary fashion to provide several types of information about datacenter hardware utilization at the various levels of datacenter hardware to give a more complete picture of hardware utilization at the datacenter scale. FIG. 
3 depicts a set of non-OS sources 350-356 utilized by a hierarchy of ML models 300 as described in detail below. Out-of-band sensors 356 comprise out-of-band sensor data collection subsystems, associated with respective devices in datacenter 200, that detect information about the physical state of the associated device, including power utilization, temperature, fan speed, etc.”.)], classify the computing device as idle or busy in the first time period (see col 15 line 66 to col 16 line 8: “Furthermore, predictions from hierarchy of models 300 may also be used to detect anomalies in datacenter usage at any level of datacenter hardware, including at the switch or server device level. According to an embodiment, a potential anomaly (referred to herein as a “deviation event”) for particular hardware in the datacenter is detected when the actual usage of the hardware differs (e.g., by a threshold percentage) from the predicted utilization generated by any applicable kind of deployed ML model in hierarchy of models 300.”), receive, at an application programming interface (API) provided by the processing device, user input specifying the computing device and the first time period (see col 31 lines 49-54: “Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720 and communication interface 718. In the Internet example, a server 730 might transmit a requested code for an application program through Internet 728, ISP 726, local network 722 and communication interface 718.”. Also see col 30 lines 19-24: “Computer system 700 may be coupled via bus 702 to a display 712, such as a cathode ray tube (CRT), for displaying information to a computer user. 
An input device 714, including alphanumeric and other keys, is coupled to bus 702 for communicating information and command selections to processor 704.”); and output, using the API, a response with an indication of the computing device being idle in response to a classification that the computing device is idle (see col 5 lines 1-16: “The hardware utilization predictions provided by the hierarchy of models provide a comprehensive, and accurate, forecast of datacenter usage. Based on datacenter-level network utilization predictions, the datacenter automatically configures the datacenter hardware to avoid any predicted over-utilization of any given section of hardware in the datacenter. For example, when the hierarchy of models predicts that a certain switch in the datacenter will become a hot spot in the next three minutes, the datacenter automatically configures the datacenter hardware to route network traffic away from the switch, thereby preemptively alleviating the potential hot spot. Furthermore, the datacenter-level predictions provide administrators with trends in datacenter usage, which can help with datacenter administration and appropriate hardware provisioning without requiring over-provisioning.”. Also see col 31 lines 49-54: “Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720 and communication interface 718. In the Internet example, a server 730 might transmit a requested code for an application program through Internet 728, ISP 726, local network 722 and communication interface 718.”.) 
and wherein the one or more commands comprise an Intelligent Platform Management Interface (IPMI) command or a power distribution unit (PDU) command (See col 28 lines 34-38: “The BMC may be accessed via a local area network or serial bus using Intelligent Platform Management Interface (IPMI) and Simple Network Management Protocol (SNMP) polling, without participation of the device's main processing unit.”.) Even though Shinde implicitly teaches classify the computing device as idle or busy in the first time period as Shinde explains deviations in hardware utilization which account for times where power utilization is idle or busy, Varadarajan explicitly teaches doing so taking account of the specific idle or busy times (see [0007]: “The intelligent power management device may be configured to determine, using the IPM agent, that the sub-device is active when the IPM agent detects an interaction by any of the plurality of users with the sub-device, or is idle when the IPM agent detects no interaction by any of the plurality of users with the sub-device”. Also see [0038]: “In the embodiments herein, the dependency of the IPM agent on the various power states of the IT device and its component is also considered. Various timeouts may adaptively change in every system that runs the IPM agent, based on data collected using basic machine learning techniques.”. [Examiner’s note i.e., emphasis added. Active, taken in its broadest reasonable interpretation by a person having ordinary skill in the art could take “active” to mean “busy” as supported by [0031]: “A device or a device component idleness may refer to lack of significant activity associated with user's interactions directly with the device or interaction that impact device components. A device idle time duration may refer to an unbroken time duration associated with idleness of the device. Idleness may also be referred to as inactivity.”.)]. 
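For illustration only (this sketch is not part of the record, and neither Shinde nor Varadarajan discloses this code): the claimed sequence of computing per-period power metrics (maximum, minimum, average) from out-of-band power measurements and then classifying the period as idle or busy could be sketched as follows. All function names and the 120 W idle threshold are hypothetical.

```python
from statistics import mean

def power_metrics(measurements: list[float]) -> dict[str, float]:
    """Compute the claimed set of power metrics (max, min, average
    power consumption in watts) for one time period."""
    return {
        "max_w": max(measurements),
        "min_w": min(measurements),
        "avg_w": mean(measurements),
    }

def classify(metrics: dict[str, float], idle_threshold_w: float = 120.0) -> str:
    """Classify the device as 'idle' or 'busy' for the period using a
    simple (hypothetical) average-power threshold rule."""
    return "idle" if metrics["avg_w"] < idle_threshold_w else "busy"

# Samples collected out-of-band (e.g., via a BMC over IPMI), in watts.
samples = [95.0, 100.0, 110.0]
print(classify(power_metrics(samples)))  # idle
```

A trained classifier (e.g., the random forest or XGBoost models recited in claim 11) would replace the fixed threshold, taking the aggregated metrics from both time periods as its feature vector.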
Even though Shinde implicitly teaches outputting, using the API, a response with indication of the computing device being idle in response to a classification that the computing device is idle as Shinde explains outputting hardware utilization, and therefore power utilization, Varadarajan explicitly teaches outputting an indication of the idle state evidenced both by turning off a display after a certain time of being idle and by displaying a value of the timeouts which is based off of an idle duration table (see [0002]: “Typical examples include turning off a display after inactivity for a certain predefined time interval, or putting a computer to sleep after inactivity for a certain predefined time interval, or turning off a component of a device—like a network port, a sensor, etc. after inactivity for a certain predefined time interval.”. Also see [0008]: “The intelligent power management device may be configured to determine, using the IPM agent and the plurality of records in the interacting devices idle duration table, a plurality of timeouts for the sub-device, wherein a timeout of the plurality of timeouts indicates a value for a timer for the sub-device, and wherein when the timer reaches a predetermined time, a predetermined action of the plurality of actions from the power management policy occurs. The display device may be configured to display a value of the timeouts and the timer for the sub-device.”.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Shinde and Varadarajan before him or her, to modify the method of claim 1 to include attributes of explicitly classifying a status as idle in order to optimize power savings by learning about patterns of idleness (see [0150]: “An embodiment herein observes the inactivity durations of each user and dynamically changes the timeouts per user to optimize power savings on the device. 
A goal of embodiments herein is to maximize power savings by continuously learning about inactivity or idleness pattern of user and accordingly adapting the timeout values without affecting user experience.”). Shinde does not explicitly teach causing, based on the response with the indication of the computing device being idle, a corrective action to be taken to increase usage of the computing device. Varadarajan, however, analogously teaches causing, based on the response with the indication of the computing device being idle, a corrective action to be taken to increase usage of the computing device (see para [0078]: “Each power manageable sub-device of the device 101 has multiple power states, denoted as P0-PN, where P0 is the OFF state/lowest power state, and PN is the highest power and performance state. The number of power states available is specific to every sub-device and the OS running on the device 101. If a sub-device D is in a state PK, then if performance demand increases, the sub-device D can be put in state P(K+1) which is the adjacent higher power and performance state. ”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Shinde and Varadarajan before him or her, to modify the method of claim 16 to include attributes of causing, based on the response with the indication of the computing device being idle, a corrective action to be taken to increase usage of the computing device in order to optimize power savings (see para [0006]: “wherein the IPM agent is configured to adaptively change, using historic usage data of a plurality of users of the intelligent power management device, the power management actions to optimize a power saving on the plurality of sub-devices for each of the plurality of users.”). Regarding claim 21: Shinde in view of Varadarajan teaches the system of claim 16. 
Shinde further teaches wherein the system comprises one or more of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing deep learning operations; a system for generating synthetic data; a system for generating multi-dimensional assets using a collaborative content platform (see col 12 lines 36-53: “According to embodiments, estimations of server workload type and operating system by deployed workload/OS ML models 320 are used in connection with clustering algorithms to group the machines of datacenter 200 based on their behavior. The Out-of-Band Server Utilization Estimation Application, referred to above, describes using ML algorithms trained on ILOM-based information to transform this raw utilization information (from the ILOM) into higher-level information about workload types. This information about workload types can be used to group the machines that are doing similar work. According to an embodiment, ML algorithms about workload types (i.e., workload/OA models 320) output multi-dimensional data, and clustering algorithms work well in grouping the multi-dimensional data. Such grouping can be used to analyze trends in datacenter hardware usage and also may be used to administer the groups of devices according to the workload type needs.”); a system implemented using an edge device; a system implemented using a robot; a system incorporating one or more virtual machines (VMs) (see col 32 lines 37-38: “VMM 830 instantiates and runs one or more virtual machine instances (‘guest machines’).”); a system implemented at least partially in a data center (see col 1 lines 20-24: “The present invention relates to using machine learning models to predict hardware utilization in a datacenter comprising a network of computing devices and, more particularly, to predicting hardware utilization at the datacenter level.”)
; or a system implemented at least partially using cloud computing resources (see col 2 lines 53-61: “As such, given the privacy and performance needs of clients of a cloud service provider, it would be beneficial to allow a cloud service provider to gather and utilize valuable datacenter resource utilization information without requiring the service provider to run software on the resources utilized by the client. Furthermore, it would be beneficial to forecast utilization of datacenter resources without stressing client devices, which would facilitate avoidance of future potential networking and hardware issues.”).

Claims 8 and 23 are rejected under 35 U.S.C 103 as being unpatentable over Shinde et al. (US11443166B2, hereinafter referred to as Shinde) in view of Varadarajan et al. (US20180232036A1, hereinafter referred to as Varadarajan) in further view of Riekstin et al. (“A Survey on Metrics and Measurement Tools for Sustainable Distributed Cloud Networks,” hereinafter referred to as Riekstin).

Regarding claim 8: Shinde in view of Varadarajan teaches the method of claim 1. Shinde in view of Varadarajan does not teach sending a command to the service processor, the command comprising a request for the power consumption data, wherein the service processor is a power distribution unit (PDU) comprising a power outlet coupled to a power cable of the computing device. Riekstin, however, analogously teaches wherein the service processor is a power distribution unit (PDU) comprising a power outlet coupled to a power cable of the computing device (see section IV, ‘Measurement Methods and Measurement Tools,’ subsection B, ‘Measurement Tools’: “The authors use power models of vendor-specific devices to calculate energy metrics, such as PUE and TEEE, in different granularities. The hardware-based approach is deployed to measure energy consumption directly from PDUs, and a software-based approach is used to calculate the energy consumption from counters obtained via SNMP/OpenFLow.
At the time of this study, the authors were not able to find the source code or download the tool.”).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Shinde, Varadarajan, and Riekstin before him or her, to modify the method of claim 1 to include attributes of a request for the power consumption data, wherein the service processor is a power distribution unit (PDU), in order to serve as a source for measuring energy consumption (see section IV, ‘Measurement Methods and Measurement Tools,’ subsection B, ‘Measurement Tools’: “The hardware-based approach is deployed to measure energy consumption directly from PDUs, and a software-based approach is used to calculate the energy consumption from counters obtained via SNMP/OpenFLow.”).

Regarding claim 23: Claim 23 recites analogous limitations to claim 8 and is therefore rejected on the same grounds as claim 8.

Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:

US8332665B2 — discloses determining power consumption of first and second time intervals
US20220050714A1 — discloses power consumption with the use of a deep neural network

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew A Bracero, whose telephone number is (571) 270-0592. The examiner can normally be reached Monday - Thursday, 9:00 a.m. - 5:00 p.m. ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached Monday - Friday, 9:00 a.m. - 5:00 p.m. ET at (571) 270-7519.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANDREW BRACERO/Examiner, Art Unit 2126 /DAVID YI/Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

Apr 15, 2022: Application Filed
Jun 12, 2025: Non-Final Rejection — §103
Aug 25, 2025: Interview Requested
Sep 02, 2025: Applicant Interview (Telephonic)
Sep 17, 2025: Response Filed
Oct 16, 2025: Final Rejection — §103
Dec 04, 2025: Response after Non-Final Action
Jan 14, 2026: Request for Continued Examination
Jan 28, 2026: Response after Non-Final Action
Mar 06, 2026: Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 100%
With Interview: 99% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
