Prosecution Insights
Last updated: April 19, 2026
Application No. 17/731,290

POWER OPTIMIZATION BASED ON WORKLOAD PLACEMENT IN A CLOUD COMPUTING ENVIRONMENT

Final Rejection: §101, §103
Filed: Apr 28, 2022
Examiner: HOANG, PHUONG N
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: VMware, Inc.
OA Round: 2 (Final)
Grant Probability: 70% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 4y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 70%, above average (240 granted / 345 resolved; +14.6% vs TC avg)
Interview Lift: strong, +50.8% on resolved cases with an interview
Average Prosecution: 4y 4m typical timeline; 21 applications currently pending
Career History: 366 total applications across all art units
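The headline allowance rate above can be reproduced from the raw counts, and the Tech Center average follows from the stated "+14.6% vs TC avg" delta. A minimal sanity check; the whole-percent rounding convention is an assumption:

```python
# Reproduce the career allow rate from the stated counts.
granted, resolved = 240, 345
allow_rate = granted / resolved      # fraction of resolved cases allowed
pct = round(allow_rate * 100)        # displayed as a whole percentage -> 70

# The Tech Center average is implied, not stated directly:
# reported rate minus the reported delta.
tc_avg = pct - 14.6                  # implied TC average, about 55.4%

print(pct, tc_avg)
```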

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 345 resolved cases.

Office Action

Rejections under §101 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1 – 21 are pending for examination. Claims 1, 3 – 4, 7 – 9, 12, 14 – 16, 18 and 21 are amended.

Examiner's Note

The prior art rejection below cites particular paragraphs, columns, and/or line numbers in the references for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1 – 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
As to claim 1, the claim recites:

"A power optimization system comprising:
a cloud management server coupled to a plurality of clusters via a network, wherein each cluster has a plurality of physical hosts with at least one virtual machine (VM) running on each physical host;
a resource management module residing in the cloud management server; and
a cloud power optimizer module residing in the resource management module, wherein the cloud power optimizer module is to:
determine background and active power usages of each physical host in the plurality of clusters;
determine power usage of each VM based on the determined background and active power usages of each physical host;
compute a cost score for each physical host based on estimated power utilization of the physical host and a thermal hotspot proximity factor associated with the physical host;
simulate a migration of a VM from the physical host to determine if power utilization drops below a lower bound of a power supply efficiency band, and modify the cost score of the physical host based on the simulation result; and
continuously balance a distribution of workload on the plurality of physical hosts based on the determined power usage of each VM."
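For orientation only, the optimization loop recited in claim 1 can be sketched in code. This is a hypothetical illustration, not part of the record: the class names, the cost formula, and the 1.5x efficiency-band penalty are all invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """A physical host in a cluster (illustrative fields only)."""
    name: str
    background_w: float       # background power draw, watts
    active_w: float           # active (load-dependent) power draw, watts
    hotspot_proximity: float  # 0.0 (far from thermal hotspot) .. 1.0 (at hotspot)
    psu_low_bound_w: float    # lower bound of the PSU's high-efficiency band
    vms: list = field(default_factory=list)

def vm_power(host: Host) -> float:
    """Attribute per-VM power from the host's background + active usage."""
    n = max(len(host.vms), 1)
    return (host.background_w + host.active_w) / n

def cost_score(host: Host) -> float:
    """Cost score from estimated power utilization and hotspot proximity."""
    utilization = host.background_w + host.active_w
    score = utilization * (1.0 + host.hotspot_proximity)
    # Simulate migrating one VM away; penalize the host if utilization
    # would drop below the PSU efficiency band's lower bound.
    if host.vms:
        after = utilization - vm_power(host)
        if after < host.psu_low_bound_w:
            score *= 1.5  # invented penalty factor for the sketch
    return score

def balance(hosts):
    """One balancing pass: move a VM from the costliest to the cheapest host."""
    hosts = sorted(hosts, key=cost_score)
    src, dst = hosts[-1], hosts[0]
    if src is not dst and src.vms:
        dst.vms.append(src.vms.pop())
    return hosts
```

In the claimed system this loop would run continuously; here a single `balance` pass illustrates the compute-score / simulate / migrate sequence.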
Step 2A, Prong 1: The limitations “determine background and active power usages of each physical host in the plurality of clusters; determine power usage of each VM based on the determined background and active power usages of each physical host; compute a cost score for each physical host based on estimated power utilization of the physical host and a thermal hotspot proximity factor associated with the physical host; simulate a migration of a VM from the physical host to determine if power utilization drops below a lower bound of a power supply efficiency band, and modify the cost score of the physical host based on the simulation result; and continuously balance a distribution of workload on the plurality of physical hosts based on the determined power usage of each VM” are all functions that can be reasonably performed in the human mind with the aid of pen and paper through observation, evaluation, judgment and opinion.

Prong 2: The additional elements “A power optimization system comprising: a cloud management server coupled to a plurality of clusters via a network, wherein each cluster has a plurality of physical hosts with at least one virtual machine (VM) running on each physical host; a resource management module residing in the cloud management server; and a cloud power optimizer module residing in the resource management module, wherein the cloud power optimizer module is to:” merely recite instructions to implement an abstract idea on a generic computer, or merely use a generic computer or computer components as a tool to perform the abstract idea. Thus, these additional elements do not integrate the judicial exception into a practical application.
Step 2B: The additional elements “A power optimization system comprising: a cloud management server coupled to a plurality of clusters via a network, wherein each cluster has a plurality of physical hosts with at least one virtual machine (VM) running on each physical host; a resource management module residing in the cloud management server; and a cloud power optimizer module residing in the resource management module, wherein the cloud power optimizer module is to:” and “on the plurality of physical hosts” merely recite instructions to implement an abstract idea on a generic computer, or merely use a generic computer or computer components as a tool to perform the abstract idea. Accordingly, the additional elements do not amount to significantly more than the abstract idea.

As to claim 2, “The system of claim 1, wherein the cloud power optimizer module further obtains thermal hotspot proximity of each physical host based on received cloud computing environment thermal hotspot location information, and wherein the cloud computing environment thermal hotspot location information is based on a thermal hotspot located on at a physical host, a rack and/or a room level” merely recites insignificant extra-solution activity such as gathering, displaying, updating, transmitting and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d). “and wherein the cloud power optimizer module further continuously balances the distribution of workload on the physical hosts based on the determined thermal hotspot proximity” merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea.
As to claim 3, The system of claim 1, wherein the cloud power optimizer module is to further: obtain power profiles of the plurality of physical hosts based on a type of each physical host in each cluster, and wherein the type of each physical host is based on information including older generation, newer generation, power hungry, limited power management capability, advance power management capability, and/or compute per watt usage; label the plurality of clusters based on the obtained power profiles merely recite insignificant extra-solution activity such as gathering, displaying, updating, transmitting and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d); determine a desired power profile of each VM running on each physical host in each cluster based on a resource usage; map the desired power profile of each VM to one of the labeled plurality of clusters; and continuously balance the distribution of workload on the plurality of physical hosts based on the mapping are all functions that can be reasonably performed in the human mind with the aid of pen and paper through observation, evaluation, judgment and opinion.
As to claim 4, The system of claim 3, wherein the cloud power optimizer module determines the active power usages of each physical host in the cloud computing environment based on the background power usage associated with each physical host are all functions that can be reasonably performed in the human mind with the aid of pen and paper through observation, evaluation, judgment and opinion; obtained utilization statistics at VM level of sub-systems in each physical host, and/or power usage requirement of each VM associated with the physical host, and wherein the sub-system is a graphics processing unit (GPU), central processing unit (CPU), memory, and/or field programmable gate array (FPGA) merely recite insignificant extra-solution activity such as gathering, displaying, updating, transmitting and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d).

As to claim 5, The system of claim 3, wherein the cloud power optimizer module further obtains utility rate structures, wherein the utility rate structures comprise time-based tariffs, demand-based tariffs, and/or usage-based tariffs merely recites insignificant extra-solution activity such as gathering, displaying, updating, transmitting and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d), and the cloud power optimizer module then continuously balances the distribution of workload on the plurality of physical hosts based on the mapping and the obtained utility rate structures are all functions that can be reasonably performed in the human mind with the aid of pen and paper through observation, evaluation, judgment and opinion.
As to claim 6, The system of claim 1, wherein the cloud power optimizer module continuously balances the distribution of workload on physical hosts based on performance and/or power profiles of the plurality of clusters are all functions that can be reasonably performed in the human mind with the aid of pen and paper through observation, evaluation, judgment and opinion.

As to claim 7, The system of claim 1, further comprising: a plurality of storage systems that are communicatively coupled to the plurality of clusters, wherein each storage system includes data sets, and wherein the cloud power optimizer module further to merely recites instructions to implement an abstract idea on a generic computer, or merely uses a generic computer or computer components as a tool to perform the abstract idea. receive a call to balance a data set residing in the plurality of storage systems merely recites insignificant extra-solution activity such as gathering, displaying, updating, transmitting and storing data, which does not integrate the judicial exception into a practical application. See MPEP 2106.05(d); determine whether migration of the data set is to be performed to balance hot data, balance a warm data, and/or re-tier the plurality of storage systems to improve storage efficiency; and migrate the data set from one storage system to another storage system based on a result of determination of whether the migration of the data set is to be performed to balance hot data, balance the warm data, and/or re-tier the plurality of storage systems in combination with the determined power usage, physical location, and/or background power usage of the physical hosts are all functions that can be reasonably performed in the human mind with the aid of pen and paper through observation, evaluation, judgment and opinion.
As to claim 8, The system of claim 1, wherein the cloud power optimizer module further continuously rebalances the distribution of workload on the plurality of physical hosts such that a power utilization of a physical host is within a high efficiency band of a power supply unit in the physical host, wherein the high efficiency band is based on a power supply efficiency curve associated with the power supply unit in the physical host are all functions that can be reasonably performed in the human mind with the aid of pen and paper through observation, evaluation, judgment and opinion.

As to claim 9, this is a non-transitory computer-readable storage medium claim of claim 1. As to claims 10 – 14, see rejection for claims 2 – 8 above. As to claim 15, this claim recites a method claim of claim 1. See rejection for claim 1 above. As to claims 16 – 21, see rejection for claims 2 – 8 above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3 – 6, 9, 11, 14 – 15, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jain et al. (US PUB 2011/0239010, hereinafter Jain) in view of Gupta et al. (US PUB 2010/0070784, hereinafter Gupta), and further in view of Moore et al. (US PUB 2006/0259793, hereinafter Moore).

As to claim 1, Jain teaches a power optimization system comprising: a cloud management server (“…computing cloud…” para. 0013) and (“…The management system 130 obtains information about power usage for various levels of organization, such as individual devices, and/or clusters, and/or sub-clusters, and/or circuits, etc. The management system 130 may obtain usage levels of devices 104 and derive usage levels of organizational units such as racks, clusters, containers, colos, etc., as well as performance, service-level agreements (SLAs), and priorities, etc. of hosted applications and VMs (virtual machines). The management system 130 may in turn determine which organizational units have excess and/or insufficient power. The management system 130 may then instruct (e.g., via network messages or directly via console terminals) various devices 104 to take actions that will reduce and/or increase power consumption…” para. 0016) coupled to a plurality of clusters (“…Clusters of servers…” para. 0013 and 0016 and element clusters 136 of figure 2) via a network (“…network…” para. 0016), wherein each cluster has a plurality of physical hosts (figure 2 shows cluster 136 has plurality of hosts) [with at least one virtual machine (VM) running on each physical host]; a resource management module residing in the cloud management server (“….The management system 130…” para. 0016 and 0025); and a cloud power optimizer module residing in the resource management module, wherein the cloud power optimizer module (“….
The management system 130 obtains information about power usage for various levels of organization, such as individual devices, and/or clusters, and/or sub-clusters, and/or circuits, etc. The management system 130 may obtain usage levels of devices 104 and derive usage levels of organizational units such as racks, clusters, containers, colos, etc., as well as performance, service-level agreements (SLAs), and priorities, etc. of hosted applications and VMs (virtual machines). The management system 130 may in turn determine which organizational units have excess and/or insufficient power. The management system 130 may then instruct (e.g., via network messages or directly via console terminals) various devices 104 to take actions that will reduce and/or increase power consumption…” para. 0016. Note: management module server is resource management module and cloud power optimizer module since it manages power to optimize power by minimizing the impact of power performance): determine background (“…background…” para. 0025) and active power (“…foreground…” para. 0025) usages (“…first migrate (or assign highest priority to) VMs 186 processing background tasks (e.g., computing index for web search, map-reduce jobs, scientific workloads, DryadLINQ/Map-Reduce jobs, etc.) from racks having power capacity overload to under-utilized servers hosted on racks below their power budget, and if that still doesn't suffice to meet the power cap on power overloaded racks, then the policy may be to migrate VMs processing foreground tasks and to assign interactive VMs the lowest processing priority for migration.…” para. 0025. Note: background and foreground/active tasks using power on overload or under-utilized servers) of each physical host in the plurality of clusters (“…The management system 130 also collects the power usage of individual VMs 186 running on each server…” para. 
0024); determine power usage of each VM based on the determined background and active power usages of each physical host (“…Further, hybrid schemes based on combining power utilization, priority, revenue-class, and user interactiveness (e.g., SLA penalty on performance), among other factors, can be used to prioritize VMs for migration to meet power budgets across racks. Two examples of such policies are as follows. The first hybrid example policy assigning priorities to VMs for migration would be to assign higher priority to VMs with higher power consumption and if two VMs have the same power usage, prioritize the VM with a lower SLA penalty on performance impact. The second hybrid example policy would be to assign higher priority to VMs with the least SLA penalty on performance degradation and if the SLA penalty on performance is the same for two VMs, select the VM with the higher power consumption. As above, if under-utilized servers are unavailable to host migrated VMs or migration costs are more expensive than their benefits, then the VMs may be temporarily suspended from execution and resumed at a later time either on the same server or on a different server, among other policies” para. 0025. 
Note: determine power usage of VMs that run the background and foreground tasks in order to migrate VMs of the overload server to under-utilized power server); [compute a cost score for each physical host based on estimated power utilization of the physical host and a thermal hotspot proximity factor associated with the physical host; simulate a migration of a VM from the physical host to determine if power utilization drops below a lower bound of a power supply efficiency band, and modify the cost score of the physical host based on the simulation result; and] continuously balance a distribution of workload on the plurality of physical hosts based on the determined power usage of each VM (“…first migrate (or assign highest priority to) VMs 186 processing background tasks (e.g., computing index for web search, map-reduce jobs, scientific workloads, DryadLINQ/Map-Reduce jobs, etc.) from racks having power capacity overload to under-utilized servers hosted on racks below their power budge …” para. 0025. Note: balancing by migrating VMs of servers that overload power to under-utilized power). While Jain teaches management system, clusters of servers 136 and sub-clusters 138 (para. 0013), Jain does not but Gupta teaches each cluster has a plurality of physical hosts with at least one virtual machine (VM) running on each physical host (“..a server cluster of host systems with virtual machines executing on the host system....” abstract); compute a cost score for each physical host (“...DPM module 74 computes for each resource a score denoted highScore as a sum of the weighted distance above the target utilization for each host system above that target.... “ para. 0023. Note: the specification, para. 
0010, 0033 – 0034 and 0038, defines cost score to be cost of power) and (“...To determine the cost/benefit of powering-off a particular host system of server cluster 20 DPM module 74 compares the risk-adjusted costs of power-off with a conservative projection of the power-savings benefit, and rejects the host system power-off unless the benefit exceeds the cost by a configurable factor...” para. 0071 – 0072) based on estimated power utilization of the physical host (“..wherein considering recommending host system power-off comprises iterating as follows: for each host system, determining utilization, and if the utilization for any host system is under a target utilization, iterating through powered on host systems by determining a "what if" plan assuming the powered on host system was powered off, and quantifying an impact of powering off the host system by determining a sum of a weighted distance below the target utilization for each host system below the target utilization, assuming the powered on host system is powered on and with the powered on host system powered off, and if the sum improves with the powered on host system powered off and the sum of target utilizations above the target utilization is not worse than that with the host system kept powered on, recommending that the host system be powered off.” Para. 0011) [and a thermal hotspot proximity factor associated with the physical host]; simulate a migration of a VM from the physical host to determine if power utilization drops below a lower bound of a power supply efficiency band (“..simulating moving some virtual machines from highly utilized host systems to the standby host system being recommended to be powered on. Recommending host system power-off includes calculating impact of powering off a host system with respect to decreasing the number of less-utilized host systems in the server cluster.
The impact of powering off is calculated by simulating moving all virtual machines from the host system, which is being recommended to be powered-off, to less-utilized host systems...” abstract and para. 0006. Note: recommend would comprise determining before recommending), and modify the cost score of the physical host based on the simulation result (“....And a third factor is that DPM module 74 chooses not to power down a host system if the conservatively-projected benefit of placing that host system into standby does not exceed by a specified multiplier the potential risk-adjusted cost of doing so, as described in cost/benefit analysis below.” Para. 0026 – 0027, 0030, 0034) and (“...DPM module 74 host system power-off cost/benefit computes the risk-adjusted costs of power-off of host system H as the sum of:” para. 0072 – 0081).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Jain by applying the teachings of Gupta because Gupta would provide power cost computation for the system and to migrate virtual machines to achieve the optimized system (para. 0010 and 0016).

Jain and Gupta do not but Moore teaches a thermal hotspot proximity factor associated with the physical host information (“…In addition, this type of energy consumption may lead to hot spots in the data center 100 as relatively large numbers of servers 112 consuming excess amounts of energy may dissipate relatively large amounts of heat…” para. 0042) and (“...The "discretization" of the server 112a-112n thermal multipliers is based upon a proximity-based heat distribution and "poaching" and is performed in a way that minimizes errors over the entire data center 100 as well as over individual physically localized zones in the data center 100.
In addition, the operational mode 400 may be employed to discourage the resource manager 120 from placing a relatively large amount of workload in a relatively small area...” para. 0068).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Jain and Gupta by applying the teachings of Moore because Moore would provide details where to balance the power on servers to obtain power optimization (para. 0022 – 0023, and 0068).

As to claim 2, Jain modified by Gupta teaches the system of claim 1, Jain teaches the cloud power optimizer module (“….The management system 130…” para. 0016 and 0025); Jain and Gupta do not but Moore teaches obtains thermal hotspot proximity of each physical host based on received cloud computing environment thermal hotspot location information (“…In addition, this type of energy consumption may lead to hot spots in the data center 100 as relatively large numbers of servers 112 consuming excess amounts of energy may dissipate relatively large amounts of heat…” para. 0042) and (“...The "discretization" of the server 112a-112n thermal multipliers is based upon a proximity-based heat distribution and "poaching" and is performed in a way that minimizes errors over the entire data center 100 as well as over individual physically localized zones in the data center 100. In addition, the operational mode 400 may be employed to discourage the resource manager 120 from placing a relatively large amount of workload in a relatively small area...” para. 0068), and wherein the cloud computing environment thermal hotspot location information is based on a thermal hotspot located on at a physical host, a rack and/or a room level (“…. More particularly, at step 312, the target power consumption levels for a plurality of racks in a row of racks are calculated.
In addition, the one of the plurality of power states to assign to the servers contained in the plurality of racks is determined at step 314…” para. 0056), and wherein the cloud power optimizer module further continuously balances the distribution of workload on the physical hosts based on the determined thermal hotspot proximity (“As described below, power distribution algorithms are implemented to maintain a substantially balanced temperature distribution in a geographically collocated cluster of compute equipment (hereinafter "data center"), such as, a data center, a collection of racks, a single rack, a cluster of servers, etc…” para. 0022 – 0023).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Jain and Gupta by applying the teachings of Moore because Moore would provide details where to balance the power on servers to obtain power optimization (para. 0022 – 0023).

As to claim 3, Jain modified by Gupta teaches the system of claim 1, wherein the cloud power optimizer module is to further: Jain teaches obtain power profiles of the plurality of physical hosts based on a type of each physical host in each cluster, and wherein the type of each physical host is based on information including older generation, newer generation, power hungry, limited power management capability, advance power management capability, and/or compute per watt usage (“…Incoming user requests may be re-directed and workloads may be migrated across geo-distributed data centers based on available power capacity, dynamic power pricing/availability, availability and capacity of hosting compute elements, migration costs such as the bandwidth and latency incurred in migration, among other factors. Other power control options provided by the software and hardware deployed may also be invoked.” para.
0004); [label the plurality of clusters based on the obtained power profiles]; determine a desired power profile of each VM running on each physical host in each cluster based on a resource usage (“…Similarly, if there is an increase in workload, the powered-off cluster devices are powered-on and the VMs are migrated across the powered-on devices to balance the load. For both scale down and scale-up powering in case of such prior art systems, migration of VMs is involved, which is resource and performance intensive, and which requires substantial processing, to migrate VMs across the cluster devices…” para. 0034) (“…migrate (or assign highest priority to) VMs 186 processing background tasks (e.g., computing index for web search, map-reduce jobs, scientific workloads, DryadLINQ/Map-Reduce jobs, etc.) from racks having power capacity overload to under-utilized servers hosted on racks below their power budget, and if that still doesn't suffice to meet the power cap on power overloaded racks, then the policy may be to migrate VMs processing foreground tasks and to assign interactive VMs the lowest processing priority for migration. Further, hybrid schemes based on combining power utilization, priority, revenue-class, and user interactiveness (e.g., SLA penalty on performance), among other factors, can be used to prioritize VMs for migration to meet power budgets across racks…” para. 
0025); [map the desired power profile of each VM to one of the [labeled] plurality of clusters]; and continuously balance the distribution of workload on the plurality of physical hosts based on the mapping (“…When a physical server exceeds its power cap, an Energy Enforcement Module (an embodiment of or component of management system 130) running on any server or device 104) can enforce the cap by selectively throttling resource allocations to individual applications or VMs, temporarily suspending a subset of running VMs from execution and resuming them at a later time either on the same server or on a different server…” para. 0019) and (“…Some or all of the workload causing increased power usage may be migrated to those portions (e.g., different colos) of the power infrastructure where power budget is not being exceeded. For stateless services, application instances or virtual machines (VM) hosting application instances may be terminated at overloaded sites and new instances or VMs hosting them instantiated at a later time on the same server or on a different server…” para. 0027). Jain and Moore do not but Gupta teaches label the plurality of clusters based on the obtained power profiles (“..In particular, DPM module 74 saves power in a cluster by recommending evacuation and power-off of hosts when both CPU and memory resources are lightly utilized. DPM module 74 recommends powering hosts back on when either CPU or memory resource utilization increases appropriately or host resources are needed to meet other user-specified constraints...” para. 0017) and map the desired power profile of each VM to one of the [labeled] plurality of clusters (“To determine the cost/benefit of powering-off a particular host system of server cluster 20 DPM module 74 compares the risk-adjusted costs of power-off with a conservative projection of the power-savings benefit, and rejects the host system power-off unless the benefit exceeds the cost by a configurable factor...” para. 
0071). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Jain by applying the teachings of Gupta because Gupta would provide power cost computation for the system and to migrate virtual machines to achieve the optimized system (para. 0010 and 0016).

As to claim 4, Jain modified by Gupta and Moore teaches the system of claim 3, Jain teaches wherein the cloud power optimizer module determines the active power usage of each physical host in the cloud computing environment based on the background power usages associated with each physical host (“…first migrate (or assign highest priority to) VMs 186 processing background tasks (e.g., computing index for web search, map-reduce jobs, scientific workloads, DryadLINQ/Map-Reduce jobs, etc.) from racks having power capacity overload to under-utilized servers hosted on racks below their power budget, and if that still doesn't suffice to meet the power cap on power overloaded racks, then the policy may be to migrate VMs processing foreground tasks and to assign interactive VMs the lowest processing priority for migration.…” para. 0025), obtained utilization statistics at VM level of sub-systems in each physical host (“…monitoring component 135 that provides the management system 130 with performance, resource usage, availability, power consumption, and network statistics, among other metrics and properties…” para. 0015), and/or power usage requirement of each VM associated with the physical host (“By allowing a power policy to be specified for individual applications or VMs, it may be possible to regulate power consumption on a per-application or a per-VM basis…” para. 0019), and wherein the sub-system is a graphics processing unit (GPU), central processing unit (CPU) (“…a power cap may be enforced, among other ways, by changing the CPU time or portion allocated to the VMs on the server.
In one embodiment, processor time throttling may itself be sufficient because processors typically contribute the majority of total power consumption of small form-factor servers…” para. 0022), memory, and/or field programmable gate array (FPGA).

As to claim 5, Jain modified by Gupta and Moore teaches the system of claim 3. Jain teaches wherein the cloud power optimizer module further obtains utility rate structures, wherein the utility rate structures comprise time-based tariffs, demand-based tariffs, and/or usage-based tariffs (“…When the management system 130 detects that the power consumption of a rack, for instance rack1 188 is above a predetermined power budget…” para. 0024 - 0025), and the cloud power optimizer module then continuously balances the distribution of workload on the plurality of physical hosts based on the mapping and the obtained utility rate structures (“…it selects the requisite number of VMs 186 (to meet the power budget) with the highest power utilization on that rack 188 for migration to under-utilized servers 190 on other racks--such as rack2 192--operating below their power budget…” para. 0024).

As to claim 6, Jain modified by Gupta and Moore teaches the system of claim 1. Jain teaches wherein the cloud power optimizer module continuously balances the distribution of workload on physical hosts based on performance and/or power profiles of the plurality of clusters (“…Incoming user requests may be re-directed and workloads may be migrated across geo-distributed data centers based on available power capacity, dynamic power pricing/availability, availability and capacity of hosting compute elements, migration costs such as the bandwidth and latency incurred in migration, among other factors. Other power control options provide migrate (or assign highest priority to) VMs 186 processing background tasks (e.g., computing index for web search, map-reduce jobs, scientific workloads, DryadLINQ/Map-Reduce jobs, etc.)
from racks having power capacity overload to under-utilized servers hosted on racks below their power budget, and if that still doesn't suffice to meet the power cap on power overloaded racks by the software and hardware deployed may also be invoked.” para. 0004).

As to claim 9, this claim recites the limitations of claim 1 as a non-transitory computer-readable storage medium claim. See the rejection of claim 1 above. Further, Jain teaches a non-transitory computer-readable storage medium storing instructions executable by a computing device having a cloud power optimizer module in a cloud computing environment (“…device readable media. This is deemed to include at least media such as optical storage (e.g., CD-ROM), magnetic media, flash ROM, or any current or future means of storing rapidly accessible digital information. The stored information can be in the form of machine executable instructions (e.g., compiled executable binary code), source code, bytecode, or any other information that can be used to enable or configure computing devices to perform the various embodiments discussed above…” para. 0028).

As to claims 10 - 11, these claims recite similar scope to claims 2 - 3. See the rejection of claims 2 - 3 above.

As to claim 15, this claim recites the limitations of claim 1 as a method claim. See the rejection of claim 1 above.

As to claims 16 and 17, these claims recite similar scope to claims 2 - 3. See the rejection of claims 2 - 3 above.

As to claims 14 and 18, these claims recite similar scope to claim 5. See the rejection of claim 5 above.

As to claim 20, this claim recites similar scope to claim 6. See the rejection of claim 6 above.

Claims 7, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Jain in view of Gupta and Moore, as applied to claims 1, 9, and 15 above, and further in view of Duong et al. (US PAT 11,169,835, hereinafter Duong).

As to claim 7, Jain modified by Gupta and Moore teaches the system of claim 1. Jain teaches a plurality of storage systems (“…datacenters…” para.
0004) that are communicatively coupled to the plurality of clusters (“…Clusters of servers…” para. 0013 and 0016 and element clusters 136 of figure 2), wherein each storage system includes data sets and wherein the cloud power optimizer module further to:

Jain, Gupta and Moore do not teach, but Duong teaches, receive a call to balance a data set (element 602 of figure 6 indicates “Receive command to execute a load balancing recommendation”) residing in the plurality of storage systems (“…VM data migration between storage devices is disclosed. The techniques described here find application in various data migration situations including carrying out a load balancing recommendation…” col. 2 lines 34 - 38); determine whether migration of the data set is to be performed (“…determining, based on a request to migrate a VM from a source device to a destination device, snapshot data and live data corresponding to the VM…” col. 2 lines 35 – 50) to balance hot data (“…live data…” col. 2 lines 35 – 50), balance the warm data, and/or re-tier the plurality of storage systems to improve storage efficiency (“…The techniques described here may be applied to a system such as system 100 to migrate VM data between storage devices 102-108. In the example, system 100 includes storage device 102, storage device 104, storage device 106, network 110, storage device 108, and VM load balancing server 112…” col. 2 lines 56 – 67); and migrate the data set from one storage system to another storage system based on a result of determination of whether the migration of the data set is to be performed to balance hot data, balance the warm data, and/or re-tier the plurality of storage systems in combination with the determined power usage, physical location (“…determining, based on a request to migrate a VM from a source device to a destination device…” col. 2 lines 35 - 55), and/or background power usage of the physical hosts.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Jain and Gupta by applying the teachings of Duong, because Duong teaches the same field of invention, data migration to improve load balancing, and the references therefore can be combined (title, abstract and figure 1). Duong further teaches migrating between storage systems to improve balance among the storage systems (col. 2).

As to claims 12 and 19, these claims recite similar scope to claim 7. See the rejection of claim 7 above.

Claims 8, 13 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Jain in view of Gupta and Moore, as applied to claims 1, 9, and 15 above, and further in view of Ravichandran et al. (US PUB 2011/0154066, hereinafter Ravichandran).

As to claim 8, Jain modified by Gupta and Moore teaches the system of claim 1. Jain teaches wherein the cloud power optimizer module further continuously rebalances the distribution of workload on the plurality of physical hosts such that a power utilization of a physical host (“…When a physical server exceeds its power cap, an Energy Enforcement Module (an embodiment of or component of management system 130) running on any server or device 104) can enforce the cap by selectively throttling resource allocations to individual applications or VMs, temporarily suspending a subset of running VMs from execution and resuming them at a later time either on the same server or on a different server…” para. 0019) and (“…Some or all of the workload causing increased power usage may be migrated to those portions (e.g., different colos) of the power infrastructure where power budget is not being exceeded.
For stateless services, application instances or virtual machines (VM) hosting application instances may be terminated at overloaded sites and new instances or VMs hosting them instantiated at a later time on the same server or on a different server…” para. 0027).

Jain, Gupta and Moore do not teach, but Ravichandran teaches, is within a high efficiency band of a power supply unit (“…a power supply 510…” para. 0074 and figure 10) in the physical host (“…the power manager determines the performance state of the load. (Block 210). This performance state may be, for example, any of the aforementioned C-type states, one of a plurality of power states of a computer…” para. 0048), wherein the high efficiency band is based on a power supply efficiency curve associated with the power supply unit in the physical host (“Using the fixed current values in the sub-bands of the load current range of FIG. 5 generates a new power curve B, which effectively represents a shift in the position of power curve A in FIG. 6. This shifted curve allows for the supply voltage to be reduced closer to the minimum supply voltage …” para. 0042).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention was made to modify Jain, Gupta, and Moore by applying the teachings of Ravichandran, because Ravichandran also teaches the same field of invention, a power management technique that controls loads to provide efficient power consumption (para. 0044). Ravichandran further provides details of managing band values to output efficient power curves (para. 0044 - 0046).

As to claims 13 and 21, these claims recite similar scope to claim 8. See the rejection of claim 8 above.

Response to Arguments

Claim Objections

Applicant’s arguments, with respect to claims 1 – 20 under the Claim Objections, have been fully considered and are persuasive. Therefore, the objection has been withdrawn.

Rejections under 35 U.S.C.
§ 101

Applicant’s arguments, with respect to claims 1 – 20 under the 101 rejection, have been fully considered but are not persuasive. Applicant argued that “Claims 1-21 stand rejected under 35 U.S.C. § 101 for allegedly being directed to non-statutory subject matter. These rejections are respectfully traversed. Claim 1, as amended, recites "compute a cost score for each physical host based on estimated power utilization of the physical host and a thermal hotspot proximity factor associated with the physical host; simulate a migration of a VM from the physical host to determine if power utilization drops below a lower bound of a power supply efficiency band, and modify the cost score of the physical host based on the simulation result", which is at least disclosed in paragraphs 36-39 of the Specification. Applicant respectfully submits that amended limitations involve a series of interrelated data inputs, real-time system telemetry, and predictive modeling that cannot be carried out by a human being, even with pen and paper. First, the computation of a cost score for each physical host is dependent on metrics such as estimated power utilization, the power supply efficiency score, compute-per-watt efficiency, and an exponential function based on a thermal hotspot proximity factor. These inputs themselves are not mere abstract values but are derived from real-time sensor data, vendor-supplied power statistics, and dynamically changing operating conditions. The power supply efficiency curve, for example, is a hardware-specific metric that varies depending on workload, temperature, and utilization patterns. The human mind is not equipped to intake, monitor, and process this variety of data across potentially hundreds or thousands of machines in real time. Moreover, the simulation of migrating a VM and determining whether the post-migration power utilization falls below a high-efficiency band introduces an inherently computer-specific operation.
This step does not simply involve a rule-based decision or a straightforward calculation. Instead, it requires forecasting the effect of a proposed change in system configuration (the VM migration) on the power efficiency of a host, by estimating future resource utilization levels and comparing them to hardware-specific power supply efficiency thresholds. This forward-looking simulation process is not something a human could perform mentally or on paper in any practical sense, particularly given the number of interacting elements and the real-time nature of data involved. Therefore, rejections under 35 USC 101 are overcome.” (pages 9 - 10 of remarks).

In response, predictive modeling is merely thinking, guessing, and estimating that a human can plan and perform with or without pen and paper. Computing a cost score is basically math. Further, the claimed computing is based on estimated power utilization, not on a real environment. The claims do not recite limitations to “monitor,” equip, and process hundreds or thousands of machines in real time. The process is just a simulation; therefore, it is an abstract idea.

Rejections under 35 U.S.C. § 112(b)

Applicant’s arguments, with respect to the rejections of claims 1 - 20 under the 112(b) rejection, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.

Rejections under 35 U.S.C. § 103

Applicant’s arguments, with respect to the rejections of claims 1 - 20 under the 103 rejection, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Jain, Gupta, Moore, Duong, and Ravichandran.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Cardoso, (US PUB 2019/0042330), discloses a method for managing heat in a central processing unit to reduce hot-spotting on a CPU (title, abstract and figures 1 – 6).
Zhao, (US PUB 2014/0082202), discloses a method for determining the cost of migrating virtual machines on physical machines in a cluster system (title, abstract and figures 1 – 7).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHUONG N HOANG whose telephone number is (571)272-3763. The examiner can normally be reached 9:00 - 5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KEVIN YOUNG, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHUONG N HOANG/
Examiner, Art Unit 2194

/KEVIN L YOUNG/
Supervisory Patent Examiner, Art Unit 2194
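For readers outside the prosecution, the claimed mechanism at the center of the §101 dispute (a per-host cost score built from estimated power utilization and an exponential thermal-hotspot proximity factor, plus a simulated VM migration checked against a power supply efficiency band) can be sketched as follows. This is an illustrative reconstruction from the claim language quoted above, not the applicant's actual implementation: the class names, the form of the exponential weighting, the band bounds, and the 1.5x penalty are all assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    est_power_w: float        # estimated power utilization in watts
    hotspot_distance: float   # distance from a thermal hotspot (0 = at hotspot)
    vm_power_w: dict          # per-VM estimated power draw in watts

# Hypothetical PSU efficiency band: the power supply operates in its
# high-efficiency region between these utilization bounds (watts).
EFFICIENCY_BAND_W = (200.0, 450.0)

def cost_score(host: Host, alpha: float = 1.0) -> float:
    """Cost score: estimated power utilization weighted by an exponential
    thermal-hotspot proximity factor (assumed functional form)."""
    return host.est_power_w * math.exp(alpha / (1.0 + host.hotspot_distance))

def migration_drops_below_band(host: Host, vm: str) -> bool:
    """Simulate removing `vm` from `host`; True if post-migration power
    utilization would fall below the efficiency band's lower bound."""
    post_migration_w = host.est_power_w - host.vm_power_w[vm]
    return post_migration_w < EFFICIENCY_BAND_W[0]

host = Host("host-1", est_power_w=260.0, hotspot_distance=0.5,
            vm_power_w={"vm-a": 80.0, "vm-b": 40.0})
score = cost_score(host)
if migration_drops_below_band(host, "vm-a"):
    # Migrating vm-a would push the PSU below its efficient band, so the
    # host's cost score for that candidate move is penalized (raised).
    score *= 1.5
```

In this sketch, migrating vm-a (260 W - 80 W = 180 W) would drop the host below the assumed 200 W lower bound, so the cost score is modified; migrating vm-b (220 W) would not. The examiner's position above is that this score-and-simulate loop is abstract regardless of such a software embodiment.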

Prosecution Timeline

Apr 28, 2022
Application Filed
Mar 17, 2025
Non-Final Rejection — §101, §103
Jun 23, 2025
Applicant Interview (Telephonic)
Jun 24, 2025
Response Filed
Jun 26, 2025
Examiner Interview Summary
Oct 02, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536052
SYSTEMS AND METHODS FOR DEPLOYING PERMISSIONS IN A DISTRIBUTED COMPUTING SYSTEM
2y 5m to grant Granted Jan 27, 2026
Patent 12450106
AUTOMATIC ACCESS CONTROL OF CALLS MADE OVER NAMED PIPES WITH OPTIONAL CALLING CONTEXT IMPERSONATION
2y 5m to grant Granted Oct 21, 2025
Patent 12430176
CONTROLLING OPERATION OF EDGE COMPUTING NODES BASED ON KNOWLEDGE SHARING AMONG GROUPS OF THE EDGE COMPUTING NODES
2y 5m to grant Granted Sep 30, 2025
Patent 12386665
METHOD FOR MANAGING RESOURCES, COMPUTING DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Aug 12, 2025
Patent 12373265
TECHNOLOGIES FOR RULES ENGINES ENABLING HANDOFF CONTINUITY BETWEEN COMPUTING TERMINALS
2y 5m to grant Granted Jul 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+50.8%)
4y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 345 resolved cases by this examiner. Grant probability derived from career allow rate.
