Prosecution Insights
Last updated: April 19, 2026
Application No. 18/249,958

ORCHESTRATING DATACENTER WORKLOADS BASED ON ENERGY FORECASTS

Non-Final OA — §101, §103
Filed
Apr 20, 2023
Examiner
WAI, ERIC CHARLES
Art Unit
2195
Tech Center
2100 — Computer Architecture & Software
Assignee
Quantum Loophole Inc.
OA Round
1 (Non-Final)
82%
Grant Probability
Favorable
1-2
OA Rounds
3y 9m
To Grant
99%
With Interview

Examiner Intelligence

Grants 82% — above average
82%
Career Allow Rate
529 granted / 644 resolved
+27.1% vs TC avg
Strong +27% interview lift
+27.2%
Interview Lift
resolved cases with interview vs. without
Typical timeline
3y 9m
Avg Prosecution
27 currently pending
Career history
671
Total Applications
across all art units

Statute-Specific Performance

§101
15.7%
-24.3% vs TC avg
§103
50.0%
+10.0% vs TC avg
§102
11.4%
-28.6% vs TC avg
§112
14.4%
-25.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 644 resolved cases
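The per-statute deltas above are all measured against the same Tech Center baseline, and each one implies where the black line sits. A quick illustrative check (not the tool's actual code; the figures are taken directly from this report) recovers that baseline:

```python
# Illustrative check: subtracting each "vs TC avg" delta from the
# examiner's per-statute rate should recover the Tech Center average
# estimate (the black line). All four statutes imply the same baseline.
examiner_rate = {"§101": 15.7, "§103": 50.0, "§102": 11.4, "§112": 14.4}
delta_vs_tc   = {"§101": -24.3, "§103": +10.0, "§102": -28.6, "§112": -25.6}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
assert all(v == 40.0 for v in implied_tc_avg.values())
print(implied_tc_avg)  # every statute implies a ~40.0% TC baseline
```

So the black-line estimate is about 40% across the board, which makes the examiner's §101/§102/§112 figures well below center while §103 sits above it.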

Office Action

§101 §103
DETAILED ACTION

Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (abstract idea) without significantly more.

As per claim 1, in step 1 of the 101 analysis, the examiner has determined that the claim is directed to a method. Therefore, the claim is directed to one of the four statutory categories of invention. In step 2A prong 1 of the 101 analysis, the examiner has determined that the claim recites a judicial exception. Specifically, the limitations “generating a cloud-computing optimization plan based at least on the real-time utilization of the energy grids, the available energy supply, and the cloud-computing demand, wherein the cloud-computing optimization plan identifies a cloud-computing job schedule and one or more energy grids of the energy grids available to the datacenters” recite mental processes. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations fall within the “Mental Processes” grouping of abstract ideas under Step 2A Prong 1.
In step 2A prong 2 of the 101 analysis, the examiner has determined that the additional elements, alone or in combination, do not integrate the judicial exceptions into a practical application for the following rationale: The limitations “obtaining, via one or more machine learning models, at least a real-time utilization of energy grids and an available energy supply of the energy grids for use by datacenters over a period of time; obtaining, via the one or more machine learning models, a cloud-computing demand projected for the datacenters during the period of time” represent insignificant, extra-solution activities. The term "extra-solution activity" can be understood as "activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim" (MPEP 2106.05(g)). The examiner has determined that these limitations are directed to mere data gathering, which is a category of insignificant extra-solution activity (MPEP 2106.05(g)).

The limitation “providing the cloud-computing optimization plan to one or more tenants” constitutes insignificant post-solution activity. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional.
See MPEP 2106.05(d)(II), “presenting offers and gathering statistics.”

In step 2B of the 101 analysis, the examiner has determined that the additional elements, alone or in combination, do not recite significantly more than the abstract ideas identified above for the following rationale: The limitations “obtaining, via one or more machine learning models, at least a real-time utilization of energy grids and an available energy supply of the energy grids for use by datacenters over a period of time; obtaining, via the one or more machine learning models, a cloud-computing demand projected for the datacenters during the period of time” represent insignificant, extra-solution activities and are well-understood, routine, or conventional because they are directed to "receiving or transmitting data" (MPEP 2106.05(d)). These are additional elements that the courts have recognized as well-understood, routine, or conventional (MPEP 2106.05(d)). The citation of court cases in the MPEP meets the Berkheimer evidentiary burden, since citation of a court case in the MPEP is one of the four types of evidentiary support that can be used to prove that the additional elements are well-understood, routine, or conventional (see Berkheimer v. HP, Inc., 125 USPQ2d 1649). Thus, the limitations do not amount to significantly more than the abstract idea. As to the limitation “providing the cloud-computing optimization plan to one or more tenants,” the courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II), “presenting offers and gathering statistics.” Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. The claim is not patent eligible.

As per claim 8, it is an apparatus claim corresponding to claim 1, so it is rejected for the same reasons as claim 1.
Additionally, claim 8 recites “one or more computer-readable storage media; a processing system,” which recite generic computing components that do not integrate the judicial exceptions into a practical application, do not provide significantly more, and recite intended-use limitations that are not given patentable weight.

As per claim 15, it is a media/product-type claim corresponding to claim 1, so it is rejected for the same reasons as claim 1. Additionally, claim 15 recites “one or more computer-readable storage media,” which are generic computing components that do not integrate the judicial exceptions into a practical application and do not provide significantly more.

As per claim 2 (and similarly for claims 9 and 16), it recites “wherein the energy grids comprise renewable energy sources and non-renewable energy sources,” which further describes the abstract idea.

As per claim 3 (and similarly for claims 10 and 17), it recites “wherein the cloud-computing schedule assigns a routing path between each cloud-computing job of a plurality of cloud-computing jobs and a datacenter of the datacenters,” which further describes the abstract idea.

As per claim 4 (and similarly for claims 11 and 18), it recites “wherein providing the cloud-computing optimization plan to the one or more tenants comprises communicating with the one or more tenants via an application programming interface,” which further describes the abstract idea.

As per claim 5 (and similarly for claims 12 and 19), it recites “wherein obtaining the cloud-computing demand projected for the datacenters comprises identifying a current amount of cloud-computing jobs assigned to the datacenters and a historical pattern of the cloud-computing jobs assigned to the datacenters,” which further describes the abstract idea.
As per claim 6 (and similarly for claims 13 and 20), it recites “further comprising obtaining a news status and a weather status corresponding to geographies around the datacenters,” which further describes an insignificant extra-solution activity (i.e., data gathering) that does not integrate the judicial exceptions into a practical application and does not provide significantly more.

As per claim 7 (and similarly for claim 14), it recites “wherein the news status includes an indication of one or more of an event and an emergency,” which further describes the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-12, and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Guim Bernat et al. (US PG Pub No. 2020/0167205 A1) in view of Shaikh et al. (WO 2019/213466 A1), further in view of Dabbagh et al. (US PG Pub No. 2017/0093639 A1). Guim Bernat, Shaikh, and Dabbagh were disclosed in the IDS filed 04/20/2023.

Regarding claim 1, Guim Bernat teaches a method of operating a datacenter orchestration engine in a multi-tenant environment (Fig. 3, Para. [0085], the example orchestrator 202 includes an example workload scheduler 306; Fig. 3, Para.
[0119], based on the model, the workload scheduler 306 selects an optimal multi-tenant schedule for an edge platform [the orchestration engine operates in a multi-tenant environment]; Fig. 2, Para. [0066], the core data center 132 can be implemented by the edge platform 300 [the edge platform, which implements the data center, includes the orchestrator; thus, the orchestration engine is interpreted as a datacenter orchestration engine]), comprising: on a per-tenant basis: obtaining […]; obtaining […] (edge platform, the thermal controller 308 [part of the orchestrator] determines an estimate of the deviance from the performance "optimal point" [the orchestration engine obtains, for each tenant, the computing demand over the period of time]); generating a cloud-computing optimization plan based at least on the real-time utilization of the energy grids (Fig. 3, Para. [0119], based on the model, the workload scheduler 306 selects an optimal multi-tenant schedule for an edge platform, and the workload scheduler 306 utilizes a model-based schedule to determine how best to schedule tenant workloads [tenant workloads include the computing job; the multi-tenant schedule / optimization plan identifies the tenant workload / computing job schedule for each tenant]).

Guim Bernat does not teach obtaining, via one or more machine learning models, at least a real-time utilization of energy grids and an available energy supply of the energy grids for use by datacenters over a period of time; generating a cloud-computing optimization plan based at least on the real-time utilization of the energy grids, the available energy supply, and the cloud-computing demand, wherein the cloud-computing optimization plan identifies a cloud-computing job schedule and one or more energy grids of the energy grids available to the datacenters; providing the cloud-computing optimization plan to one or more tenants.
Shaikh discloses a method of operating a datacenter (Abstract), comprising: obtaining, via one or more machine learning models, at least a utilization of energy grids and an available energy supply of the energy grids for use by datacenters (Para. [0039], the power configuration can be based on power topology, visualization based on power availability, power configurations, and so on, and cloud power management can include datacenter power management of remote datacenters; cross datacenter power configuration and workload orchestration; machine learning for power configuration, where the machine learning can be based on cross-datacenter knowledge; accelerated simulations of power arrangements within a datacenter and across datacenters [machine learning models are used to obtain the available energy supply for use by datacenters]; Para. [0036], power configurations can be adapted to capture underutilized power from some portions of the datacenter [machine learning models are used to obtain power configurations, which include the indication of underutilized power; underutilized power then indicates that the utilization of energy is known]; Fig. 3, Para. [0036], software defined power can be used to match power sources such as grid power or renewable micro-grid power, uninterruptable power supplies, backup power, and so on, to power requirements of equipment such as electrical equipment and cooling equipment in datacenters [the utilization of energy is the utilization of energy grids, and available energy supply is the available energy supply of energy grids]); generating an optimization plan, wherein the optimization plan identifies one or more energy grids of the energy grids available to the datacenters; providing the cloud-computing optimization plan to one or more tenants (Fig. 2, Para.
[0033], the flow 200 includes modifying a power arrangement 210 within the datacenter based on the policy within the set of policies, and the power arrangement can include selecting a power source such as grid power, renewable micro-grid power, diesel-generator power, etc.; configuring power switches for distribution of power to data racks within the datacenter; determining whether buffer power such as battery buffer power may be needed to supplement grid power; power distribution unit settings; and so on [a selection is made of one or more energy grids of the energy grids available to the datacenters]; Fig. 1, Para. [0027], a power policy can include factors such as time, cost, source availability, source health, etc. [the energy grid selection, which is based on factors such as time and cost, is interpreted as an optimization plan]).

It would have been obvious to one of ordinary skill in the art before the effective filing date to obtain, via one or more machine learning models, at least a utilization of energy grids and an available energy supply of the energy grids for use by datacenters and to generate an optimization plan, wherein the optimization plan identifies one or more energy grids of the energy grids available to the datacenters. One would be motivated by the desire to use machine learning models to more accurately model energy grids to more effectively utilize power resources, as taught by Shaikh.

Guim Bernat and Shaikh do not teach obtaining, via the one or more machine learning models, a cloud-computing demand projected for the datacenters during the period of time.

Dabbagh discloses a method of operating a datacenter (Abstract), comprising: obtaining, via the one or more machine learning models, a cloud-computing demand projected for the datacenters during the period of time (Para.
[0020], embodiments described herein provide techniques for predicting an optimal number of physical servers to have online within a data center [e.g., of a cloud computing environment] at a future moment in time in order to satisfy a predicted client demand for deploying virtual workloads [a cloud-computing demand projected for the datacenters during the period of time is obtained]; Fig. 1, Para. [0027], the workload prediction component 150 is generally tasked with determining a number of active physical servers 170 that will be needed at a future moment in time; Fig. 3, Para. [0042], in addition to tuning the length of the prediction window, the workload prediction component 150 can tune other parameters of the prediction models 220 [including a neural network prediction model] to ensure accurate predictions of future workload [the workload prediction component uses a neural network prediction model / machine learning model to obtain the cloud-computing demand projected for the datacenters]); generating a cloud-computing optimization plan based at least on the cloud-computing demand (Para. [0020], embodiments described herein provide techniques for predicting an optimal number of physical servers to have online within a data center [e.g., of a cloud computing environment] at a future moment in time in order to satisfy a predicted client demand for deploying virtual workloads; Fig. 3, Para. [0051], the workload prediction component 150 then determines a number of physical servers to have active at a future point in time, based on the generated prediction [machine learning] models [determining the number of physical servers to have on at a future time to satisfy client demand is a cloud-computing optimization plan based on the projected cloud-computing demand of the client]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to obtain, via the one or more machine learning models, a cloud-computing demand projected for the datacenters. One would be motivated by the desire to use machine learning models to more accurately model projected cloud-computing demand to more effectively utilize computing resources.

Regarding claim 2, Shaikh teaches wherein the energy grids comprise renewable energy sources and non-renewable energy sources (Fig. 3, Para. [0036], software defined power can be used to match power sources such as grid power or renewable micro-grid power).

Regarding claim 3, Guim Bernat teaches wherein the cloud-computing schedule assigns a routing path between each cloud-computing job of a plurality of cloud-computing jobs and a datacenter of the datacenters (Fig. 3, Para. [0119], the thermal controller 308 computes a model involving several possible workload cost options, and based on the model, the workload scheduler 306 selects an optimal multi-tenant schedule for an edge platform [e.g., based on the current resource configuration] [the utilization of energy and the computing demand are part of the workload cost per tenant, which is part of the model for determining the optimal multi-tenant schedule; this optimal multi-tenant schedule is interpreted as a computing optimization plan]).

Regarding claim 4, Dabbagh teaches wherein providing the cloud-computing optimization plan to the one or more tenants comprises communicating with the one or more tenants via an application programming interface (Para. [0023]).

Regarding claim 5, Guim Bernat teaches wherein obtaining the cloud-computing demand projected for the datacenters comprises identifying a current amount of cloud-computing jobs assigned to the datacenters and a historical pattern of the cloud-computing jobs assigned to the datacenters (Para. [0029]).
Regarding claims 8-12 and 14-19, they are the apparatus and media claims corresponding to claims 1-5 above. Therefore, they are rejected for the same reasons as claims 1-5 above.

Claims 6-7, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Guim Bernat et al. (US PG Pub No. 2020/0167205 A1) in view of Shaikh et al. (WO 2019/213466 A1), in view of Dabbagh et al. (US PG Pub No. 2017/0093639 A1), further in view of Rasmussen (US PG Pub No. 2009/0112522 A1).

Regarding claims 6-7, Guim Bernat, Shaikh, and Dabbagh do not teach obtaining a news status and a weather status corresponding to geographies around the datacenters, wherein the news status includes an indication of one or more of an event and an emergency. Rasmussen teaches using inputs such as news and weather to create a model for data center energy management (Para. [0057], a model that accurately represents the workings of a specific data center, and accepts as inputs the IT load, outdoor weather statistics, time-of-day electric rates, etc. [news], may be used effectively in a data center energy management program. Unlike the measurement of an actual operating data center, which provides only data for the conditions at the time of measurement, a model can provide data for any input conditions fed to it.). It would have been obvious to one of ordinary skill in the art before the effective filing date to obtain a news status and a weather status corresponding to geographies around the datacenter. One would be motivated by the desire to more effectively model the data center, as taught by Rasmussen.

Regarding claims 13 and 20, they are the apparatus and media claims corresponding to claim 6 above. Therefore, they are rejected for the same reasons as claim 6 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC C WAI, whose telephone number is (571) 270-1012. The examiner can normally be reached Monday - Friday, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Eric C Wai/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Apr 20, 2023
Application Filed
Dec 18, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602261
CONTAINER SCHEDULING ACCORDING TO PREEMPTING A SET OF PREEMPTABLE CONTAINERS DEPLOYED IN A CLUSTER
2y 5m to grant Granted Apr 14, 2026
Patent 12602248
METHOD AND DEVICE OF LAUNCHING AN APPLICATION IN BACKGROUND
2y 5m to grant Granted Apr 14, 2026
Patent 12585498
SYSTEM AND METHOD FOR RESOURCE MANAGEMENT IN DYNAMIC SYSTEMS
2y 5m to grant Granted Mar 24, 2026
Patent 12585503
UNIFIED RESOURCE MANAGEMENT ARCHITECTURE FOR WORKLOAD SCHEDULERS
2y 5m to grant Granted Mar 24, 2026
Patent 12579001
REINFORCEMENT LEARNING SPACE STATE PRUNING USING RESTRICTED BOLTZMANN MACHINES
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+27.2%)
3y 9m
Median Time to Grant
Low
PTA Risk
Based on 644 resolved cases by this examiner. Grant probability derived from career allow rate.
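The projection tiles above restate the examiner's career numbers, and the headline figures follow from simple arithmetic. A minimal sketch, assuming the lift is applied additively in percentage points and the interview-adjusted figure is capped at 99% (both assumptions about the dashboard; its actual model is not disclosed):

```python
# Illustrative reconstruction of the headline projections from the career
# stats quoted in this report: 529 granted of 644 resolved, +27.2% interview
# lift. The additive lift and the 99% cap are assumptions, not disclosed logic.
granted, resolved = 529, 644
interview_lift_pts = 27.2  # percentage points, per the report

base_grant_prob = 100 * granted / resolved                  # ~82.1%
with_interview = min(base_grant_prob + interview_lift_pts, 99.0)

print(f"{base_grant_prob:.0f}%")  # 82%
print(f"{with_interview:.0f}%")   # 99%
```

Under these assumptions, the 82% tile is just the career allow rate rounded, and the 99% "with interview" tile is the capped sum.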

Sign in with your work email

Enter your email to receive a magic link. No password needed.

Personal email addresses (Gmail, Yahoo, etc.) are not accepted.

Free tier: 3 strategy analyses per month