Prosecution Insights
Last updated: April 19, 2026
Application No. 18/183,161

SYSTEM, APPARATUS AND METHOD FOR CLOUD RESOURCE ALLOCATION

Final Rejection: §101, §103, §112
Filed: Mar 14, 2023
Examiner: RIGGINS, ARI FAITH COLEMA
Art Unit: 2197
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Industrial Technology Research Institute
OA Round: 2 (Final)
Grant Probability: 0% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 3m
Grant Probability with Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 1 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 39 across all art units (38 currently pending)

Statute-Specific Performance

§101: 27.8% (-12.2% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 1 resolved case.

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to claims filed on 11/27/2025. Claims 1-25 are pending.

Claim Objections

Claim 14 is objected to because of the following informalities: the limitation “power consumption adjustment strategy suggestions; and; parsing a job profile” should read “power consumption adjustment strategy suggestions; and parsing a job profile”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 13, and 14 recite the limitation “workload monitoring data checked for workload”. It is unclear what is meant by “checked for workload” in this limitation. For the sake of compact prosecution, Examiner will interpret this limitation to mean “workload monitoring data”. Claims 2-12 and 15-25 depend, directly or indirectly, from rejected claims and do not resolve the deficiencies thereof, and are therefore rejected for at least the same reasons.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-25 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception and is directed to that judicial exception, an abstract idea, as the exception has not been integrated into a practical application, and the claims further do not recite significantly more than the judicial exception. Examiner has evaluated the claims under the framework provided in the 2019 Patent Eligibility Guidance published in the Federal Register on 01/07/2019 and has provided such analysis below.

Step 1: Claims 1-12 are directed to a system and fall within the statutory category of machine. Claim 13 is directed to an apparatus and falls within the statutory category of machine. Claims 14-25 are directed to a method and fall within the statutory category of process. Therefore, “Are the claims to a process, machine, manufacture or composition of matter?” Yes.

To evaluate the Step 2A inquiry “Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?”, we must determine, at Step 2A Prong 1, whether the claim recites a law of nature, a natural phenomenon, or an abstract idea, and further whether the claim recites additional elements that integrate the judicial exception into a practical application.

Step 2A Prong 1: Claims 1, 13, and 14: The limitation “and through the job scheduler, parse a job profile of a job request obtained from a waiting queue”, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can mentally analyze a given job profile.
Further, the limitation “through the job scheduler, decide to execute a direct resource allocation for a job to be handled requested by the job request in response to determining that an available resource of at least one of the worker nodes meets a resource requirement of the job request;”, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe available resources of worker nodes and a resource requirement of a job request and, based on these observations, can mentally decide to execute a direct resource allocation by using mental comparison and analysis to determine that an available resource of at least one of the worker nodes meets a resource requirement of the job request.

Further, the limitation “and through the job scheduler, decide to execute an indirect resource allocation for the job to be handled requested by the job request in response to determining that the available resource of none of the worker nodes meets the resource requirement of the job request, and the resource requirement of the job request is met after preempting the resources used by one or more running jobs with low priority;”, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind.
For example, a person can observe available resources of worker nodes, a resource requirement of a job request, and resources used by running jobs with low priority and, based on these observations, can mentally decide to execute an indirect resource allocation by using mental comparison and analysis to determine that the available resource of none of the worker nodes meets the resource requirement of the job request, and the resource requirement of the job request is met after preempting the resources used by one or more running jobs with low priority. This may also be done with pencil and paper.

Further, the limitation “wherein executing the direct resource allocation for the job to be handled comprises finding a first worker node having an available resource matching the job profile through the job scheduler among the worker nodes; dispatching the job to be handled to the first worker node through the resource manager, so that the first worker node executes the job to be handled; and putting the job to be handled into a running queue through the job scheduler;”, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can mentally decide to execute a direct resource allocation comprising finding a first worker node having an available resource matching the job profile through the job scheduler among the worker nodes; dispatching the job to be handled to the first worker node through the resource manager, so that the first worker node executes the job to be handled; and putting the job to be handled into a running queue through the job scheduler.
Further, the limitation “wherein executing the indirect resource allocation for the job to be handled comprises through the job scheduler, finding a second worker node having a low priority job among the worker nodes, and notifying the second worker node so that the second worker node backs up an operation mode of the low priority job, and then releases resource used by the low priority job; putting another job request corresponding to the low priority job into the waiting queue through the job scheduler in response to receiving a resource release notification from the second worker node through the resource manager; dispatching the job to be handled to the second worker node through the resource manager, so that the second worker node executes the job to be handled; and putting the job to be handled into the running queue through the job scheduler”, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind.

For example, a person can mentally decide to execute an indirect resource allocation comprising, through the job scheduler, finding a second worker node having a low priority job among the worker nodes, and notifying the second worker node so that the second worker node backs up an operation mode of the low priority job, and then releases resource used by the low priority job; putting another job request corresponding to the low priority job into the waiting queue through the job scheduler in response to receiving a resource release notification from the second worker node through the resource manager; dispatching the job to be handled to the second worker node through the resource manager, so that the second worker node executes the job to be handled; and putting the job to be handled into the running queue through the job scheduler.

Therefore, Yes, claims 1, 13, and 14 recite a judicial exception.
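For illustration only, the direct/indirect allocation decision that the Office Action characterizes as mental steps can be sketched in a few lines of Python. All names, data shapes, and the simple integer resource model below are hypothetical editorial assumptions, not taken from the application or the claims:

```python
# Illustrative sketch (not the applicant's implementation) of the claimed
# decision: try direct allocation first; if no node has enough available
# resource, fall back to indirect allocation by preempting low-priority jobs.

def decide_allocation(job_request, worker_nodes, running_jobs):
    """Return ("direct", node), ("indirect", node), or (None, None)."""
    required = job_request["resource_requirement"]

    # Direct allocation: some worker node already meets the requirement.
    for node in worker_nodes:
        if node["available"] >= required:
            return "direct", node

    # Indirect allocation: no node qualifies outright, but preempting the
    # low-priority jobs running on a node would free enough resource.
    for node in worker_nodes:
        preemptable = sum(
            job["resource"]
            for job in running_jobs
            if job["node"] == node["name"] and job["priority"] == "low"
        )
        if node["available"] + preemptable >= required:
            return "indirect", node

    return None, None
```

For example, with a node holding 1 unit free plus a 4-unit low-priority job, a request for 4 units would be satisfied only indirectly, by preemption.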
Step 2A Prong 2: Claims 1, 13, and 14: The judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of “A cloud resource allocation system, comprising a plurality of worker nodes and a master node, each of the worker nodes and the master node is realized by using an electronic device with computing and networking functions, wherein the master node comprises: a storage, storing an orchestrator, the orchestrator comprising a job scheduler and a resource manager; and a processor, coupled to the storage, configured to:”, which are merely recitations of generic computing components being used as a tool to merely apply the abstract idea (see MPEP § 2106.05(f)), which does not integrate a judicial exception into a practical application.

Further, the claims recite the additional elements of “A cloud resource allocation apparatus, comprising: a storage, storing an orchestrator and providing a waiting queue and a running queue, wherein the orchestrator comprises a resource manager and a job scheduler;” and “A cloud resource allocation method, comprising: executing the following through a cloud resource allocation apparatus:”, which are merely recitations of a technological environment/field of use (see MPEP § 2106.05(h)), which does not integrate a judicial exception into a practical application.
Further, the claims recite the additional elements of “obtain a plurality of node resource information respectively reported by the worker nodes through the resource manager, wherein the node resource information includes workload monitoring data checked for workload and power consumption monitoring data checked for power consumption, and the power consumption monitoring data includes: at least one of power consumption statistics and energy efficiency, multi-level performance and power consumption statistics and analysis information including worker node level, job group level, job schedule level, and possible performance and power consumption adjustment strategy suggestions;”, which are merely recitations of data reception and transmission, which are insignificant extra-solution activity (see MPEP § 2106.05(g)) and do not integrate a judicial exception into a practical application.

Therefore, “Do the claims recite additional elements that integrate the judicial exception into a practical application?” No, these additional elements do not integrate the abstract idea into a practical application and they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. After having evaluated the inquiries set forth in Step 2A Prongs 1 and 2, it has been concluded that claims 1, 13, and 14 not only recite a judicial exception but are directed to the judicial exception, as the judicial exception has not been integrated into a practical application.

Step 2B: Claims 1, 13, and 14: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components, field of use/technological environment, and insignificant extra-solution activity, which do not amount to significantly more than the abstract idea. Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art: “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network… iv. Storing and retrieving information in memory” [MPEP § 2106.05(d)(II)].

Therefore, “Do the claims recite additional elements that amount to significantly more than the judicial exception?” No, these additional elements, alone or in combination, do not amount to significantly more than the judicial exception. Having concluded the analysis within the provided framework, claims 1, 13, and 14 do not recite patent-eligible subject matter under 35 U.S.C. § 101.

With regard to claims 2 and 15, the claims recite the additional elements of “wherein the first worker node, which has the available resource, found by the job scheduler meets a job goal and the job goal is a minimum power consumption cost, best performance, or a comprehensive measurement goal”, which are merely recitations of a technological environment/field of use (see MPEP § 2106.05(h)), which does not integrate a judicial exception into a practical application.
Further, claims 2 and 15 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 2 and 15 fail Step 2A Prong 2 (thus the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Therefore, claims 2 and 15 do not recite patent-eligible subject matter under 35 U.S.C. § 101.

With regard to claims 3 and 16, claim 16 recites the additional abstract idea recitation of “wherein executing the indirect resource allocation comprises: after finding the second worker node with the low priority job, notifying the second worker node to continuously release resource used by another low priority job different from the low priority job until an adjusted available resource meets the resource requirement of the job request in response to the adjusted available resource that still does not meet the resource requirement of the job request after the second worker node releasing the resource used by the low priority job”, which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can mentally decide to execute an indirect resource allocation comprising, after finding the second worker node with the low priority job, notifying the second worker node to continuously release resource used by another low priority job different from the low priority job until an adjusted available resource meets the resource requirement of the job request, in response to the adjusted available resource that still does not meet the resource requirement of the job request after the second worker node releases the resource used by the low priority job.
Further, the claims recite the additional elements of “wherein the one or more running jobs comprise at least the low priority job;”, which are merely recitations of a technological environment/field of use (see MPEP § 2106.05(h)), which does not integrate a judicial exception into a practical application.

Further, the claims recite the additional elements of “wherein after finding the second worker node with the low priority job through the job scheduler, the processor is configured to: notify the second worker node through the job scheduler to continuously release resource used by another low priority job different from the low priority job until an adjusted available resource meets the resource requirement of the job request in response to the adjusted available resource that still does not meet the resource requirement of the job request after the second worker node releasing the resource used by the low priority job”, which are merely recitations of data transmission, which is insignificant extra-solution activity (see MPEP § 2106.05(g)) and does not integrate a judicial exception into a practical application. Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art: “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network… iv. Storing and retrieving information in memory” [MPEP § 2106.05(d)(II)].
Further, claims 3 and 16 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 3 and 16 fail Step 2A Prong 2 (thus the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Therefore, claims 3 and 16 do not recite patent-eligible subject matter under 35 U.S.C. § 101.

With regard to claims 4 and 17, the claims recite the additional abstract idea recitations of “wherein in the master node, the processor is configured to: through the job scheduler, execute the direct resource allocation for each of a plurality of application group members in the job profile, in response to determining that none of the worker nodes is eligible for executing the indirect resource allocation after determining that the available resource of none the worker nodes meets a resource requirement of the job request based on the node resource information and the job profile,” and “further comprising executing the following through the cloud resource allocation apparatus: executing the direct resource allocation for each of a plurality of application group members in the job profile in response to determining that none of the worker nodes is eligible for executing the indirect resource allocation after determining that the available resource of none the worker nodes meets the resource requirement of the job request based on the node resource information and the job profile,” which, as drafted, are processes that, but for the recitation of generic computing components, under their broadest reasonable interpretation, cover performance of the limitations in the mind.
For example, a person can mentally assign resources to each of a plurality of application group members in response to mentally determining that none of the worker nodes is eligible for executing the indirect resource allocation after determining that the available resource of none of the worker nodes meets the resource requirement of the job request, based on mental comparison of node resource information and a job profile.

Further, the claims recite the additional abstract idea recitation of “comprising: finding a plurality of third worker nodes that meet a resource requirement of the application group members respectively among the worker nodes through the job scheduler;”, which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe a resource requirement and, based on this observation, can perform mental comparison to determine a plurality of third worker nodes that meet the resource requirement.

Further, the claims recite the additional elements of “dispatching each of the application group members to a corresponding third worker node through the resource manager;” and “and putting the job to be handled into the running queue through the job scheduler”, which are merely recitations of data transmission and storage, which is insignificant extra-solution activity (see MPEP § 2106.05(g)) and does not integrate a judicial exception into a practical application. Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network… iv.
Storing and retrieving information in memory” [MPEP § 2106.05(d)(II)].

Further, claims 4 and 17 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 4 and 17 fail Step 2A Prong 2 (thus the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Therefore, claims 4 and 17 do not recite patent-eligible subject matter under 35 U.S.C. § 101.

With regard to claims 5 and 18, the claims recite the additional elements of “wherein the storage further comprises a resource monitor, the processor is configured to: through the resource monitor, collect the node resource information respectively reported by the worker nodes;”, which are merely recitations of a technological environment/field of use (see MPEP § 2106.05(h)), which does not integrate a judicial exception into a practical application.

Further, the claims recite the additional elements of “dispatching each of the application group members to a corresponding third worker node through the resource manager;”, “wherein after putting the job to be handled into the running queue through the job scheduler, the processor is configured to: delete the job to be handled from the running queue through the job scheduler in response to receiving a notification indicating that the job to be handled has ended through the resource manager”, and “executing the following through the cloud resource allocation apparatus: collecting the node resource information respectively reported by the worker nodes;”, which are merely recitations of data storage, reception, and gathering, which is insignificant extra-solution activity (see MPEP § 2106.05(g)) and does not integrate a judicial exception into a practical application.
Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art: “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network… iv. Storing and retrieving information in memory” [MPEP § 2106.05(d)(II)].

Further, claims 5 and 18 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 5 and 18 fail Step 2A Prong 2 (thus the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Therefore, claims 5 and 18 do not recite patent-eligible subject matter under 35 U.S.C. § 101.

With regard to claims 6 and 19, the claims recite the additional abstract idea recitations of “confirm a system resource usage through a system inspector;” and “confirm a container resource usage actually used by a workload of each of a plurality of containers”, which, as drafted, are processes that, but for the recitation of generic computing components, under their broadest reasonable interpretation, cover performance of the limitations in the mind. For example, a person can mentally confirm a system or container resource usage through observation.
Further, the claims recite the additional elements of “wherein each of the worker nodes comprises a local processor configured to:”, “wherein node resource information corresponding to each of the worker nodes comprises the workload monitoring data and the power consumption monitoring data”, and “executing the following through each of the worker nodes:”, which are merely recitations of a technological environment/field of use (see MPEP § 2106.05(h)), which does not integrate a judicial exception into a practical application.

Further, the claims recite the additional elements of “and obtain workload monitoring data based on the system resource usage and the container resource usage through a performance data inspector;” and “obtain power consumption monitoring data through a power consumption inspector;”, which are merely recitations of data gathering, which is insignificant extra-solution activity (see MPEP § 2106.05(g)) and does not integrate a judicial exception into a practical application. Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art: “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network… iv. Storing and retrieving information in memory” [MPEP § 2106.05(d)(II)].

Further, claims 6 and 19 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 6 and 19 fail Step 2A Prong 2 (thus the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more.
Therefore, claims 6 and 19 do not recite patent-eligible subject matter under 35 U.S.C. § 101.

With regard to claims 7 and 20, the claims recite the additional abstract idea recitation of “determine whether the workload monitoring data exceeds a preset workload upper bound”, which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe a preset workload upper bound and workload monitoring data and, through these observations, can mentally determine whether the workload monitoring data exceeds the upper bound through comparison.

Further, the claims recite the additional elements of “wherein in each of the worker nodes, the local processor is further configured to:” and “further comprising executing the following through each of the worker nodes:”, which are merely recitations of a technological environment/field of use (see MPEP § 2106.05(h)), which does not integrate a judicial exception into a practical application.

Further, the claims recite the additional elements of “and mark a warning label in the workload monitoring data through the performance data inspector in response to determining that the workload monitoring data exceeds the preset workload upper bound”, which are merely recitations of data storage, which is insignificant extra-solution activity (see MPEP § 2106.05(g)) and does not integrate a judicial exception into a practical application. Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network… iv.
Storing and retrieving information in memory” [MPEP § 2106.05(d)(II)].

Further, claims 7 and 20 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 7 and 20 fail Step 2A Prong 2 (thus the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Therefore, claims 7 and 20 do not recite patent-eligible subject matter under 35 U.S.C. § 101.

With regard to claims 8 and 21, the claims recite the additional abstract idea recitation of “and determine whether each of the worker nodes has a resource abnormality by analyzing the workload monitoring data”, which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe and mentally analyze workload monitoring data and, through these evaluations, can mentally determine whether each of the worker nodes has a resource abnormality.

Further, the claims recite the additional elements of “wherein in the master node, the processor is further configured to: through a resource monitor,” and “further comprising executing the following through the cloud resource allocation apparatus:”, which are merely recitations of a technological environment/field of use (see MPEP § 2106.05(h)), which does not integrate a judicial exception into a practical application.
Further, the claims recite the additional elements of “collect the workload monitoring data reported by each of the worker nodes through a performance data collector”, “and append history data to the workload monitoring data based on a preset time in response to the workload monitoring data being marked with the warning label;”, and “and through a workload manager receive the workload monitoring data from the performance data collector through a workload analyzer”, which are merely recitations of data gathering, storage, and reception, which is insignificant extra-solution activity (see MPEP § 2106.05(g)) and does not integrate a judicial exception into a practical application. Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art: “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network… iv. Storing and retrieving information in memory” [MPEP § 2106.05(d)(II)].

Further, claims 8 and 21 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 8 and 21 fail Step 2A Prong 2 (thus the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Therefore, claims 8 and 21 do not recite patent-eligible subject matter under 35 U.S.C. § 101.
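Purely as an illustration of the monitoring flow the Office Action describes for claims 7, 8, 20, and 21 (a preset workload upper bound, a warning label, and abnormality classification), the steps can be sketched in Python. The threshold value, field names, and classification rule below are editorial assumptions, not claim language:

```python
# Hypothetical sketch of the claimed monitoring steps; the field names and
# the 0.9 threshold are illustrative assumptions, not taken from the claims.

WORKLOAD_UPPER_BOUND = 0.9  # preset workload upper bound (fraction of capacity)

def inspect_workload(sample):
    """Mark a warning label when the workload data exceeds the preset bound."""
    if sample["utilization"] > WORKLOAD_UPPER_BOUND:
        sample["warning"] = True
    return sample

def detect_abnormality(sample):
    """Classify an abnormality the way claims 9 and 22 distinguish the cases:
    workload excess vs. system resource loss."""
    if sample.get("warning"):
        return "workload_excess"       # suggests job-group-level state migration
    if sample.get("resource_lost"):
        return "system_resource_loss"  # suggests node-level state migration
    return None
```

For example, a sample at 95% utilization would be labeled with a warning and classified as a workload excess, while an unlabeled sample with a lost-resource flag would be classified as a system resource loss.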
With regard to claims 9 and 22, the claims recite additional abstract idea recitations of “in response to determining that the resource abnormality is a workload excess or a system resource loss,” as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can mentally determine whether a resource abnormality is a workload excess or a system resource loss through observation. Further, the claims recite additional abstract idea recitations of “generate a job group level state migration suggestion in response to determining that the resource abnormality is the workload excess” as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can, in response to determining that a resource abnormality is the workload excess, mentally generate a suggestion for group level state migration. Further, the claims recite additional abstract idea recitations of “and generate a node level state migration suggestion in response to determining that the resource abnormality is the system resource loss through the workload analyzer for each of the worker nodes where the resource abnormality occurs” as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can, in response to determining that a resource abnormality is the system resource loss, mentally generate a suggestion for node level state migration for each of the worker nodes where the resource abnormality occurs. 
Further, the claims recite additional element recitations of “wherein the processor is further configured to execute the workload manager to: notify the resource manager through the workload analyzer” and “so that the resource manager transmits state migration command to a state migration handler;”, which are merely recitations of data gathering, storage, and reception which is insignificant extra solution activity (see MPEP § 2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP § 2106.05(d)(II)]. Further, claims 9 and 22 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 9 and 22 also fail both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Therefore, Claims 9 and 22 do not recite patent eligible subject matter under 35 U.S.C. § 101. With regard to claims 10 and 23, the claims recite additional abstract idea recitations of “obtain a power consumption analysis result by analyzing the power consumption monitoring data,” as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. 
For example, a person can mentally analyze power consumption monitoring data to mentally obtain an analysis. Further, the claims recite additional abstract idea recitations of “and generate a power consumption adjustment strategy based on the power consumption analysis result” as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can, observe a power consumption analysis result and based on this observation can mentally generate a power consumption adjustment strategy. Further, the claims recite additional abstract idea recitations of “and generate a power adjustment suggestion based on the power consumption adjustment strategy through a power planer” as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can, observe a power consumption adjustment strategy and based on this observation can mentally generate a power adjustment suggestion. Further, the claims recite additional element recitations of “wherein in the master node, the processor is further configured to: through a resource monitor” and “and execute a power manager, to:”, which are merely recitations of technological environment/field of use (see MPEP § 2106.05(h)) which does not integrate a judicial exception into practical application. 
Further, the claims recite additional element recitations of “collect the power consumption monitoring data reported by each of the worker nodes through a power consumption collector;” and “through a power analyzer, receive the power consumption monitoring data from the power consumption collector,”, which are merely recitations of data gathering, storage, and reception which is insignificant extra solution activity (see MPEP § 2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP § 2106.05(d)(II)]. Further, claims 10 and 23 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 10 and 23 also fail both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Therefore, Claims 10 and 23 do not recite patent eligible subject matter under 35 U.S.C. § 101. 
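For context only, the power-management flow recited in claims 10 and 23 (analyze collected power consumption monitoring data, derive an analysis result, and generate an adjustment strategy/suggestion) might be sketched as below. The thresholds, strategy labels, and function names are hypothetical assumptions for illustration, not the applicant's or Sethi's implementation.

```python
# Hypothetical sketch of the claimed flow: power samples are collected,
# summarized into an analysis result, and an adjustment suggestion is
# generated from that result. Budget and labels are assumed values.

def analyze_power(samples_watts):
    """Summarize collected power samples into an analysis result."""
    avg = sum(samples_watts) / len(samples_watts)
    return {"average_watts": avg, "peak_watts": max(samples_watts)}

def power_adjustment_suggestion(analysis, budget_watts=200.0):
    """Suggest a strategy based on the analysis result (illustrative)."""
    if analysis["peak_watts"] > budget_watts:
        return "cap-frequency"       # peak exceeds budget: reduce draw
    if analysis["average_watts"] < 0.25 * budget_watts:
        return "sleep-idle-nodes"    # sustained low load: consolidate
    return "no-change"

result = analyze_power([120.0, 250.0, 180.0])
print(power_adjustment_suggestion(result))  # cap-frequency
```

As with the other limitations, each step maps to a mental evaluation a person could perform from observed data, which underlies the examiner's abstract-idea characterization.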
With regard to claims 11 and 24, the claims recite additional abstract idea recitations of “wherein in the master node, the processor is configured to: through the orchestrator determine whether the worker nodes are fully loaded based on the node resource information after obtaining the job request through the resource manager;” as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe node resource information and, based on these observations, can mentally determine whether the worker nodes are fully loaded. Further, the claims recite additional element recitations of “in response to the worker nodes all being fully loaded, issue a power on command for each of the worker nodes in a sleep mode or a powered off mode through a power manager;” and “in response to each of the worker nodes in the sleep mode or the powered off mode transitioning to an operation state, reacquire the node resource information respectively reported by the worker nodes through the resource manager”, which are merely recitations of data transmission and gathering which is insignificant extra solution activity (see MPEP §2106.05(g)) which does not integrate a judicial exception into practical application. Further, the insignificant extra solution activity is well-understood, routine, and conventional in the art. “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network…iv. Storing and retrieving information in memory” [MPEP§ 2106.05(d)(II)]. 
Further, claims 11 and 24 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 11 and 24 also fail both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Therefore, Claims 11 and 24 do not recite patent eligible subject matter under 35 U.S.C. § 101. With regard to claims 12 and 25, the claims recite additional abstract idea recitations of “adjust a system power state through a power modules handler in response to receiving a power adjustment suggestion from the master node,” as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can, in response to receiving a power adjustment suggestion, mentally adjust a system power state by mentally assigning a new power state to the system. Further, the claims recite additional element recitations of “wherein each of the worker nodes comprises a local processor configured to:”, “wherein the container lifetime cycle management comprises one of container creation, container deletion, and state migration;” and “wherein the system power state comprises one of a powered off mode, a sleep mode, and a specific power consumption mode”, which are merely recitations of technological environment/field of use (see MPEP § 2106.05(h)) which does not integrate a judicial exception into practical application. 
Further, the claims recite additional element recitations of “execute a container lifetime cycle management through a job handler in response to receiving a resource management command from the master node,”, which is merely a recitation of generically using a computer as a tool to implement the abstract idea (see MPEP § 2106.05(f)) which does not integrate a judicial exception into practical application. Further, claims 12 and 25 do not recite any further additional elements and for the same reasons as above with regard to integration into practical application and whether additional elements amount to significantly more, claims 12 and 25 also fail both Step 2A prong 2, thus the claims are directed to the judicial exception as it has not been integrated into practical application, and fail Step 2B as not amounting to significantly more. Therefore, Claims 12 and 25 do not recite patent eligible subject matter under 35 U.S.C. § 101. Therefore, Claims 1-25 do not recite patent eligible subject matter under 35 U.S.C. § 101. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-2, 5-6, 10, 13-15, 18-19, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Julien (US 2022/0214912 A1) in view of Snider (US 2016/0316003 A1) in view of Sethi (US 12,307,281 B2) in view of Tsaur (US 7,890,714 B1). 
With regard to claim 1, Julien teaches: A cloud resource allocation system, comprising a plurality of worker nodes “The number of GPGPU nodes 110 and corresponding GPGPUs 102 within each GPGPU node 110 may vary for different data center systems 100. Accordingly, the configuration of FIG. 1 is for purposes of illustration” [Julien ¶ 31, fig. 1, 3]. and a master node, “A non-transitory machine-readable storage medium is described that provides instructions that, if executed by a processor of a proxy agent (master node) in a data center system, will cause said processor to perform operations” [Julien ¶ 6]. each of the worker nodes and the master node is realized by using an electronic device with computing and networking functions, “In some embodiments, communications between (1) the application agent 120 and the proxy agent 122 and (2) the proxy agent 122 and the GPGPU agent 124 can use a customized protocol (e.g., rCUDA) for exchanging requests and responses (e.g., successful or unsuccessful execution/processing of a request)” [Julien ¶ 35]. “A non-transitory machine-readable storage medium is described that provides instructions that, if executed by a processor of a proxy agent in a data center system, will cause said processor to perform operations” [Julien ¶ 6]. “As shown in FIG. 1, the data center system 100 includes a set of GPGPU nodes 110A-110Z and each GPGPU node 110A-110Z includes a corresponding set of GPGPUs 102 (e.g., the GPGPUs 102A1-102A3 of the GPGPU node 110A and the GPGPUs 102Z1 and 102Z2 of the GPGPU node 110Z) with corresponding GPGPU memory 112 for each GPGPU 102 (e.g., the GPGPU memories 112A1-112A3 are associated with the GPGPUs 102A1 - 102A3 and the GPGPU memories 112Z1 and 112Z2 are associated with the GPGPUs 102Z1 and 102Z2 , respectively)” [Julien ¶ 31]. 
“The cloud orchestrator 106 may configure these remote memories such that the remote memory units 132A1 -132AN and 132M1 -132Mp are accessible to GPGPUs 102 via a high-speed interconnect network of the data center system 100. In this capacity, the remote memory units 132A1 -132AN and 132M1 -132Mp, as managed by the remote memory management unit 128, offer a global source of memory for components of the data center system 100, including the GPGPUs 102” [Julien ¶ 40]. wherein the master node comprises: a storage, storing an orchestrator, the orchestrator comprising a job scheduler and a resource manager; and a processor, coupled to the storage, configured to: “A non-transitory machine-readable storage medium is described that provides instructions (orchestrator) that, if executed by a processor of a proxy agent in a data center system, will cause said processor to perform operations” [Julien ¶ 6]. “The proxy agent 122 may be used for (1) scheduling/assigning (job scheduler) applications 104 and associated workloads to GPGPUs 102 via corresponding GPGPU agents 124 of GPGPU nodes 110, which monitor/manage the GPGPUs 102, (2) evicting workloads/applications 104 (resource manager) from GPGPUs 102 based on monitored performance information/profiles of the workloads/applications 104, and (3) rescheduling/reassigning evicted workloads/applications 104 to other GPGPUs 102 via corresponding GPGPU agents 124 that monitor/manage these other GPGPUs 102 (e.g., the GPGPU agent 124A monitors the GPGPUs 102A1-102A3 and associated GPGPU memories 112A1 -112A3, while the GPGPU agent 124Z monitors the GPGPUs 102Z1 and 102Z2 and associated GPGPU memories 112Z1 and 112Z2)” [Julien ¶ 34]. obtain a plurality of node resource information respectively reported by the worker nodes through the resource manager, “As shown in FIG. 
1, each GPGPU agent 124A-124Z may include a respective monitoring agent 126A-126Z that monitors and profiles all the GPGPUs 102 in a corresponding GPGPU node 110A-110Z, including associated resources and workloads/applications 104 being processed by the GPGPUs 102. For example, the monitoring agents 126 can monitor active/running process kernels on GPGPU s 102, memory utilization of each process on the GPGPUs 102, GPGPU 102 utilization, GPGPU 102 temperature, etc. The monitoring agents 126 continuously generate monitoring information within an associated GPGPU node 110 and report this information to the proxy agent 122” [Julien ¶ 37, fig. 1]. wherein the node resource information includes workload monitoring data checked for workload “For example, the monitoring agents 126 can monitor active/running process kernels on GPGPU s 102, memory utilization of each process on the GPGPUs 102, GPGPU 102 utilization, GPGPU 102 temperature, etc. The monitoring agents 126 continuously generate monitoring information within an associated GPGPU node 110 and report this information to the proxy agent 122. The monitoring information produced by the monitoring agents 126 can be used to form performance/usage profiles for workloads/applications 104 that describe the performance/operation of workloads of the applications 104 on GPGPUs 102 and respective GPGPU memories 112” [Julien ¶ 37]. through the job scheduler, (consider resource requirements) parse a job profile of a job request “Based on the availability of resources of the GPGPUs 102 in the various GPGPU nodes 110 and requirements of the applications 104 (e.g., as indicated in GPGPU requests from the applications 104), the proxy agent 122 assigns applications 104 to various GPGPUs 102” [Julien ¶ 35]. 
“At operation 804, the proxy agent 122 selects a first GPGPU 102 from the set of GPGPU s for processing the first workload of the first application 104A based on one or more of (1) available resources of the set of GPGPUs 102 (e.g., the GPGPU memory 112) and (2) requirements of the workload as indicated by the first GPGPU request” [Julien ¶ 81]. through the job scheduler, decide to execute a direct resource allocation for a job to be handled requested by the job request in response to determining (available resources of) that an available resource of at least one of the worker nodes (and) meets a resource requirement of the job request; “At operation 524, the proxy agent 122 may determine if there is a need to evict the workload/application 104A from the GPGPU 102A1. For example, in response to receipt of a GPGPU request from another application 104, the proxy agent 122 may determine that there are no available GPGPUs 102 to handle the GPGPU request (i.e., a workload/application has been assigned to each GPGPU 102 in the data center system 100). Since the workload/ application 104A is underutilizing the GPGPU 102A1 (as determined at operation 522), the proxy agent 122 may determine at operation 524 that there is a need to evict the workload/application 104A from the GPGPU 102A1. In response to determining that there is not a need to evict the workload/application 104A from the GPGPU 102A1, the method 500 may return to operation 516 to continue processing the workload/application 104A” [Julien ¶ 63]. “At operation 510, the proxy agent 122 selects a GPGPU 102 for the workload of the application 104. The GPGPU 102 is selected from a set of GPGPUs 102 in the data center system 100 (e.g., all of the GPGPUs 102 in the data center system 100) and the selection is based on the available resources from the set of GPGPUs 102 (e.g., available GPGPU memory 112) and/or requirements of the workload/application 104A” [Julien ¶ 54]. 
and through the job scheduler, decide to execute an indirect resource allocation for the job to be handled requested by the job request in response to determining that the available resource of none of the worker nodes meets the resource requirement of the job request, and the resource requirement of the job request is met after preempting the resources used by one or more running jobs with low priority; “At operation 816, the proxy agent 122 receives a second GPGPU request from a second application 104C, wherein the second GPGPU request requests scheduling of a second workload of the second application 104C to a GPGPU 102 in the set of GPGPUs 102 in the data center system 100. At operation 818, the proxy agent 122 determines that workloads have been assigned to all GPGPUs 102 in the set of GPGPUs 102. At operation 820, the proxy agent 122 selects the first workload for eviction from the first GPGPU 102A1 in response to determining that workloads have been assigned to all GPGPUs 102 in the set of GPGPUs 102 and the first workload is included in the candidate list of workloads for eviction. In one embodiment, selecting the first workload for eviction is based on one or more of (1) a similarity between characteristics of the first workload and characteristics of the second workload, (2) a priority level of the first workload that is lower than a priority level of the second workload, and (3) a round robin approach” [Julien ¶ 87-89]. 
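The direct/indirect allocation decision mapped above (allocate directly when some worker node's available resource meets the job's requirement; otherwise preempt a running lower-priority job to free resources) can be illustrated by the following minimal sketch. The data structures, field names, and selection order are assumptions for illustration only; neither the claims nor Julien's eviction logic is limited to this form.

```python
# Hypothetical sketch of the claimed scheduling decision: try direct
# allocation first; if no node can satisfy the demand, look for a
# lower-priority running job whose released resources would suffice.

def schedule(job, nodes):
    """Return ("direct", node), ("preempt", node, victim), or None.

    job:   {"demand": int, "priority": int}
    nodes: {name: {"free": int, "running": [{"demand": int, "priority": int}]}}
    """
    # Direct allocation: a node's available resource meets the demand.
    for name, node in nodes.items():
        if node["free"] >= job["demand"]:
            return ("direct", name)
    # Indirect allocation: preempt a lower-priority job whose released
    # resources, added to the node's free resources, meet the demand.
    for name, node in nodes.items():
        for victim in node["running"]:
            if (victim["priority"] < job["priority"]
                    and node["free"] + victim["demand"] >= job["demand"]):
                return ("preempt", name, victim)
    return None  # no allocation possible; job waits in the queue

cluster = {"n1": {"free": 2, "running": [{"demand": 4, "priority": 1}]}}
print(schedule({"demand": 4, "priority": 5}, cluster))
# ('preempt', 'n1', {'demand': 4, 'priority': 1})
```

The preempted job's state would then be backed up and its resources released before the new job is dispatched, as discussed for the remaining limitations.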
wherein executing the direct resource allocation for the job to be handled comprises finding a first worker node (by considering) having an available resource(s and) matching the job(s requested resources) profile through the job scheduler among the worker nodes; “At operation 804, the proxy agent 122 selects a first GPGPU 102 from the set of GPGPUs for processing the first workload of the first application 104A based on one or more of (1) available resources of the set of GPGPUs 102 (e.g., the GPGPU memory 112) and (2) requirements of the workload as indicated by the first GPGPU request. Hereinafter, the method 800 will be described in relation to the proxy agent 122 selecting the GPGPU 102A1 for the first workload of the first application 104A at operation 804” [Julien ¶ 81]. dispatching the job to be handled to the first worker node through the resource manager, so that the first worker node executes the job to be handled; “At operation 806, the proxy agent 122 establishes (1) a first session 1381 between an application agent 120A located on a compute node 108A on which the application 104A is located and the proxy agent 122 and (2) a second session 6021 between the first GPGPU 102A1 and the proxy agent 122 in response to selecting the first GPGPU 102A1 for the first workload to allow the first GPGPU 102A1 to process the first workload of the first application 104A, including subsequent GPGPU requests associated with the first workload” [Julien ¶ 82]. wherein executing the indirect resource allocation for the job to be handled comprises through the job scheduler, finding a second worker node having a low priority job among the worker nodes, “In the example used herein, the workload/application 104A is determined to be evicted from the GPGPU 102A1 at operation 524. 
However, the proxy agent 122 may have indicated that several workloads/applications 104 are candidates to be evicted and the workload/application 104A was selected because of (1) the degree of use of the associated GPGPU 102A1 (e.g., high idle time in relation to other candidate workloads/applications 104 for eviction), (2) resource similarities between the workload/application 104A and the workload/application 104 that is to be assigned to a GPGPU 102, (3) a lower priority of the candidate workload/application 104 than that of the workload/application 104 that is to be assigned to a GPGPU 102, and/or (4) a round robin approach” [Julien ¶ 63]. and notifying the second worker node so that the second worker node backs up an operation mode of the low priority job, “At operation 530, the proxy agent 122 may obtain from the remote memory management unit 128, following completion of all pending requests/commands associated with the selected workload/application 104A to be evicted, a range of destination memory addresses for transferring data of the evicted workload/application 104A to remote memory units 132. This range of destination memory addresses of the remote memory units 132 will be used for storing data of the workload/application 104A until reassignment to another GPGPU 102 … At operation 534, the GPGPU agent 124A transfers data of the workload/application 104A from the GPGPU memory 112A1 to the remote memory units 132 using the source virtual addresses of the GPGPU memory 112A1 and destination addresses of the disaggregated/global memory units 132” [Julien ¶ 67, 69]. 
and then releases resource used by the low priority job; “On completion of the data transfer for a workload/application 104 eviction, the GPGPU agent 124 requests the MMU 116 to free the now transferred portions/addresses of the GPGPU memory 112, which were previously utilized by the now evicted workload/application 104, and inform the proxy agent 122 about the completion of the transfer” [Julien ¶ 44]. “At operation 538, portions of the GPGPU memory 112A allocated to the workload/application 104A are freed or otherwise deallocated following transfer to the remote memory units 132. Accordingly, these now freed/deallocated portions of the GPGPU memory 112A can be used for another workload/application 104” [Julien ¶ 71]. putting another job request corresponding to the low priority job into the waiting queue through the job scheduler in response to receiving a resource release notification from the second worker node through the resource manager; “At operation 540, the GPGPU agent 124A may report the completion of the data transfer/eviction of the workload/application 104A from the GPGPU 102A1 to the proxy agent 122. At operation 542, the proxy agent 122 may update a status of the workload/application 104A in the data center system 100. In particular, the proxy agent 122 may update the status 208 in the table 200 shown in FIG. 4 to note that the workload/application 104A has been evicted to the disaggregated/global memory units 132” [Julien ¶ 72, 73]. “At operation 546, the proxy agent 122 may continually determine if a request/command associated with the workload/application 104A has been received. Upon receipt of a request request/command associated with the workload/application 104A, the method 500 may move to operation 548. At operation 548, the proxy agent 122 selects a GPGPU 102 for the previously evicted workload/application 104A. 
Similar to operation 510, the GPGPU 102 is selected from a set of GPGPUs 102 in the data center system 100 (e.g., all of the GPGPUs 102 in the data center system 100) and the selection is based on the available resources from the set of GPGPUs 102 (e.g., available GPGPU memory 112) and/or requirements of the workload/application 104A. For example, the proxy agent 122 may select the GPGPU 102Z2 for the workload/application 104A at operation 548” [Julien ¶ 75-76]. dispatching the job to be handled to the second worker node through the resource manager, so that the second worker node executes the job to be handled; “At operation 548, the proxy agent 122 selects a GPGPU 102 for the previously evicted workload/application 104A. Similar to operation 510, the GPGPU 102 is selected from a set of GPGPUs 102 in the data center system 100 (e.g., all of the GPGPUs 102 in the data center system 100) and the selection is based on the available resources from the set of GPGPUs 102 (e.g., available GPGPU memory 112) and/or requirements of the workload/application 104A” [Julien ¶ 76]. “Thereafter, the method 500 may move to operation 512 for the proxy agent 122 to establish a session/connection with a GPGPU agent 124 of the selected GPGPU 102 for the workload/application 104A. For example, when the GPGPU 102Z2 is selected at operation 548 for the workload/application 104A, the proxy agent 122 establishes a session/connection 6022 between the proxy agent 122 and the GPGPU agent 124Z for the workload/application 104A, as shown in FIG. 7” [Julien ¶ 78, Fig. 5]. Julien fails to explicitly teach parse a job profile of a job request … determining that an available resource of at least one of the worker nodes meets a resource requirement of the job request; finding a first worker node having an available resource matching the job profile. 
However, Snider teaches: parse a job profile of a job request “A resource metric comprises information utilized by application placement component 210 to quantify utilization of nodes with respect to a given resource. For example, a resource metric can quantify actual and/or requested utilization of a given resource of a node by a job instance. In some cases, application placement component 210 may determine whether a given node can sufficiently serve a given job instance with respect to the resource metric” [Snider ¶ 41]. determining that an available resource of at least one of the worker nodes meets a resource requirement of the job request; “To this effect, in various implementations, application placement component 210 can select the nodes for job instances, based on determining whether the job instances have sufficient resources on their corresponding nodes for utilization demanded by the job instances” [Snider ¶ 52]. finding a first worker node having an available resource matching the job profile “In accordance with various implementations of the present disclosure, using resource metrics, application placement component 210 can place job instances on nodes that are representative of actual resources demanded by the job instances and clients, while not being limited to finite and predefined resources” [Snider ¶ 43]. “A resource metric comprises information utilized by application placement component 210 to quantify utilization of nodes with respect to a given resource. For example, a resource metric can quantify actual and/or requested utilization of a given resource of a node by a job instance. In some cases, application placement component 210 may determine whether a given node can sufficiently serve a given job instance with respect to the resource metric” [Snider ¶ 41]. Snider is considered to be analogous to the claimed invention because it is in the same field of resource allocation considering the load. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Julien to incorporate the teachings of Snider and include parse a job profile of a job request … determining that an available resource of at least one of the worker nodes meets a resource requirement of the job request; finding a first worker node having an available resource matching the job profile. Doing so would allow for taking into account any number of relevant resource demands for job placement. “In accordance with various implementations of the present disclosure, using resource metrics, application placement component 210 can place job instances on nodes that are representative of actual resources demanded by the job instances and clients, while not being limited to finite and predefined resources” [Snider ¶ 43]. Julien in view of Snider fails to teach and power consumption monitoring data checked for power consumption, and the power consumption monitoring data includes: at least one of power consumption statistics and energy efficiency, multi-level performance and power consumption statistics and analysis information including worker node level, job group level, job schedule level, and possible performance and power consumption adjustment strategy suggestions. However, Sethi teaches: and power consumption monitoring data checked for power consumption, “For example, the monitoring agents 126 can monitor active/running process kernels on GPGPUs 102, memory utilization of each process on the GPGPUs 102, GPGPU 102 utilization, GPGPU 102 temperature, etc.” [Julien ¶ 37]. “The cluster data collection engine 120 is configured to retrieve power utilization, CPU utilization, memory utilization or other resource utilization metrics of the host devices 102 (or nodes 202) within a cluster 101 (or 201) by using passthrough channels of the operating systems of the host devices 102” [Sethi Col. 5 Lines 63-67, Fig. 6]. 
and the power consumption monitoring data includes: at least one of power consumption statistics and energy efficiency, multi-level performance and power consumption statistics and analysis information including worker node level, job group level, job schedule level, and possible performance and power consumption adjustment strategy suggestions; “The cluster data collection engine 120 is configured to retrieve power utilization, CPU utilization, memory utilization or other resource utilization metrics of the host devices 102 (or nodes 202) within a cluster 101 (or 201) by using passthrough channels of the operating systems of the host devices 102” [Sethi Col. 5 Lines 63-67]. “The utilization values can also correspond to the overall device (node level) (e.g., overall power utilization for a host device 102), respective applications (job group level) (e.g., respective applications running on a host device 102 or one or more virtual machines 105), respective workloads (job schedule level) (e.g., respective workloads running on a host device 102 or one or more virtual machines 105), respective workload threads or other level of granularity. According to illustrative embodiments, the cluster data collection engine 120 collects current utilization values from the instances of resource management logic 103 for respective ones of the host devices 102 at a given time or over a given time period (1 second, 10 microseconds, etc.)” [Sethi Col. 6 Lines 8-20]. Sethi is considered to be analogous to the claimed invention because it is in the same field of multiprogramming arrangements taking into account power criteria. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Julien in view of Snider to incorporate the teachings of Sethi and include and power consumption monitoring data checked for power consumption, and the power consumption monitoring data includes: at least one of power consumption statistics and energy efficiency, multi-level performance and power consumption statistics and analysis information including worker node level, job group level, job schedule level, and possible performance and power consumption adjustment strategy suggestions. Doing so would allow for the consideration of power metrics in the task scheduling. “Advantageously, the embodiments herein provide a migration management platform 110, which predicts and forecasts the power requirements of virtual machines 105 and the power utilization of host devices 102. As a result, virtual machines 105 are migrated to target host devices 102 having the capability to handle virtual machine workloads without significant impact to the performance of the target host devices 102, thereby preventing the need for remigration of the virtual machines 105” [Sethi Col. 4 Lines 37-46]. Julien in view of Snider in view of Sethi fails to teach obtained from a waiting queue, and putting the job to be handled into a running queue through the job scheduler; putting another job request corresponding to the low priority job into the waiting queue and putting the job to be handled into the running queue through the job scheduler. However, Tsaur teaches: obtained from a waiting queue, “Job request receiver 550 can store job requests that are not able to be scheduled immediately in queue 555. 
High priority job requests (e.g., requests to redirect an ongoing backup) can be added to the front or beginning of the queue and low priority job requests (e.g., requests to perform a backup or restore) can be added to the back or end of the queue” [Tsaur Col. 9 Lines 59-65]. and putting the job to be handled into a running queue through the job scheduler; “Upon receiving a job request from dispatcher 565, monitor 570 communicates with the appropriate storage grid agent in order to cause the requested job to be performed on the selected storage node. The monitor adds all running jobs to running queue 585 and monitors (e.g., by receiving information generated by a node-specific monitoring module on a storage node) the ongoing jobs” [Tsaur Col. 11 Lines 16-22]. putting another job request corresponding to the low priority job into the waiting queue “Job request receiver 550 can store job requests that are not able to be scheduled immediately in queue 555. High priority job requests (e.g., requests to redirect an ongoing backup) can be added to the front or beginning of the queue and low priority job requests (e.g., requests to perform a backup or restore) can be added to the back or end of the queue” [Tsaur Col. 9 Lines 59-65]. and putting the job to be handled into the running queue through the job scheduler. “The monitor adds all running jobs to running queue 585 and monitors (e.g., by receiving information generated by a node-specific monitoring module on a storage node) the ongoing jobs” [Tsaur Col. 11, Lines 19-22]. Tsaur is considered to be analogous to the claimed invention because it is in the same field of task scheduling strategies. 
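The queue discipline Tsaur is cited for above can be sketched briefly: high-priority requests go to the front of the waiting queue, low-priority requests to the back, and dispatched jobs move to a running queue. The class and method names below are hypothetical and illustrative only; they do not reproduce Tsaur's implementation.

```python
from collections import deque

class JobScheduler:
    """Illustrative sketch of a waiting queue (cf. Tsaur's queue 555) and
    a running queue (cf. Tsaur's running queue 585)."""

    def __init__(self):
        self.waiting = deque()  # waiting queue
        self.running = []       # running queue

    def receive(self, job, high_priority=False):
        if high_priority:
            self.waiting.appendleft(job)  # front of the queue
        else:
            self.waiting.append(job)      # back of the queue

    def dispatch(self):
        """Move the next waiting job into the running queue."""
        if self.waiting:
            job = self.waiting.popleft()
            self.running.append(job)
            return job
        return None
```

In this sketch, a later-arriving high-priority request (e.g., a redirect) is dispatched before earlier low-priority requests (e.g., backups), matching the front-of-queue insertion Tsaur describes.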
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Julien in view of Snider in view of Sethi to incorporate the teachings of Tsaur and include obtained from a waiting queue, and putting the job to be handled into a running queue through the job scheduler; putting another job request corresponding to the low priority job into the waiting queue and putting the job to be handled into the running queue through the job scheduler. Doing so would allow for storage of the jobs which are waiting and currently running. “Running queue 585 stores information identifying all jobs that are already scheduled but have not yet finished” [Tsaur Col. 11 Lines 23-24]. With regard to claim 2, Julien in view of Snider in view of Sethi in view of Tsaur teaches the cloud resource allocation system according to claim 1, as referenced above. Julien further teaches the consideration of workload performance on the nodes: “In particular, workloads/applications allocated to GPGPUs are monitored to build usage/performance profiles per workload/application” [Julien ¶ 8]. However, Julien fails to explicitly teach wherein the first worker node, which has the available resource, found by the job scheduler meets a job goal and the job goal is a minimum power consumption cost, best performance, or a comprehensive measurement goal. However, Snider teaches wherein the first worker node, which has the available resource, found by the job scheduler meets a job goal and the job goal is a minimum power consumption cost, best performance, or a comprehensive measurement goal. “A resource metric comprises information utilized by application placement component 210 to quantify utilization of nodes with respect to a given resource. For example, a resource metric can quantify actual and/or requested utilization of a given resource of a node by a job instance. 
In some cases, application placement component 210 may determine whether a given node can sufficiently serve a given job instance with respect to the resource metric. In various implementations, application placement component 210 employs resource metrics 232 to balance the corresponding resources they define across the nodes of the cloud computing platform” [Snider ¶ 41]. “In accordance with various implementations of the present disclosure, using resource metrics, application placement component 210 can place job instances on nodes that are representative of actual resources demanded by the job instances and clients, while not being limited to finite and predefined resources” [Snider ¶ 43]. With regard to claim 5, Julien in view of Snider in view of Sethi in view of Tsaur teaches the cloud resource allocation system according to claim 1, as referenced above. Julien further teaches: wherein the storage further comprises a resource monitor, “A non-transitory machine-readable storage medium is described that provides instructions that, if executed by a processor of a proxy agent in a data center system, will cause said processor to perform operations” [Julien ¶ 6]. “As shown in FIG. 1, each GPGPU agent 124A-124Z may include a respective monitoring agent 126A-126Z that monitors and profiles all the GPGPUs 102 in a corresponding GPGPU node 110A-110Z, including associated resources and workloads/applications 104 being processed by the GPGPUs 102. For example, the monitoring agents 126 can monitor active/running process kernels on GPGPUs 102, memory utilization of each process on the GPGPUs 102, GPGPU 102 utilization, GPGPU 102 temperature, etc. The monitoring agents 126 continuously generate monitoring information within an associated GPGPU node 110 and report this information to the proxy agent 122 (resource monitor)” [Julien ¶ 37, fig. 1]. 
the processor is configured to: through the resource monitor, collect the node resource information respectively reported by the worker nodes; “As shown in FIG. 1, each GPGPU agent 124A-124Z may include a respective monitoring agent 126A-126Z that monitors and profiles all the GPGPUs 102 in a corresponding GPGPU node 110A-110Z, including associated resources and workloads/applications 104 being processed by the GPGPUs 102. For example, the monitoring agents 126 can monitor active/running process kernels on GPGPUs 102, memory utilization of each process on the GPGPUs 102, GPGPU 102 utilization, GPGPU 102 temperature, etc. The monitoring agents 126 continuously generate monitoring information within an associated GPGPU node 110 and report this information to the proxy agent 122” [Julien ¶ 37, fig. 1]. Julien in view of Snider in view of Sethi fails to teach wherein after putting the job to be handled into the running queue through the job scheduler, the processor is configured to: delete the job to be handled from the running queue through the job scheduler in response to receiving a notification indicating that the job to be handled has ended through the resource manager. However, Tsaur teaches: wherein after putting the job to be handled into the running queue through the job scheduler, the processor is configured to: delete the job to be handled from the running queue “Running queue 585 stores information identifying all jobs that are already scheduled but have not yet finished. Running queue 585 is mainly maintained by monitor 570” [Tsaur Col. 11 Lines 23-25]. in response to receiving a notification indicating that the job to be handled has ended through the resource manager. “When a job completes (e.g., as detected by monitor 570) or is redirected, monitor 570 can update catalog 520” [Tsaur Col. 11 Lines 36-37]. 
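The claim-5 behavior mapped to Tsaur above — deleting a finished job from the running queue when a completion notification arrives — can be illustrated with a short sketch. The function and job names are hypothetical, not taken from Tsaur or the claims.

```python
# Illustrative sketch: on a completion notification (e.g., from a monitor),
# the scheduler deletes the finished job from the running queue.

def on_job_ended(running_queue, job_id):
    """Remove a finished job from the running queue in place.
    Returns True if the job was present and removed, else False."""
    if job_id in running_queue:
        running_queue.remove(job_id)
        return True
    return False
```

For example, with `running_queue = ["job-1", "job-2"]`, a notification that `"job-1"` has ended leaves `["job-2"]` in the running queue.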
With regard to claim 6, Julien in view of Snider in view of Sethi in view of Tsaur teaches the cloud resource allocation system according to claim 1, as referenced above. Julien further teaches: wherein each of the worker nodes comprises a local processor configured to: confirm a system resource usage through a system inspector; “As shown in FIG. 1, each GPGPU agent 124A-124Z may include a respective monitoring agent 126A-126Z that monitors and profiles all the GPGPUs 102 in a corresponding GPGPU node 110A-110Z, including associated resources and workloads/applications 104 being processed by the GPGPUs 102. For example, the monitoring agents 126 can monitor active/running process kernels on GPGPUs 102, memory utilization of each process on the GPGPUs 102, GPGPU 102 utilization, GPGPU 102 temperature, etc. The monitoring agents 126 continuously generate monitoring information within an associated GPGPU node 110 and report this information to the proxy agent 122” [Julien ¶ 37]. confirm a container resource usage actually used by a workload of each of a plurality of containers “For example, in one such alternative embodiment the virtualization layer 954 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 962A-R called software containers that may each be used to execute one (or more) of the sets of applications 964A-R” [Julien ¶ 108]. “… collecting, by the proxy agent, a performance profile of the first workload on the first GPGPU to describe usage of resources of the first GPGPU by the first workload while the first GPGPU is processing the first workload” [Julien ¶ 5]. 
and obtain workload monitoring data based on the system resource usage and the container resource usage through a performance data inspector; “The monitoring agents 126 continuously generate monitoring information within an associated GPGPU node 110 and report this information to the proxy agent 122” [Julien ¶ 37]. “… collecting, by the proxy agent, a performance profile of the first workload on the first GPGPU to describe usage of resources of the first GPGPU by the first workload while the first GPGPU is processing the first workload” [Julien ¶ 5]. Julien in view of Snider fails to explicitly teach confirm a container resource usage actually used by a workload of each of a plurality of containers … obtain power consumption monitoring data through a power consumption inspector; wherein node resource information corresponding to each of the worker nodes comprises the workload monitoring data and the power consumption monitoring data. However, Sethi teaches: confirm a container resource usage actually used by a workload of each of a plurality of containers “For example, the instances of resource management logic 103 corresponding to the host devices 102 monitor performance of the host devices 102 and compile utilization data (e.g., for memory, megabytes (MB), gigabytes (GB) or percentage used, for CPU, percentage used and for power, Watts or percentage used)” [Sethi Col. 5 Line 67 - Col. 6 Line 5]. “According to an embodiment, the utilization values correspond to power, CPU and memory utilization by respective ones of virtual machines 105 running on the host devices 102” [Sethi Col. 6 Lines 5-8]. 
obtain power consumption monitoring data through a power consumption inspector; “The utilization values can also correspond to the overall device (e.g., overall power utilization for a host device 102), respective applications (e.g., respective applications running on a host device 102 or one or more virtual machines 105), respective workloads (e.g., respective workloads running on a host device 102 or one or more virtual machines 105), respective workload threads or other level of granularity. According to illustrative embodiments, the cluster data collection engine 120 (power consumption inspector) collects current utilization values from the instances of resource management logic 103 for respective ones of the host devices 102 at a given time or over a given time period (1 second, 10 microseconds, etc.)” [Sethi Col. 6 Lines 8-20]. wherein node resource information corresponding to each of the worker nodes comprises the workload monitoring data and the power consumption monitoring data. “The utilization values can also correspond to the overall device (e.g., overall power utilization for a host device 102), respective applications (e.g., respective applications running on a host device 102 or one or more virtual machines 105), respective workloads (e.g., respective workloads running on a host device 102 or one or more virtual machines 105), respective workload threads or other level of granularity” [Sethi Col. 6 Lines 8-15]. “The cluster data collection engine 120 is configured to retrieve power utilization, CPU utilization, memory utilization or other resource utilization metrics of the host devices 102 (or nodes 202) within a cluster 101 (or 201) by using passthrough channels of the operating systems of the host devices 102” [Sethi Col. 5 Lines 62-67]. With regard to claim 10, Julien in view of Snider in view of Sethi in view of Tsaur teaches the cloud resource allocation system according to claim 6, as referenced above. 
Julien further teaches wherein in the master node, the processor is configured to through a resource monitor, … and execute a power manager to: through a power analyzer, receive the power consumption monitoring data from the power consumption collector, “The monitoring agents 126 continuously generate monitoring information within an associated GPGPU node 110 and report this information to the proxy agent 122. The monitoring information produced by the monitoring agents 126 can be used to form performance/usage profiles for workloads/applications 104 that describe the performance/operation of workloads of the applications 104 on GPGPUs 102 and respective GPGPU memories 112” [Julien ¶ 37]. Julien in view of Snider fails to teach collect the power consumption monitoring data reported by each of the worker nodes through a power consumption collector; obtain a power consumption analysis result by analyzing the power consumption monitoring data, and generate a power consumption adjustment strategy based on the power consumption analysis result; and generate a power adjustment suggestion based on the power consumption adjustment strategy through a power planer. However, Sethi teaches: collect the power consumption monitoring data reported by each of the worker nodes through a power consumption collector; “For example, job instances on each node may report resource utilization of client defined resource metrics (and optionally system defined resource metrics) to their corresponding node (possibly via their corresponding machine), which may in turn report individual or aggregated resource utilization for resource balancing decisions to be made by application placement component 210. In some cases, the utilization is reported to the host on the node (e.g., host 150), which reports the individual or aggregated resource utilization on the node to a collection component (power consumption collector). 
The collection component may in turn report the information to other instances or portions of the collection component amongst the hierarchy of the cloud computing platform” [Snider ¶ 48]. “For example, the instances of resource management logic 103 corresponding to the host devices 102 monitor performance of the host devices 102 and compile utilization data (e.g., for memory, megabytes (MB), gigabytes (GB) or percentage used, for CPU, percentage used and for power, Watts or percentage used)” [Sethi Col. 5 Line 67 - Col. 6 Line 5]. obtain a power consumption analysis result by analyzing the power consumption monitoring data, “Referring to block 262 of FIG. 2, the power requirement analysis engine 140 predicts a power requirement of a VM 105 (or 205) to be migrated. Referring to the operational flow 300 for determining power requirements of a virtual machine to be migrated in FIG. 3, power requirement prediction may be performed by using at least one of two methods. Following a beginning (block 361) of the operational flow 300 and initiation of virtual machine migration (block 362), at block 363, the VM comparison layer 141 of the power requirement analysis engine 140 compares the virtual machines 105 in the cluster 101 to determine whether there are any similar virtual machines 105 to the virtual machine to be migrated” [Sethi Col. 6 Line 56 - Col. 7 Line 1]. “The approximate average power requirement is based on resource utilization by the VM 105 to be migrated such as, but not necessarily limited to, CPU utilization, memory utilization, etc. The default power table 142, which can be in matrix form, includes resource utilization values (e.g., CPU and memory utilization values) mapped to predetermined power requirements of a virtual machine” [Sethi Col. 7 Lines 48-54]. 
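Sethi's "default power table" — utilization values mapped to predetermined power requirements — can be sketched as a simple lookup. The bucket boundaries, wattages, and function names below are invented for illustration and are not taken from Sethi.

```python
# Hedged sketch of a default power table of the kind Sethi describes:
# CPU/memory utilization values mapped to predetermined power requirements.
# All thresholds and wattages here are hypothetical.

DEFAULT_POWER_TABLE = {
    # (cpu_bucket, mem_bucket) -> approximate power requirement in watts
    ("low", "low"): 40,
    ("low", "high"): 55,
    ("high", "low"): 70,
    ("high", "high"): 95,
}

def bucket(utilization_pct):
    """Coarsen a utilization percentage into a table bucket."""
    return "high" if utilization_pct >= 50 else "low"

def predict_power(cpu_pct, mem_pct):
    """Approximate a VM's power requirement from CPU/memory utilization."""
    return DEFAULT_POWER_TABLE[(bucket(cpu_pct), bucket(mem_pct))]
```

Under these illustrative values, a VM at 80% CPU and 20% memory utilization maps to the ("high", "low") entry of the table.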
and generate a power consumption adjustment strategy based on the power consumption analysis result; “The eligible host device identification layer 151 of the host device selection engine 150 identifies at least a subset of the host devices 102 as eligible target host devices based, at least in part, on power utilization data of the host devices 102 and utilization data of one or more additional resources (e.g., CPU, memory) of the host devices 102” [Sethi Col. 8 Lines 3-8]. and generate a power adjustment suggestion based on the power consumption adjustment strategy through a power planer. “In one or more embodiments, power consumption, as well as central processing unit (CPU) and memory imbalances between host devices 102 are considered in connection with virtual machine migration. The embodiments provide a framework for recommending migration of virtual machines 105 to certain host devices 102 in a cluster 101 of host devices 102 hosting multiple virtual machines 105” [Sethi Col. 4 Lines 48-53]. With regard to claim 13, it is a machine type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale. Further, Julien teaches the additional limitations A cloud resource allocation apparatus, comprising: a storage, storing an orchestrator and providing a waiting queue and a running queue, wherein the orchestrator comprises a resource manager and a job scheduler; and a processor, coupled to the storage, configured to: “A non-transitory machine-readable storage medium is described that provides instructions (orchestrator) that, if executed by a processor of a proxy agent in a data center system, will cause said processor to perform operations” [Julien ¶ 6]. 
“The proxy agent 122 may be used for (1) scheduling/assigning (job scheduler) applications 104 and associated workloads to GPGPUs 102 via corresponding GPGPU agents 124 of GPGPU nodes 110, which monitor/manage the GPGPUs 102, (2) evicting workloads/applications 104 (resource manager) from GPGPUs 102 based on monitored performance information/profiles of the workloads/applications 104, and (3) rescheduling/reassigning evicted workloads/applications 104 to other GPGPUs 102 via corresponding GPGPU agents 124 that monitor/manage these other GPGPUs 102 (e.g., the GPGPU agent 124A monitors the GPGPUs 102A1-102A3 and associated GPGPU memories 112A1-112A3, while the GPGPU agent 124Z monitors the GPGPUs 102Z1 and 102Z2 and associated GPGPU memories 112Z1 and 112Z2)” [Julien ¶ 34]. Further, Tsaur teaches the additional limitations of providing a waiting queue and a running queue, “Job management module 500 includes a job request receiver 550, a queue (waiting queue) 555, a scheduler 560, a dispatcher 565, a monitor 570, a resource information listener 575, system status information 580, and a running queue 585” [Tsaur Col. 9 Lines 51-55]. With regard to claim 14, it is a method type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale. Further, Julien teaches the additional limitations executing the following through a cloud resource allocation apparatus: “A non-transitory machine-readable storage medium is described that provides instructions that, if executed by a processor of a proxy agent in a data center system, will cause said processor to perform operations” [Julien ¶ 6]. “As described above and as will be described below, the data center system 100 assists in sharing resources of GPGPUs 102 more efficiently in cloud environments by allowing GPGPUs 102 to be oversubscribed for certain workloads/applications 104” [Julien ¶ 48]. 
With regard to claim 15, it is a method type claim having similar limitations as claim 2 above. Therefore, it is rejected under the same rationale. With regard to claim 18, it is a method type claim having similar limitations as claim 5 above. Therefore, it is rejected under the same rationale. With regard to claim 19, it is a method type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale. With regard to claim 23, it is a method type claim having similar limitations as claim 10 above. Therefore, it is rejected under the same rationale. Claims 3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Julien (US 2022/0214912 A1) in view of Snider (US 2016/0316003 A1) in view of Sethi (US 12,307,281 B2) in view of Tsaur (US 7,890,714 B1) in view of Arai (US 2021/0149726 A1). With regard to claim 3, Julien in view of Snider in view of Sethi in view of Tsaur teaches the cloud resource allocation system according to claim 1, as referenced above. Julien further teaches: wherein the one or more running jobs comprise at least the low priority job; “At operation 818, the proxy agent 122 determines that workloads have been assigned to all GPGPUs 102 in the set of GPGPUs 102. At operation 820, the proxy agent 122 selects the first workload for eviction from the first GPGPU 102A1 in response to determining that workloads have been assigned to all GPGPUs 102 in the set of GPGPUs 102 and the first workload is included in the candidate list of workloads for eviction. In one embodiment, selecting the first workload for eviction is based on one or more of (1) a similarity between characteristics of the first workload and characteristics of the second workload, (2) a priority level of the first workload that is lower than a priority level of the second workload, and (3) a round robin approach” [Julien ¶ 87-89]. 
wherein after finding the second worker node with the low priority job through the job scheduler, the processor is configured to: “In the example used herein, the workload/application 104A is determined to be evicted from the GPGPU 102A1 at operation 524. However, the proxy agent 122 may have indicated that several workloads/applications 104 are candidates to be evicted and the workload/application 104A was selected because of (1) the degree of use of the associated GPGPU 102A1 (e.g., high idle time in relation to other candidate workloads/applications 104 for eviction), (2) resource similarities between the workload/application 104A and the workload/application 104 that is to be assigned to a GPGPU 102, (3) a lower priority of the candidate workload/application 104 than that of the workload/application 104 that is to be assigned to a GPGPU 102, and/or (4) a round robin approach” [Julien ¶ 63]. Julien in view of Snider in view of Sethi in view of Tsaur fails to teach notify the second worker node through the job scheduler to continuously release resource used by another low priority job different from the low priority job until an adjusted available resource meets the resource requirement of the job request in response to the adjusted available resource that still does not meet the resource requirement of the job request after the second worker node releasing the resource used by the low priority job. However, Arai teaches notify the second worker node through the job scheduler to continuously release resource used by another low priority job different from the low priority job until an adjusted available resource meets the resource requirement of the job request in response to the adjusted available resource that still does not meet the resource requirement of the job request after the second worker node releasing the resource used by the low priority job. “For example, when resources are not sufficient to stop only one job, multiple jobs may be stopped. 
Candidates for stop may be selected in order of cost, starting with the lowest cost job, and jobs may be stopped up to a point where resources can be secured to execute a high-priority job” [Arai ¶ 64]. Arai is considered to be analogous to the claimed invention because it is in the same field of task scheduling strategies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Julien in view of Snider in view of Sethi in view of Tsaur to incorporate the teachings of Arai and include notify the second worker node through the job scheduler to continuously release resource used by another low priority job different from the low priority job until an adjusted available resource meets the resource requirement of the job request in response to the adjusted available resource that still does not meet the resource requirement of the job request after the second worker node releasing the resource used by the low priority job. Doing so would allow for priority jobs to be scheduled more flexibly. “In the examples of FIG. 6 and FIG. 7, Job X is assumed to have sufficient resources by stopping Jobs A, B, or C. However, this is not limited thereto” [Arai ¶ 64]. With regard to claim 16, it is a method type claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale. Further, Julien teaches the additional limitations wherein executing the indirect resource allocation comprises: “At operation 816, the proxy agent 122 receives a second GPGPU request from a second application 104C, wherein the second GPGPU request requests scheduling of a second workload of the second application 104C to a GPGPU 102 in the set of GPGPUs 102 in the data center system 100. At operation 818, the proxy agent 122 determines that workloads have been assigned to all GPGPUs 102 in the set of GPGPUs 102. 
At operation 820, the proxy agent 122 selects the first workload for eviction from the first GPGPU 102A1 in response to determining that workloads have been assigned to all GPGPUs 102 in the set of GPGPUs 102 and the first workload is included in the candidate list of workloads for eviction. In one embodiment, selecting the first workload for eviction is based on one or more of (1) a similarity between characteristics of the first workload and characteristics of the second workload, (2) a priority level of the first workload that is lower than a priority level of the second workload, and (3) a round robin approach” [Julien ¶ 87-89]. Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Julien (US 2022/0214912 A1) in view of Snider (US 2016/0316003 A1) in view of Sethi (US 12,307,281 B2) in view of Tsaur (US 7,890,714 B1) in view of Arai (US 2021/0149726 A1) in view of O’Neil (US 2019/0317821 A1). With regard to claim 4, Julien in view of Snider in view of Sethi in view of Tsaur teaches the cloud resource allocation system according to claim 1, as referenced above. Julien in view of Snider in view of Sethi fails to teach putting the job to be handled into the running queue through the job scheduler. However, Tsaur teaches and putting the job to be handled into the running queue through the job scheduler. “Upon receiving a job request from dispatcher 565, monitor 570 communicates with the appropriate storage grid agent in order to cause the requested job to be performed on the selected storage node. The monitor adds all running jobs to running queue 585 and monitors (e.g., by receiving information generated by a node-specific monitoring module on a storage node) the ongoing jobs” [Tsaur Col. 11 Lines 16-22]. 
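The incremental-release behavior attributed to Arai above — stopping low-priority jobs in order of cost, lowest first, until enough resources are freed for a high-priority job — can be sketched as follows. The function name, tuple layout, and cost model are all hypothetical and illustrative only.

```python
# Illustrative sketch of cost-ordered release of low-priority jobs until
# the adjusted available resource meets the incoming job's requirement.

def release_until_sufficient(low_priority_jobs, needed, available):
    """low_priority_jobs: list of (job_id, cost, resources_used) tuples.
    Stops jobs in ascending cost order until `available` >= `needed`.
    Returns the list of stopped job ids, or None if the requirement
    still cannot be met after stopping every candidate."""
    stopped = []
    for job_id, _cost, used in sorted(low_priority_jobs, key=lambda j: j[1]):
        if available >= needed:
            break
        available += used   # releasing this job frees its resources
        stopped.append(job_id)
    return stopped if available >= needed else None
```

For example, with candidates `[("a", 1, 2), ("b", 2, 3)]`, a demand of 4 units, and 0 units free, both jobs are stopped (lowest-cost first) to secure the required resources; if even stopping every candidate cannot satisfy the demand, the function reports failure.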
Julien in view of Snider in view of Sethi in view of Tsaur fails to teach in response to determining that none of the worker nodes is eligible for executing the indirect resource allocation after determining that the available resource of none of the worker nodes meets a resource requirement of the job request based on the node resource information and the job profile. However, Arai teaches in response to determining that none of the worker nodes is eligible for executing the indirect resource allocation after determining that the available resource of none of the worker nodes meets a resource requirement of the job request based on the node resource information and the job profile, “After judging the priority, when the sum of resources used by a low-priority job at that timing is small compared to the resources required by the accepted job, there are few resources to be released even if the low-priority job is suspended, and the accepted job cannot be operated” [Arai ¶ 80]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Julien in view of Snider in view of Sethi in view of Tsaur to incorporate the teachings of Arai and include in response to determining that none of the worker nodes is eligible for executing the indirect resource allocation after determining that the available resource of none of the worker nodes meets a resource requirement of the job request based on the node resource information and the job profile. Doing so would allow for priority jobs to be scheduled more flexibly. “In the examples of FIG. 6 and FIG. 7, Job X is assumed to have sufficient resources by stopping Jobs A, B, or C. However, this is not limited thereto” [Arai ¶ 64]. 
Julien in view of Snider in view of Sethi in view of Tsaur in view of Arai fails to teach wherein in the master node, the processor is configured to: through the job scheduler, execute the direct resource allocation for each of a plurality of application group members in the job profile, comprising: finding a plurality of third worker nodes that meet a resource requirement of the application group members respectively among the worker nodes through the job scheduler; dispatching each of the application group members to a corresponding third worker node through the resource manager. However, O’Neil teaches: wherein in the master node, the processor is configured to: through the job scheduler, execute the direct resource allocation for each of a plurality of application group members in the job profile, “Workload orchestration module 114 also provides splitting (or chunking) operations. Splitting or chunking is the act of breaking a large processing job down in to small parts (application group members) that can be processed by multiple processing nodes at once (i.e., in parallel)” [O’Neil ¶ 34]. “The method 300 then proceeds to step 310 where the processing job is split into processing chunks. The processing chunks (application group members in the job profile) are portions of the processing job (i.e., sub-jobs, sub-tasks, etc.) that may be handled by different processing nodes so that the processing job may be handled in parallel and thus more quickly” [O’Neil ¶ 72]. comprising: finding a plurality of third worker nodes that meet a resource requirement of the application group members respectively among the worker nodes through the job scheduler; “In some examples, the job request may include parameters associated with the processing job, such as the maximum amount of time acceptable to complete the processing job. Such parameters may be considered by, for example, workload orchestration node 114 of FIG.
1 to determine the appropriate computing resources to allocate to the requested processing job” [O’Neil ¶ 66]. “The processing chunks may be distributed to different nodes in a distributed computing resource system based on many different factors. For example, a node may be chosen for a processing chunk based on characteristics of the nodes, such as the number or type of processors in the node, or the applications installed at the nodes (e.g., as discussed with respect to FIG. 2), etc. Using the example above of a video transcoding job, it may be preferable to distribute the processing chunks to nodes that include special purpose processors, such as powerful GPUs, which can processing the processing chunks very efficiently. A node may also be chosen based on current resource utilizations at the node. For example, if a node is currently heavily utilized by normal activity (such as a personal workstation) or by other processing tasks associated with the distributed computing resource system, it may not be selected for distribution of the processing chunk” [O’Neil ¶ 74]. dispatching each of the application group members to a corresponding third worker node through the resource manager; “The method 300 then proceeds to step 312 where the processing chunks are distributed to on-site nodes (e.g., node 132 of FIG. 1) and the cloud processing node (e.g., node 142 of FIG. 1)” [O’Neil ¶ 73]. O’Neil is considered to be analogous to the claimed invention because it is in the same field of task scheduling strategies.
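For illustration only, the split-then-distribute scheme quoted from O'Neil above can be sketched as two steps: chunk a large job, then greedily place each chunk on a fitting, lightly utilized node. This is an editor's hypothetical sketch, not code from O'Neil or the claims; all names and the greedy policy are invented.

```python
def split_job(job_size, chunk_size):
    """Break a large job into chunks (the 'application group members')."""
    chunks = [chunk_size] * (job_size // chunk_size)
    if job_size % chunk_size:
        chunks.append(job_size % chunk_size)
    return chunks

def assign_chunks(chunks, nodes):
    """Greedily place each chunk on the least-utilized node that fits it."""
    placement = []
    for c in chunks:
        fits = [n for n in nodes if n["free"] >= c]
        if not fits:
            raise RuntimeError("no node can hold this chunk")
        # Prefer the node with the largest free fraction of its capacity.
        best = max(fits, key=lambda n: n["free"] / n["capacity"])
        best["free"] -= c           # reserve the chunk's resources
        placement.append((c, best["name"]))
    return placement
```

In O'Neil's terms, `fits` mirrors choosing nodes by characteristics and current utilization, and the reservation step mirrors distributing chunks to on-site and cloud nodes.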
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Julien in view of Snider in view of Sethi in view of Tsaur in view of Arai to incorporate the teachings of O’Neil and include that in the master node, the processor is configured to: through the job scheduler, execute the direct resource allocation for each of a plurality of application group members in the job profile, comprising: finding a plurality of third worker nodes that meet a resource requirement of the application group members respectively among the worker nodes through the job scheduler; dispatching each of the application group members to a corresponding third worker node through the resource manager. Doing so would allow for the scheduling of more jobs amidst resource shortage. “In the present example, one or more of the processing chunks may be distributed to the cloud processing node to overcome any resource shortage in the on-site computing resources” [O’Neil ¶ 75]. With regard to claim 17, it is a method type claim having similar limitations as claim 4 above. Therefore, it is rejected under the same rationale. Claims 7-9 and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Julien (US 2022/0214912 A1) in view of Snider (US 2016/0316003 A1) in view of Sethi (US 12,307,281 B2) in view of Tsaur (US 7,890,714 B1) in view of Peteva (US 2017/0199770 A1). With regard to claim 7, Julien in view of Snider in view of Sethi in view of Tsaur teaches the cloud resource allocation system according to claim 6, as referenced above. Julien fails to explicitly teach wherein in each of the worker nodes, the local processor is further configured to: determine whether the workload monitoring data exceeds a preset workload upper bound.
However, Snider teaches wherein in each of the worker nodes, the local processor is further configured to: determine whether the workload monitoring data exceeds a preset workload upper bound “For example, application placement component 210 may at least attempt to generate a placement plan based on detecting imbalance of one or more resources in the cloud computing environment as exceeding a threshold value. The imbalance may be with respect to one or more nodes (e.g., based on detection of one or more hot nodes with respect to one or more resources) or the system overall” [Snider ¶ 65]. “For example, job instances on each node may report resource utilization of client defined resource metrics (and optionally system defined resource metrics) to their corresponding node (possibly via their corresponding machine), which may in turn report individual or aggregated resource utilization for resource balancing decisions to be made by application placement component 210. In some cases, the utilization is reported to the host on the node (e.g., host 150), which reports the individual or aggregated resource utilization on the node to a collection component (local manager). The collection component may in turn report the information to other instances or portions of the collection component amongst the hierarchy of the cloud computing platform. For example, a chain of reports and/or aggregations may flow up a node/rack/database/region/country hierarchy amongst collection components at each level as needed for resource balancing decisions” [Snider ¶ 48]. Julien in view of Snider in view of Sethi in view of Tsaur fails to explicitly teach and mark a warning label in the workload monitoring data through the performance data inspector in response to determining that the workload monitoring data exceeds the preset workload upper bound. 
However, Peteva teaches marking a warning label in the workload monitoring data through the performance data inspector in response to determining that the workload monitoring data exceeds the preset workload upper bound: “When an anomalous condition is detected, the central monitoring system may flag the host node exhibiting the condition to prevent any containers from being initiated there as well as to prevent any migration of containers to that node” [Peteva ¶ 181]. “The central monitoring system may operate with a local monitoring system, which monitors and reports the status of system on each host node. The central monitoring system maintains a list of flagged host nodes” [Peteva ¶ 183]. Peteva is considered to be analogous to the claimed invention because it is in the same field of multiprogramming arrangements considering the load. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Julien in view of Snider in view of Sethi in view of Tsaur to incorporate the teachings of Peteva and include marking a warning label in the workload monitoring data through the performance data inspector in response to determining that the workload monitoring data exceeds the preset workload upper bound. Doing so would help keep track of nodes exceeding the preset threshold. “Each node in the list may include node-specific information, such as cluster membership, network configuration, cluster placement, total used and free resources of the node (e.g., cpu, memory, hdd), and a flag indicating whether the node is suitable for host migration events” [Peteva ¶ 182]. With regard to claim 8, Julien in view of Snider in view of Sethi in view of Tsaur in view of Peteva teaches the cloud resource allocation system according to claim 7, as referenced above.
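For illustration only, the threshold-and-flag mechanism mapped from Snider and Peteva above (compare against a preset workload upper bound, then mark a warning label) reduces to a short check. This is an editor's hypothetical sketch, not code from either reference; the bound value and field names are invented.

```python
WORKLOAD_UPPER_BOUND = 0.85  # illustrative value for the preset upper bound

def inspect(sample, upper_bound=WORKLOAD_UPPER_BOUND):
    """Return a copy of a monitoring sample, marked with a warning label
    when its utilization exceeds the preset workload upper bound."""
    sample = dict(sample)  # avoid mutating the caller's record
    if sample["utilization"] > upper_bound:
        sample["warning"] = True  # the 'warning label'
    return sample
```

The label travels with the monitoring data, which is what lets the master node react to it downstream (see the claim 8 discussion).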
Julien further teaches: wherein in the master node, the processor is further configured to: through a resource monitor, collect the workload monitoring data reported by each of the worker nodes through a performance data collector “For example, the monitoring agents 126 can monitor active/running process kernels on GPGPUs 102, memory utilization of each process on the GPGPUs 102, GPGPU 102 utilization, GPGPU 102 temperature, etc. The monitoring agents 126 continuously generate monitoring information within an associated GPGPU node 110 and report this information to the proxy agent 122. The monitoring information produced by the monitoring agents 126 can be used to form performance/usage profiles for workloads/applications 104 that describe the performance/operation of workloads of the applications 104 on GPGPUs 102 and respective GPGPU memories 112” [Julien ¶ 37]. and through a workload manager receive the workload monitoring data from the performance data collector through a workload analyzer. “The monitoring agents 126 continuously generate monitoring information within an associated GPGPU node 110 and report this information to the proxy agent 122. The monitoring information produced by the monitoring agents 126 can be used to form performance/usage profiles for workloads/applications 104 that describe the performance/operation of workloads of the applications 104 on GPGPUs 102 and respective GPGPU memories 112” [Julien ¶ 37]. Julien in view of Snider fails to teach and append history data to the workload monitoring data based on a preset time … and determine whether each of the worker nodes has a resource abnormality by analyzing the workload monitoring data.
However, Sethi teaches: and append history data to the workload monitoring data based on a preset time “The cluster data collection engine 120 is also configured to collect historical utilization values from the instances of resource management logic 103 for respective ones of the host devices 102 for given historical time periods (e.g., past year, 6 months, 3 months, month, week, etc.)” [Sethi Col. 6 Lines 20-25]. and determine whether each of the worker nodes has a resource abnormality by analyzing the workload monitoring data. “For example, referring to the cluster 201, the CPU, RAM and power utilization of Node1 are relatively high, while the CPU and RAM utilization of node2 are relatively low and the CPU, RAM and power utilization of node3 are relatively low, thereby resulting in a resource utilization imbalance among nodes1-3” [Sethi Col. 6 Lines 44-49]. Julien in view of Snider in view of Sethi in view of Tsaur fails to explicitly teach in response to the workload monitoring data being marked with the warning label. However, Peteva teaches in response to the workload monitoring data being marked with the warning label; “When an abnormal status is returned by the local monitoring system to the central monitoring system, the host node on which the local monitoring system resides is flagged. When the flag is changed, the central monitoring system performs additional actions to assess the health of that host node” [Peteva ¶ 184]. With regard to claim 9, Julien in view of Snider in view of Sethi in view of Tsaur in view of Peteva teaches the cloud resource allocation system according to claim 8, as referenced above. 
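For illustration only, the two Sethi-mapped steps above (append history data over a preset time window, then analyze the data for a resource abnormality) can be sketched together. This is an editor's hypothetical sketch; the class, its fields, and the abnormality labels are invented and do not come from Sethi, Peteva, or the claims.

```python
from collections import deque

class WorkloadAnalyzer:
    """Append a sliding window of history to monitoring data and classify
    resource abnormalities. Illustrative only."""

    def __init__(self, window=5, upper_bound=0.85):
        self.history = deque(maxlen=window)  # the 'preset time' window
        self.upper_bound = upper_bound

    def ingest(self, sample):
        # Append history data to the incoming workload monitoring data.
        self.history.append(sample["utilization"])
        return {**sample, "history": list(self.history)}

    def abnormality(self, sample):
        # Workload excess -> job group level migration suggestion;
        # system resource loss -> node level migration suggestion.
        if sample["utilization"] > self.upper_bound:
            return "workload_excess"
        if not sample.get("heartbeat", True):
            return "system_resource_loss"
        return None
```

The two return values mirror the claim 9 distinction between a job group level and a node level state migration suggestion.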
Julien in view of Snider fails to teach wherein the processor is further configured to execute the resource manager to: notify the resource manager through the workload analyzer in response to determining that the resource abnormality is a workload excess or a system resource loss, so that the resource manager transmits state migration command to a state migration handler; generate a job group level state migration suggestion in response to determining that the resource abnormality is the workload excess. However, Sethi teaches: wherein the processor is further configured to execute the resource manager to: notify the resource manager through the workload analyzer “Referring to block 261 of FIG. 2, the utilization data collected by the cluster data collection engine 120 is transmitted to the resource imbalance detection engine 130, which determines imbalance of one or more resources between host devices 102 (or nodes 202) in a cluster 101 (or 201)” [Sethi Col. 6 Lines 26-31]. in response to determining that the resource abnormality is a workload excess or a system resource loss, “The current resource utilization layer 131 determines a current utilization of resources (e.g., power, CPU and/or memory) by the VMs 105 running on a host device 102 with high resource utilization and identifies a given VM of the VMs 105 running on the host device 102 with high resource utilization to be migrated to another host device 102 of the cluster 101” [Sethi Col. 6 Lines 50-56]. so that the resource manager transmits state migration command to a state migration handler; “In step 810, the at least one virtual machine is migrated from the source host device to the target host device” [Sethi Col. 12 Lines 46-48]. “Based on the current and predicted power utilization, appropriate target host devices 102 having sufficient power, CPU availability, and memory are identified, and the virtual machines 105 are migrated to the identified host devices 102” [Sethi Col. 4-5 Lines 65-67, 1-2].
“The determination of whether there are similar VMs 105 to the VM 105 being migrated is based on whether the types of workloads and applications running on the VMs 105, as well as the associated hardware are the same or similar to those of the VM 105 to be migrated. Such operational information is collected by the cluster data collection engine 120 via, for example, the instances of resource management logic 103 (state migration handler)” [Sethi Col. 7 Lines 9-18; Examiner notes this interpretation of the state migration command is in accordance with the description given in the instant specification ¶ 72]. generate a job group level state migration suggestion in response to determining that the resource abnormality is the workload excess “In one or more embodiments, power consumption, as well as central processing unit (CPU) and memory imbalances between host devices 102 are considered in connection with virtual machine migration. The embodiments provide a framework for recommending migration of virtual machines 105 to certain host devices 102 (job group level state migration suggestion) in a cluster 101 of host devices 102 hosting multiple virtual machines 105” [Sethi Col. 4 Lines 48-53]. Julien in view of Snider in view of Sethi in view of Tsaur fails to teach and generate a node level state migration suggestion in response to determining that the resource abnormality is the system resource loss through the workload analyzer for each of the worker nodes where the resource abnormality occurs. However, Peteva teaches and generate a node level state migration suggestion in response to determining that the resource abnormality is the system resource loss through the workload analyzer for each of the worker nodes where the resource abnormality occurs.
“Failover allows for the automatic switching of computing resources from a failed or failing computing device to a healthy (e.g., functional) one, thereby providing continuous availability of the interconnected resources to the end-user” [Peteva ¶ 6]. With regard to claim 20, it is a method type claim having similar limitations as claim 7 above. Therefore, it is rejected under the same rationale. With regard to claim 21, it is a method type claim having similar limitations as claim 8 above. Therefore, it is rejected under the same rationale. With regard to claim 22, it is a method type claim having similar limitations as claim 9 above. Therefore, it is rejected under the same rationale. Sethi teaches the further limitation wherein after determining whether each of the worker nodes has the resource abnormality, further comprises: generate a job group level state migration suggestion in response to determining that the resource abnormality is a workload excess “The current resource utilization layer 131 determines a current utilization of resources (e.g., power, CPU and/or memory) by the VMs 105 running on a host device 102 with high resource utilization and identifies a given VM of the VMs 105 running on the host device 102 with high resource utilization to be migrated to another host device 102 of the cluster 101” [Sethi Col. 6 Lines 50-56]. “In one or more embodiments, power consumption, as well as central processing unit (CPU) and memory imbalances between host devices 102 are considered in connection with virtual machine migration. The embodiments provide a framework for recommending migration of virtual machines 105 to certain host devices 102 (job group level state migration suggestion) in a cluster 101 of host devices 102 hosting multiple virtual machines 105” [Sethi Col. 4 Lines 48-53]. Claims 11-12 and 24-25 are rejected under 35 U.S.C.
103 as being unpatentable over Julien (US 2022/0214912 A1) in view of Snider (US 2016/0316003 A1) in view of Sethi (US 12,307,281 B2) in view of Tsaur (US 7,890,714 B1) in view of Chien (US 2022/0214917 A1). With regard to claim 11, Julien in view of Snider in view of Sethi in view of Tsaur teaches the cloud resource allocation system according to claim 1, as referenced above. Julien in view of Snider in view of Sethi in view of Tsaur fails to teach wherein in the master node, the processor is configured to: through the orchestrator determine whether the worker nodes are fully loaded based on the node resource information after obtaining the job request through the resource manager; in response to the worker nodes all being fully loaded, issue a power on command for each of the worker nodes in a sleep mode or a powered off mode through a power manager; in response to each of the worker nodes in the sleep mode or the powered off mode transitioning to an operation state, reacquire the node resource information respectively reported by the worker nodes through the resource manager. However, Chien teaches: wherein in the master node, the processor is configured to: through the orchestrator determine whether the worker nodes are fully loaded based on the node resource information after obtaining the job request through the resource manager; “Virtual machines are migrated to servers in the rack such that as many servers as possible are placed in heavy load state. The system eliminates unnecessary servers by putting such servers in a sleep state, therefore reducing total power consumption and increasing efficiency of servers on the rack” [Chien ¶ 33]. “The manifests and the reports are determined for all of the servers of the rack system 100. Each report includes the current status of all hardware resource utilization from the routine in FIG. 7 and the machine learning output of the predicted utilization in a future period such as over the next two days” [Chien ¶ 88].
in response to the worker nodes all being fully loaded, issue a power on command for each of the worker nodes in a sleep mode or a powered off mode through a power manager; “The commands for setting the power level of a server may be made from the rack management software 132 to one of the servers 120 in FIG. 1 over the management network 140” [Chien ¶ 93]. “However, an alternate set of utilization graphs 220 and 222 for the two server nodes shows that the first server can be set at a heavy 100% hardware resource utilization as shown in the graph 220, while the second server can be set at 0% utilization as shown in the graph 222” [Chien ¶ 42]. “Once multiple virtual machines have been migrated to an available single server, the server will be at a full loading state as 100% of hardware resource utilization. The original server or servers running the virtual machines may be set to either a sleep state or shutdown state to minimize power use. If a new hardware resource request is needed from the rack management software 132, such as the need for more virtual machines or applications, the sleeping/shutdown single server nodes may be resumed to active state immediately” [Chien ¶ 92]. in response to each of the worker nodes in the sleep mode or the powered off mode transitioning to an operation state, reacquire the node resource information respectively reported by the worker nodes through the resource manager. “The manifests for the sleeping/shutdown server nodes may be examined to determine those servers with sufficient or desirable hardware resources to fulfill the resource request” [Chien ¶ 92]. Chien is considered to be analogous to the claimed invention because it is in the same field of multiprogramming arrangements taking into account power criteria. 
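For illustration only, the Chien-mapped sequence above (detect that every active node is fully loaded, power on sleeping or powered-off nodes, then reacquire their resource reports before dispatching) can be sketched as follows. This is an editor's hypothetical sketch, not code from Chien or the claims; states, fields, and the dispatch policy are invented.

```python
def handle_job_request(nodes):
    """When every active node is fully loaded, issue a power-on command to
    nodes in sleep or powered-off mode, reacquire their resource reports,
    and dispatch to the least-loaded active node."""
    active = [n for n in nodes if n["state"] == "active"]
    if active and all(n["load"] >= 1.0 for n in active):
        for n in nodes:
            if n["state"] in ("sleep", "off"):
                n["state"] = "active"  # power-on command via power manager
                n["load"] = 0.0        # freshly reported node resource info
    target = min((n for n in nodes if n["state"] == "active"),
                 key=lambda n: n["load"])
    return target["name"]
```

The inverse policy (consolidating load and putting idle nodes to sleep, as in Chien ¶ 92) would run the same loop in reverse when demand drops.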
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Julien in view of Snider in view of Sethi in view of Tsaur to incorporate the teachings of Chien and include that in the master node, the processor is configured to: through the orchestrator determine whether the worker nodes are fully loaded based on the node resource information after obtaining the job request through the resource manager; in response to the worker nodes all being fully loaded, issue a power on command for each of the worker nodes in a sleep mode or a powered off mode through a power manager; in response to each of the worker nodes in the sleep mode or the powered off mode transitioning to an operation state, reacquire the node resource information respectively reported by the worker nodes through the resource manager. Doing so would allow for increased efficiency and power savings. “The system eliminates unnecessary servers by putting such servers in a sleep state, therefore reducing total power consumption and increasing efficiency of servers on the rack” [Chien ¶ 33]. With regard to claim 12, Julien in view of Snider in view of Sethi in view of Tsaur teaches the cloud resource allocation system according to claim 1, as referenced above. Julien further teaches wherein each of the worker nodes comprises a local processor configured to: execute a container (eviction) lifetime cycle management through a job handler in response to receiving a resource management command from the master node, “The proxy agent 122 provides the memory address range (and IP memory address, in case of RDMA) and address of the control page 134 to the GPGPU agent 124 to initiate the eviction process” [Julien ¶ 41].
“For example, in one such alternative embodiment the virtualization layer 954 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 962A-R called software containers that may each be used to execute one (or more) of the sets of applications 964A-R” [Julien ¶ 108]. Julien in view of Snider fails to teach execute a container lifetime cycle management through a job handler. However, Sethi teaches: execute a container lifetime cycle management through a job handler “As can be seen in FIG. 7, VM6 is being migrated from Host2 (source host device) to Host3 (target host device). Although Host1 has the same CPU and memory utilization as Host3, the algorithms of the embodiments are configured to select Host3 over Host1 since the power utilization of Host3 (40%) is less than the power utilization of Host1 (90%)” [Sethi Col. 9 Lines 19-25]. “At least portions of the migration management platform 110 and the elements thereof may be implemented at least in part in the form of software that is stored in memory and executed by a processor” [Sethi Col. 10 Lines 12-15]. wherein the container lifetime cycle management comprises one of container creation, container deletion, and state migration; “The current resource utilization layer 131 determines a current utilization of resources (e.g., power, CPU and/or memory) by the VMs 105 running on a host device 102 with high resource utilization and identifies a given VM of the VMs 105 running on the host device 102 with high resource utilization to be migrated to another host device 102 of the cluster 101” [Sethi Col. 6 Lines 50-56]. “In one or more embodiments, power consumption, as well as central processing unit (CPU) and memory imbalances between host devices 102 are considered in connection with virtual machine migration.
The embodiments provide a framework for recommending migration of virtual machines 105 to certain host devices 102 (job group level state migration suggestion) in a cluster 101 of host devices 102 hosting multiple virtual machines 105” [Sethi Col. 4 Lines 48-53]. Julien in view of Snider in view of Sethi in view of Tsaur fails to teach adjust a system power state through a power modules handler in response to receiving a power adjustment suggestion from the master node, wherein the system power state comprises one of a powered off mode, a sleep mode, and a specific power consumption mode. However, Chien teaches: adjust a system power state through a power modules handler in response to receiving a power adjustment suggestion from the master node, “The commands for setting the power level of a server may be made from the rack management software 132 to one of the servers 120 in FIG. 1 over the management network 140. As explained above, the management software 132 provides commands to any idle servers to minimize power consumption by entering a sleep state or turning off” [Chien ¶ 93 Examiner notes this example of a system power state for a server within the system is in accordance with the description given in the instant specification ¶ 92]. wherein the system power state comprises one of a powered off mode, a sleep mode, and a specific power consumption mode. “As a result of the example management routine managed by the rack management controller 118, the servers 122, 124, and 126 are set at full hardware resource utilization and therefore each executes three virtual machines 130. The server 128 is set to a sleep mode and therefore does not consume a large amount of power” [Chien ¶ 36]. With regard to claim 24, it is a method type claim having similar limitations as claim 11 above. Therefore, it is rejected under the same rationale. With regard to claim 25, it is a method type claim having similar limitations as claim 12 above. 
Therefore, it is rejected under the same rationale. Response to Arguments Applicant's arguments filed 11/27/2025 have been fully considered but they are not persuasive. Applicant argues in substance: I. In the claimed invention, the limitations "decide to execute a direct resource allocation" and "decide to execute an indirect resource allocation for the job to be handled" are inherently beyond the capabilities of the human mind due to their technical complexity. The claimed invention, within a cloud system containing a large number of worker nodes, collects and analyzes node resource information from all worker nodes in real time, parses complex job profiles, manages running and waiting queues, and executes preemptive resource scheduling based on priority. Its computational scale and speed far exceed the practical processing capabilities of the human brain. Therefore, this claim does not refer to a mental process. (Step 2A - Prong 1: No) In the claimed invention, the master node requires the resource manager to acquire and process large amounts of dynamically changing node resource information (such as CPU resources and memory load) from multiple distributed worker nodes in real time, while comparing multiple job profiles (including resource requirements and priorities) in the waiting queue. A human operator cannot simultaneously monitor a cloud cluster containing dozens or even hundreds of nodes "in their mind" and complete the aforementioned data processing and decision-making within milliseconds. This fully meets the 2106 Patent Subject Matter Eligibility standard of "cannot be practically performed in the human mind." Therefore, claims 1-25 are not directed to an abstract idea. When these steps are executed in a distributed cloud system consisting of a master node (an electronic device) and multiple worker nodes (other electronic devices) as defined in the request, their complexity, scale, and immediacy are far beyond what the human mind can actually perform. 
a) Examiner respectfully disagrees. As detailed in the rejection above, the recited deciding is a mental process. A human can mentally decide which resource allocation method to use. This is not beyond the capability of the human mind. The independent claims do not include recitations of executing either direct resource allocation or indirect resource allocation. For example, claim 1 recites “decide to execute a direct resource allocation for a job to be handled requested by the job request … wherein executing the direct resource allocation for the job to be handled comprises …”. Further, the limitations which applicant argues cannot be practically performed in the human mind are not cited in the rejection above as being mental processes. The additional elements of the claims amount to no more than generic computing components, field of use/technological environment, and insignificant extra solution activity which do not amount to significantly more than the abstract idea. Further, in response to applicant's argument that limitations of the claimed invention are not directed towards an abstract idea, it is noted that the features upon which applicant relies (i.e., “within a cloud system containing a large number of worker nodes, collects and analyzes node resource information from all worker nodes in real time, parses complex job profiles, manages running and waiting queues, and executes preemptive resource scheduling based on priority”) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The arguments have been considered but were not found persuasive. II.
As stated in Example 37 of the 2019 PEG, if a claim limitation, such as a 'determining' step, requires a processor to perform an action that 'cannot be practically performed in the human mind,' for instance because it 'requires a processor accessing computer memory indicative of application usage,' then such a limitation does not recite a mental process. Furthermore, in the claimed invention, an electronic device determines whether to perform direct resource allocation or indirect resource allocation for a job to be handled requested by the job request based on current node resource information reported by other electronic devices, and then dispatches the job to the appropriate electronic device for execution. Additional elements in the claimed invention: "each of the worker nodes and the master node is realized by using an electronic device with computing and networking functions", "through the job scheduler, decide to execute a direct resource allocation for a job to be handled requested by the job request ......"; "through the job scheduler, decide to execute an indirect resource allocation for a job to be handled requested by the job request ......"; "executing the direct resource allocation for the job to be handled comprises:......dispatch the job to be handled to the first worker node through the resource manager, so that the first worker node executes the job to be handled; and ......"; "executing the indirect resource allocation for the job to be handled comprises:......
through the job scheduler, find a second worker node having a low priority job among the worker nodes, notify the second worker node so that the second worker node backs up an operation mode of the low priority job, and then release resource used by the low priority job; put another job request corresponding to the low priority job into the waiting queue through the job scheduler in response to receiving a resource release notification from the second worker node through the resource manager; dispatch the job to be handled to the second worker node through the resource manager, so that the second worker node executes the job to be handled". The additional elements collectively constitute a specific technical framework designed to address a particular technical challenge in cloud computing: how to utilize a electronic device as a master node to dispatch jobs to other electronic devices acting as worker nodes in a dynamic, resource-constrained environment, thereby balancing the immediacy of high- priority tasks with the overall resource utilization and stability of the system. Accordingly, these additional element recitations integrate the judicial exception into a practical application (Step 2A - Prong 2: Yes). The invention does not merely automate a known concept;, but rather proposes a specific technical solution to address a particular technical problem in the field of cloud computing: how to efficiently and intelligently allocate computing resources in a dynamically changing resource environment, while simultaneously considering task priority, system performance, and power consumption. The proposed dual-track mechanism of" direct resource allocation " and " indirect resource allocation," along with its linkage with performance and power consumption monitoring, substantially improves the operational efficiency, stability, and energy efficiency of cloud resource management systems, constituting a concrete contribution to this technical field (Step 2B: Yes). 
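For orientation, the dual-track allocation described in the limitations above can be sketched in simplified Python. This is an illustrative approximation only, not the claim language: the names `Worker`, `Job`, and `decide_allocation` are hypothetical, and the backup/release handshake recited in the claims is collapsed into a single requeue step.

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    priority: int   # higher number = higher priority
    required: int   # resource units needed

@dataclass
class Worker:
    name: str
    capacity: int
    running: list = field(default_factory=list)

    def free(self) -> int:
        # Free resources = capacity minus what running jobs consume.
        return self.capacity - sum(j.required for j in self.running)

def decide_allocation(workers, job, waiting_queue):
    """Decide between direct and indirect allocation for a new job."""
    # Direct track: some worker already has sufficient free resources.
    for w in workers:
        if w.free() >= job.required:
            w.running.append(job)
            return ("direct", w.name)
    # Indirect track: preempt a lower-priority job on some worker,
    # requeue the preempted job, and dispatch the new one in its place.
    for w in workers:
        victims = [j for j in w.running if j.priority < job.priority]
        if victims:
            victim = min(victims, key=lambda j: j.priority)
            w.running.remove(victim)       # "back up and release" (simplified)
            waiting_queue.append(victim)   # requeue the preempted job
            w.running.append(job)
            return ("indirect", w.name)
    # No track applies: the job waits.
    waiting_queue.append(job)
    return ("queued", None)
```

The sketch makes the examiner's point visible as well: the decision itself is a small comparison loop, while the claimed technical character lies in the distributed backup, release-notification, and dispatch steps that surround it.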
a) Examiner respectfully disagrees. As detailed in the rejection above, the processor of the claimed invention is a generic computing component which does not amount to significantly more than the recited abstract idea. Further, the resource allocation methods are not actually implemented in the independent claims. A decision is made to execute either direct or indirect resource allocation, but neither is then executed. Thus, it is unclear how the argued technical improvements are implemented through the claims. The arguments have been considered but were not found to be persuasive. III. Arai only discloses that the job is queued into job queue 106 when resources are sufficient. Arai does not disclose how a suitable job execution device 20 is selected from the multiple job execution devices 20 to execute the job. Furthermore, Arai disclosed that when resources are insufficient, it directly compares the priority of "accepted job" and "the job being executed" to decide whether to perform preemption. Arai does not disclose that one job execution device 20 with a low priority job is identified among multiple job execution devices 20, and that the one job execution device 20 with the low priority job backs up an operation mode of the low priority job, and then releases resource used by the low priority job to execute the "accepted job". While Arai provides the logic for "determining whether the job execution device 20 has sufficient free resources," it does not disclose the technical feature that all job execution devices 20 provide their resource information to the scheduling device 10. Therefore, Arai's teachings cannot provide a basis for determining whether the available resources of each job execution device 20 are sufficient to meet the resource requirements of the job. Arai's decision-making is based on the general concept of "whether resources are sufficient." 
While Snider teaches the return of "resource utilization," it is only a general concept. Although Sethi teaches "power consumption analysis," its purpose is for virtual machine "migration" decisions, rather than for choosing a "direct/indirect allocation" for a new job as in the invention. Arai, Snider, and Sethi do not provide any technical insights into finding a suitable device from multiple job execution devices 20 to execute a job. Arai, Snider, and Sethi, either taken alone or in combination, cannot read on the features of "the node resource information includes workload monitoring data checked for workload and power consumption monitoring data checked for power consumption, and the power consumption monitoring data includes: at least one of power consumption statistics and energy efficiency, multi-level performance and power consumption statistics and analysis information including worker node level, job group level, job schedule level, and possible performance and power consumption adjustment strategy suggestions"; "decide to execute a direct resource allocation ......"; "decide to execute an indirect resource allocation ......"; "executing the direct resource allocation for the job to be handled comprises ......"; "executing the indirect resource allocation for the job to be handled comprises ......" recited in claim 1. Arai, Snider, and Sethi do not teach or suggest a specific technical solution as defined by the claimed invention: that is, "node resource information" must simultaneously include "workload monitoring data" and "power consumption monitoring data." Even with an incentive to combine Arai, Snider, and Sethi, one would only obtain a system that performs preemptive scheduling based on general resource metrics and/or general power consumption values. 
The present invention, through this unique data integration and analysis method, solves the complex technical problem of dynamically balancing performance and power consumption in a cloud environment, a problem that previous technologies failed to effectively address, and therefore possesses non-obviousness. a) As detailed in the rejection above, Sethi teaches power consumption monitoring data checked for power consumption [Sethi, Col. 5, Lines 63-67; Fig. 6], and the power consumption monitoring data includes: at least one of power consumption statistics and energy efficiency, multi-level performance and power consumption statistics and analysis information including worker node level, job group level, job schedule level, and possible performance and power consumption adjustment strategy suggestions [Sethi, Col. 6, Lines 8-20]. Applicant’s further arguments with respect to claim(s) 1, 13, and 14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Examiner respectfully requests, in response to this Office action, support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist Examiner in prosecuting the application. When responding to this Office Action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c). Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARI F RIGGINS whose telephone number is (571)272-2772. The examiner can normally be reached Monday-Friday 7:00AM-4:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.F.R./Examiner, Art Unit 2197 /BRADLEY A TEETS/Supervisory Patent Examiner, Art Unit 2197

Prosecution Timeline

Mar 14, 2023
Application Filed
Sep 04, 2025
Non-Final Rejection — §101, §103, §112
Nov 27, 2025
Response Filed
Mar 14, 2026
Final Rejection — §101, §103, §112 (current)

Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.