Prosecution Insights
Last updated: April 19, 2026
Application No. 18/016,747

SYSTEM AND METHOD FOR DYNAMIC RESOURCE MANAGEMENT AND ALLOCATION FOR CLUSTER NETWORKS

Final Rejection — §103, §112

Filed: Jan 18, 2023
Examiner: NAHRA, SELENA SABAH
Art Unit: 2192
Tech Center: 2100 — Computer Architecture & Software
Assignee: Rakuten Mobile Inc.
OA Round: 2 (Final)

Grant Probability: 75% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (12 granted / 16 resolved) — above average, +20.0% vs TC avg
Interview Lift: +66.7% (resolved cases with interview) — strong
Avg Prosecution: 3y 1m (typical timeline); 12 applications currently pending
Total Applications: 28 (across all art units)

Statute-Specific Performance

§101: 22.0% (-18.0% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center average is an estimate • Based on career data from 16 resolved cases
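These figures are simple percentages, so they can be sanity-checked directly from what is displayed. The sketch below (Python, illustrative only; it assumes the rates are plain ratios and that the "vs TC avg" values are percentage-point differences, which the dashboard does not explicitly confirm) recomputes the career allow rate and backs out the implied Tech Center averages:

```python
# Recompute the displayed examiner statistics from the figures shown above.
# Assumptions (not stated by the dashboard): rates are simple ratios, and the
# "vs TC avg" values are percentage-point differences.

granted, resolved = 12, 16
career_allow_rate = granted / resolved            # 0.75 -> the 75% career allow rate
implied_tc_allow_rate = career_allow_rate - 0.20  # "+20.0% vs TC avg" -> ~55% TC average

# Statute-specific rates and their stated deltas vs. the Tech Center average.
statutes = {"§101": (0.220, -0.180), "§103": (0.424, +0.024),
            "§102": (0.136, -0.264), "§112": (0.195, -0.205)}
for name, (rate, delta) in statutes.items():
    implied_tc_avg = rate - delta                 # e.g. §103: 0.424 - 0.024 = 0.400
    print(f"{name}: examiner {rate:.1%}, implied TC average {implied_tc_avg:.1%}")
```

Under those assumptions, every statute-specific delta points back to the same Tech Center estimate of roughly 40%, which is consistent with the single "Tech Center average estimate" noted above.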

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In view of Applicant's amendments, the objection to claims is withdrawn. In view of Applicant's amendments, the rejection under 35 USC § 112 is withdrawn. The amendment integrates the abstract idea into a practical application by imposing a meaningful limit on the abstract idea. In view of Applicant’s amendments, the rejection under 35 USC § 101 is withdrawn.

Claim Objections

Claims 1, 2, 5, 7, 9, 11, 12, 15, 17, 19, 20, and 21 are objected to because of the following informalities: Claim 1, “the specific node” in line 6 lacks proper antecedent basis. Claim 11, “the execution” in line 11, “the one or more tasks” in line 12, and “the identified predicted optimal node” in line 26 lack proper antecedent basis. Claim 20, “the execution” in line 9, “the one or more tasks” in line 10, and “the identified predicted optimal node” in line 19 lack proper antecedent basis. Claims 2, 5, 7, 9, 12, 15, 17, 19, and 21 depend on the objected claims and inherit the same issues as the objected claims. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 21 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The newly added limitation “wherein the one or more operational requirements with respect to the task are one or more node resource requirements for the execution of the task” is not supported by the specification as originally filed.
The closest paragraph [0007] discloses “The method can include determining one or more operational requirements with respect to a first task; identifying a plurality of nodes within the server cluster network with respect to meeting the one or more operational requirements of the first task; obtaining a traffic pattern with respect to each of the plurality of nodes with respect to one or more second tasks; and identifying a first node from the plurality of nodes for executing the first task.” which is different from “the one or more operational requirements with respect to the task are one or more node resource requirements for the execution of the task” as claimed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 11-12, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Sigal et al. (U.S. Patent No. US 8185909 B2, hereinafter “Sigal”) in view of Wu et al. (WIPO Patent Application Publication No. WO 2023155904 A1, hereinafter “Wu”), Gill et al. (U.S. Patent Application Publication No. US 20240020157 A1, hereinafter “Gill”), and Song et al. (U.S. Patent Application Publication No. US 20160054774 A1, hereinafter “Song”).

With regard to claim 1, Sigal discloses: A method of allocating resources within a server cluster network (“The load balancer engine may in one or more embodiments attempt to keep the future resource utilization of the servers in the cluster roughly the same”, col 6, lines 49-51), the method comprising: identifying, using the trained neural network model, a predicted node from the plurality of nodes, to execute the task (“Embodiments of the preemptive neural network database load balancer may be implemented as a computer program product for example which includes computer readable instruction code that executes in a tangible memory medium of a computer or server computer.”, col 8, lines 35-39, “The computer program product uses the observed resource utilization result to train a neural network and the observed result is associated with the incoming task name and its set of associated input parameters at 403.”, col 8, lines 46-52, “The computer program product optionally provides a connection to a predicted least busy server based on the predicted resource utilization for the new incoming task at 405.”, col 8, lines 56-59, “Upon request from the load balancer engine, a given task with particular input parameters results in the neural network returning predicted resource utilization to the load balancer. The load balancer then assigns the incoming task to a particular server based upon the predicted and observed resource utilization of a given server and the predicted resource utilization of the particular incoming task.”, col 6, lines 42-48).
Sigal does not disclose: obtaining historical traffic pattern associated with each of a plurality of nodes within the server cluster network, the historical traffic pattern associated with a node of the plurality of nodes indicating a historical pattern of execution of each of one or more tasks running on the specific node mapped to a corresponding time period of day; mapping the obtained historical traffic pattern associated with each of the plurality of nodes to a power usage, the mapping indicating a power requirement for the execution of the one or more tasks of each of the plurality of nodes during a corresponding defined period of time; training a neural network model using the mapping to predict power usage data for each of the plurality of nodes at a specific time period; determining one or more operational requirements with respect to a task to be allocated to at least one of the plurality of nodes for execution at a future period of time; a predicted node from the plurality of nodes, to execute the task at the future period of time; allocating the task to the predicted node for execution during the future period of time.

Wu discloses: mapping the obtained historical traffic pattern associated with each of the plurality of nodes to a power usage (“The local training samples include historical traffic information, historical energy consumption values, and configuration parameters corresponding to the historical energy consumption values of network devices managed by the first network device.”, page 4, second full paragraph), the mapping indicating a power requirement (“A training sample includes traffic information processed by the network device in a historical time period, energy consumption values corresponding to the historical time period, and configuration parameters corresponding to the historical time period.”, page 11, second full paragraph); Both the systems of Sigal and Wu deal with neural network models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal in view of Wu to “improve the prediction of the traffic prediction model accuracy” (Wu, page 5, fifth full paragraph).

Gill discloses: training a neural network model using the mapping to predict power usage data for each of the plurality of nodes at a specific time period (“Examples of the present disclosure use historic input power periodic data from a server in an IT data center (e.g., time series input power sensor data from a server's power supply unit) to train a machine learning (ML) model to obtain forecasted power consumption data of the server for a future time period”, para [0012], “The present disclosure can apply to multiple servers 10a, 10b, . . . 10n as shown in FIG. 1 for analyzing a data center, or to a single server for analyzing only that server.”, para [0017]); Both the systems of Sigal and Gill deal with neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal in view of Gill “to optimally identify periods of over-utilization and under-utilization so that future workloads can be scheduled or revised more efficiently and productively.” (Gill, para [0012]).

Song discloses: obtaining historical traffic pattern associated with each of a plurality of nodes within the server cluster network (“At operation 702 a start time for the job is stamped. At operation 703 an end time for the job is stamped.
In one embodiment, a log record is created comprising the process ID, the process start time, the process end time, or any combination thereof. In one embodiment, the log record is stored in a database. At operation 704 a node identifier is attached to the log record. At operation 705 the log record having the attached node identifier is sent to a head node.”, para [0075], “FIG. 7 is a flowchart of a method 700 to profile a job power for a data processing system according to one embodiment. In one embodiment, method 700 is performed at each of the nodes, e.g., an OS node, an IO node, at a compute node that runs a job.”, para [0075], “The data processing system 1200 may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that data processing system. Further, while only a single data processing system is illustrated, the term “data processing system” shall also be taken to include any collection of data processing systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies described herein.”, para [0082]), the historical traffic pattern associated with a node of the plurality of nodes indicating a historical pattern of execution of each of one or more tasks running on the specific node mapped to a corresponding time period of day (“At operation 702 a start time for the job is stamped. At operation 703 an end time for the job is stamped. In one embodiment, a log record is created comprising the process ID, the process start time, the process end time, or any combination thereof. In one embodiment, the log record is stored in a database. At operation 704 a node identifier is attached to the log record. At operation 705 the log record having the attached node identifier is sent to a head node.”, para [0075]); a power requirement for the execution of the one or more tasks (“the power needed to run the job”, para [0055]) determining one or more operational requirements with respect to a task to be allocated to at least one of the plurality of nodes for execution at a future period of time (“Power aware selector of nodes 303 is configured to select nodes to run a job, e.g., job 304. In alternative embodiments, power aware selector of nodes 303 selects nodes based on the job, e.g. a job power allocation, a job configuration parameter, a job communication latency, a distance, a number of hops of network switch, other criteria, or any combination thereof.”, para [0054], “In one embodiment, the power aware job scheduler 302 examines the job queue at appropriate times (periodically or at certain events e.g., termination of previously running jobs) and determines if resources including the power needed to run the job can be allocated. In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]); a predicted node from the plurality of nodes, to execute the task at the future period of time (“Power aware selector of nodes 303 is configured to select nodes to run a job, e.g., job 304. In alternative embodiments, power aware selector of nodes 303 selects nodes based on the job, e.g. 
a job power allocation, a job configuration parameter, a job communication latency, a distance, a number of hops of network switch, other criteria, or any combination thereof.”, para [0054], “In one embodiment, the power aware job scheduler 302 examines the job queue at appropriate times (periodically or at certain events e.g., termination of previously running jobs) and determines if resources including the power needed to run the job can be allocated. In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]); allocating the task to the predicted node for execution during the future period of time (“In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]). Both the systems of Sigal and Song deal with allocating workloads. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal in view of Song “to efficiently schedule and monitor each job requested” (Song, para [0050]).

With regard to claim 2, Sigal as modified discloses the method of claim 1. Sigal further discloses: wherein the task is comprised of at least one of: an application, a program, a job, or an operation (“read and write based tasks”, col 2, line 35).

With regard to claim 11, Sigal discloses: An apparatus for allocating resources within a server cluster network, comprising: a memory storage storing computer-executable instructions; and a processor communicatively coupled to the memory storage, wherein the processor is configured to execute the computer-executable instructions and cause the apparatus to (“Embodiments of the preemptive neural network database load balancer may be implemented as a computer program product for example which includes computer readable instruction code that executes in a tangible memory medium of a computer or server computer.”, col 8, lines 35-39, “The load balancer engine may in one or more embodiments attempt to keep the future resource utilization of the servers in the cluster roughly the same”, col 6, lines 49-51): identify, using the trained neural network model, a predicted node from the plurality of nodes, to execute the task (“Embodiments of the preemptive neural network database load balancer may be implemented as a computer program product for example which includes computer readable instruction code that executes in a tangible memory medium of a computer or server computer.”, col 8, lines 35-39, “The computer program product uses the observed resource utilization result to train a neural network and the observed result is associated with the incoming task name and its set of associated input parameters at 403.”, col 8, lines 46-52, “The computer program product optionally provides a connection to a predicted least busy server based on the predicted resource utilization for the new incoming task at 405.”, col 8, lines 56-59, “Upon request from the load balancer engine, a given task with particular input parameters results in the neural network returning predicted resource utilization to the load balancer.
The load balancer then assigns the incoming task to a particular server based upon the predicted and observed resource utilization of a given server and the predicted resource utilization of the particular incoming task.”, col 6, lines 42-48); and

Sigal does not disclose: obtain historical traffic patterns with respect to a plurality of nodes within the server cluster network, the traffic patterns being data on observed, historical patterns of tasks running on each of the plurality of nodes and mapped to different times of day; map the obtained historical traffic pattern associated with each of the plurality of nodes to a power usage, the mapping indicating a power requirement for the execution of the one or more tasks of each of the plurality of nodes during a corresponding defined period of time; train a neural network model using the mapping to predict power usage data for each of the plurality of nodes at a specific time period; determine one or more operational requirements with respect to a task to be allocated to at least one of the plurality of nodes for execution at a future period of time; a predicted node from the plurality of nodes, to execute the task at the future period of time; allocate the task to the identified predicted optimal node.

Wu discloses: map the obtained historical traffic pattern associated with each of the plurality of nodes to a power usage (“The local training samples include historical traffic information, historical energy consumption values, and configuration parameters corresponding to the historical energy consumption values of network devices managed by the first network device.”, page 4, second full paragraph), the mapping indicating a power requirement (“A training sample includes traffic information processed by the network device in a historical time period, energy consumption values corresponding to the historical time period, and configuration parameters corresponding to the historical time period.”, page 11, second full paragraph); Both the systems of Sigal and Wu deal with neural network models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal in view of Wu to “improve the prediction of the traffic prediction model accuracy” (Wu, page 5, fifth full paragraph).

Gill discloses: train a neural network model using the mapping to predict power usage data for each of the plurality of nodes at a specific time period (“Examples of the present disclosure use historic input power periodic data from a server in an IT data center (e.g., time series input power sensor data from a server's power supply unit) to train a machine learning (ML) model to obtain forecasted power consumption data of the server for a future time period”, para [0012], “The present disclosure can apply to multiple servers 10a, 10b, . . . 10n as shown in FIG. 1 for analyzing a data center, or to a single server for analyzing only that server.”, para [0017]); Both the systems of Sigal and Gill deal with neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal in view of Gill “to optimally identify periods of over-utilization and under-utilization so that future workloads can be scheduled or revised more efficiently and productively.” (Gill, para [0012]).
Song discloses: obtain historical traffic patterns with respect to a plurality of nodes within the server cluster network (“At operation 702 a start time for the job is stamped. At operation 703 an end time for the job is stamped. In one embodiment, a log record is created comprising the process ID, the process start time, the process end time, or any combination thereof. In one embodiment, the log record is stored in a database. At operation 704 a node identifier is attached to the log record. At operation 705 the log record having the attached node identifier is sent to a head node.”, para [0075], “FIG. 7 is a flowchart of a method 700 to profile a job power for a data processing system according to one embodiment. In one embodiment, method 700 is performed at each of the nodes, e.g., an OS node, an IO node, at a compute node that runs a job.”, para [0075], “The data processing system 1200 may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that data processing system. Further, while only a single data processing system is illustrated, the term “data processing system” shall also be taken to include any collection of data processing systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies described herein.”, para [0082]), the traffic patterns being data on observed, historical patterns of tasks running on each of the plurality of nodes and mapped to different times of day (“At operation 702 a start time for the job is stamped. At operation 703 an end time for the job is stamped. In one embodiment, a log record is created comprising the process ID, the process start time, the process end time, or any combination thereof. In one embodiment, the log record is stored in a database. At operation 704 a node identifier is attached to the log record. At operation 705 the log record having the attached node identifier is sent to a head node.”, para [0075]); a power requirement for the execution of the one or more tasks (“the power needed to run the job”, para [0055]) determine one or more operational requirements with respect to a task to be allocated to at least one of the plurality of nodes for execution at a future period of time (“Power aware selector of nodes 303 is configured to select nodes to run a job, e.g., job 304. In alternative embodiments, power aware selector of nodes 303 selects nodes based on the job, e.g. a job power allocation, a job configuration parameter, a job communication latency, a distance, a number of hops of network switch, other criteria, or any combination thereof.”, para [0054], “In one embodiment, the power aware job scheduler 302 examines the job queue at appropriate times (periodically or at certain events e.g., termination of previously running jobs) and determines if resources including the power needed to run the job can be allocated. In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]); a predicted node from the plurality of nodes, to execute the task at the future period of time (“Power aware selector of nodes 303 is configured to select nodes to run a job, e.g., job 304. 
In alternative embodiments, power aware selector of nodes 303 selects nodes based on the job, e.g. a job power allocation, a job configuration parameter, a job communication latency, a distance, a number of hops of network switch, other criteria, or any combination thereof.”, para [0054], “In one embodiment, the power aware job scheduler 302 examines the job queue at appropriate times (periodically or at certain events e.g., termination of previously running jobs) and determines if resources including the power needed to run the job can be allocated. In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]); allocate the task to the identified predicted optimal node (“In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]). Both the systems of Sigal and Song deal with allocating workloads. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal in view of Song “to efficiently schedule and monitor each job requested” (Song, para [0050]).

With regard to claim 12, Sigal as modified discloses the apparatus of claim 11. Sigal further discloses: wherein the task is comprised of at least one of: an application, a program, a job, or an operation (“read and write based tasks”, col 2, line 35).

With regard to claim 20, Sigal discloses: A non-transitory computer-readable medium comprising computer-executable instructions for allocating resources within a server cluster network by an apparatus, wherein the computer-executable instructions, when executed by at least one processor of the apparatus, cause the apparatus to (“Embodiments of the preemptive neural network database load balancer may be implemented as a computer program product for example which includes computer readable instruction code that executes in a tangible memory medium of a computer or server computer.”, col 8, lines 35-39): identify, using the trained neural network model, a predicted node from the plurality of nodes, to execute the task (“Embodiments of the preemptive neural network database load balancer may be implemented as a computer program product for example which includes computer readable instruction code that executes in a tangible memory medium of a computer or server computer.”, col 8, lines 35-39, “The computer program product uses the observed resource utilization result to train a neural network and the observed result is associated with the incoming task name and its set of associated input parameters at 403.”, col 8, lines 46-52, “The computer program product optionally provides a connection to a predicted least busy server based on the predicted resource utilization for the new incoming task at 405.”, col 8, lines 56-59, “Upon request from the load balancer engine, a given task with particular input parameters results in the neural network returning predicted resource utilization to the load balancer.
The load balancer then assigns the incoming task to a particular server based upon the predicted and observed resource utilization of a given server and the predicted resource utilization of the particular incoming task.”, col 6, lines 42-48); and

Sigal does not disclose: obtain historical traffic patterns with respect to a plurality of nodes within the server cluster network, the traffic patterns being data on observed, historical patterns of tasks running on each of the plurality of nodes and mapped to different times of day; map the obtained historical traffic pattern associated with each of the plurality of nodes to a power usage, the mapping indicating a power requirement for the execution of the one or more tasks of each of the plurality of nodes during a corresponding defined period of time; train a neural network model using the mapping to predict power usage data for each of the plurality of nodes at a specific time period; determine one or more operational requirements with respect to a task to be allocated to at least one of the plurality of nodes for execution at a future period of time; a predicted node from the plurality of nodes, to execute the task at the future period of time; allocate the task to the identified predicted optimal node.

Wu discloses: map the obtained historical traffic pattern associated with each of the plurality of nodes to a power usage (“The local training samples include historical traffic information, historical energy consumption values, and configuration parameters corresponding to the historical energy consumption values of network devices managed by the first network device.”, page 4, second full paragraph), the mapping indicating a power requirement (“A training sample includes traffic information processed by the network device in a historical time period, energy consumption values corresponding to the historical time period, and configuration parameters corresponding to the historical time period.”, page 11, second full paragraph); Both the systems of Sigal and Wu deal with neural network models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal in view of Wu to “improve the prediction of the traffic prediction model accuracy” (Wu, page 5, fifth full paragraph).

Gill discloses: train a neural network model using the mapping to predict power usage data for each of the plurality of nodes at a specific time period (“Examples of the present disclosure use historic input power periodic data from a server in an IT data center (e.g., time series input power sensor data from a server's power supply unit) to train a machine learning (ML) model to obtain forecasted power consumption data of the server for a future time period”, para [0012], “The present disclosure can apply to multiple servers 10a, 10b, . . . 10n as shown in FIG. 1 for analyzing a data center, or to a single server for analyzing only that server.”, para [0017]); Both the systems of Sigal and Gill deal with neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal in view of Gill “to optimally identify periods of over-utilization and under-utilization so that future workloads can be scheduled or revised more efficiently and productively.” (Gill, para [0012]).
Song discloses: obtain historical traffic patterns with respect to a plurality of nodes within the server cluster network (“At operation 702 a start time for the job is stamped. At operation 703 an end time for the job is stamped. In one embodiment, a log record is created comprising the process ID, the process start time, the process end time, or any combination thereof. In one embodiment, the log record is stored in a database. At operation 704 a node identifier is attached to the log record. At operation 705 the log record having the attached node identifier is sent to a head node.”, para [0075], “FIG. 7 is a flowchart of a method 700 to profile a job power for a data processing system according to one embodiment. In one embodiment, method 700 is performed at each of the nodes, e.g., an OS node, an IO node, at a compute node that runs a job.”, para [0075], “The data processing system 1200 may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that data processing system. Further, while only a single data processing system is illustrated, the term “data processing system” shall also be taken to include any collection of data processing systems that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies described herein.”, para [0082]), the traffic patterns being data on observed, historical patterns of tasks running on each of the plurality of nodes and mapped to different times of day (“At operation 702 a start time for the job is stamped. At operation 703 an end time for the job is stamped. In one embodiment, a log record is created comprising the process ID, the process start time, the process end time, or any combination thereof. In one embodiment, the log record is stored in a database. At operation 704 a node identifier is attached to the log record. At operation 705 the log record having the attached node identifier is sent to a head node.”, para [0075]); a power requirement for the execution of the one or more tasks (“the power needed to run the job”, para [0055]) determine one or more operational requirements with respect to a task to be allocated to at least one of the plurality of nodes for execution at a future period of time (“Power aware selector of nodes 303 is configured to select nodes to run a job, e.g., job 304. In alternative embodiments, power aware selector of nodes 303 selects nodes based on the job, e.g. a job power allocation, a job configuration parameter, a job communication latency, a distance, a number of hops of network switch, other criteria, or any combination thereof.”, para [0054], “In one embodiment, the power aware job scheduler 302 examines the job queue at appropriate times (periodically or at certain events e.g., termination of previously running jobs) and determines if resources including the power needed to run the job can be allocated. In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]); a predicted node from the plurality of nodes, to execute the task at the future period of time (“Power aware selector of nodes 303 is configured to select nodes to run a job, e.g., job 304. 
In alternative embodiments, power aware selector of nodes 303 selects nodes based on the job, e.g. a job power allocation, a job configuration parameter, a job communication latency, a distance, a number of hops of network switch, other criteria, or any combination thereof.”, para [0054], “In one embodiment, the power aware job scheduler 302 examines the job queue at appropriate times (periodically or at certain events e.g., termination of previously running jobs) and determines if resources including the power needed to run the job can be allocated. In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]); allocate the task to the identified predicted optimal node (“In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]). Both the systems of Sigal and Song deal with allocating workloads. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal in view of Song “to efficiently schedule and monitor each job requested” (Song, para [0050]).

With regard to claim 21, Sigal as modified discloses the method of claim 1. Sigal does not disclose; however, Song discloses: wherein the one or more operational requirements with respect to the task are one or more node resource requirements for the execution of the task (“Power aware selector of nodes 303 is configured to select nodes to run a job, e.g., job 304. In alternative embodiments, power aware selector of nodes 303 selects nodes based on the job, e.g. a job power allocation, a job configuration parameter, a job communication latency, a distance, a number of hops of network switch, other criteria, or any combination thereof.”, para [0054]). Both the systems of Sigal and Song deal with allocating workloads. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal as modified in view of Song “to efficiently schedule and monitor each job requested” (Song, para [0050]).

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Sigal, Wu, Gill, and Song as applied to claims 1 and 11 above, and further in view of Koehrsen (“Neural network embeddings explained.”).

With regard to claim 5, Sigal as modified discloses the method of claim 1. Sigal as modified does not disclose; however, Koehrsen discloses: wherein the neural network model is based on a plurality of embeddings (“The network I used has two parallel embedding layers that map the book and wikilink to separate 50-dimensional vectors and a dot product layer that combines the embeddings into a single number for a prediction.”, page 3, last full paragraph). Both the systems of Sigal and Koehrsen deal with neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal as modified in view of Koehrsen to improve neural network accuracy.

With regard to claim 15, Sigal as modified discloses the apparatus of claim 11.
Sigal as modified does not disclose; however, Koehrsen discloses: wherein the neural network model is based on a plurality of embeddings (“The network I used has two parallel embedding layers that map the book and wikilink to separate 50-dimensional vectors and a dot product layer that combines the embeddings into a single number for a prediction.”, page 3, last full paragraph). Both the systems of Sigal and Koehrsen deal with neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal as modified in view of Koehrsen to improve neural network accuracy.

Claims 7, 9, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sigal, Wu, Gill, and Song as applied to claims 1 and 11 above, and further in view of Mankovskii et al. (U.S. Patent Application Publication No. US 20150199215 A1, hereinafter “Mankovskii”).

With regard to claim 7, Sigal as modified discloses the method of claim 1. Sigal as modified does not disclose: further comprising: predicting future power consumption by each of the plurality of nodes using the trained neural network model; and identifying the predicted node from the plurality of nodes based on the predicted future power consumption by each of the plurality of nodes.

Gill discloses: further comprising: predicting future power consumption by each of the plurality of nodes using the trained neural network model (“Various machine learning models such as the above can learn and predict power consumption of a server for a next or future time period.”, para [0033], “The present disclosure can apply to multiple servers 10a, 10b, . . . 10n as shown in FIG. 1 for analyzing a data center, or to a single server for analyzing only that server.”, para [0017]); and Both the systems of Sigal and Gill deal with neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal as modified in view of Gill “to optimally identify periods of over-utilization and under-utilization so that future workloads can be scheduled or revised more efficiently and productively.” (Gill, para [0012]).

Mankovskii discloses: identifying the predicted node from the plurality of nodes based on the predicted future power consumption by each of the plurality of nodes (“Then, at Block 620, future power usage by the server is predicted based on the power utilization index and a projected workload demand on the server. In some embodiments, workload may be selectively assigned for the server at Block 630. Specifically, workload may be assigned to the server or assigned to a different server, in response to the predicting of Block 620.”, para [0037], 620, 630, fig 6). Both the systems of Sigal and Mankovskii deal with assigning workloads. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal as modified in view of Mankovskii “to improve or optimize placement of future IT workload” (Mankovskii, para [0058]).

With regard to claim 9, Sigal as modified discloses the method of claim 7.
Sigal further discloses: further comprising: identifying another node from the plurality of nodes for executing the another task (“Embodiments of the preemptive neural network database load balancer may be implemented as a computer program product for example which includes computer readable instruction code that executes in a tangible memory medium of a computer or server computer.”, col 8, lines 35-39, “The computer program product uses the observed resource utilization result to train a neural network and the observed result is associated with the incoming task name and its set of associated input parameters at 403.”, col 8, lines 46-52, “The computer program product optionally provides a connection to a predicted least busy server based on the predicted resource utilization for the new incoming task at 405.”, col 8, lines 56-59, It would be obvious to one of ordinary skill in the art that the process performed in claim 7 could be repeated with a different “node”.).

Sigal as modified does not disclose; however, Song discloses: determining one or more operational requirements with respect to another task for execution at the future period of time (“Power aware selector of nodes 303 is configured to select nodes to run a job, e.g., job 304. In alternative embodiments, power aware selector of nodes 303 selects nodes based on the job, e.g. a job power allocation, a job configuration parameter, a job communication latency, a distance, a number of hops of network switch, other criteria, or any combination thereof.”, para [0054], “In one embodiment, the power aware job scheduler 302 examines the job queue at appropriate times (periodically or at certain events e.g., termination of previously running jobs) and determines if resources including the power needed to run the job can be allocated. In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055], It would be obvious to one of ordinary skill in the art that the process performed in claim 1 could be repeated with a different “task”.); and executing the another task at the future period of time (“In one embodiment, the power aware job scheduler 302 examines the job queue at appropriate times (periodically or at certain events e.g., termination of previously running jobs) and determines if resources including the power needed to run the job can be allocated. In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]). Both the systems of Sigal and Song deal with allocating workloads. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal as modified in view of Song “to efficiently schedule and monitor each job requested” (Song, para [0050]).

With regard to claim 17, Sigal as modified discloses the apparatus of claim 11. Sigal as modified does not disclose: wherein the computer-executable instructions, when executed by the processor, further cause the apparatus to: predict future power consumption by each of the plurality of nodes using the trained neural network model; and identify the predicted node from the plurality of nodes based on the predicted future power consumption by each of the plurality of nodes.
Gill discloses: wherein the computer-executable instructions, when executed by the processor, further cause the apparatus to (“Hardware processor 22 may fetch, decode, and execute instructions for performing methods”, para [0019]): predict future power consumption by each of the plurality of nodes using the trained neural network model (“Various machine learning models such as the above can learn and predict power consumption of a server for a next or future time period.”, para [0033], “The present disclosure can apply to multiple servers 10a, 10b, . . . 10n as shown in FIG. 1 for analyzing a data center, or to a single server for analyzing only that server.”, para [0017]); and Both the systems of Sigal and Gill deal with neural networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal as modified in view of Gill “to optimally identify periods of over-utilization and under-utilization so that future workloads can be scheduled or revised more efficiently and productively.” (Gill, para [0012]).

Mankovskii discloses: identify the predicted node from the plurality of nodes based on the predicted future power consumption by each of the plurality of nodes (“Then, at Block 620, future power usage by the server is predicted based on the power utilization index and a projected workload demand on the server. In some embodiments, workload may be selectively assigned for the server at Block 630. Specifically, workload may be assigned to the server or assigned to a different server, in response to the predicting of Block 620.”, para [0037], 620, 630, fig 6). Both the systems of Sigal and Mankovskii deal with assigning workloads. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal as modified in view of Mankovskii “to improve or optimize placement of future IT workload” (Mankovskii, para [0058]).

With regard to claim 19, Sigal as modified discloses the apparatus of claim 17. Sigal further discloses: wherein the computer-executable instructions, when executed by the processor, further cause the apparatus to (“Embodiments of the preemptive neural network database load balancer may be implemented as a computer program product for example which includes computer readable instruction code that executes in a tangible memory medium of a computer or server computer.”, col 8, lines 35-39): identifying another node from the plurality of nodes for executing the another task (“Embodiments of the preemptive neural network database load balancer may be implemented as a computer program product for example which includes computer readable instruction code that executes in a tangible memory medium of a computer or server computer.”, col 8, lines 35-39, “The computer program product uses the observed resource utilization result to train a neural network and the observed result is associated with the incoming task name and its set of associated input parameters at 403.”, col 8, lines 46-52, “The computer program product optionally provides a connection to a predicted least busy server based on the predicted resource utilization for the new incoming task at 405.”, col 8, lines 56-59, It would be obvious to one of ordinary skill in the art that the process performed in claim 17 could be repeated with a different “node”.).
Sigal as modified does not disclose; however, Song discloses: determine one or more operational requirements with respect to another task for execution at the future period of time (“Power aware selector of nodes 303 is configured to select nodes to run a job, e.g., job 304. In alternative embodiments, power aware selector of nodes 303 selects nodes based on the job, e.g. a job power allocation, a job configuration parameter, a job communication latency, a distance, a number of hops of network switch, other criteria, or any combination thereof.”, para [0054], “In one embodiment, the power aware job scheduler 302 examines the job queue at appropriate times (periodically or at certain events e.g., termination of previously running jobs) and determines if resources including the power needed to run the job can be allocated. In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055], It would be obvious to one of ordinary skill in the art that the process performed in claim 11 could be repeated with a different “task”.); and executing the another task at the future period of time (“In one embodiment, the power aware job scheduler 302 examines the job queue at appropriate times (periodically or at certain events e.g., termination of previously running jobs) and determines if resources including the power needed to run the job can be allocated. In some cases, such resources can be allocated only at a future time, and in such cases the job is scheduled to run at a designated time in future.”, para [0055]). Both the systems of Sigal and Song deal with allocating workloads. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sigal as modified in view of Song “to efficiently schedule and monitor each job requested” (Song, para [0050]).

Response to Arguments

Applicant’s arguments with respect to claims 1-2, 5, 7, 9, 11-12, 15, 17, and 19-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SELENA SABAH NAHRA whose telephone number is (571)272-6115. The examiner can normally be reached Monday-Thursday 7:00 AM - 5:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung Sough can be reached at (571) 272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /S.S.N./Examiner, Art Unit 2192 /S. Sough/SPE, Art Unit 2192
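The claim mapping above turns on a fairly concrete pipeline: per-node historical traffic patterns keyed to time of day, a mapping from that traffic to power usage, a neural network trained on the mapping, and allocation of a task to the node predicted to best absorb it at a future time. The sketch below is purely illustrative, assembled only to make that pipeline easier to follow; it is not code from the application or from Sigal, Wu, Gill, Song, or any other cited reference, and every name, shape, and the capacity check are hypothetical.

```python
# Hypothetical sketch of the allocation flow recited in independent claims 1/11/20,
# as characterized in the rejection above. All data, names, and the capacity check
# are invented for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_nodes, hours = 4, 24

# "Historical traffic pattern" per node, keyed to time of day (tasks observed per hour).
traffic = rng.poisson(lam=20, size=(n_nodes, hours)).astype(float)

# "Mapping" of traffic to power usage (watts) for each node and hour (synthetic relationship).
power = 50.0 + 3.0 * traffic + rng.normal(0.0, 2.0, size=traffic.shape)

# Train a neural network on (node, hour, traffic) -> power usage.
X = np.array([[node, hour, traffic[node, hour]]
              for node in range(n_nodes) for hour in range(hours)])
y = power.flatten()
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)

node_capacity = np.array([300.0, 250.0, 280.0, 260.0])  # hypothetical per-node power caps

def allocate(task_power_requirement: float, future_hour: int) -> int:
    """Predict each node's power usage at the future hour, then pick the node with
    the most headroom for the task's power requirement (its 'operational requirement')."""
    queries = np.column_stack([np.arange(n_nodes),
                               np.full(n_nodes, future_hour),
                               traffic[:, future_hour]])       # naive traffic forecast
    predicted_power = model.predict(queries)
    headroom = node_capacity - (predicted_power + task_power_requirement)
    return int(np.argmax(headroom))                            # the "predicted node"

print("Allocate task to node:", allocate(task_power_requirement=40.0, future_hour=18))
```

Likewise, for the claim 5/15 limitation that the model is "based on a plurality of embeddings", the cited Koehrsen article describes two parallel embedding layers combined by a dot product. A toy version of that structure, again with invented dimensions and data:

```python
# Toy structure mirroring the Koehrsen description: two parallel embedding tables
# whose vectors are combined by a dot product into a single prediction score.
# Dimensions and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
node_embeddings = rng.normal(size=(4, 8))    # one 8-dimensional embedding per node
task_embeddings = rng.normal(size=(10, 8))   # one 8-dimensional embedding per task type

def score(node_id: int, task_id: int) -> float:
    """Dot product of the two embeddings -> a single number used as the prediction."""
    return float(node_embeddings[node_id] @ task_embeddings[task_id])

print(score(2, 7))
```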

Prosecution Timeline

Jan 18, 2023
Application Filed
Aug 05, 2025
Non-Final Rejection — §103, §112
Nov 10, 2025
Response Filed
Jan 13, 2026
Final Rejection — §103, §112 (current)
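The office action above sets a three-month shortened statutory period, extendable with fees up to the six-month statutory maximum, and describes a two-month window relevant to the advisory-action provision. Assuming the Jan 13, 2026 date shown in this timeline is the mailing date (the page does not state this explicitly), the key dates work out as in the small sketch below, which uses python-dateutil for calendar-month arithmetic:

```python
# Reply-window arithmetic for the final rejection, assuming the Jan 13, 2026 timeline
# entry above is the mailing date. Periods follow the office action's own terms: a
# 3-month shortened statutory period (extendable under 37 CFR 1.136(a)), a 6-month
# statutory maximum, and a 2-month window tied to the advisory-action provision.
from datetime import date
from dateutil.relativedelta import relativedelta

mailing_date = date(2026, 1, 13)
advisory_window = mailing_date + relativedelta(months=2)         # 2026-03-13
reply_due_no_extension = mailing_date + relativedelta(months=3)  # 2026-04-13
statutory_maximum = mailing_date + relativedelta(months=6)       # 2026-07-13

print(advisory_window, reply_due_no_extension, statutory_maximum)
```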

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554531
IMPROVING PROCESSOR UTILIZATION
2y 5m to grant • Granted Feb 17, 2026
Patent 12554550
Real Time Optimization Apparatus Using Quantum Non-Fungible Token Contract Ranking for Dynamic Code Evolution
2y 5m to grant • Granted Feb 17, 2026
Patent 12536047
Dynamic Core Allocation Among Containers on a Host
2y 5m to grant • Granted Jan 27, 2026
Patent 12530212
METHOD AND APPARATUS FOR ISOLATED EXECUTION OF COMPUTER CODE WITH A NATIVE CODE PORTION
2y 5m to grant • Granted Jan 20, 2026
Patent 12436793
Virtual Machine Management
2y 5m to grant • Granted Oct 07, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 99% (+66.7%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
