Prosecution Insights
Last updated: April 19, 2026
Application No. 18/373,931

PROACTIVELY PERFORM PLACEMENT OPERATIONS TO PROVIDE RESIZING RECOMMENDATIONS FOR WORKER NODES

Non-Final OA: §101 §103 §112
Filed: Sep 27, 2023
Examiner: MHEIR, ZUHEIR
Art Unit: 2198
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Cloudnatix Inc.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 81% (61 granted / 75 resolved; +26.3% vs TC avg; above average)
Interview Lift: +10.2% (moderate, roughly +10%, on resolved cases with interview)
Avg Prosecution: 3y 5m (typical timeline)
Total Applications: 88 across all art units (13 currently pending)

Statute-Specific Performance

§101: 25.8% (-14.2% vs TC avg)
§103: 46.6% (+6.6% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 75 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this office correspondence.

Priority

Acknowledgment is made of applicant’s claim for provisional Application No. 63/413,613, filed on 10/06/2022.

Drawings

The Drawings filed on 09/27/2023 have been acknowledged.

Claim Objections

Claim 9 is objected to because of the following informalities: the aforementioned claim recites the following language: “…, to add a new worker node or new Pod, or to move a Pod to a new worker node,, to direct the VPC controller cluster to perform the required actions.” (Emphasis Added). The claim appears to contain a typographical error, a repeated comma after the recited word “node”, for which the examiner requests a correction deleting one of the repeated commas. For claim examination purposes, the examiner will interpret the above claim language to read as follows: “…, to add a new worker node or new Pod, or to move a Pod to a new worker node, to direct the VPC controller cluster to perform the required actions.” Appropriate correction is required.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Independent claim 1 recites the following limitations: “A method of managing a set of one or more clusters of worker nodes deployed in a set of one or more virtual private clouds (VPC), wherein the plurality of Pods run on the worker nodes, the method comprising: …” (Emphasis Added)

The aforementioned independent claim recites the following definitive limitation: “the plurality of Pods …” (Emphasis Added). The recitation of “the plurality of Pods” is not explicitly defined in the aforementioned independent claim before the term can be referenced in definitive form. Consequently, there is insufficient antecedent basis for this term/limitation in the claim language, and hence the claim is rejected under 35 U.S.C. 112(b) for indefiniteness.

For the purpose of application examination, and for example, the examiner will interpret the aforementioned claim to read as follows: “A method of managing a set of one or more clusters of worker nodes deployed in a set of one or more virtual private clouds (VPC), wherein a plurality of Pods run on the worker nodes, the method comprising: …”

Claims 2-12 depend from claim 1 and are inherently rejected under 35 U.S.C. 112(b) for indefiniteness for similar reasons as detailed above.
Proper corrections are required.

Additionally, independent claim 13 recites the following limitations: “A non-transitory machine readable medium storing a program for execution by a set of processing units, the program for managing a set of one or more clusters of worker nodes deployed in a set of one or more virtual private clouds (VPC), wherein the plurality of Pods run on the worker nodes, …” (Emphasis Added)

The aforementioned independent claim recites the following definitive limitation: “the plurality of Pods …” (Emphasis Added). The recitation of “the plurality of Pods” is not explicitly defined in the aforementioned independent claim before the term can be referenced in definitive form. Consequently, there is insufficient antecedent basis for this term/limitation in the claim language, and hence the claim is rejected under 35 U.S.C. 112(b) for indefiniteness.

For the purpose of application examination, and for example, the examiner will interpret the aforementioned claim to read as follows: “A non-transitory machine readable medium storing a program for execution by a set of processing units, the program for managing a set of one or more clusters of worker nodes deployed in a set of one or more virtual private clouds (VPC), wherein a plurality of Pods run on the worker nodes, …”

Claims 14-20 depend from claim 13 and are inherently rejected under 35 U.S.C. 112(b) for indefiniteness for similar reasons as detailed above. Proper corrections are required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claims are directed to a process and a system. The claimed process starts by collecting event data regarding various worker nodes deployed in the set of virtual private clouds (VPC); then passes the collected event data through a mapping layer that maps all the data to a common set of data structures for processing to present a unified view of the worker nodes deployed across the set of VPCs; then receives, through a scheduler, a schedule for adjusting a number of worker nodes in a set of worker nodes and dynamically moves Pods among operating worker nodes in order to optimize the deployment of the Pods on the worker nodes as the number of worker nodes increases or decreases.

Step 2A – Prong One – The claims recite an abstract idea

Independent claims 1 and 13 are directed to an abstract idea without significantly more. The claims recite the following limitation: “passing the collected event data through a mapping layer that maps all the data to a common set of data structures for processing to present a unified view of the worker nodes deployed across the set of VPCs.” The recited “mapping layer that maps all the data to a common set of data structures …” is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “worker nodes” and “non-transitory machine readable medium”, nothing in the claim element precludes the steps from practically being performed in a human mind. For example, given some information at hand, a person is mentally capable, or with the aid of pen and paper, of analyzing the information at hand (for example, event data) and mapping it to another set of data structures, which is a mental process.
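For context on the kind of normalization the claimed mapping layer performs, such a layer can be sketched as follows. This is a minimal illustration only, not the applicant's implementation; the provider names, event field names, and the `NodeEvent` structure are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class NodeEvent:
    """Common data structure every provider-specific event is mapped into."""
    vpc: str
    node: str
    cpu_pct: float
    mem_pct: float

# Each cloud reports events with different field names and units; one mapper
# per provider emits the common structure, giving a single unified view.
MAPPERS = {
    "aws": lambda e: NodeEvent(e["vpcId"], e["instanceId"], e["cpuUtil"], e["memUtil"]),
    "gcp": lambda e: NodeEvent(e["network"], e["name"], e["cpu"] * 100, e["memory"] * 100),
}

def unified_view(raw_events):
    """Map heterogeneous (provider, event) pairs to NodeEvent records."""
    return [MAPPERS[provider](event) for provider, event in raw_events]
```

Downstream processing then operates on `NodeEvent` records only, regardless of which VPC or cloud provider produced the raw event.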
As explained above, a process that “maps all the data to a common set of data structures …” is nothing more than an abstract idea. Consequently, if a claim limitation, under its broadest reasonable interpretation, covers an abstract idea that includes a series of steps that recite mental steps, but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of “Abstract Ideas”. Accordingly, the aforementioned claims recite abstract ideas.

Step 2A – Prong Two – The abstract idea is not integrated into a practical application

This judicial exception is not integrated into a practical application. In particular, the aforementioned claims recite the additional limitation “collecting, through a common interface, event data regarding various worker nodes deployed in the set of VPCs.” The recited language of “collecting, …, event data, …” is considered to be extra-solution activity of mere data gathering. In this context, “collecting” is considered data manipulation for simply enabling a person to deal with information/data and to analyze its content, which is an insignificant extra-solution activity to the judicial exception, where extra-solution activity includes both pre-solution and post-solution activity. In this example, the aforementioned claim limitations amount to a mere data-gathering step and are considered insignificant extra-solution activity because they are a mere nominal or tangential addition to the claim, a mere generic process of transmission of collected and analyzed data, see MPEP 2106.05(g).
Further, the aforementioned claim recites the following limitation: “passing the collected event data through a mapping layer …” This recited language of “passing the collected event data, …” is considered to be extra-solution activity of mere data transmission, which is data manipulation for simply enabling a person to deal with information/data. This process is an insignificant extra-solution activity to the judicial exception, where extra-solution activity includes both pre-solution and post-solution activity. In this example, the aforementioned claim limitations amount to a mere data-transmission step and are considered insignificant extra-solution activity because they are a mere nominal or tangential addition to the claim, a mere generic process of transmission of collected and analyzed data, see MPEP 2106.05(g).

Additionally, the aforementioned claim recites the following limitation: “receiving, through a scheduler, a schedule for adjusting a number of worker nodes in a set of worker nodes and dynamically move the Pods among operating worker nodes in order to optimize the deployment of the Pods on the worker nodes as the number of worker nodes increases or decreases.” This recited language of “receiving, through a scheduler, a schedule for adjusting a number of worker nodes, …” is considered to be extra-solution activity of mere data gathering, which is data manipulation for simply enabling a person to deal with information/data. This process is an insignificant extra-solution activity to the judicial exception, where extra-solution activity includes both pre-solution and post-solution activity, and is a mere nominal or tangential addition to the claim, a mere generic process of transmission of collected and analyzed data, see MPEP 2106.05(g).
Furthermore, the aforementioned claim recites “dynamically move the Pods among operating worker nodes in order to optimize the deployment of the Pods, …” The recited language of “dynamically move the Pods among operating worker nodes …” is considered mere instruction activity, recognized by the courts as well-understood, routine, and conventional activity when claimed in a merely generic manner. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983; see MPEP 2106.05(f).

Additionally, the aforementioned claim recites the following limitations: “using the schedule to direct, through the common interface, a set of controllers associated with the set of worker nodes to adjust a number of worker nodes and to dynamically move the Pods among the operating worker nodes.” The recited language of “adjust a number of worker nodes and to dynamically move the Pods …” is again considered mere instruction activity, recognized by the courts as well-understood, routine, and conventional activity when claimed in a merely generic manner. Thus, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983; see MPEP 2106.05(f) and 2106.05(g).
The additional elements recited in the aforementioned claims are “worker nodes” and “non-transitory machine readable medium”. The additional elements of using a computer node, storage device(s), and processor(s) to obtain, analyze, and manipulate information amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. See MPEP 2106.05(f).

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The insignificant extra-solution activities identified above, which include the data-gathering activities (“collecting, …”, “receiving”) and data-transmission activities (“passing”), are also considered mere instruction activities, which are recognized by the courts as well-understood, routine, and conventional activities when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II): (i) receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); (v) presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Additionally, the “worker nodes” and “non-transitory machine readable medium” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components that are well-known and conventional and cannot provide an inventive concept.
Thus, there are no additional elements that amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that any combination of elements improves the functioning of a computer or improves any other technology. The claims are not patent eligible.

Claim 2 is dependent on claim 1 and includes all the limitations of claim 1. Further, the aforementioned claim recites the additional limitation of “wherein the schedule specifies a first time period for reducing the number of worker nodes in the set due to an expected drop in traffic to the Pods deployed on the worker nodes.” The recited language of “the schedule specifies a first time period …” recites an abstract idea of defining a limitation based on some criteria, which does not amount to significantly more than the abstract idea.

Claim 3 is dependent on claim 2 and includes all the limitations of claim 2. The aforementioned claim recites the additional limitation of “wherein the schedule specifies a second time period for increasing the number of worker nodes in the set due to an expected rise in traffic to the Pods deployed on the worker nodes.” The recited language of “the schedule specifies a second time period …” again recites a mere mental step of defining a limitation based on another criterion of evaluation, which does not amount to significantly more than the abstract idea.

Claim 4 is dependent on claim 3 and includes all the limitations of claim 3.
The aforementioned claim recites the additional limitation of “wherein the first and second time periods are one of different times within one day and different days in the week.” The recited language of claim 4 is yet another mental step of defining a limitation based on another criterion of evaluation, which does not amount to significantly more than the abstract idea.

Claim 5 is dependent on claim 1 and includes all the limitations of claim 1. The aforementioned claim recites the additional limitation of “wherein said collecting, passing, receiving, and directing are performed by a global controller cluster that operates outside of the VPCs.” At this step, the claim’s defined functions are mere insignificant extra-solution activities of data gathering/transmission, steps to be performed by a generic computer applying mere instruction activities, which are recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner, and which do not amount to significantly more than the abstract idea.

Claim 6 is dependent on claim 5 and includes all the limitations of claim 5.
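The schedule-driven scaling recited in claims 2-4 (a first time period that reduces the node count ahead of an expected traffic drop, and a second period that raises it again) can be pictured with a small sketch. The window boundaries, node counts, and function name here are hypothetical, chosen only to illustrate the shape of such a schedule:

```python
def target_node_count(hour, baseline=10, low=4):
    """Return the desired worker-node count for a given hour of the day.

    Hypothetical schedule: traffic to the Pods is expected to drop
    overnight (first time period, 00:00-06:00), so fewer nodes are kept
    running; the baseline count applies for the rest of the day (second
    time period, when traffic is expected to rise again).
    """
    if 0 <= hour < 6:   # first period: expected drop in traffic
        return low
    return baseline      # second period: expected rise in traffic
```

The same idea extends to different days of the week rather than hours of the day, matching the alternatives recited in claim 4.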
The aforementioned claim recites the additional limitations of “wherein said directing comprises: at a first time before a first time period during which the schedule specifies that the number of worker nodes should be reduces, executing a placement process, at the global controller cluster, to identify new worker-node assignments for at least a subset of the Pods operating on existing worker nodes in order to reduce the number of worker nodes that are operating during the first time period.” At this step, the claim recites “executing a placement process, …”, which discloses steps to be performed by a generic computer applying mere instruction activities, recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner, which does not amount to significantly more than the abstract idea.

Further, the aforementioned claim recites the following: “after the placement process identifies new worker-node assignments, communicating through the interface with any VPC controller cluster that has to shutdown an existing worker node, to add a new worker node or to move a Pod to a new worker node, to direct the VPC controller cluster to perform the required actions.” Again at this step, the claim’s recited language of “to shutdown an existing worker node, to add a new worker node or to move a Pod to a new worker node, to direct the VPC controller cluster to perform the required actions” discloses steps to be performed by a generic computer applying mere instruction activities, recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner, which does not amount to significantly more than the abstract idea.

Claim 7 is dependent on claim 6 and includes all the limitations of claim 6.
The aforementioned claim recites the additional limitation of “wherein said directing further comprises terminating a subset of Pods that are performing redundant operations that are forecast to be adequately performed during the first period by another subset of Pods that will remain operation during the first period.” At this step, the claim’s recited language of “terminating a subset of Pods …” discloses steps to be performed by a generic computer applying mere instruction activities, recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner, which does not amount to significantly more than the abstract idea.

Claim 8 is dependent on claim 6 and includes all the limitations of claim 6. The aforementioned claim recites the additional limitations of “wherein said directing comprises: at a second time during the first time period, executing a placement process, at the global controller cluster, to identify new worker-node assignments for a set of new Pods to deploy, a set of new worker nodes to deploy, or a set of new Pods and new worker nodes to deploy in order to increase the number of Pods, worker nodes or Pods and worker nodes that are operating during the second time period and to spread existing or new Pods among any set of new worker nodes that are deployed.” At this step, the claim recites “executing a placement process, …”, which discloses steps to be performed by a generic computer applying mere instruction activities, recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner, which does not amount to significantly more than the abstract idea.
Further, the aforementioned claim recites the following: “after the placement process identifies new worker-node assignments, communicating through the interface with any VPC controller cluster that has to deploy any new worker node or new Pod, to add a new worker node or new Pod, or to move a Pod to a new worker node, to direct the VPC controller cluster to perform the required actions.” Again at this step, the claim’s recited language of “to deploy any new worker node or new Pod, to add a new worker node or new Pod, or to move a Pod to a new worker node, to direct the VPC controller cluster to perform the required actions” discloses steps to be performed by a generic computer applying mere instruction activities, recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner, which does not amount to significantly more than the abstract idea.

Claim 9 is dependent on claim 6 and includes all the limitations of claim 6.
The aforementioned claim recites the additional limitations of “wherein said directing comprises: at a first time before a first time period during which the schedule specifies that the number of worker nodes should be increased, executing a placement process, at the global controller cluster, to identify new worker-node assignments for a set of new Pods to deploy, a set of new worker nodes to deploy, or a set of new Pods and new worker nodes to deploy in order to increase the number of worker nodes that are operating during the first time period and to spread the Pods to the new worker nodes.” At this step, the claim recites “executing a placement process, …”, which discloses steps to be performed by a generic computer applying mere instruction activities, recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner, which does not amount to significantly more than the abstract idea.

Further, the aforementioned claim recites the following: “after the placement process identifies new worker-node assignments, communicating through the interface with any VPC controller cluster that has to deploy any new worker node or new Pod, to add a new worker node or new Pod, or to move a Pod to a new worker node,, to direct the VPC controller cluster to perform the required actions.” Again at this step, the claim’s recited language of “to deploy any new worker node or new Pod, to add a new worker node or new Pod, or to move a Pod to a new worker node,[[,]] to direct the VPC controller cluster to perform the required actions” discloses steps to be performed by a generic computer applying mere instruction activities, recognized by the courts as well-understood, routine, and conventional activities when claimed in a merely generic manner, which does not amount to significantly more than the abstract idea.
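A placement process of the kind recited in claims 6-9, identifying new worker-node assignments so that Pods are packed onto fewer nodes ahead of a scale-down window, is often implemented as a greedy bin-packing pass. The sketch below is an illustrative stand-in under that assumption, not the applicant's claimed algorithm; the function name and the single-resource (CPU-only) capacity model are hypothetical:

```python
def plan_consolidation(pods, node_capacity):
    """Greedy first-fit-decreasing placement.

    Assigns each Pod (name, cpu_request) to the first node with spare
    capacity, opening a new node only when none fits. Returns a mapping
    of node index -> list of Pod names; fewer keys means fewer worker
    nodes need to keep running during the scale-down period.
    """
    nodes = []        # remaining CPU capacity per (virtual) node
    assignments = {}
    for name, cpu in sorted(pods, key=lambda p: -p[1]):  # largest first
        for i, free in enumerate(nodes):
            if cpu <= free:           # fits on an already-open node
                nodes[i] -= cpu
                assignments.setdefault(i, []).append(name)
                break
        else:                         # no node has room: open a new one
            nodes.append(node_capacity - cpu)
            assignments.setdefault(len(nodes) - 1, []).append(name)
    return assignments
```

After such a plan is computed, a controller would issue the move/shutdown commands the claims describe; real placement engines also weigh affinity rules, disruption budgets, and memory, which this sketch omits.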
Claim 10 is dependent on claim 1 and includes all the limitations of claim 1. Further, the aforementioned claim recites the additional limitation of “wherein the schedule is received from an administrator”, which recites an insignificant extra-solution activity of data transmission/gathering, recognized by the courts as a well-understood, routine, and conventional activity when claimed in a merely generic manner, which does not amount to significantly more than the abstract idea.

Claim 11 is dependent on claim 1 and includes all the limitations of claim 1. Further, the aforementioned claim recites the additional limitation of “analyzing historical usage data from a set of VPCs to identify one or more periods during which worker nodes were under utilized.” The recited language of “analyzing historical usage data …” recites an abstract idea of information evaluation according to a certain criterion, which does not amount to significantly more than the abstract idea.

Claim 12 is dependent on claim 11 and includes all the limitations of claim 11. Further, the aforementioned claim recites the additional limitations of “providing the schedule as a recommendation to an administrator, receiving input from the administrator accepting, rejecting or modifying the schedule”, which recite an insignificant extra-solution activity of data transmission/gathering, recognized by the courts as a well-understood, routine, and conventional activity when claimed in a merely generic manner, which does not amount to significantly more than the abstract idea. Further, the aforementioned claim recites the following limitation: “modifying the schedule when the input modifies the schedule”, which recites an abstract idea of a mental process to update/adjust a schedule based on some input, which does not amount to significantly more than the abstract idea.
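The analysis recited in claim 11, identifying from historical usage data the periods during which worker nodes were underutilized, can be sketched as a simple threshold scan over per-hour averages. The 30% threshold, function name, and data shape are hypothetical illustrations, not details from the application:

```python
def underutilized_periods(hourly_cpu_pct, threshold=30.0):
    """Given average cluster CPU utilization for each hour of the day
    (a list of 24 floats), return the hours falling below the threshold.
    These hours are candidate scale-down windows that could be offered
    to an administrator as a schedule recommendation (claim 12)."""
    return [hour for hour, pct in enumerate(hourly_cpu_pct) if pct < threshold]
```

A production system would aggregate over weeks of data and smooth out outliers before recommending a window, but the core evaluation has this shape.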
Independent claim 13 recites similar limitations to claim 1 and is therefore rejected for the same reasons as explained above. Dependent claims 14-20 recite similar limitations to dependent claims 2-12 and are therefore rejected for the same reasons as explained above. The aforementioned claims are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 10-11 and 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication (US 20240111588 A1) to Wei et al. (hereinafter “WEI”), in view of US Patent Application Publication (US 20170199770 A1) to Peteva et al. (hereinafter “PETEVA”).

Regarding claim 1, WEI teaches a method of managing a set of one or more clusters of worker nodes deployed in a set of one or more virtual private clouds (VPC), wherein the plurality of Pods run on the worker nodes (WEI discloses in Fig. 1, Para.
[0030]: “The computing resources provided by public cloud 105 are typically implemented by virtual computing environments that run on various computers making up the computers of host physical machine set 142, which is the universe of physical computers in and/or available to public cloud 105. The virtual computing environments (VCEs) typically take the form of virtual machines from virtual machine set 143 and/or containers from container set 144.”; and Fig. 1, Para. [0044]: “Process manager 208 may be implemented using process management code 200 in FIG. 1. Process manager 208 intelligently manages a plurality of processes running in worker node cluster 204.”; and Fig. 2, Para. [0045]: “…, worker node cluster 204 includes worker node 1 (216), worker node 2 (218), and worker node 3 (220). A worker node is a machine, either physical or virtual, where containers (i.e., process workloads) are deployed. A pod includes a container and a specification for how to run the container. The worker node hosts the pod, which includes the components of the process workload. However, it should be noted that worker node cluster 204 is intended as an example only and may include any number of worker nodes, pods, and processes.”), the method comprising: collecting, through a common interface, event data regarding various worker nodes deployed in the set of VPCs (WEI discloses in Para. [0038]: “The process analyzer analyzes historical process execution data and generates process execution statistics data, such as, for example, minimum, average, and maximum execution times of each respective process, based on the analysis of the historical process execution data. The process predictor generates and executes a process scheduling plan for an additional process (e.g., a short-running process) to be deployed on the cluster of worker nodes.”; and Fig. 2, Fig. 
4A/B, Para. [0056]: “…, generating statistical data for a process 400 includes worker node 1 402, worker node 2 404, and historical process statistics data 406. It should be noted that illustrative embodiments utilize a process analyzer, such as, for example, process analyzer 238 in FIG. 2, to generate historical process statistics data 406 by analyzing process execution data corresponding to all processes that have previously run on the serverless workflow cloud process management environment.”); passing the collected event data through a mapping layer that maps all the data to a common set of data structures for processing to present a unified view of the worker nodes deployed across the set of VPCs (WEI discloses in Para. [0038]: “The process analyzer analyzes historical process execution data and generates process execution statistics data, such as, for example, minimum, average, and maximum execution times of each respective process, based on the analysis of the historical process execution data. The process predictor generates and executes a process scheduling plan for an additional process (e.g., a short-running process) to be deployed on the cluster of worker nodes.”; and Fig. 2, Fig. 4A/B, Para. [0056]: “…, generating statistical data for a process 400 includes worker node 1 402, worker node 2 404, and historical process statistics data 406. It should be noted that illustrative embodiments utilize a process analyzer, such as, for example, process analyzer 238 in FIG. 2, to generate historical process statistics data 406 by analyzing process execution data corresponding to all processes that have previously run on the serverless workflow cloud process management environment.”; and Fig. 2, Fig. 4A/B, Para. [0059]: “A user, such as, for example, administrative user 248 in FIG. 
2, submits additional process deployment request 434 for process C 436 to a controller node using a client device, such as, for example, controller node 202 and client device 206 in FIG. 2. Process C 436 includes task C1 438 and task C2 440. Task C1 438 and task C2 440 are not sleep tasks and the average execution time of task C1 438 and task C2 440 is 1 minute and 3 minutes, respectively, according to historical process statistics data 406.”, the examiner notes that the reference discloses ample generated historical process data that is mapped to process task parameters, see Fig. 4B, that includes for example, Avg Execution Time, Max Execution Time, etc. to that of event data being mapped to a common set of data structures for processing); However, WEI does not explicitly teach receiving, through a scheduler, a schedule for adjusting a number of worker nodes in a set of worker nodes and dynamically move the Pods among operating worker nodes in order to optimize the deployment of the Pods on the worker nodes as the number of worker nodes increases or decreases; using the schedule to direct, through the common interface, a set of controllers associated with the set of worker nodes to adjust a number of worker nodes and to dynamically move the Pods among the operating worker nodes. But PETEVA teaches receiving, through a scheduler, a schedule for adjusting a number of worker nodes in a set of worker nodes and dynamically move the Pods among operating worker nodes in order to optimize the deployment of the Pods on the worker nodes as the number of worker nodes increases or decreases (PETEVA discloses in Para. [0010]: “…, the present disclosure further allows for instant and scheduled scaling that provides the users (e.g., the hosting account owners, managers, and/or administrators) with the ability to instantly change the resource limits of a container and/or to configure scaling events based on a user-defined schedule (year, date and time).”; and Para. 
[0014]: “…, responsive to the determination that at least one of the compared resource usage statistics exceeds the first set of threshold values, transmitting, via the processor, a command (e.g., API function) to the first host computing device to migrate the container associated with the compared resource usage statistics from the first host computing device to a second host computing device.”; and Fig. 8B, Para. [0153]: “…, the number of host nodes and containers can be increased and/or decreased by the load balancer node 814 according to customer-defined criteria.”); using the schedule to direct, through the common interface, a set of controllers associated with the set of worker nodes to adjust a number of worker nodes and to dynamically move the Pods among the operating worker nodes (PETEVA discloses in Para. [0014]: “…, responsive to the determination that at least one of the compared resource usage statistics exceeds the first set of threshold values, transmitting, via the processor, a command (e.g., API function) to the first host computing device to migrate the container associated with the compared resource usage statistics from the first host computing device to a second host computing device.”; and Fig. 7B/C, Para. [0140]: “Turning to FIGS. 7B and 7C, graphical user interfaces for configuring on-demand auto-scaling in a hosting service account are presented, according to an illustrative embodiment of the invention. FIG. 7B illustrates an example scheduled scaling that comprises the user-defined scaling events 606, and FIG. 7C illustrates an example on-demand scaling that comprises the user-defined scaling policy 604.”; and Fig. 8B, Para. [0153]: “…, the number of host nodes and containers can be increased and/or decreased by the load balancer node 814 according to customer-defined criteria.”). 
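The scheduled-scaling behavior attributed to PETEVA above (user-defined scaling events tied to a date and time, applied to a pool of nodes) can be sketched as follows. This is an illustrative sketch only, not code from either reference or the claimed invention; the class, function names, dates, and node counts are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch of user-defined scheduled scaling (cf. PETEVA Fig. 7B):
# each event pairs a user-specified date/time with a target node count.

@dataclass
class ScalingEvent:
    when: datetime          # user-specified date and time for the scaling event
    target_node_count: int  # scaling limit to apply at that time

def due_events(events, now):
    """Return events whose scheduled time has arrived, oldest first."""
    return sorted((e for e in events if e.when <= now), key=lambda e: e.when)

def apply_schedule(events, now, current_count):
    """Enforce the most recent due event; keep the current count if none is due."""
    due = due_events(events, now)
    return due[-1].target_node_count if due else current_count

events = [
    ScalingEvent(datetime(2026, 4, 1, 9, 0), target_node_count=10),   # scale up for peak
    ScalingEvent(datetime(2026, 4, 1, 18, 0), target_node_count=3),   # scale down off-peak
]
print(apply_schedule(events, datetime(2026, 4, 1, 12, 0), current_count=5))  # 10
```

The same structure accommodates the scale-down and scale-up windows of claims 2-4: events at different times of day, or on different days of the week, simply carry different target counts.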
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of WEI (disclosing methods for monitoring and management of virtual processing environments) to include the teachings of PETEVA (disclosing methods for management of cloud hosting systems featuring scaling and load balancing with containers) and arrive at an intelligent process management method that, based on a resource scaling scheduling plan, provides users the ability to instantly change resource limits and availability by implementing the load balancing and scaling methods, thereby improving the efficiency and performance of distributed computing resources, as recognized by PETEVA (Abstract, Para. [0003]-[0011]). In addition, the references of WEI and PETEVA are analogous art directed to the same field of endeavor of cloud resource allocation and management. Regarding claim 13, the aforementioned claim recites similar limitations to claim 1, and is therefore rejected for similar reasons as discussed above. Regarding claim 2, the combination of WEI and PETEVA teach the limitations of claim 1. Further, PETEVA teaches wherein the schedule specifies a first time period for reducing the number of worker nodes in the set due to an expected drop in traffic to the Pods deployed on the worker nodes (PETEVA Fig. 6, Fig. 7B/C, Para. [0140]: “Turning to FIGS. 7B and 7C, graphical user interfaces for configuring on-demand auto-scaling in a hosting service account are presented, according to an illustrative embodiment of the invention. FIG. 7B illustrates an example scheduled scaling that comprises the user-defined scaling events 606, and FIG. 7C illustrates an example on-demand scaling that comprises the user-defined scaling policy 604. As shown in FIG.
7B, the user interface allows the user 102 to specify a date 716 and time 718 (e.g., in hours) for the scaling event as well as the scaling limit 720. In addition to a schedule scale up, in some implementations, the user interface can also receive a user-defined scheduled scale down of the container resources.”; and Para. [0043]: “…, subsequent to the first host computing device adjusting the one or more resource allocations of the given compared container to the elevated resource level, the method includes comparing, via a processor of the first host computing device, (i) the average of one or more resource usage statistics of each container operating on the first host computing device to (ii) a third set of threshold values (e.g. down-scaling threshold) associated with the given user account.”; and Fig. 8B, Para. [0153]: “…, the number of host nodes and containers can be increased and/or decreased by the load balancer node 814 according to customer-defined criteria.”, the examiner notes that the reference discloses container resource usage, i.e., traffic, occurring at a specified time/date/schedule so as to increase or decrease the host nodes/containers, corresponding to the claimed schedule specifying a resource decrease/increase based on traffic and time/date). Regarding claim 14, the aforementioned claim recites similar limitations to claim 2, and is therefore rejected for similar reasons as discussed above. Regarding claim 3, the combination of WEI and PETEVA teach the limitations of claim 2. Further, PETEVA teaches wherein the schedule specifies a second time period for increasing the number of worker nodes in the set due to an expected rise in traffic to the Pods deployed on the worker nodes (PETEVA Fig. 6, Fig. 7B/C, Para. [0140]: “Turning to FIGS. 7B and 7C, graphical user interfaces for configuring on-demand auto-scaling in a hosting service account are presented, according to an illustrative embodiment of the invention. FIG.
7B illustrates an example scheduled scaling that comprises the user-defined scaling events 606, and FIG. 7C illustrates an example on-demand scaling that comprises the user-defined scaling policy 604. As shown in FIG. 7B, the user interface allows the user 102 to specify a date 716 and time 718 (e.g., in hours) for the scaling event as well as the scaling limit 720. In addition to a schedule scale up, in some implementations, the user interface can also receive a user-defined scheduled scale down of the container resources.”; and Para. [0043]: “…, subsequent to the first host computing device adjusting the one or more resource allocations of the given compared container to the elevated resource level, the method includes comparing, via a processor of the first host computing device, (i) the average of one or more resource usage statistics of each container operating on the first host computing device to (ii) a third set of threshold values (e.g. down-scaling threshold) associated with the given user account.”; and Fig. 8B, Para. [0153]: “…, the number of host nodes and containers can be increased and/or decreased by the load balancer node 814 according to customer-defined criteria.”, the examiner notes that the reference discloses container resource usage, i.e., traffic, occurring at a specified time/date/schedule so as to increase or decrease the host nodes/containers, corresponding to the claimed schedule specifying a resource decrease/increase based on traffic and time/date). Regarding claim 15, the aforementioned claim recites similar limitations to claim 3, and is therefore rejected for similar reasons as discussed above. Regarding claim 4, the combination of WEI and PETEVA teach the limitations of claim 3. Further, PETEVA teaches wherein the first and second time periods are one of different times within one day and different days in the week (PETEVA Fig. 6, Fig. 7B/C, Para. [0140]: “Turning to FIGS.
7B and 7C, graphical user interfaces for configuring on-demand auto-scaling in a hosting service account are presented, according to an illustrative embodiment of the invention. FIG. 7B illustrates an example scheduled scaling that comprises the user-defined scaling events 606, and FIG. 7C illustrates an example on-demand scaling that comprises the user-defined scaling policy 604. As shown in FIG. 7B, the user interface allows the user 102 to specify a date 716 and time 718 (e.g., in hours) for the scaling event as well as the scaling limit 720. In addition to a schedule scale up, in some implementations, the user interface can also receive a user-defined scheduled scale down of the container resources.”; and Para. [0043]: “…, subsequent to the first host computing device adjusting the one or more resource allocations of the given compared container to the elevated resource level, the method includes comparing, via a processor of the first host computing device, (i) the average of one or more resource usage statistics of each container operating on the first host computing device to (ii) a third set of threshold values (e.g. down-scaling threshold) associated with the given user account.”; and Fig. 8B, Para. [0153]: “…, the number of host nodes and containers can be increased and/or decreased by the load balancer node 814 according to customer-defined criteria.”, the examiner notes that the reference discloses container resource usage, i.e., traffic, occurring at a specified time/date/schedule so as to increase or decrease the host nodes/containers, corresponding to the claimed schedule specifying a resource decrease/increase based on traffic and time/date). Regarding claim 16, the aforementioned claim recites similar limitations to claim 4, and is therefore rejected for similar reasons as discussed above. Regarding claim 5, the combination of WEI and PETEVA teach the limitations of claim 1.
Further, WEI teaches wherein said collecting, passing, receiving, and directing are performed by a global controller cluster that operates outside of the VPCs (WEI Fig. 1/2, Para. [0044]: “Process manager 208 may be implemented using process management code 200 in FIG. 1. Process manager 208 intelligently manages a plurality of processes running in worker node cluster 204.”; and Fig. 2, Para. [0042]: “…, serverless workflow cloud process management environment 201 includes controller node 202, worker node cluster 204, and client device 206. Controller node 202, worker node cluster 204, and client device 206 may be, for example, computer 101, host physical machine set 142, and EUD 103, respectively, in FIG. 1.”; and Fig. 2, Para. [0045]: “…, worker node cluster 204 includes worker node 1 (216), worker node 2 (218), and worker node 3 (220). A worker node is a machine, either physical or virtual, where containers (i.e., process workloads) are deployed. A pod includes a container and a specification for how to run the container. The worker node hosts the pod, which includes the components of the process workload.”, the examiner notes that the reference discloses “Controller Node 202,” which is a process manager residing outside the worker node virtual cluster 204, corresponding to the claimed global controller cluster that operates outside of the VPCs). Regarding claim 17, the aforementioned claim recites similar limitations to claim 5, and is therefore rejected for similar reasons as discussed above. Regarding claim 10, the combination of WEI and PETEVA teach the limitations of claim 1. Further, WEI teaches wherein the schedule is received from an administrator (WEI discloses in Fig. 2 an “Admin User (248)”; and the reference discloses in Fig. 2/4, Para. [0062]: “A user, such as, for example, administrative user 248 in FIG.
2, submits additional process deployment request 524 for process C 526 to a controller node using a client device, such as, for example, controller node 202 and client device 206 in FIG. 2.”). Regarding claim 11, the combination of WEI and PETEVA teach the limitations of claim 1. Further, WEI teaches analyzing historical usage data from a set of VPCs to identify one or more periods during which worker nodes were underutilized (WEI Para. [0038]: “To ensure improved utilization of resources on the cluster of worker nodes in the serverless workflow cloud environment, illustrative embodiments add a plurality of different components, such as, for example, a process analyzer, a process predictor, a backfill handler, and a total timeout handler, to a controller node of the serverless workflow cloud environment. The process analyzer analyzes historical process execution data and generates process execution statistics data, such as, for example, minimum, average, and maximum execution times of each respective process, based on the analysis of the historical process execution data. The process predictor generates and executes a process scheduling plan for an additional process (e.g., a short-running process) to be deployed on the cluster of worker nodes”); based on the analysis, producing the schedule (WEI Para. [0038]: “To ensure improved utilization of resources on the cluster of worker nodes in the serverless workflow cloud environment, illustrative embodiments add a plurality of different components, such as, for example, a process analyzer, a process predictor, a backfill handler, and a total timeout handler, to a controller node of the serverless workflow cloud environment. The process analyzer analyzes historical process execution data and generates process execution statistics data, such as, for example, minimum, average, and maximum execution times of each respective process, based on the analysis of the historical process execution data.
The process predictor generates and executes a process scheduling plan for an additional process (e.g., a short-running process) to be deployed on the cluster of worker nodes”). Claims 6-9 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication (US 20240111588 A1) issued to Wei et al. (hereinafter as “WEI”), in view of US Patent Application Publication (US 20170199770 A1) issued to Peteva et al. (hereinafter as “PETEVA”), and in view of US Patent (US 10761889 B1) issued to Jain et al. (hereinafter as “JAIN”). Regarding claim 6, the combination of WEI and PETEVA teach the limitations of claim 5. However, the combination of WEI and PETEVA do not explicitly teach wherein said directing comprises: at a first time before a first time period during which the schedule specifies that the number of worker nodes should be reduced, executing a placement process, at the global controller cluster, to identify new worker-node assignments for at least a subset of the Pods operating on existing worker nodes in order to reduce the number of worker nodes that are operating during the first time period; after the placement process identifies new worker-node assignments, communicating through the interface with any VPC controller cluster that has to shutdown an existing worker node, to add a new worker node or to move a Pod to a new worker node, to direct the VPC controller cluster to perform the required actions. But JAIN teaches at a first time before a first time period during which the schedule specifies that the number of worker nodes should be reduced, executing a placement process, at the global controller cluster, to identify new worker-node assignments for at least a subset of the Pods operating on existing worker nodes in order to reduce the number of worker nodes that are operating during the first time period (JAIN Col.
2, line (11): “…, systems and methods use a combination of on-demand control, observational control and predictive control to determine whether to scale up or down the instances of an instance group.”; and Fig. 6, Col. 2, line (52): “FIG. 6 is a simplified diagram showing a method for scaling down an instance group of a computing platform according to one embodiment of the present invention”; and Col. 3, line (9): “Benefits of some embodiments include maximizing the percentage of provisioned resources that are allocated to pods by the computing platform at any given time. …, systems and methods are configured to terminate instances and/or autoscale instance groups of a computing platform.”; and Fig. 1, Col. 5, line (57): “…, the terminator 112 is configured to evaluate on a continuous basis whether an instance associated with an instance group is eligible for termination. In some examples of scaling up instance groups, the autoscaler 110 is configured to run bin packing, including the pods that were deemed unschedulable by the scheduler 116, and scale up the number of bins (instances) that the autoscaler 110 requires bin packing pods while respecting utilization targets and/or maximum job latency. In certain examples of scaling down instance groups, the autoscaler 110 is configured to periodically evaluate instances that are below utilization targets and attempt to terminate ones that are least impactful based on runtime and/or priority. 
In other examples, the autoscaler 110 is configured to scale down instance groups in the least destructive way possible, initially preferring to allow all pods to exit gracefully at the cost of utilization over pre-empting pods before the pods run to completion to increase efficiency.”); after the placement process identifies new worker-node assignments, communicating through the interface with any VPC controller cluster that has to shutdown an existing worker node, to add a new worker node or to move a Pod to a new worker node, to direct the VPC controller cluster to perform the required actions (JAIN Fig. 1, Col. 5, line (57): “…, the terminator 112 is configured to evaluate on a continuous basis whether an instance associated with an instance group is eligible for termination. In some examples of scaling up instance groups, the autoscaler 110 is configured to run bin packing, including the pods that were deemed unschedulable by the scheduler 116, and scale up the number of bins (instances) that the autoscaler 110 requires bin packing pods while respecting utilization targets and/or maximum job latency. In certain examples of scaling down instance groups, the autoscaler 110 is configured to periodically evaluate instances that are below utilization targets and attempt to terminate ones that are least impactful based on runtime and/or priority. In other examples, the autoscaler 110 is configured to scale down instance groups in the least destructive way possible, initially preferring to allow all pods to exit gracefully at the cost of utilization over pre-empting pods before the pods run to completion to increase efficiency.”). 
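The JAIN-style scale-down pass cited above — periodically evaluating instances below a utilization target and terminating the least impactful ones based on runtime and/or priority — might be sketched as follows. The function, field names, and thresholds are hypothetical illustrations of the cited technique, not code from JAIN.

```python
# Hedged sketch: rank under-utilized instances by impact (lowest priority first,
# then shortest runtime) and nominate the least impactful for termination.

def scale_down_candidates(instances, utilization_target=0.5, max_terminations=1):
    """instances: dicts with 'name', 'utilization', 'priority', 'runtime_s'."""
    under = [i for i in instances if i["utilization"] < utilization_target]
    # Least impactful first: lowest priority, then shortest runtime.
    under.sort(key=lambda i: (i["priority"], i["runtime_s"]))
    return [i["name"] for i in under[:max_terminations]]

fleet = [
    {"name": "node-a", "utilization": 0.2, "priority": 1, "runtime_s": 300},
    {"name": "node-b", "utilization": 0.9, "priority": 2, "runtime_s": 900},
    {"name": "node-c", "utilization": 0.3, "priority": 0, "runtime_s": 60},
]
print(scale_down_candidates(fleet))  # ['node-c']
```

Only instances below the utilization target are considered, mirroring the quoted passage; a fuller implementation would also let pods exit gracefully before termination.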
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of WEI (disclosing methods for monitoring and management of virtual processing environments) and PETEVA (disclosing methods for management of cloud hosting systems featuring scaling and load balancing with containers) to include the teachings of JAIN (disclosing methods for autoscaling instance groups of computing platforms) and arrive at a method to scale an instance group of a computing platform by determining whether to scale the instance group up or down using statistical feedback data, thereby improving system efficiency and resource utilization, as recognized by JAIN (Abstract, Col. 3). In addition, the references of WEI, PETEVA and JAIN are analogous art directed to the same field of endeavor of cloud resource allocation and management. Regarding claim 18, the aforementioned claim recites similar limitations to claim 6, and is therefore rejected for similar reasons as discussed above. Regarding claim 7, the combination of WEI, PETEVA and JAIN teach the limitations of claim 6. Further, JAIN teaches wherein said directing further comprises terminating a subset of Pods that are performing redundant operations that are forecast to be adequately performed during the first period by another subset of Pods that will remain operational during the first period (JAIN Fig. 1, Col. 5, line (57): “…, the terminator 112 is configured to evaluate on a continuous basis whether an instance associated with an instance group is eligible for termination.
In some examples of scaling up instance groups, the autoscaler 110 is configured to run bin packing, including the pods that were deemed unschedulable by the scheduler 116, and scale up the number of bins (instances) that the autoscaler 110 requires bin packing pods while respecting utilization targets and/or maximum job latency. In certain examples of scaling down instance groups, the autoscaler 110 is configured to periodically evaluate instances that are below utilization targets and attempt to terminate ones that are least impactful based on runtime and/or priority. In other examples, the autoscaler 110 is configured to scale down instance groups in the least destructive way possible, initially preferring to allow all pods to exit gracefully at the cost of utilization over pre-empting pods before the pods run to completion to increase efficiency.”). Regarding claim 19, the aforementioned claim recites similar limitations to claim 7, and is therefore rejected for similar reasons as discussed above. Regarding claim 8, the combination of WEI, PETEVA and JAIN teach the limitations of claim 6. Further, JAIN teaches wherein said directing comprises: at a second time during the first time period, executing a placement process, at the global controller cluster, to identify new worker-node assignments for a set of new Pods to deploy, a set of new worker nodes to deploy, or a set of new Pods and new worker nodes to deploy in order to increase the number of Pods, worker nodes or Pods and worker nodes that are operating during the second time period and to spread existing or new Pods among any set of new worker nodes that are deployed (JAIN Fig. 1, Col. 12, line (61): “…, the autoscaler 110 is configured to determine a number of new instances associated with the instance group 118.sub.1 based at least in part on the sum equal to the demanded resources for the one or more schedulable pods plus the scheduled resources of the instance group 118.sub.1.
…, the autoscaler 110 is configured to determine the number of new instances associated with the instance group 118.sub.1 by bin packing the one or more schedulable pods into the instances 120.sub.1-m of the instance group 118.sub.1. For example, the autoscaler 110 is configured to increase the number of new instances if the autoscaler 110 is unable to schedule the one or more schedulable pods on the existing instances 120.sub.1-m by bin packing the one or more schedulable pods into the existing instances 120.sub.1-m.”); after the placement process identifies new worker-node assignments, communicating through the interface with any VPC controller cluster that has to deploy any new worker node or new Pod, to add a new worker node or new Pod, or to move a Pod to a new worker node, to direct the VPC controller cluster to perform the required actions (JAIN Fig. 1, Col. 12, line (61): “…, the autoscaler 110 is configured to determine a number of new instances associated with the instance group 118.sub.1 based at least in part on the sum equal to the demanded resources for the one or more schedulable pods plus the scheduled resources of the instance group 118.sub.1. …, the autoscaler 110 is configured to determine the number of new instances associated with the instance group 118.sub.1 by bin packing the one or more schedulable pods into the instances 120.sub.1-m of the instance group 118.sub.1. For example, the autoscaler 110 is configured to increase the number of new instances if the autoscaler 110 is unable to schedule the one or more schedulable pods on the existing instances 120.sub.1-m by bin packing the one or more schedulable pods into the existing instances 120.sub.1-m.”). Regarding claim 20, the aforementioned claim recites similar limitations to claim 8, and is therefore rejected for similar reasons as discussed above. Regarding claim 9, the combination of WEI and PETEVA teach the limitations of claim 5.
However, the combination of WEI and PETEVA do not explicitly teach wherein said directing comprises: at a first time before a first time period during which the schedule specifies that the number of worker nodes should be increased, executing a placement process, at the global controller cluster, to identify new worker-node assignments for a set of new Pods to deploy, a set of new worker nodes to deploy, or a set of new Pods and new worker nodes to deploy in order to increase the number of worker nodes that are operating during the first time period and to spread the Pods to the new worker nodes; after the placement process identifies new worker-node assignments, communicating through the interface with any VPC controller cluster that has to deploy any new worker node or new Pod, to add a new worker node or new Pod, or to move a Pod to a new worker node, to direct the VPC controller cluster to perform the required actions. But JAIN teaches wherein said directing comprises: at a first time before a first time period during which the schedule specifies that the number of worker nodes should be increased, executing a placement process, at the global controller cluster, to identify new worker-node assignments for a set of new Pods to deploy, a set of new worker nodes to deploy, or a set of new Pods and new worker nodes to deploy in order to increase the number of worker nodes that are operating during the first time period and to spread the Pods to the new worker nodes (JAIN Fig. 1, Col. 12, line (61): “…, the autoscaler 110 is configured to determine a number of new instances associated with the instance group 118.sub.1 based at least in part on the sum equal to the demanded resources for the one or more schedulable pods plus the scheduled resources of the instance group 118.sub.1.
…, the autoscaler 110 is configured to determine the number of new instances associated with the instance group 118.sub.1 by bin packing the one or more schedulable pods into the instances 120.sub.1-m of the instance group 118.sub.1. For example, the autoscaler 110 is configured to increase the number of new instances if the autoscaler 110 is unable to schedule the one or more schedulable pods on the existing instances 120.sub.1-m by bin packing the one or more schedulable pods into the existing instances 120.sub.1-m.”); after the placement process identifies new worker-node assignments, communicating through the interface with any VPC controller cluster that has to deploy any new worker node or new Pod, to add a new worker node or new Pod, or to move a Pod to a new worker node, to direct the VPC controller cluster to perform the required actions (JAIN Fig. 1, Col. 12, line (61): “…, the autoscaler 110 is configured to determine a number of new instances associated with the instance group 118.sub.1 based at least in part on the sum equal to the demanded resources for the one or more schedulable pods plus the scheduled resources of the instance group 118.sub.1. …, the autoscaler 110 is configured to determine the number of new instances associated with the instance group 118.sub.1 by bin packing the one or more schedulable pods into the instances 120.sub.1-m of the instance group 118.sub.1. For example, the autoscaler 110 is configured to increase the number of new instances if the autoscaler 110 is unable to schedule the one or more schedulable pods on the existing instances 120.sub.1-m by bin packing the one or more schedulable pods into the existing instances 120.sub.1-m.”).
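The bin-packing scale-up decision quoted from JAIN — increasing the number of new instances when schedulable pods cannot be packed into existing instances — can be illustrated with a first-fit sketch. The function name, capacities, and demands are invented units for illustration, not JAIN's implementation.

```python
# Hedged first-fit bin-packing sketch: pack each schedulable pod's demand into
# the first existing instance with enough free capacity; when no instance fits,
# provision a new one and count it.

def new_instances_needed(pod_demands, existing_free, instance_capacity):
    """Return how many new instances first-fit packing requires."""
    bins = list(existing_free)   # free capacity remaining on existing instances
    new = 0
    for demand in pod_demands:
        for idx, free in enumerate(bins):
            if free >= demand:
                bins[idx] -= demand  # pod fits on an existing instance
                break
        else:
            bins.append(instance_capacity - demand)  # open a new instance
            new += 1
    return new

# Existing instances have 2 and 1 free units; each new instance offers 4 units.
print(new_instances_needed([3, 2, 1], existing_free=[2, 1], instance_capacity=4))  # 1
```

Here the pod demanding 3 units forces one new instance, while the remaining pods fit into existing free capacity — the same trigger condition the quoted passage describes.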
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of WEI (disclosing methods for monitoring and management of virtual processing environments) and PETEVA (disclosing methods for management of cloud hosting systems featuring scaling and load balancing with containers) to include the teachings of JAIN (disclosing methods for autoscaling instance groups of computing platforms) and arrive at a method to scale an instance group of a computing platform by determining whether to scale the instance group up or down using statistical feedback data, thereby improving system efficiency and resource utilization, as recognized by JAIN (Abstract, Col. 3). In addition, the references of WEI, PETEVA and JAIN are analogous art directed to the same field of endeavor of cloud resource allocation and management. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication (US 20240111588 A1) issued to Wei et al. (hereinafter as “WEI”), in view of US Patent Application Publication (US 20170199770 A1) issued to Peteva et al. (hereinafter as “PETEVA”), and in view of US Patent (US 11886926 B1) issued to Gadalin et al. (hereinafter as “GADALIN”). Regarding claim 12, the combination of WEI and PETEVA teach the limitations of claim 11. However, the combination of WEI and PETEVA do not explicitly teach providing the schedule as a recommendation to an administrator, receiving input from the administrator accepting, rejecting or modifying the schedule; modifying the schedule when the input modifies the schedule. But GADALIN teaches providing the schedule as a recommendation to an administrator, receiving input from the administrator accepting, rejecting or modifying the schedule (GADALIN Fig. 1/2, Col.
18, line (58): “…, the user 106 may select a manual-migration option 420 where the migration component 224 determines that a new migration schedule is needed, and requests the user 106 approve, modify, and/or reject the new migration schedule.”); modifying the schedule when the input modifies the schedule (GADALIN Fig. 1/2, Col. 18, line (58): “…, the user 106 may select a manual-migration option 420 where the migration component 224 determines that a new migration schedule is needed, and requests the user 106 approve, modify, and/or reject the new migration schedule. The user 106, when finished, may select the generate schedule option 422, and the migration component 224 may generate a migration rule 220 that is in turn used to create a migration schedule 228.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of WEI (disclosing methods for monitoring and management of virtual processing environments) and PETEVA (disclosing methods for management of cloud hosting systems featuring scaling and load balancing with containers) to include the teachings of GADALIN (disclosing methods for workload management between computing platforms) and arrive at a method to generate, for administrator approval, an efficient resource migration schedule based on historical utilization patterns that meets user needs, as recognized by GADALIN (Abstract, Cols. 1-2). In addition, the references of WEI, PETEVA and GADALIN are analogous art directed to the same field of endeavor of cloud resource allocation and management.
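The approve/modify/reject review loop cited from GADALIN can be sketched minimally as follows; the schedule fields, decision strings, and function name are hypothetical illustrations of the cited behavior, not GADALIN's implementation.

```python
# Hedged sketch: a proposed schedule is offered as a recommendation, and the
# administrator's input either accepts it, applies modifications, or rejects it.

def review_schedule(proposed, decision, modification=None):
    """Return the schedule to enact, or None if the recommendation is rejected."""
    if decision == "accept":
        return proposed
    if decision == "modify":
        updated = dict(proposed)          # keep the recommendation intact
        updated.update(modification or {})  # overlay the administrator's changes
        return updated
    if decision == "reject":
        return None
    raise ValueError(f"unknown decision: {decision}")

proposed = {"scale_down_at": "18:00", "target_nodes": 3}
print(review_schedule(proposed, "modify", {"target_nodes": 4}))
# {'scale_down_at': '18:00', 'target_nodes': 4}
```

A "modify" decision thus yields a new schedule, matching the claimed "modifying the schedule when the input modifies the schedule."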
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Tang et al. (US 20220027197 A1): "Methods for workload offloading between computing environments, wherein a capacity calculator is further configured to determine a processing capacity of service that can be allocated for executing requests for target function so that a total processing capacity of service may generally depend on the resources provisioned for this service, which may be scaled as required."

Ahmed et al. (US 20180349168 A1): "Methods for managing a cloud computing environment, wherein based on the forecast, the computing platform can scale up or down the container to improve (e.g., optimize) the performance of the container in the future. Scaling up or down can include increasing or decreasing the memory, the processor, or the I/O for the container or its host virtual machine."

IYENGAR et al. (US 20200257512 A1): "Methods for efficient scaling of a container-based application in a distributed computing system, wherein as the load experienced by a container changes over time, system administrators may adjust the amount of resources that are allocated to the container."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Zuheir A. Mheir, whose telephone number is (571) 272-4151. The examiner can normally be reached Monday through Friday, 9:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ajay M. Bhatia, can be reached at (571) 272-3906. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

2/18/2026
/ZUHEIR A MHEIR/
Patent Examiner, Art Unit 2156

/PIERRE VITAL/
Supervisory Patent Examiner, Art Unit 2198
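For context on the §103 mapping above, the scale-up/scale-down decision "using statistical feedback data" that the examiner attributes to JAIN could look something like the following. This is a minimal hypothetical sketch only; the function name, thresholds, and logic are invented here and come from neither the application nor the cited references.

```python
from statistics import mean

def scaling_decision(utilization_samples: list[float],
                     scale_up_at: float = 0.8,
                     scale_down_at: float = 0.3) -> int:
    """Return +1 (add an instance), -1 (remove one), or 0 (hold),
    based on average utilization over a statistical feedback window."""
    avg = mean(utilization_samples)  # feedback statistic over the window
    if avg > scale_up_at:
        return 1                     # sustained high load: scale the group up
    if avg < scale_down_at:
        return -1                    # sustained low load: scale the group down
    return 0                         # within band: hold the current size
```

A window of samples averaging above 0.8 would thus trigger a scale-up, one averaging below 0.3 a scale-down, and anything in between leaves the instance group unchanged.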

Prosecution Timeline

Sep 27, 2023
Application Filed
Feb 20, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561376
PERSONALIZED SEARCH BASED ON ACCOUNT ATTRIBUTES
2y 5m to grant Granted Feb 24, 2026
Patent 12493587
SYSTEMS, METHODS, AND MEDIA FOR IMPLEMENTING CONFLICT-FREE REPLICATED DATA TYPES IN IN-MEMORY DATA STRUCTURES
2y 5m to grant Granted Dec 09, 2025
Patent 12406026
ABNORMAL LOG EVENT DETECTION AND PREDICTION
2y 5m to grant Granted Sep 02, 2025
Patent 12399941
CONDITION RESOLUTION SYSTEM
2y 5m to grant Granted Aug 26, 2025
Patent 12367228
METHODS AND SYSTEMS FOR PERFORMING LEGAL BRIEF ANALYSIS
2y 5m to grant Granted Jul 22, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
92%
With Interview (+10.2%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 75 resolved cases by this examiner. Grant probability derived from career allow rate.
