Prosecution Insights
Last updated: April 19, 2026
Application No. 18/160,492

SYSTEM AND METHOD FOR MANAGING PODS HOSTED BY VIRTUAL MACHINES

Final Rejection (§101, §103)
Filed: Jan 27, 2023
Examiner: SOUGH, HYUNG SUB
Art Unit: 2192
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 2 (Final)

Grant Probability: 19% (At Risk)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 41%

Examiner Intelligence

Career Allow Rate: 19% (3 granted / 16 resolved), -36.2% vs TC avg
Interview Lift: +22.5% across resolved cases with interview
Avg Prosecution: 2y 5m (typical timeline); 1 currently pending
Total Applications: 17 across all art units

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 16 resolved cases
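The headline examiner statistics above follow directly from the raw counts shown (3 granted of 16 resolved, with deltas reported against the Tech Center average). A minimal sketch of that arithmetic, assuming the deltas are simple differences between the examiner's rate and the TC average:

```python
# Reproduce the headline examiner statistics from the raw counts shown above.
granted, resolved = 3, 16

career_allow_rate = granted / resolved * 100   # 18.75, displayed as 19%
tc_avg = career_allow_rate + 36.2              # implied TC average, since the delta is -36.2%

print(f"Career allow rate: {career_allow_rate:.1f}%")
print(f"Implied TC average: {tc_avg:.1f}%")
print(f"Delta vs TC avg: {career_allow_rate - tc_avg:+.1f}%")
```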

Office Action

§101 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to applicant's response filed 10/29/2025 and 01/. Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/16/2026 is in compliance with the provisions of 37 CFR 1.97 and is being considered by the Examiner.

Examiner's Notes

Claim 19, "data processing system" as presented, requires only a "processor" and "a memory" but not "instructions," since "a memory coupled to the processor to store instructions" is intended use. Applicant is advised to amend as --a memory coupled to the processor storing instructions--.

Response to Amendment

In view of the amendment filed 10/29/2025, the 112(b) rejection is withdrawn.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception (an abstract idea), is directed to that judicial exception because it has not been integrated into a practical application, and does not recite significantly more than the judicial exception. The Examiner has evaluated the claims under the framework provided in the 2019 Patent Eligibility Guidance published in the Federal Register on 01/07/2019 and has provided that analysis below.

Step 1: Claims 1-9 are directed to methods and fall within the statutory category of processes; claims 10-18 are directed to non-transitory machine-readable media and fall within the statutory category of articles of manufacture; claims 19-20 are directed to systems and fall within the statutory category of machines. Therefore, "Are the claims to a process, machine, manufacture or composition of matter?" Yes.

To evaluate the Step 2A inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?", we must determine, at Step 2A Prong 1, whether the claim recites a law of nature, a natural phenomenon, or an abstract idea, and further whether the claim recites additional elements that integrate the judicial exception into a practical application.

Claims 1, 10, and 19 recite: (a) providing, by a hypervisor and to a plurality of pods hosted by a virtual machine, shared access to hardware resources of the data processing system; (b) monitoring, by the hypervisor, the virtual machine to identify a decommissioning of the virtual machine; based on the monitoring: (c) identifying a type of the decommissioning; (d) identifying a pod of the plurality of pods that is hosted by the virtual machine; and (e) adjusting access of the pod to the shared hardware resources based on the type of the decommissioning to manage operation of the pod through the decommissioning of the virtual machine.

Step 2A Prong 1: Claims 1, 10, and 19: Steps (b), (c), and (d) are mental processes (such as observation, evaluation, or judgment). Therefore, yes, claims 1, 10, and 19 recite judicial exceptions. Because the claims have been identified as reciting judicial exceptions, Step 2A Prong 2 will evaluate whether the claims are directed to the judicial exception.

Step 2A Prong 2: Claims 1, 10, and 19: The judicial exception is not integrated into a practical application. In particular, the claims recite the following additional elements: "computer", "data processing system", "a hypervisor", "a plurality of pods", "virtual machine", "hardware resources", "A non-transitory machine-readable medium having instructions", "a processor", and "a memory", which are merely a recitation of a field of use/technological environment (see MPEP § 2106.05(h)) that does not integrate a judicial exception into a practical application. Further, steps (a) and (e) merely apply the judicial exception. Therefore, "Do the claims recite additional elements that integrate the judicial exception into a practical application?" No; these additional elements do not integrate the abstract idea into a practical application and do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. After evaluating the inquiries set forth in Step 2A Prongs 1 and 2, it is concluded that claims 1, 10, and 19 not only recite a judicial exception but are directed to the judicial exception, as it has not been integrated into a practical application.

Step 2B: Claims 1, 10, and 19: The additional elements, considered alone or in combination, do not amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than a field of use/technological environment and using a computer as a tool to apply the abstract idea, which do not amount to significantly more than the abstract idea. Therefore, "Do the claims recite additional elements that amount to significantly more than the judicial exception?" No; these additional elements, alone or in combination, do not amount to significantly more than the judicial exception. Having concluded the analysis within the provided framework, claims 1, 10, and 19 do not recite patent eligible subject matter under 35 U.S.C. § 101.

Regarding claims 2 and 11, they recite the additional element of "a first instance of the type of the decommissioning that is an immediate decommissioning", which describes a type of decommissioning used in a field of use/technological environment (see MPEP § 2106.05(h)) but does not integrate a judicial exception into a practical application. Further, they recite "gracefully terminating operation of the pod" and "preventing deployment of new pods to the virtual machine prior to the decommissioning of the virtual machine", which is merely using a computer as a tool to apply the abstract idea (see MPEP § 2106.05(f)) and does not integrate a judicial exception into a practical application. For the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claims 2 and 11 fail Step 2A Prong 2 (the claims are directed to the judicial exception, which has not been integrated into a practical application) and fail Step 2B (the additional elements do not amount to significantly more). Therefore, claims 2 and 11 do not recite patent eligible subject matter under 35 U.S.C. § 101.

Regarding claims 3 and 12, they recite the additional element of "a second instance of the type of the decommissioning that is a scheduled decommissioning", which describes a type of decommissioning used in a field of use/technological environment (see MPEP § 2106.05(h)) but does not integrate a judicial exception into a practical application. Further, they recite "preventing deployment of new pods to the virtual machine prior to the decommissioning of the virtual machine", which is merely using a computer as a tool to apply the abstract idea (see MPEP § 2106.05(f)) and does not integrate a judicial exception into a practical application. For the same reasons as above, claims 3 and 12 fail Step 2A Prong 2 and Step 2B. Therefore, claims 3 and 12 do not recite patent eligible subject matter under 35 U.S.C. § 101.
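Stripped of the legal framing, the claim limitations discussed for claims 1-3 describe a small control flow: on detecting a decommissioning, identify its type, then either gracefully terminate the pod (immediate) or merely stop new pod deployments to the VM (scheduled). The following is a hedged sketch for illustration only; the names (`Vm`, `Pod`, `handle_decommissioning`) are hypothetical and are not the application's actual implementation:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class DecommissionType(Enum):
    IMMEDIATE = auto()   # claims 2/11: gracefully terminate; stop new deployments
    SCHEDULED = auto()   # claims 3/12: stop new deployments ahead of decommissioning

@dataclass
class Pod:
    name: str
    running: bool = True

    def terminate_gracefully(self) -> None:
        # Notify the pod and let it finish in-flight work before stopping.
        self.running = False

@dataclass
class Vm:
    pods: list = field(default_factory=list)
    accepting_new_pods: bool = True

def handle_decommissioning(vm: Vm, pod: Pod, kind: DecommissionType) -> None:
    """Adjust the pod's access based on the type of the decommissioning."""
    # Both types prevent deployment of new pods prior to the decommissioning.
    vm.accepting_new_pods = False
    if kind is DecommissionType.IMMEDIATE:
        pod.terminate_gracefully()
```

Under this sketch, a scheduled decommissioning leaves the pod running but cordons the VM, while an immediate decommissioning also drains the pod.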
Regarding claims 4 and 13, they recite the additional abstract idea of "identifying computing resource expended by the pod", which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe and evaluate some group of resources being used. Further, they recite the additional element of "a third instance of the type of the decommissioning that is a load balancing decommissioning", which describes a type of decommissioning used in a field of use/technological environment (see MPEP § 2106.05(h)) but does not integrate a judicial exception into a practical application. Further still, they recite "making an attempt to reduce a magnitude of the computing resource expended by the pod; in an instance of the attempt where the magnitude of the computing resources expended is reduced", which is merely using a computer as a tool to apply the abstract idea (see MPEP § 2106.05(f)) and does not integrate a judicial exception into a practical application. Additionally, given the required determination of whether one is in the instance of resources expended being reduced, such a determination would be a mental process. Moreover, the claims recite "notifying a management entity for the virtual machine of the reduced expenditure of the computing resources to attempt to abort the decommissioning", which is merely insignificant extra-solution data transmission activity (see MPEP § 2106.05(g)) and does not integrate a judicial exception into a practical application. Lastly, the data transmission is also well-understood, routine, and conventional (WURC); see at least MPEP § 2106.05(d)(II): "The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network", wherein notifying a management entity as claimed is transmitting data over a network. For the same reasons as above, claims 4 and 13 fail Step 2A Prong 2 and Step 2B. Therefore, claims 4 and 13 do not recite patent eligible subject matter under 35 U.S.C. § 101.

Regarding claims 5 and 14, they recite the additional abstract idea of "in an instance of the notifying of the management entity where the decommissioning is not aborted", which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can judge and evaluate whether decommissioning has not been aborted, and then proceed with a course of action after determining that they are in that instance. Further, the claims recite the additional element of "gracefully terminating operation of the pod", which is merely using a computer as a tool to apply the abstract idea (see MPEP § 2106.05(f)) and does not integrate a judicial exception into a practical application. For the same reasons as above, claims 5 and 14 fail Step 2A Prong 2 and Step 2B. Therefore, claims 5 and 14 do not recite patent eligible subject matter under 35 U.S.C. § 101.
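The load-balancing decommissioning discussed for claims 4/13 and 5/14 reduces to a short sequence: measure the pod's resource expenditure, attempt to reduce it, notify the management entity of a successful reduction so it may abort the decommissioning, and otherwise gracefully terminate. A minimal sketch, assuming hypothetical callback names (`reduce_usage`, `notify_manager`) that are not from the application:

```python
def load_balancing_decommission(pod_usage: float, reduce_usage, notify_manager) -> str:
    """Sketch of the claims 4/13 and 5/14 flow for a load-balancing decommissioning.

    reduce_usage(usage) -> usage after an attempted reduction.
    notify_manager(new_usage) -> True if the management entity aborts the decommissioning.
    """
    new_usage = reduce_usage(pod_usage)        # attempt to reduce expended resources
    if new_usage < pod_usage:                  # reduction succeeded
        if notify_manager(new_usage):          # manager may abort the decommissioning
            return "aborted"
    return "terminated gracefully"             # not aborted: gracefully terminate the pod
```

For example, a halving reduction followed by a manager abort returns "aborted"; a failed reduction skips the notification entirely and falls through to graceful termination.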
Regarding claims 6 and 15, they recite the additional elements of "making the attempt to reduce the magnitude of the computing resource expended by the pod" and "restarting a portion of the pod", which is merely using a computer as a tool to apply the abstract idea (see MPEP § 2106.05(f)) and does not integrate a judicial exception into a practical application. For the same reasons as above, claims 6 and 15 fail Step 2A Prong 2 and Step 2B. Therefore, claims 6 and 15 do not recite patent eligible subject matter under 35 U.S.C. § 101.

Regarding claims 7 and 16, they recite the additional element of "making the attempt to reduce the magnitude of the computing resource expended by the pod", which is merely using a computer as a tool to apply the abstract idea (see MPEP § 2106.05(f)) and does not integrate a judicial exception into a practical application. Further, the claims recite "migrating the pod to a second virtual machine", which is merely insignificant extra-solution data transmission activity (see MPEP § 2106.05(g)) and does not integrate a judicial exception into a practical application. Lastly, the data transmission is also well-understood, routine, and conventional (WURC); see at least MPEP § 2106.05(d)(II): "The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network", wherein migrating pods to a second virtual machine as claimed is transmitting data over a network. For the same reasons as above, claims 7 and 16 fail Step 2A Prong 2 and Step 2B. Therefore, claims 7 and 16 do not recite patent eligible subject matter under 35 U.S.C. § 101.

Regarding claims 8-9, 17-18, and 20: claims 9, 18, and 20 recite the additional abstract idea of "the management action is one selected from a group of management actions", which, as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can judge and evaluate selecting a management action from a group of possible management actions. Further, the claims recite the additional element of "wherein the type of the decommissioning is based on a management action" (claims 8, 17, and 20), which describes what the type of decommissioning is based on in a field of use/technological environment (see MPEP § 2106.05(h)) but does not integrate a judicial exception into a practical application. Next, they recite "a management action that triggered a management entity to initiate the decommissioning" (claims 8, 17, and 20), which is merely using a computer as a tool to apply the abstract idea (see MPEP § 2106.05(f)) and does not integrate a judicial exception into a practical application.
Lastly, they recite "unscheduled maintenance of the data processing system; scheduled maintenance of the data processing system; and load balancing for the data processing system" (claims 9, 18, and 20), which is merely a recitation of generic computing components used in a field of use/technological environment (see MPEP § 2106.05(h)) and does not integrate a judicial exception into a practical application. For the same reasons as above, claims 8-9, 17-18, and 20 fail Step 2A Prong 2 and Step 2B. Therefore, claims 8-9, 17-18, and 20 do not recite patent eligible subject matter under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8, 10, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mueller et al., US 20210232419 A1 (Mueller), in view of Jain et al., US 20230244392 A1 (Jain).

Regarding claim 1: "A method for providing computer implemented services on a data processing system" ([0009]: "FIG. 1 is a block diagram of a clustered container host system 100, e.g., a Kubernetes system, in which embodiments may be implemented…A virtualization software layer, also referred to herein as a hypervisor 150, is installed on top of the hardware platform. The hypervisor supports a virtual machine execution space within which multiple VMs may be concurrently instantiated and executed. As shown in FIG. 1, the VMs that are concurrently instantiated and executed in host 120-1 includes pod VMs 130,"; [0014]: "Each pod VM 130 has one or more containers 132 running therein in an execution space managed by container runtime 134."; [0033]: "The embodiments described herein may employ various computer-implemented operations involving data stored in computer systems."; Examiner notes Kubernetes is a container orchestration platform that provides services using containers.), the method comprising:

"providing, by a hypervisor and to a plurality of pods hosted by a virtual machine, shared access to hardware resources of the data processing system" ([0011]: "VM management server 116 is a physical or virtual server that communicates with host daemon 152 running in hypervisor 150 to provision pod VMs 130 and VMs 140 from the hardware resources of hosts 120 and shared storage 170");

"monitoring, by the hypervisor, the virtual machine to identify a decommissioning of the virtual machine" ([0013]: "Pod VM controller 154 manages the lifecycle of pod VMs 130 and determines when to spin up or delete a pod VM 130."; [0009]: "As shown in FIG. 1, the VMs that are concurrently instantiated and executed in host 120-1 includes pod VMs 130."; [0001]: "One of the features of workload management software, such as Kubernetes®, is management of workload lifecycles. This can include everything from their specification to their deployment and monitoring."; Examiner notes workloads, which are processing data, run in containers hosted by the pod VMs; therefore, the clustered container host system 100 is a data processing system.);

"based on the monitoring: identifying a type of the decommissioning" ([0003]: "upon detecting that the dummy process has been terminated, selecting one of the containers to be terminated; and terminating processes of the selected container. In one embodiment, the selected container is terminated gracefully. In another embodiment, where graceful termination is not possible, the selected container is terminated forcefully.");

identifying a container of the containers ([0024]: "In another embodiment, each container is assigned a class of service and as among containers running in the same pod VM, the container with the lowest class of service is selected for termination first."); and

adjusting access of the container to the shared hardware resources based on the type of the decommissioning to manage operation of the container through the decommissioning of the virtual machine (Fig. 3; [0025]: "FIG. 3 is a flow diagram illustrating the steps of a method for evicting workloads according to embodiments. The steps of FIG. 3 are carried out by a pod VM agent running in a pod VM. The method begins at step 312, where the pod VM agent continually monitors a dummy process that has been launched in the pod VM to run alongside containers in the pod VM. When the dummy process terminated as determined at step 314, the pod VM agent selects a container to be terminated at step 316. In one embodiment, the container to be terminated is selected based on a class of service assigned to all containers currently running in the pod VM. Then, at step 318, the pod VM agent terminates all processes in the selected container in an orderly manner that ensures a graceful shutdown of the selected container, e.g., according to an order that is implied by the container's internal dependencies."; Examiner notes "graceful shutdown" means the container is notified of a coming decommissioning and allowed to continue operating for a segment of time to finish processing; [0013]: "Pod VM controller 154 manages the lifecycle of pod VMs 130 and determines when to spin up or delete a pod VM 130."; Examiner notes shutting down a pod VM would enforce one of the types of decommissioning across the pod VM's containers, so during graceful shutdown the containers would still be operating, as previously taught in [0003]).

Mueller does not teach identifying a pod of the pods that is hosted by the virtual machine. However, in analogous art, Jain teaches identifying a pod of the pods that is hosted by the virtual machine (Fig. 4A - 402; [0033]: "5) utilizing unconventional and non-routine systems and techniques to assign pods to virtual machines in a manner that more efficiently consumes available IOPS and throughput of the virtual machines using a best fit bin packing mechanism;").

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the hosting of pods in a virtual machine in Jain with the hosting and identifying of containers in virtual machines in Mueller. Mueller already teaches hosting multiple containers in a VM (see at least [0014]) and identifying a specific container of the containers to be decommissioned (see at least [0025]). Given that a pod is a group of containers, with Jain's teaching, Mueller would then group the containers into more than one pod within the pod VM.
As a result, Mueller would have multiple pods running within each pod VM and would therefore be able to apply the types of decommissioning (Mueller [0003]) to the different pods. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to decrease overprovisioning in Mueller. Jain explains some of the benefits of a container orchestration platform, particularly how processes performed by containers are able to be co-located and scheduled on the same physical hardware or virtual machine (Jain [0005]). However, Jain describes that in normal container environments, over time this results in overprovisioning of VMs that host pods, leading to wasted resources (Jain [0021]). Thus, with Jain's teachings, Mueller would be able to avoid overprovisioning VMs by allocating multiple pods to each virtual machine.

Regarding claim 8: Mueller in view of Jain teaches the method of claim 1. Mueller further teaches "wherein the type of the decommissioning is based on a management action that triggered a management entity to initiate the decommissioning" ([0013]: "Hypervisor 150 includes a host daemon 152 and a pod VM controller 154. As described above, host daemon 152 communicates with VM management server 116 to instantiate pod VMs 130 and VMs 140. Pod VM controller 154 manages the lifecycle of pod VMs 130 and determines when to spin up or delete a pod VM 130").

Regarding claim 10: please see the rejection of claim 1 above. Further, Mueller teaches "A non-transitory machine-readable medium having instructions stored therein, which when executed by a processor, cause the processor to perform operations" ([0004]: "Further embodiments include a non-transitory computer-readable storage medium comprising instructions that cause a computer system to carry out the above methods, as well as a computer system configured to carry out the above methods."; Fig. 2, element 160; Examiner notes the computer system contains CPUs.) "for providing computer implemented services on a data processing system" ([0009] and [0014], as quoted for claim 1; Examiner notes Kubernetes is a container orchestration platform that provides services using containers.).

Regarding claim 17: please see the rejection of claim 8 above.

Regarding claim 19: please see the rejection of claim 1 above. Further, Mueller discloses "A data processing system" ([0001]: "One of the features of workload management software, such as Kubernetes®, is management of workload lifecycles. This can include everything from their specification to their deployment and monitoring."; Examiner notes workloads, which are processing data, run in containers hosted by the pod VMs; therefore, the clustered container host system 100 is a data processing system.) comprising: "a processor" (Fig. 2, element 160); and "a memory coupled to the processor" ([0009]: "System 100 includes a cluster of hosts 120 which may be constructed on a server grade hardware platform such as an x86 architecture platform. The hardware platform includes one or more central processing units (CPUs) 160, system memory, e.g., random access memory (RAM) 162…"; Examiner notes CPUs and RAM are part of the same hardware architecture, therefore "coupled".) "to store instructions, which when executed by the processor, cause the processor to perform operations" ([0004], as quoted for claim 10) "for providing computer implemented services" ([0009] and [0014], as quoted for claim 1).

Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Mueller in view of Jain as applied to claims 1 and 10 above, and further in view of Miyata et al., US 20110099403 A1 (Miyata).

Regarding claim 2: Mueller in view of Jain teaches the method of claim 1, and further teaches "wherein in a first instance of the type of the decommissioning that is an immediate decommissioning" (Mueller [0013]: "Pod VM controller 154 manages the lifecycle of pod VMs 130 and determines when to spin up or delete a pod VM 130."; Examiner notes Mueller in view of Jain can terminate pods at any moment due to the determinations of the Pod VM controller and hypervisor.) "the method comprises: gracefully terminating operation of the pod" (Mueller [0003]: "In one embodiment, the selected container is terminated gracefully."; Examiner notes gracefully terminating all the containers of a pod would result in a graceful termination of the pod when the pod is selected for termination by the Pod VM Controller 154).

Mueller in view of Jain does not teach preventing deployment of new pods to the virtual machine prior to the decommissioning of the virtual machine. However, in analogous art, Miyata teaches "and preventing deployment of new pods to the virtual machine prior to the decommissioning of the virtual machine" (Fig. 3; [0088]: "FIG. 3 is a diagram showing an execution sequence of scale-in processing. A detailed procedure of the scale-in will be explained with reference to the sequence diagram of FIG. 3. First, a request for blockage of a virtual server 301 which is the deactivation target (scale-in target) is issued to the load balancer 104 (at step S311); in responding thereto, the load balancer 104 stops request allocation to the deactivation target virtual server (at step S312). The load balancer 104 notifies the server management apparatus 101 of the fact that the blockage processing is completed (step 3313),").

It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the scale-in/scale-out load balancing decommissioning procedure taught in Miyata with the load balancing in Mueller in view of Jain. Miyata uses a workload dispersion device (load balancer, Miyata [0007]) to allocate requests across the virtual servers, and Mueller in view of Jain uses the VM management server to perform similar load balancing (Mueller [0011]).
As a result of Miyata’s teaching, the VM management server in Mueller in view of Jain would prevent deploying new pods on a pod VM if the pod VM has already been selected for decommissioning by the pod VM controller, which would eliminate unnecessary deployments. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to decrease ineffective load balancing operations. As Miyata’s load balancer achieves an optimal distribution of requests across the virtual servers (Miyata [0070]), preventing the allocation of new pods to pod VMs already selected for decommissioning, would increase the efficiency of the load balancing operations in Mueller in view of Jain, as they would not have to re-deploy a pod to a new pod VM that the VM management server just previously allocated. Regarding claim 11, please see the rejection for claim 2 above. Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Mueller et al. US 20210232419 A1 (Mueller) in view of Jain et al. US 20230244392 A1 (Jain) and Miyata et al. US 20110099403 A1 (Miyata) as applied to claims 2 and 11 above, and further in view of Kondo et al. US 20190068442 A1 (Kondo). Regarding claim 3, Mueller in view of Jain, further in view of Miyata the method of claim 2. Mueller in view of Jain, further in view of Miyata further teaches wherein in a second instance of the type of the decommissioning that is a scheduled decommissioning, the method comprises: preventing the deployment of the new pods to the virtual machine prior to the decommissioning of the virtual machine. (Miyata Fig. 3; [0088]: “FIG. 3 is a diagram showing an execution sequence of scale-in processing. A detailed procedure of the scale-in will be explained with reference to the sequence diagram of FIG. 3. 
First, a request for blockage of a virtual server 301 which is the deactivation target (scale-in target) is issued to the load balancer 104 (at step S311); in responding thereto, the load balancer 104 stops request allocation to the deactivation target virtual server (at step S312). The load balancer 104 notifies the server management apparatus 101 of the fact that the blockage processing is completed (step 3313),”). Mueller in view of Jain, further in view of Miyata does not teach wherein in a second instance of the type of the decommissioning that is a scheduled decommissioning. However, in analogous art, Kondo teaches wherein in a second instance of the type of the decommissioning that is a scheduled decommissioning. ([0032]: “the processor first determines, based on the past request trend and the specified maintenance-performing time, a rough maintenance time period for each VM 102, and stands by until the determined maintenance time period comes.”; [0003]: “‘Maintenance’ as used herein refers to a maintenance work, such as ‘re-creation of a VM’, ‘deletion of a VM’, or the like.”). It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the scheduling of VM downtimes in Kondo with the VM decommissioning procedures of Mueller in view of Jain, further in view of Miyata. As a result, Mueller in view of Jain, further in view of Miyata would be able to schedule a decommissioning of a pod VM, that the pod VM controller selected, to a later time. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to decrease the impact on the environment from decommissioning pod VMs in Mueller in view of Jain, further in view of Miyata.
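The scheduled-decommissioning idea Kondo is cited for — deferring maintenance to a time period predicted from the past request trend — can be illustrated with a minimal sketch. The data format and function name here are assumptions made for this illustration, not details from Kondo:

```python
from collections import defaultdict

def pick_maintenance_hour(request_log):
    """Pick the hour of day with the lowest average historical load.

    `request_log` is a list of (hour_of_day, request_count) samples —
    a simplified stand-in for the "past request trend" data Kondo
    describes; the reference does not define this format.
    """
    samples = defaultdict(list)
    for hour, count in request_log:
        samples[hour].append(count)
    # Average the samples per hour, then choose the quietest hour as
    # the maintenance window.
    return min(samples, key=lambda h: sum(samples[h]) / len(samples[h]))

# Example: overnight traffic is lightest, so hour 3 would be chosen.
log = [(3, 10), (3, 12), (9, 500), (9, 520), (15, 300)]
print(pick_maintenance_hour(log))  # 3
```

A scheduler built on this idea would stand by until the chosen window before decommissioning the selected pod VM, rather than acting immediately.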
Kondo states that by using the past trend data to predict when a VM is under its least load, VM standby times are minimized (time VMs are not processing requests, Kondo [0071-0072]), thereby decreasing the likelihood of causing a system to go down (Kondo [0079]). Regarding claim 12, please see the rejection for claim 3 above. Claims 4, 5, 7, 13, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mueller et al. US 20210232419 A1 (Mueller) in view of Jain et al. US 20230244392 A1 (Jain), Miyata et al. US 20110099403 A1 (Miyata), and Kondo et al. US 20190068442 A1 (Kondo) as applied to claims 3 and 12 above, and further in view of Gupta et al. US 10013273 B1 (Gupta). Regarding claim 4, Mueller in view of Jain, further in view of Miyata, even further in view of Kondo teaches the method of claim 3. Mueller in view of Jain, further in view of Miyata, even further in view of Kondo further teaches wherein in a third instance of the type of the decommissioning that is a load balancing decommissioning, the method comprises: (Mueller [0011]: “VM management server 116 logically groups hosts 120 into a cluster to provide cluster-level functions to hosts 120, such as load balancing across hosts 120 by performing VM migration between hosts 120”) identifying computing resource expended by the pod; (Mueller [0030]: “That is, if one of the workloads running on a compute node has known upper bounds for memory consumption, these can be enforced in the embodiments.
In addition, these upper bounds may be considered when making an eviction decision.”; Jain [0057]: “A vertical pod autoscaler 416 may be configured to monitor 418 resource consumption by the pods,”; Examiner notes, as Mueller in view of Jain, further in view of Miyata, even further in view of Kondo uses Jain’s environment teaching of pods within a VM, Mueller in view of Jain, further in view of Miyata, even further in view of Kondo would be able to measure the resources used by a pod, as Mueller’s VM management server already monitors memory consumption of workloads.) making an attempt to reduce a magnitude of the computing resource expended by the pod; (Mueller [0003]: “Embodiments provide techniques to detect memory shortage in a clustered container host system so that workloads can be shut down gracefully. According to one embodiment, a method of managing memory in a virtual machine (VM) in which containers are executed, includes the steps of: monitoring a dummy process (e.g., a canary process) that runs in the VM concurrently with the containers, the dummy process being configured to be terminated by an operating system of the VM under a low memory condition before any other processes running in the VM; upon detecting that the dummy process has been terminated, selecting one of the containers to be terminated; and terminating processes of the selected container.”) in an instance of the attempt where the magnitude of the computing resources expended is reduced: notifying a management entity for the virtual machine of the reduced expenditure of the computing resource (Miyata [0100]: “The scale-out target is a virtual server B4 of the physical server #3. Thereafter, the cluster system B decreases in workload, and the result of the scale-in execution is indicated in the configuration information table 542. The scale-in target is a virtual server B1 of physical server #1.
Further, the cluster system A decreases in load, and the result of scale-in execution is indicated in the configuration information table 543. The scale-in target is a virtual server A3 of physical server #3.”; Examiner notes, the server management apparatus 101 monitors load information (Fig. 19) collected by collection unit 124 and configuration information collector 125, which are both a part of the server management apparatus 101. In the context of Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, the VM management server 116 would be made aware of a decrease in computing resource expenditure, as taught by Miyata for his scale-in/scale-out procedure). Mueller in view of Jain, further in view of Miyata, even further in view of Kondo does not teach to attempt to abort the decommissioning. However, in analogous art, Gupta teaches to attempt to abort the decommissioning. (Col. 9, lines 36-46: “At 268, the provisioning service 120 may receive an indication to abort the termination request. The abort indication may be implemented as an API call specifically to abort a previously transmitted instance termination request. The abort termination API call may refer to a specific termination request API by an API identifier. Alternatively, the abort termination API call may not refer to a specific prior termination request API and instead cause any termination request API received by the provisioning service 120 for that customer whose termination delay time period has not yet expired to be aborted (270).”). It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine aborting a decommissioning of a virtual machine from Gupta with the systems and methods of Mueller in view of Jain, further in view of Miyata, even further in view of Kondo. 
Mueller in view of Jain, further in view of Miyata, even further in view of Kondo already teaches decommissioning pod VMs through load balancing (Mueller [0011], [0013], Miyata [0082]) as well as decreasing the resource usage within each pod VM by terminating containers causing memory pressure (Mueller [0021], [0025], [0003]). As a result, the VM management server in Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, after reviewing the changes to the configuration information, which indicate the results of the load balancing decommissioning, would be able to abort decommissioning a pod VM. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to reduce unnecessary migrations, which is one of the goals of Miyata (Miyata, Abstract). Mueller states one of the reasons memory shortages occur in clustered container systems is a result of trying to balance the amount of resources a workload requires with maximizing the number of workloads on a set of resources, and “When faced with such memory shortage, it would be desirable to detect misbehaving workloads and shut them down gracefully.” (Mueller [0002]). Therefore, after correcting the memory shortage problem caused by a pod VM, it would be inefficient to then continue decommissioning the pod VM, as the problem was already corrected and the remaining workloads would need to be migrated to other pod VMs. Regarding claim 5, Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta teaches the method of claim 4. 
Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta further teaches wherein in the third instance of the type of the decommissioning that is a load balancing decommissioning, the method further comprises: (Mueller [0011]: “VM management server 116 logically groups hosts 120 into a cluster to provide cluster-level functions to hosts 120, such as load balancing across hosts 120 by performing VM migration between hosts 120”) in an instance of the notifying of the management entity where the decommissioning is not aborted: (Miyata [0088]: “Finally, the server management apparatus 101 receives a deactivation completion notice (step S316).”) gracefully terminating operation of the pod. (Mueller [0003]: “In one embodiment, the selected container is terminated gracefully.”; Examiner notes, gracefully terminating all the containers of a pod would result in a graceful termination of the pod, when the pod is selected for termination by the Pod VM Controller 154 (see Mueller [0013])). Regarding claim 7, Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta teaches the method of claim 5. Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta further teaches wherein making the attempt to reduce the magnitude of the computing resource expended by the pod comprises: (Mueller [0003]: “Embodiments provide techniques to detect memory shortage in a clustered container host system so that workloads can be shut down gracefully. 
According to one embodiment, a method of managing memory in a virtual machine (VM) in which containers are executed, includes the steps of: monitoring a dummy process (e.g., a canary process) that runs in the VM concurrently with the containers, the dummy process being configured to be terminated by an operating system of the VM under a low memory condition before any other processes running in the VM; upon detecting that the dummy process has been terminated, selecting one of the containers to be terminated; and terminating processes of the selected container.”) migrating the pod to a second virtual machine. (Mueller [0011]: “VM management server 116 logically groups hosts 120 into a cluster to provide cluster-level functions to hosts 120, such as load balancing across hosts 120 by performing VM migration between hosts 120…”; Miyata Fig. 22, [0073]: “In addition, a result of migration which causes the virtual server A3 to change from the state of the configuration information table 2212 so as to operate on the physical server #2 due to workload consolidation is shown in the configuration information table 2214.”; Examiner notes, Miyata’s load balancing procedure’s migration is performed after reducing the computing resources used by the workload. Thus, Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta would use Miyata’s teaching for migrating pods). Regarding claim 13, please see the rejection for claim 4 above. Regarding claim 14, please see the rejection for claim 5 above. Regarding claim 16, please see the rejection for claim 7 above. Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Mueller et al. US 20210232419 A1 (Mueller) in view of Jain et al. US 20230244392 A1 (Jain), Miyata et al. US 20110099403 A1 (Miyata), Kondo et al. US 20190068442 A1 (Kondo), and Gupta et al. US 10013273 B1 (Gupta) as applied to claims 5 and 14 above, and further in view of Gaurav et al. 
US 20160378563 A1 (Gaurav). Regarding claim 6, Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta teaches the method of claim 5. Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta further teaches wherein making the attempt to reduce the magnitude of the computing resource expended by the pod comprises: (Mueller [0003]: “Embodiments provide techniques to detect memory shortage in a clustered container host system so that workloads can be shut down gracefully. According to one embodiment, a method of managing memory in a virtual machine (VM) in which containers are executed, includes the steps of: monitoring a dummy process (e.g., a canary process) that runs in the VM concurrently with the containers, the dummy process being configured to be terminated by an operating system of the VM under a low memory condition before any other processes running in the VM; upon detecting that the dummy process has been terminated, selecting one of the containers to be terminated; and terminating processes of the selected container.”). Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta does not teach restarting a portion of the pod. However, in analogous art, Gaurav teaches restarting a portion of the pod. ([0040]: “…virtualization management module 130 dynamically removes memory from this VM to reset memory to mem_alloc (hot remove). In some embodiments (for example where dynamically removal is not supported), an alert may be provided to a system administrator to power off the VM and then remove memory from the VM, and then restart containers on the VM.”). 
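The flow the rejection ascribes to claims 4-5 — attempt to reduce a pod's resource expenditure (e.g., by restarting containers, as Gaurav is cited for), notify the management entity on success so it can attempt to abort the decommissioning (as Gupta is cited for), and otherwise terminate gracefully (as Mueller is cited for) — can be sketched as follows. The function and parameter names are invented for this sketch and do not come from the cited references:

```python
def load_balancing_decommission(pod_usage_mib, reduce_fn):
    """Sketch of the reduce-then-maybe-abort flow.

    pod_usage_mib: the pod's measured resource expenditure (memory, MiB).
    reduce_fn:     a callable that attempts to lower that expenditure,
                   e.g. by restarting the pod's containers (cf. Gaurav),
                   and returns the post-attempt usage.
    """
    reduced = reduce_fn(pod_usage_mib)
    if reduced < pod_usage_mib:
        # The attempt succeeded: notify the management entity so it can
        # try to abort the decommissioning (cf. Gupta's
        # abort-termination API call).
        return ("attempt_abort", reduced)
    # The attempt failed: proceed with graceful termination of the pod
    # (cf. Mueller's graceful container termination).
    return ("terminate_gracefully", pod_usage_mib)
```

The branch structure mirrors the claim language: the abort path is only attempted in the instance where the expenditure was actually reduced.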
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the restarting of the containers on a VM or resetting a VM’s memory in Gaurav with the systems and methods in Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta, resulting in Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta being capable of restarting portions of the pod VM. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to increase resource optimization by decreasing overallocated resources (Gaurav [0004]), which causes some issues during the load balancing decommissioning procedure where there are insufficient resources for scale-out. For example, Miyata states in [0090] that the scaling procedure may not be executable on some target clusters due to a resource shortage for scaling-out. At this point, the load balancing procedure is concluded and the server administrator is notified of the resource shortage. However, with Gaurav’s teachings, Mueller in view of Jain, further in view of Miyata, even further in view of Kondo, moreover in view of Gupta could consider that some containers within pods are using fewer resources than they are allocated (Gaurav [0040], [0004]), and therefore could attempt to resolve the problem by restarting the containers within a pod VM. Regarding claim 15, please see the rejection for claim 6 above. Claims 9, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mueller et al. US 20210232419 A1 (Mueller) in view of Jain et al. US 20230244392 A1 (Jain) as applied to claims 8, 17, and 19 above, and further in view of Kondo et al. US 20190068442 A1 (Kondo). Regarding claim 9, Mueller in view of Jain teaches the method of claim 8.
Mueller further teaches wherein the management action is one selected from a group of management actions consisting of: unscheduled maintenance of the data processing system; scheduled maintenance of the data processing system; and load balancing for the data processing system. ([0011]: “VM management server 116 logically groups hosts 120 into a cluster to provide cluster-level functions to hosts 120, such as load balancing across hosts 120 by performing VM migration between hosts 120,”). Mueller in view of Jain does not teach unscheduled maintenance of the data processing system or scheduled maintenance of the data processing system. However, in analogous art, Kondo teaches unscheduled maintenance of the data processing system ([0029]: “Also, for example, in a case where the current request trend follows the past request trend and the number of current requests is smaller than the number of requests in the past request trend, the processor allows the administrator to choose whether or not to immediately perform maintenance. In a case where the administrator chooses to immediately perform maintenance, the processor starts maintenance processing, for example, without performing the above-described standby.”) and scheduled maintenance of the data processing system ([0029]: “Also, in a case where the administrator chooses not to immediately perform maintenance, the processor causes maintenance processing to be started after standing by, for example, until the determined maintenance time period comes.”). It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the teaching of choosing between different maintenance approaches with the decommissioning procedures of Mueller in view of Jain.
As a result, Mueller in view of Jain could choose whether to execute a specific decommissioning procedure given whether the maintenance is scheduled to happen at a specific time, or needs to happen immediately. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to decrease the impact of VM downtimes due to maintenance. Kondo describes the advantages of performing maintenance during a scheduled time, as maintenance can be done during a time when VMs have their fewest requests (Kondo [0011]) and done in a prioritized order to decrease downtime (Kondo [0027]); thus, with Kondo’s teachings, the entire system of Mueller in view of Jain would be less affected by VM downtime due to maintenance. Regarding claim 18, please see the rejection for claim 9 above. Regarding claim 20, please see the rejection for claim 9 above. Response to arguments Applicant's arguments filed 10/29/2025 have been fully considered but they are not persuasive. Regarding applicant’s argument for the 101 rejection: Applicant argues that “..., Applicant submits that limitations (i)-(iii) recite additional, non-abstract elements that integrate the alleged judicial exception into a practical application. For example, the ‘hypervisor’ in limitations (i) and (ii) is a non-abstract, specialized software component configured to manage virtual machines and evaluate conditions for decommissioning. Original specification, ¶ [0043], [0069]-[0070]. In particular, the claimed hypervisor is required to provide ‘a plurality of pods hosted by a virtual machine’ with ‘shared access to hardware resources.’ This requirement in limitation (i), which involves a particular way of interaction between hardware resources and virtualized software components, cannot be abstractly performed in mind or implemented on a generic computer.
Similarly, the adjustment of the pod's access to the shared hardware resources, which is based on the monitoring of the hypervisor, cannot be abstractly performed in mind or implemented on a generic computer.” (pages 7-8 of the REMARKS). Further, applicant argues that “Furthermore, limitations (i)-(iii) integrate the alleged judicial exception into a practical application, e.g., managing hardware resources for services running on a virtual machine. As described in the specification, a virtual machine may be decommissioned for a variety of reasons, so it is desired to coordinate the virtual machine with the pods to avoid unrecoverable interruption of the services running in the pods. Id. at [0036]. Accordingly, a hypervisor is used to monitor the activities of the virtual machine to determine a reason for the decommissioning and adjust the pods' access to hardware resources based on the reason. Id. at ¶ [0038]-[0040]. This way, the decommissioning of the virtual machine is coordinated with the consumption of the hardware resources by the pods, and thereby coordinated with the execution of the services in the pods. Therefore, such coordination ‘may reduce an impact of decommissioning of a virtual machine on operation of pods.’ Id. at [0040]. In short, by reciting at least limitations (i)-(iii), amended claim 1 makes a practical improvement to the operations of a data processing system with virtual machines, thereby integrating the alleged abstract idea into a practical application. Under Step 2A, Prong Two, amended independent claim 1 is patent-eligible. Likewise, amended independent claims 10 and 19, and all the dependent claims, are patent-eligible. With the determination under Step 2A, it is not necessary to proceed with Step 2B.” (see page 8 of the REMARKS). Examiner respectfully disagrees. The recited hypervisor is well-understood, routine, conventional activity. Further, contrary to applicant’s argument that “...
a particular way of interaction between hardware resources and virtualized software components,”, the claim merely recites “providing, ..., shared access to hardware resources ...” and “monitoring, ..., the virtual machine to identify a decommissioning ...”, which are well-understood, routine, conventional activities of a hypervisor. Thus, the claims do not recite patent-eligible subject matter under 35 U.S.C. § 101. Regarding applicant’s argument for the 103 rejection: Applicant argues that “Despite Jain’s teachings about consolidating pods as a way to conserve resources, the consolidation is not ‘based on the type of the decommissioning.’” Examiner respectfully disagrees. As stated in the rejection for claims 1, 10, and 19 above, Mueller discloses “adjusting access ... based on the type of the decommissioning ...” (e.g., Mueller, para [0025]). Jain is relied on solely for the use of pods that are hosted by the virtual machine, i.e., containers and pods can be used interchangeably. See e.g., the abstract of Jain. Further, regarding arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: HUH et al. US 2014/0123142 A1 disclose well known functions of hypervisor (paragraph [0051]); LONAPPAN et al. US 2014/0298345 A1 disclose well known functions of hypervisor (paragraph [0012]); TAO et al. US 2017/0005935 A1 disclose well known functions of hypervisor (paragraph [0008]); Deshpande et al. US 2018/0032250 A1 disclose well known functions of hypervisor (paragraph [0015]); LU et al.
US 2022/0027183 A1 disclose well known functions of hypervisor (paragraph [0019]) Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hyung S. Sough whose telephone number is (571) 272-6799. The examiner can normally be reached Monday-Fri 7-3. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cordelia Zecher can be reached at (571) 272-7771. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. 
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /S. Sough/SPE, Art Unit 2192

Prosecution Timeline

Jan 27, 2023
Application Filed
Jul 24, 2025
Non-Final Rejection — §101, §103
Oct 29, 2025
Response Filed
Feb 28, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 10437505
EFFICIENTLY RESTORING EXECUTION OF A BACKED UP VIRTUAL MACHINE BASED ON COORDINATION WITH VIRTUAL-MACHINE-FILE-RELOCATION OPERATIONS
2y 5m to grant Granted Oct 08, 2019
Patent 10198255
METHOD AND SYSTEM FOR REDUCING INSTABILITY WHEN UPGRADING SOFTWARE
2y 5m to grant Granted Feb 05, 2019
Patent 10152406
SOFTWARE PROGRAM REPAIR
2y 5m to grant Granted Dec 11, 2018
Patent (number unavailable)
BUFFER ALLOCATION FOR NETWORK SUBSYSTEM
Granted
Patent (number unavailable)
Non Intrusive Application Mechanism
Granted
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
19%
Grant Probability
41%
With Interview (+22.5%)
2y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
