Prosecution Insights
Last updated: April 18, 2026
Application No. 18/334,592

DECLARATIVE VM MANAGEMENT FOR A CONTAINER ORCHESTRATOR IN A VIRTUALIZED COMPUTING SYSTEM

Non-Final OA: §103, §112, §DP
Filed: Jun 14, 2023
Examiner: KIM, DONG U
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: VMware, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (610 granted / 702 resolved); +31.9% vs TC avg (above average)
Interview Lift: +13.7% (moderate), based on resolved cases with vs. without an interview
Typical timeline: 2y 10m avg prosecution; 35 applications currently pending
Career history: 737 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 28.0% (-12.0% vs TC avg)
Tech Center average shown as an estimate; based on career data from 702 resolved cases.

Office Action

§103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 14 is objected to because of the following informalities: claim 14 contains a typo/markup “second_application”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 (and similarly claim 4) recites “the second VMs including non-virtualized guest operating system”. It is unclear how VMs, which are inherently virtualized, include a non-virtualized guest operating system.

Claim 8 recites the limitation “the pod VM”. There is insufficient antecedent basis for this limitation in the claim, and it is unclear which pod VM “the pod VM” refers to.

Claim 15 recites “a lifecycle controller (PLC)”, whereas the specification recites (PGPUB paragraph 40) “a pod VM lifecycle controller (PLC)”. It is unclear why the term PLC (pod VM lifecycle controller) is used without any recitation of a pod VM within the claim. Furthermore, PLC is disclosed as “a pod VM lifecycle controller”, not a “lifecycle controller”, and is thus inconsistent with the disclosure.

Claims 2-7, 9-14, and 16-20 are rejected based on the rejection of the claims from which they depend.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. 
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA . A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claim(s) 8-20 is/are rejected on the ground of nonstatutory double patenting as being unpatentable over claim(s) 8-20 of U.S. Patent No. 11,720,382. Although the claims at issue are not identical, they are not patentably distinct from each other because: Instant 11,720,382 8. A method of application orchestration in a virtualized computing system including a host cluster having a virtualization layer directly executing on hardware platforms of hosts and a virtualization management server configured to manage the virtualization layer and the hosts, the virtualization layer supporting execution of virtual machines (VMs), the virtualization layer integrated with an orchestration control plane, the method comprising: receiving, at the orchestration control plane, specification data for a first application and a second application; deploying, by a lifecycle controller executing in the orchestration control plane, the first application to a first VM in a host of the host cluster based on the specification data, the first VM including a container engine supporting execution of containers in the pod VM; and deploying, by a VM controller executing in the orchestration control plane and in cooperation with the virtualization management server, the second application to a second VM in the host, the second VM executing on the virtualization layer in parallel with the first VM. 8. 
A method of application orchestration in a virtualized computing system including a host cluster having a virtualization layer directly executing on hardware platforms of hosts and a virtualization management server configured to manage the virtualization layer and the hosts, the virtualization layer supporting execution of virtual machines (VMs), the virtualization layer integrated with an orchestration control plane, the method comprising: receiving, at a master server of the orchestration control plane, specification data for at least one application; deploying, by a pod VM lifecycle controller (PLC) executing in the master server, the at least one application to a pod VM in a host of the host cluster based on the specification data, the pod VM including a container engine supporting execution of containers in the pod VM; and deploying, by a VM controller executing in the master server and in cooperation with the virtualization management server, the at least one application to a native VM in the host, the native VM executing on the virtualization layer in parallel with the pod VM. 9. The method of claim 8, wherein the specification data specifies a VM resource referencing a VM image resource for a VM image of guest software executing in the second VM. 9. The method of claim 8, wherein the specification data specifies a VM resource referencing a VM image resource for a VM image of software executing in the native VM. 10. The method of claim 8, wherein the specification data specifies a VM resource referencing a VM profile resource having attributes of the second VM. 10. The method of claim 8, wherein the specification data specifies a VM resource referencing a VM profile resource having attributes of the native VM. 11. The method of claim 8, wherein the specification data specifies a VM resource referencing a network resource for a virtual network connected to the second VM. 11. The method of claim 8, wherein the specification data specifies a VM resource referencing a network resource for a virtual network connected to the native VM. 12. The method of claim 8, wherein the step of deploying comprises: cloning the second VM from a VM image referenced in the specification data; applying policies to the second VM based on the specification data; and starting the second VM on a selected host of the host cluster. 12. The method of claim 8, wherein the step of deploying comprises: cloning the native VM from a VM image referenced in the specification data; applying policies to the native VM based on the specification data; and starting the native VM on a selected host of the host cluster. 13. The method of claim 8, further comprising: receiving decoupled information at a management agent in the virtualization layer from the orchestration control plane through the VM controller; and providing the decoupled information for consumption by the second application executing in the second VM, the decoupled information including at least one of configuration information and secret information. 13. The method of claim 8, further comprising: receiving decoupled information at a management agent in the virtualization layer from the master server through the VM controller; and providing the decoupled information for consumption by the at least one application executing in the native VM, the decoupled information including at least one of configuration information and secret information. 14. The method of claim 8, wherein the second application in the second VM is non-containerized. 14. 
The method of claim 8, wherein the at least one application in the native VM is non-containerized. 15. A non-transitory computer readable medium comprising instructions to be executed in a computing device to cause the computing device to carry out a method of application orchestration in a virtualized computing system including a host cluster having a virtualization layer directly executing on hardware platforms of hosts and a virtualization management server configured to manage the virtualization layer and the hosts, the virtualization layer supporting execution of virtual machines (VMs), the virtualization layer integrated with an orchestration control plane, the method comprising: receiving, at the orchestration control plane, specification data for a first application and a second application; deploying, by a lifecycle controller (PLC), the first application to a first VM in a host of the host cluster based on the specification data, the first VM including a container engine supporting execution of containers in the first VM; and deploying, by a VM controller executing in the orchestration control plane and in cooperation with the virtualization management server, the second application to a second VM in the host, the second VM executing on the virtualization layer in parallel with the first VM. 15. A non-transitory computer readable medium comprising instructions to be executed in a computing device to cause the computing device to carry out a method of application orchestration in a virtualized computing system including a host cluster having a virtualization layer directly executing on hardware platforms of hosts and a virtualization management server configured to manage the virtualization layer and the hosts, the virtualization layer supporting execution of virtual machines (VMs), the virtualization layer integrated with an orchestration control plane, the method comprising: receiving, at a master server of the orchestration control plane, specification data for at least one application; deploying, by a pod VM lifecycle controller (PLC) executing in the master server, the at least one application to a pod VM in a host of the host cluster based on the specification data, the pod VM including a container engine supporting execution of containers in the pod VM; and deploying, by a VM controller executing in the master server and in cooperation with the virtualization management server, the at least one application to a native VM in the host, the native VM executing on the virtualization layer in parallel with the pod VM. 16. The non-transitory computer readable medium of claim 15, wherein the specification data specifies a VM resource referencing a VM image resource for a VM image of guest software executing in the second VM. 16. The non-transitory computer readable medium of claim 15, wherein the specification data specifies a VM resource referencing a VM image resource for a VM image of software executing in the native VM. 17. The non-transitory computer readable medium of claim 15, wherein the specification data specifies a VM resource referencing a VM profile resource having attributes of the second VM. 17. The non-transitory computer readable medium of claim 15, wherein the specification data specifies a VM resource referencing a VM profile resource having attributes of the native VM. 18. 
The non-transitory computer readable medium of claim 15, wherein the specification data specifies a VM resource referencing a network resource for a virtual network connected to the second VM. 18. The non-transitory computer readable medium of claim 15, wherein the specification data specifies a VM resource referencing a network resource for a virtual network connected to the native VM. 19. The non-transitory computer readable medium of claim 15, wherein the step of deploying comprises: cloning the second VM from a VM image referenced in the specification data; applying policies to the second VM based on the specification data; and starting the second VM on a selected host of the host cluster. 19. The non-transitory computer readable medium of claim 15, wherein the step of deploying comprises: cloning the native VM from a VM image referenced in the specification data; applying policies to the native VM based on the specification data; and starting the native VM on a selected host of the host cluster. 20. The non-transitory computer readable medium of claim 15, wherein the second application in the second VM is non-containerized. 20. The non-transitory computer readable medium of claim 15, wherein the at least one application in the native VM is non-containerized. Claim(s) 8, 13-15 and 20 is/are rejected on the ground of nonstatutory double patenting as being unpatentable over claim(s) 15 and 16 of U.S. Patent No. 11,263,041. Although the claims at issue are not identical, they are not patentably distinct from each other. Furthermore, claims 15 and 20 are non-transitory computer readable medium claims corresponding to method claims 8 and 14 therefore, it would have obvious to implement a non-transitory computer readable medium claims based on the corresponding method claims. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rao et al. (Pub 20200019444) (hereafter Rao) in view of Gummaraju et al. (Pub 20150120928) (hereafter Gummaraju). 
As per claim 1, Rao teaches: A virtualized computing system, comprising: a host cluster having a virtualization layer executing on hardware platforms of hosts, the virtualization layer supporting execution of virtual machines (VMs), the VMs including first VMs and second VMs, the first VMs including container engines supporting execution of containers in the first VMs, the second VMs including non-virtualized guest operating systems; ([Paragraph 25], Three computers that implement a cluster of nodes are shown also connected to the network. These computers are Host 1 120, Host 2 130, and Host N 150. Host 1 120, Host 2 130, and Host N 150 are computer systems (host machines) which may include thereon one or more containers, one or more virtual machines (VMs), or one or more native applications. These host machines are typically self-sufficient, including a processor (or multiple processors), memory, and instructions thereon. Host 1 120, Host 2 130, and Host N 150 are each computers that together implement a cluster. [Paragraph 29], Alternatively, a host can include a plurality of such, like in the example of Host 2. In some cases, instances of the container, virtual machine, or native application may be replicated on more than one host. This is shown here as first instances of Container 1 122, Container 2 124, and Container 3 126 on Host 1 120, and second instances of each are Container 1 138, Container 2 142, and Container 3 144 on Host 2. In addition, first instances of VM 2 132 and VM 1 134 are on Host 2 130, and second instances of VM 2 154 and VM 1 152 are on Host N 150. [Paragraph 28], Host N includes instances of four virtual machines: VM 2 154, VM 1 152, VM 3 156, and VM 4 158. A virtual machine (VM) is an operating system or application environment that is installed as software, which imitates dedicated hardware. The virtual machine imitates the dedicated hardware, providing the end user with the same experience on the virtual machine as they would have on dedicated hardware. [Paragraph 30], The cluster in the example is managed by cluster load balancer system 102. System 102 may use one or more programs to deploy, scale, and manage machines and software in the cluster as an orchestration environment. Non-limiting examples of such programs/systems are Kubernetes, Apache Hadoop, and Docker. Applications operating on such a system can include database application such as Oracle database systems utilizing structured query language (SQL) databases. Note that the terms “KUBERNETES”, “ORACLE”, “APACHE”, “HADOOP”, and “DOCKER””…) Although Rao teaches lifecycle of VMs (i.e. scaling). Rao does not explicitly disclose a virtualization management server configured to manage the virtualization layer and the host cluster; and an orchestration control plane integrated with the virtualization layer, the orchestration control plane including: a lifecycle controller configured to cooperate with the virtualization layer to manage lifecycles of the first VMs, and a VM controller configured to cooperate with the virtualization management server to manage lifecycles of the second VMs. 
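For orientation, the two-controller arrangement that claim 1 recites (a lifecycle controller cooperating with the virtualization layer for the container-hosting first VMs, and a VM controller cooperating with the virtualization management server for the second VMs) can be sketched roughly as below. This is an illustrative sketch only; the class and method names are assumptions and do not come from the application or the cited references.

    # Illustrative sketch (names assumed, not from the record): an
    # orchestration control plane holding two controllers that drive two
    # different VM classes through two different management paths.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PodVM:                      # "first VM": hosts a container engine
        name: str
        containers: List[str] = field(default_factory=list)

    @dataclass
    class NativeVM:                   # "second VM": conventional guest OS
        name: str
        image: str

    class PodVMLifecycleController:
        """Cooperates directly with the virtualization layer (hypervisor)."""
        def __init__(self, virtualization_layer):
            self.virt = virtualization_layer
        def reconcile(self, desired: List[PodVM]):
            for vm in desired:
                self.virt.ensure_pod_vm(vm)      # create/start pod VMs

    class VMController:
        """Cooperates with the virtualization management server instead."""
        def __init__(self, mgmt_server):
            self.mgmt = mgmt_server
        def reconcile(self, desired: List[NativeVM]):
            for vm in desired:
                self.mgmt.ensure_native_vm(vm)   # clone/configure/power on

    class OrchestrationControlPlane:
        """Integrates both controllers, as the claimed control plane does."""
        def __init__(self, virtualization_layer, mgmt_server):
            self.lifecycle_controller = PodVMLifecycleController(virtualization_layer)
            self.vm_controller = VMController(mgmt_server)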
Gummaraju teaches a virtualization management server configured to manage the virtualization layer and the host cluster; and an orchestration control plane integrated with the virtualization layer, the orchestration control plane including: a lifecycle controller configured to cooperate with the virtualization layer to manage lifecycles of the first VMs, and a VM controller configured to cooperate with the virtualization management server to manage lifecycles of the second VMs. ([Paragraph 16], Referring back to FIG. 1, computing system 100 includes a virtualization management module 104 that may communicate to the plurality of hosts 108 via network 110. In one embodiment, virtualization management module 104 is a computer program that resides and executes in a central server, which may reside in computing system 100, or alternatively, running as a VM in one of hosts 108. One example of a virtualization management module is the vCenter.RTM. Server product made available from VMware, Inc. Virtualization management module 104 is configured to carry out administrative tasks for the computing system 100, including managing hosts 108, managing VMs running within each host 108, provisioning VMs, migrating VMs from one host to another host, load balancing between hosts 108, creating resource pools 114 comprised of computing resources of hosts 108 and VMs 112, modifying resource pools 114 to allocate and de-allocate VMs and physical resources, and modifying configurations of resource pools 114. In one embodiment, virtualization management module 104 may issue commands to power on, power off, reset, clone, deploy, and provision one or more VMs 112 executing on a particular host 108. In one embodiment, virtualization management module 104 is configured to communicate with hosts 108 to collect performance data and generate performance metrics (e.g., counters, statistics) related to availability, status, and performance of hosts 108, VMs 112, and resource pools 114. [Paragraph 11], Each host 108 is configured to provide a virtualization layer that abstracts processor, memory, storage, and networking resources of a hardware platform 118 into multiple virtual machines (VMs) 112 that run concurrently on each of hosts 108. VMs 112 run on top of a software interface layer, referred to herein as a hypervisor 116, that enables sharing of the hardware resources of each of hosts 108 by the VMs 112. One example of hypervisor 116 that may be used in an embodiment described herein is a VMware ESXi hypervisor provided as part of the VMware vSphere solution made commercially available from VMware, Inc.) It would have been obvious to a person with ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Rao wherein a host cluster having virtualization layer which executes VMs with containers for executing container applications which can be dynamically scaled to meet demands, into teachings of Gummaraju wherein a management module of a server which manages virtualization layer and the host cluster to manage VMs lifecycle, cooperate with virtualization management server via lifecycle controller and an orchestration control plane, because this would enhance the teachings of Rao wherein by having a virtualization management server configured to manage the virtualization layer and the host cluster allows, management of administrative tasks for computer system including managing hosts, managing VMs (i.e. 
power on, power off, migration, reset, clone, etc.), collect metrics and allocate resources to provide various services to multiple users/clients within a multi-tenant environment.. As per claim 2, rejection of claim 1 is incorporated: Rao teaches wherein the second VMs execute non-containerized applications. ([Paragraph 25], Three computers that implement a cluster of nodes are shown also connected to the network. These computers are Host 1 120, Host 2 130, and Host N 150. Host 1 120, Host 2 130, and Host N 150 are computer systems (host machines) which may include thereon one or more containers, one or more virtual machines (VMs), or one or more native applications.) Gummaraju also teaches ([Paragraph 15], For example, if distributed computing application 124 is a Hadoop application, a VM 112 may have a runtime environment 218 (e.g., JVM) that executes distributed software component code 220 implementing a "Resource Manager" function, "Application Master" function, "Node Manager" function, "Container" function, "Name Node" function, "Data Node" function, "VM Pool Manager" function, and other functions, described further below. Alternatively, each VM 112 may include distributed software component code 220 for distributed computing application 124 configured to run natively on top of guest OS 216.) As per claim 3, rejection of claim 1 is incorporated: Gummaraju teaches wherein the orchestration control plane includes custom APIs to manage objects monitored by the VM controller. ([Paragraph 43], In one embodiment, a plurality of VMs may have been provisioned on each host when distributed computing application 124 was deployed. In other embodiments, VM pool manager 132 may dynamically provision (e.g., via API call to virtualization management module 104) the plurality of VMs at launch of distributed computing application 124. In either embodiment, VM pool manager 132 may power on a subset of the provisioned VMs based on a target pool size. The target pool size specifies threshold values for managing compute VMs 134 using power-on, power-off, and reset operations. In one embodiment, VM pool manager 132 powers on provisioned VMs until the target pool size is reached.) As per claim 4, rejection of claim 3 is incorporated: Gummaraju teaches wherein the objects include VM objects for the second VMs and VM image objects for VM images of guest software executing in the second VMs, the guest software including the non-virtualized guest operating system. ([Paragraph 43], In one embodiment, a plurality of VMs may have been provisioned on each host when distributed computing application 124 was deployed. In other embodiments, VM pool manager 132 may dynamically provision (e.g., via API call to virtualization management module 104) the plurality of VMs at launch of distributed computing application 124. In either embodiment, VM pool manager 132 may power on a subset of the provisioned VMs based on a target pool size. The target pool size specifies threshold values for managing compute VMs 134 using power-on, power-off, and reset operations. In one embodiment, VM pool manager 132 powers on provisioned VMs until the target pool size is reached. [Paragraph 44], At step 604, node manager 130 receives a request to execute a first task of a plurality of tasks on the first host. As described above, a job may be broken down into a plurality of tasks that can be executed in parallel. 
In one embodiment, an application master 138, having been allocated containers by resource manager 126, may transmit (e.g., via API call) a container launch request to node manager 130 to launch a container that executes one or more tasks from the plurality of tasks… [Paragraph 15], For example, if distributed computing application 124 is a Hadoop application, a VM 112 may have a runtime environment 218 (e.g., JVM) that executes distributed software component code 220 implementing a "Resource Manager" function, "Application Master" function, "Node Manager" function, "Container" function, "Name Node" function, "Data Node" function, "VM Pool Manager" function, and other functions, described further below. Alternatively, each VM 112 may include distributed software component code 220 for distributed computing application 124 configured to run natively on top of guest OS 216. [Paragraph 16], Referring back to FIG. 1, computing system 100 includes a virtualization management module 104 that may communicate to the plurality of hosts 108 via network 110. In one embodiment, virtualization management module 104 is a computer program that resides and executes in a central server, which may reside in computing system 100, or alternatively, running as a VM in one of hosts 108. One example of a virtualization management module is the vCenter.RTM. Server product made available from VMware, Inc. Virtualization management module 104 is configured to carry out administrative tasks for the computing system 100, including managing hosts 108, managing VMs running within each host 108, provisioning VMs, migrating VMs from one host to another host, load balancing between hosts 108, creating resource pools 114 comprised of computing resources of hosts 108 and VMs 112, modifying resource pools 114 to allocate and de-allocate VMs and physical resources, and modifying configurations of resource pools 114. In one embodiment, virtualization management module 10) As per claim 5, rejection of claim 3 is incorporated: Gummaraju teaches wherein the objects include VM service objects for exposing network services of the second VMs. ([Paragraph 43], In one embodiment, a plurality of VMs may have been provisioned on each host when distributed computing application 124 was deployed. In other embodiments, VM pool manager 132 may dynamically provision (e.g., via API call to virtualization management module 104) the plurality of VMs at launch of distributed computing application 124. In either embodiment, VM pool manager 132 may power on a subset of the provisioned VMs based on a target pool size. The target pool size specifies threshold values for managing compute VMs 134 using power-on, power-off, and reset operations. In one embodiment, VM pool manager 132 powers on provisioned VMs until the target pool size is reached. [Paragraph 44], At step 604, node manager 130 receives a request to execute a first task of a plurality of tasks on the first host. As described above, a job may be broken down into a plurality of tasks that can be executed in parallel. 
In one embodiment, an application master 138, having been allocated containers by resource manager 126, may transmit (e.g., via API call) a container launch request to node manager 130 to launch a container that executes one or more tasks from the plurality of tasks… [Paragraph 15], For example, if distributed computing application 124 is a Hadoop application, a VM 112 may have a runtime environment 218 (e.g., JVM) that executes distributed software component code 220 implementing a "Resource Manager" function, "Application Master" function, "Node Manager" function, "Container" function, "Name Node" function, "Data Node" function, "VM Pool Manager" function, and other functions, described further below. Alternatively, each VM 112 may include distributed software component code 220 for distributed computing application 124 configured to run natively on top of guest OS 216. [Paragraph 16], Referring back to FIG. 1, computing system 100 includes a virtualization management module 104 that may communicate to the plurality of hosts 108 via network 110. In one embodiment, virtualization management module 104 is a computer program that resides and executes in a central server, which may reside in computing system 100, or alternatively, running as a VM in one of hosts 108. One example of a virtualization management module is the vCenter.RTM. Server product made available from VMware, Inc. Virtualization management module 104 is configured to carry out administrative tasks for the computing system 100, including managing hosts 108, managing VMs running within each host 108, provisioning VMs, migrating VMs from one host to another host, load balancing between hosts 108, creating resource pools 114 comprised of computing resources of hosts 108 and VMs 112, modifying resource pools 114 to allocate and de-allocate VMs and physical resources, and modifying configurations of resource pools 114. In one embodiment, virtualization management module 10. [Paragraph 21], In one embodiment, each node manager 130 (e.g., executing on a VM 112 on a host 108) is configured to launch one or more compute VMs 134 as containers, manage compute VMs 134 executing on that host, monitor resource usage (e.g., CPU, memory, disk, network) of each compute VM 134, and report resource usage and performance metrics to resource manager 126. [Paragraph 50], At step 620, node manager 130 modifies the first VM to mount a network filesystem shared by VMs executing on the first host and associated with the first tenant. In one embodiment, a network filesystem (e.g., NFS) provided by node manager 130 may be mounted within the first VM at a mount point associated with the first tenant. [Paragraph 13], As shown, VMs 112 of hosts 108 may be provisioned and used to execute a number of workloads that deliver information technology services, including web services, database services, data processing services, and directory services.) As per claim 6, rejection of claim 3 is incorporated: Gummaraju teaches wherein the objects include virtual network resource objects for representing networks consumed by the second VMs. ([Paragraph 43], In one embodiment, a plurality of VMs may have been provisioned on each host when distributed computing application 124 was deployed. In other embodiments, VM pool manager 132 may dynamically provision (e.g., via API call to virtualization management module 104) the plurality of VMs at launch of distributed computing application 124. 
In either embodiment, VM pool manager 132 may power on a subset of the provisioned VMs based on a target pool size. The target pool size specifies threshold values for managing compute VMs 134 using power-on, power-off, and reset operations. In one embodiment, VM pool manager 132 powers on provisioned VMs until the target pool size is reached. [Paragraph 44], At step 604, node manager 130 receives a request to execute a first task of a plurality of tasks on the first host. As described above, a job may be broken down into a plurality of tasks that can be executed in parallel. In one embodiment, an application master 138, having been allocated containers by resource manager 126, may transmit (e.g., via API call) a container launch request to node manager 130 to launch a container that executes one or more tasks from the plurality of tasks… [Paragraph 15], For example, if distributed computing application 124 is a Hadoop application, a VM 112 may have a runtime environment 218 (e.g., JVM) that executes distributed software component code 220 implementing a "Resource Manager" function, "Application Master" function, "Node Manager" function, "Container" function, "Name Node" function, "Data Node" function, "VM Pool Manager" function, and other functions, described further below. Alternatively, each VM 112 may include distributed software component code 220 for distributed computing application 124 configured to run natively on top of guest OS 216. [Paragraph 16], Referring back to FIG. 1, computing system 100 includes a virtualization management module 104 that may communicate to the plurality of hosts 108 via network 110. In one embodiment, virtualization management module 104 is a computer program that resides and executes in a central server, which may reside in computing system 100, or alternatively, running as a VM in one of hosts 108. One example of a virtualization management module is the vCenter.RTM. Server product made available from VMware, Inc. Virtualization management module 104 is configured to carry out administrative tasks for the computing system 100, including managing hosts 108, managing VMs running within each host 108, provisioning VMs, migrating VMs from one host to another host, load balancing between hosts 108, creating resource pools 114 comprised of computing resources of hosts 108 and VMs 112, modifying resource pools 114 to allocate and de-allocate VMs and physical resources, and modifying configurations of resource pools 114. In one embodiment, virtualization management module 10. [Paragraph 21], In one embodiment, each node manager 130 (e.g., executing on a VM 112 on a host 108) is configured to launch one or more compute VMs 134 as containers, manage compute VMs 134 executing on that host, monitor resource usage (e.g., CPU, memory, disk, network) of each compute VM 134, and report resource usage and performance metrics to resource manager 126. [Paragraph 50], At step 620, node manager 130 modifies the first VM to mount a network filesystem shared by VMs executing on the first host and associated with the first tenant. In one embodiment, a network filesystem (e.g., NFS) provided by node manager 130 may be mounted within the first VM at a mount point associated with the first tenant. [Paragraph 13], As shown, VMs 112 of hosts 108 may be provisioned and used to execute a number of workloads that deliver information technology services, including web services, database services, data processing services, and directory services.) 
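Claims 3-6 describe custom API objects (VM objects, VM image objects, VM service objects, and virtual network resource objects) that the VM controller monitors. The following is a rough, hypothetical illustration of what such object schemas could look like; the field names are assumptions and are not taken from the specification or the cited art.

    # Hypothetical object shapes for the custom APIs of claims 3-6.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VMImageObject:              # claim 4: image of guest software
        name: str
        guest_software: str           # e.g., guest OS plus application bits

    @dataclass
    class VirtualNetworkObject:       # claim 6: network consumed by the VM
        name: str
        subnet: str

    @dataclass
    class VMServiceObject:            # claim 5: exposes network services
        name: str
        vm_selector: str
        ports: List[int] = field(default_factory=list)

    @dataclass
    class VMObject:                   # claim 4: one object per second VM
        name: str
        image_ref: str                # references a VMImageObject by name
        network_refs: List[str] = field(default_factory=list)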
As per claim 7, rejection of claim 3 is incorporated: Gummaraju teaches wherein the VM controller is configured to communicate with a controller in the virtualization layer to provide decoupled information to the second VMs. ([Paragraph 14], Memory 202 and local storage 206 are devices allowing information, such as executable instructions, cryptographic keys, virtual disks, configurations, and other data, to be stored and retrieved. [Paragraph 44], At step 604, node manager 130 receives a request to execute a first task of a plurality of tasks on the first host. As described above, a job may be broken down into a plurality of tasks that can be executed in parallel. In one embodiment, an application master 138, having been allocated containers by resource manager 126, may transmit (e.g., via API call) a container launch request to node manager 130 to launch a container that executes one or more tasks from the plurality of tasks. The container launch request may contain information needed by node manager 130 to launch a container including, but not limited to, a container identifier, a tenant identifier for whom the container is allocated, and security tokens used for authenticating the container.) As per claim 8, Rao teaches: A method of application orchestration in a virtualized computing system including a host cluster having a virtualization layer directly executing on hardware platforms of hosts and a virtualization management server configured to manage the virtualization layer and the hosts, the virtualization layer supporting execution of virtual machines (VMs), the virtualization layer integrated with an orchestration control plane, the method comprising: ([Paragraph 25], Three computers that implement a cluster of nodes are shown also connected to the network. These computers are Host 1 120, Host 2 130, and Host N 150. Host 1 120, Host 2 130, and Host N 150 are computer systems (host machines) which may include thereon one or more containers, one or more virtual machines (VMs), or one or more native applications. These host machines are typically self-sufficient, including a processor (or multiple processors), memory, and instructions thereon. Host 1 120, Host 2 130, and Host N 150 are each computers that together implement a cluster. [Paragraph 29], Alternatively, a host can include a plurality of such, like in the example of Host 2. In some cases, instances of the container, virtual machine, or native application may be replicated on more than one host. This is shown here as first instances of Container 1 122, Container 2 124, and Container 3 126 on Host 1 120, and second instances of each are Container 1 138, Container 2 142, and Container 3 144 on Host 2. In addition, first instances of VM 2 132 and VM 1 134 are on Host 2 130, and second instances of VM 2 154 and VM 1 152 are on Host N 150. [Paragraph 28], Host N includes instances of four virtual machines: VM 2 154, VM 1 152, VM 3 156, and VM 4 158. A virtual machine (VM) is an operating system or application environment that is installed as software, which imitates dedicated hardware. The virtual machine imitates the dedicated hardware, providing the end user with the same experience on the virtual machine as they would have on dedicated hardware. [Paragraph 30], The cluster in the example is managed by cluster load balancer system 102. System 102 may use one or more programs to deploy, scale, and manage machines and software in the cluster as an orchestration environment. 
Non-limiting examples of such programs/systems are Kubernetes, Apache Hadoop, and Docker. Applications operating on such a system can include database application such as Oracle database systems utilizing structured query language (SQL) databases. Note that the terms “KUBERNETES”, “ORACLE”, “APACHE”, “HADOOP”, and “DOCKER””…) Rao teaches receiving requests to execute tasks (i.e. application request) and managing VMs/containers to execute native and/or non-native application(s). However, Rao does not explicitly disclose receiving, at the orchestration control plane, specification data for a first application and a second application; deploying, by a lifecycle controller executing in the orchestration control plane, the first application to a first VM in a host of the host cluster based on the specification data, the first VM including a container engine supporting execution of containers in the pod VM; and deploying, by a VM controller executing in the orchestration control plane and in cooperation with the virtualization management server, the second application to a second VM in the host, the second VM executing on the virtualization layer in parallel with the first VM. Gummaraju teaches receiving, at the orchestration control plane, specification data for a first application and a second application; ([Paragraph 31], Upon successfully obtaining a set of containers, at 309, application master 138-1 provides container launch specification information to node managers 130, which handles launching of the containers. Application master 138-1 may monitor progress of launched containers via communications with each node manager 130 (at 311). [Paragraph 44], At step 604, node manager 130 receives a request to execute a first task of a plurality of tasks on the first host. As described above, a job may be broken down into a plurality of tasks that can be executed in parallel. In one embodiment, an application master 138, having been allocated containers by resource manager 126, may transmit (e.g., via API call) a container launch request to node manager 130 to launch a container that executes one or more tasks from the plurality of tasks. The container launch request may contain information needed by node manager 130 to launch a container including, but not limited to, a container identifier, a tenant identifier for whom the container is allocated, and security tokens used for authenticating the container. In one embodiment, the container launch request may be configured to launch a process that executes the task, and may include one or more commands (e.g., command line) to launch the container, initialize environment variables and configure local resources needed for running the container (e.g., binaries, shared objects, side files, libraries, Java archive files or JAR files).) deploying, by a lifecycle controller executing in the orchestration control plane, the first application to a first VM in a host of the host cluster based on the specification data, the first VM including a container engine supporting execution of containers in the pod VM; and deploying, by a VM controller executing in the orchestration control plane and in cooperation with the virtualization management server, the second application to a second VM in the host, the second VM executing on the virtualization layer in parallel with the first VM. ([Paragraph 16], Referring back to FIG. 1, computing system 100 includes a virtualization management module 104 that may communicate to the plurality of hosts 108 via network 110. 
In one embodiment, virtualization management module 104 is a computer program that resides and executes in a central server, which may reside in computing system 100, or alternatively, running as a VM in one of hosts 108. One example of a virtualization management module is the vCenter.RTM. Server product made available from VMware, Inc. Virtualization management module 104 is configured to carry out administrative tasks for the computing system 100, including managing hosts 108, managing VMs running within each host 108, provisioning VMs, migrating VMs from one host to another host, load balancing between hosts 108, creating resource pools 114 comprised of computing resources of hosts 108 and VMs 112, modifying resource pools 114 to allocate and de-allocate VMs and physical resources, and modifying configurations of resource pools 114. In one embodiment, virtualization management module 104 may issue commands to power on, power off, reset, clone, deploy, and provision one or more VMs 112 executing on a particular host 108. In one embodiment, virtualization management module 104 is configured to communicate with hosts 108 to collect performance data and generate performance metrics (e.g., counters, statistics) related to availability, status, and performance of hosts 108, VMs 112, and resource pools 114. [Paragraph 11], Each host 108 is configured to provide a virtualization layer that abstracts processor, memory, storage, and networking resources of a hardware platform 118 into multiple virtual machines (VMs) 112 that run concurrently on each of hosts 108. VMs 112 run on top of a software interface layer, referred to herein as a hypervisor 116, that enables sharing of the hardware resources of each of hosts 108 by the VMs 112. One example of hypervisor 116 that may be used in an embodiment described herein is a VMware ESXi hypervisor provided as part of the VMware vSphere solution made commercially available from VMware, Inc. [Paragraph 11], FIG. 1 is a block diagram that illustrates a computing system 100 with which one or more embodiments of the present disclosure may be utilized. As illustrated, computing system 100 includes a plurality of host computers, identified as hosts 108-1, 108-2, 108-3, and 108-4, and referred to collectively as hosts 108. Each host 108 is configured to provide a virtualization layer that abstracts processor, memory, storage, and networking resources of a hardware platform 118 into multiple virtual machines (VMs) 112 that run concurrently on each of hosts 108. VMs 112 run on top of a software interface layer, referred to herein as a hypervisor 116, that enables sharing of the hardware resources of each of hosts 108 by the VMs 112. One example of hypervisor 116 that may be used in an embodiment described herein is a VMware ESXi hypervisor provided as part of the VMware vSphere solution made commercially available from VMware, Inc.) 
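Claims 8-11 turn on "specification data" that names both a containerized first application and a VM-backed second application, with the second referencing a VM image resource, a VM profile resource, and a network resource. One plausible, purely illustrative shape for such data is sketched below; every key and value here is an assumption, not text from the application.

    # Purely illustrative specification data for claims 8-11 (keys/values assumed).
    specification_data = {
        "applications": [
            {
                "name": "first-app",
                "kind": "PodVM",                    # first VM runs a container engine
                "containers": [{"image": "registry.example/web:1.0"}],
            },
            {
                "name": "second-app",
                "kind": "VM",                       # second VM, non-containerized app
                "vmImageRef": "db-guest-image",     # claim 9: VM image resource
                "vmProfileRef": "large-4cpu-16gb",  # claim 10: VM profile resource
                "networkRef": "app-tier-network",   # claim 11: virtual network resource
            },
        ],
    }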
It would have been obvious to a person with ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Rao wherein a host cluster having virtualization layer which executes VMs with containers for executing container applications which can be dynamically scaled to meet demands, into teachings of Gummaraju wherein application specification data/information is received to deploy application(s) to an appropriate VM(s)/container via lifecycle controller and VMs are executed in parallel, because this would enhance the teachings of Rao wherein by having a virtualization management server configured to manage the virtualization layer and the host cluster allows, management of administrative tasks for computer system including managing hosts, managing/execute VMs (i.e. power on, power off, migration, reset, clone, etc.) concurrently, collect metrics and allocate resources to provide various services to multiple users/clients within a multi-tenant environment. As per claim 9, rejection of claim 8 is incorporated: Gummaraju teaches wherein the specification data specifies a VM resource referencing a VM image resource for a VM image of guest software executing in the second VM. ([Paragraph 31], Upon successfully obtaining a set of containers, at 309, application master 138-1 provides container launch specification information to node managers 130, which handles launching of the containers. Application master 138-1 may monitor progress of launched containers via communications with each node manager 130 (at 311). [Paragraph 44], At step 604, node manager 130 receives a request to execute a first task of a plurality of tasks on the first host. As described above, a job may be broken down into a plurality of tasks that can be executed in parallel. In one embodiment, an application master 138, having been allocated containers by resource manager 126, may transmit (e.g., via API call) a container launch request to node manager 130 to launch a container that executes one or more tasks from the plurality of tasks. The container launch request may contain information needed by node manager 130 to launch a container including, but not limited to, a container identifier, a tenant identifier for whom the container is allocated, and security tokens used for authenticating the container. In one embodiment, the container launch request may be configured to launch a process that executes the task, and may include one or more commands (e.g., command line) to launch the container, initialize environment variables and configure local resources needed for running the container (e.g., binaries, shared objects, side files, libraries, Java archive files or JAR files). [Paragraph 27], In one embodiment, a compute VM 134 may be a "lightweight" VM configured to instantiate quickly relative to conventional VMs. In some embodiments, each compute VM 134 may include a content-based read cache (CBRC) that is used to store a boot image of the compute VM in memory. The CBRC uses a RAM-based configured to cache disk blocks of a virtual machine disk file (VMDK), and serve I/O requests from the CBRC-enabled virtual machine. In one embodiment, the compute VMs may be created as linked clones from a common parent that has a substantial portion of the boot image stored in the CBRC. In this way, only one copy of the "common" boot image in the content-based read cache across multiple compute VMs. 
An example of content-based read cache may be found in the vSphere 5.0 product made commercially available by VMware, Inc. In some embodiments, each compute VM 134 may be configured to optimize a boot loader used to start each compute VM (i.e., GNU GRUB), and remove extraneous services and devices that might be found in conventional VMs, but are not related to or needed for launching containers. These optimized compute VM 134 configurations may reduce the time needed to ready a compute VM (i.e., boot and power on), from about 30 seconds to under 3 seconds.) Rao also teaches ([Paragraph 26], Host 1 includes instances of three containers: Container 1 122, Container 2 124, and Container 3 126. A container image is a lightweight, stand-alone, executable package of software that includes everything needed to perform a role that includes one or more tasks. The container can include code, runtime libraries, system tools, system libraries, and/or configuration settings. Containerized software operates with some independence regarding the host machine/environment. Thus, containers serve to isolate software from their surroundings.) As per claim 10, rejection of claim 8 is incorporated: Gummaraju teaches wherein the specification data specifies a VM resource referencing a VM profile resource having attributes of the second VM. ([Paragraph 31], Upon successfully obtaining a set of containers, at 309, application master 138-1 provides container launch specification information to node managers 130, which handles launching of the containers. Application master 138-1 may monitor progress of launched containers via communications with each node manager 130 (at 311). [Paragraph 44], At step 604, node manager 130 receives a request to execute a first task of a plurality of tasks on the first host. As described above, a job may be broken down into a plurality of tasks that can be executed in parallel. In one embodiment, an application master 138, having been allocated containers by resource manager 126, may transmit (e.g., via API call) a container launch request to node manager 130 to launch a container that executes one or more tasks from the plurality of tasks. The container launch request may contain information needed by node manager 130 to launch a container including, but not limited to, a container identifier, a tenant identifier for whom the container is allocated, and security tokens used for authenticating the container. In one embodiment, the container launch request may be configured to launch a process that executes the task, and may include one or more commands (e.g., command line) to launch the container, initialize environment variables and configure local resources needed for running the container (e.g., binaries, shared objects, side files, libraries, Java archive files or JAR files). [Paragraph 27], In one embodiment, a compute VM 134 may be a "lightweight" VM configured to instantiate quickly relative to conventional VMs. In some embodiments, each compute VM 134 may include a content-based read cache (CBRC) that is used to store a boot image of the compute VM in memory. The CBRC uses a RAM-based configured to cache disk blocks of a virtual machine disk file (VMDK), and serve I/O requests from the CBRC-enabled virtual machine. In one embodiment, the compute VMs may be created as linked clones from a common parent that has a substantial portion of the boot image stored in the CBRC. 
In this way, only one copy of the "common" boot image in the content-based read cache across multiple compute VMs. An example of content-based read cache may be found in the vSphere 5.0 product made commercially available by VMware, Inc. In some embodiments, each compute VM 134 may be configured to optimize a boot loader used to start each compute VM (i.e., GNU GRUB), and remove extraneous services and devices that might be found in conventional VMs, but are not related to or needed for launching containers. These optimized compute VM 134 configurations may reduce the time needed to ready a compute VM (i.e., boot and power on), from about 30 seconds to under 3 seconds.) As per claim 11, rejection of claim 8 is incorporated: Gummaraju teaches wherein the specification data specifies a VM resource referencing a network resource for a virtual network connected to the second VM. ([Paragraph 31], Upon successfully obtaining a set of containers, at 309, application master 138-1 provides container launch specification information to node managers 130, which handles launching of the containers. Application master 138-1 may monitor progress of launched containers via communications with each node manager 130 (at 311). [Paragraph 44], At step 604, node manager 130 receives a request to execute a first task of a plurality of tasks on the first host. As described above, a job may be broken down into a plurality of tasks that can be executed in parallel. In one embodiment, an application master 138, having been allocated containers by resource manager 126, may transmit (e.g., via API call) a container launch request to node manager 130 to launch a container that executes one or more tasks from the plurality of tasks. The container launch request may contain information needed by node manager 130 to launch a container including, but not limited to, a container identifier, a tenant identifier for whom the container is allocated, and security tokens used for authenticating the container. In one embodiment, the container launch request may be configured to launch a process that executes the task, and may include one or more commands (e.g., command line) to launch the container, initialize environment variables and configure local resources needed for running the container (e.g., binaries, shared objects, side files, libraries, Java archive files or JAR files). [Paragraph 27], In one embodiment, a compute VM 134 may be a "lightweight" VM configured to instantiate quickly relative to conventional VMs. In some embodiments, each compute VM 134 may include a content-based read cache (CBRC) that is used to store a boot image of the compute VM in memory. The CBRC uses a RAM-based configured to cache disk blocks of a virtual machine disk file (VMDK), and serve I/O requests from the CBRC-enabled virtual machine. In one embodiment, the compute VMs may be created as linked clones from a common parent that has a substantial portion of the boot image stored in the CBRC. In this way, only one copy of the "common" boot image in the content-based read cache across multiple compute VMs. An example of content-based read cache may be found in the vSphere 5.0 product made commercially available by VMware, Inc. In some embodiments, each compute VM 134 may be configured to optimize a boot loader used to start each compute VM (i.e., GNU GRUB), and remove extraneous services and devices that might be found in conventional VMs, but are not related to or needed for launching containers. 
As per claim 12, rejection of claim 8 is incorporated: Gummaraju teaches wherein the step of deploying comprises: cloning the second VM from a VM image referenced in the specification data; applying policies to the second VM based on the specification data; and starting the second VM on a selected host of the host cluster. ([Paragraph 16], Referring back to FIG. 1, computing system 100 includes a virtualization management module 104 that may communicate to the plurality of hosts 108 via network 110. In one embodiment, virtualization management module 104 is a computer program that resides and executes in a central server, which may reside in computing system 100, or alternatively, running as a VM in one of hosts 108. One example of a virtualization management module is the vCenter® Server product made available from VMware, Inc. Virtualization management module 104 is configured to carry out administrative tasks for the computing system 100, including managing hosts 108, managing VMs running within each host 108, provisioning VMs, migrating VMs from one host to another host, load balancing between hosts 108, creating resource pools 114 comprised of computing resources of hosts 108 and VMs 112, modifying resource pools 114 to allocate and de-allocate VMs and physical resources, and modifying configurations of resource pools 114. In one embodiment, virtualization management module 104 may issue commands to power on, power off, reset, clone, deploy, and provision one or more VMs 112 executing on a particular host 108. In one embodiment, virtualization management module 104 is configured to communicate with hosts 108 to collect performance data and generate performance metrics (e.g., counters, statistics) related to availability, status, and performance of hosts 108, VMs 112, and resource pools 114. [Paragraph 31], Upon successfully obtaining a set of containers, at 309, application master 138-1 provides container launch specification information to node managers 130, which handles launching of the containers. Application master 138-1 may monitor progress of launched containers via communications with each node manager 130 (at 311). [Paragraph 30], In response, resource manager 126 may allocate a set of resource containers based on cluster capacity, priorities, and scheduling policy. In one embodiment, resource manager 126 allocates containers based on scheduling factors and information obtained from node managers 130 (at 307) and name node 128, including what resources are available, the availability of those resources on a per-host basis, and data locality of data stored in data VMs 136. For example, resource manager 126 may allocate containers for executing a task on host 108-2 based on block information (obtained from name node 128) that indicates input data (e.g., HDFS blocks 312) for that task is located at a data VM 136-2 executing on host 108-2. Resource manager 126 may return an allocation response to application master 138-1 that includes information about the containers allocated to application master 138-1, such as container identifiers, node identifiers, and network information for contacting node managers 130 on hosts 108 that can launch the allocated containers.)
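The deployment steps recited in claim 12 (clone from a referenced image, apply policies from the specification data, start on a selected host) can be pictured as a small reconcile-style function. The sketch below is a hypothetical outline under assumed names (cloneFromImage, applyPolicies, selectHost, powerOn); it does not correspond to any API in the application or in Gummaraju.

```go
// Minimal sketch (Go) of the deployment steps recited in claim 12: clone the VM
// from the image named in the spec, apply spec-driven policies, then start it on
// a selected host. Every helper here is hypothetical, not an existing API.
package main

import "fmt"

type VMSpec struct {
	ImageName string
	Policies  []string // e.g., anti-affinity or resource-limit policies (illustrative)
}

type VM struct {
	Name string
	Host string
}

func cloneFromImage(name, image string) VM {
	fmt.Printf("clone %s from image %s\n", name, image)
	return VM{Name: name}
}

func applyPolicies(vm *VM, policies []string) {
	fmt.Printf("apply policies %v to %s\n", policies, vm.Name)
}

// selectHost stands in for the scheduler's placement decision.
func selectHost() string { return "host-2" }

func powerOn(vm *VM) {
	fmt.Printf("power on %s on %s\n", vm.Name, vm.Host)
}

func deploy(name string, spec VMSpec) VM {
	vm := cloneFromImage(name, spec.ImageName) // step 1: clone from the referenced image
	applyPolicies(&vm, spec.Policies)          // step 2: apply policies from the spec
	vm.Host = selectHost()                     // step 3a: pick a host in the cluster
	powerOn(&vm)                               // step 3b: start the VM on that host
	return vm
}

func main() {
	deploy("web-01", VMSpec{ImageName: "ubuntu-20.04", Policies: []string{"anti-affinity"}})
}
```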
As per claim 13, rejection of claim 8 is incorporated: Gummaraju teaches receiving decoupled information at a management agent in the virtualization layer from the orchestration control plane through the VM controller; and providing the decoupled information for consumption by the second application executing in the second VM, the decoupled information including at least one of configuration information and secret information. ([Paragraph 14], Memory 202 and local storage 206 are devices allowing information, such as executable instructions, cryptographic keys, virtual disks, configurations, and other data, to be stored and retrieved. [Paragraph 44], At step 604, node manager 130 receives a request to execute a first task of a plurality of tasks on the first host. As described above, a job may be broken down into a plurality of tasks that can be executed in parallel. In one embodiment, an application master 138, having been allocated containers by resource manager 126, may transmit (e.g., via API call) a container launch request to node manager 130 to launch a container that executes one or more tasks from the plurality of tasks. The container launch request may contain information needed by node manager 130 to launch a container including, but not limited to, a container identifier, a tenant identifier for whom the container is allocated, and security tokens used for authenticating the container.)
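Claim 13's "decoupled information" is configuration and secret data kept outside the VM image and handed to the guest application at run time. The sketch below illustrates that separation with an agent-style handoff; the DecoupledInfo and Agent types are assumptions made for this illustration, not the mechanism disclosed in the application or the cited references.

```go
// Minimal sketch (Go): configuration and secret data kept outside the VM image
// ("decoupled information") and handed to the guest application through an
// agent in the virtualization layer. All names are illustrative assumptions.
package main

import "fmt"

// DecoupledInfo mirrors the claim language: configuration and/or secret information.
type DecoupledInfo struct {
	Config  map[string]string // e.g., endpoints, feature flags
	Secrets map[string]string // e.g., credentials (plain strings only for the sketch)
}

// Agent stands in for a management agent that receives the data from the
// orchestration control plane and provides it to the application in the VM.
type Agent struct {
	delivered DecoupledInfo
}

func (a *Agent) Receive(info DecoupledInfo) { a.delivered = info }

func (a *Agent) Provide(key string) (string, bool) {
	if v, ok := a.delivered.Config[key]; ok {
		return v, true
	}
	v, ok := a.delivered.Secrets[key]
	return v, ok
}

func main() {
	agent := &Agent{}
	agent.Receive(DecoupledInfo{
		Config:  map[string]string{"db.host": "db.internal"},
		Secrets: map[string]string{"db.password": "s3cret"},
	})
	if host, ok := agent.Provide("db.host"); ok {
		fmt.Println("application reads db.host =", host)
	}
}
```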
As per claim 14, rejection of claim 8 is incorporated: Gummaraju teaches wherein the second application in the second VM is non-containerized. ([Paragraph 15], Alternatively, each VM 112 may include distributed software component code 220 for distributed computing application 124 configured to run natively on top of guest OS 216.) Rao also teaches ([Paragraph 25], Three computers that implement a cluster of nodes are shown also connected to the network. These computers are Host 1 120, Host 2 130, and Host N 150. Host 1 120, Host 2 130, and Host N 150 are computer systems (host machines) which may include thereon one or more containers, one or more virtual machines (VMs), or one or more native applications. [Paragraph 27], The Native 1 136 is a native application, operating system, native instruction set, or other native program that is implemented specially for the particular model of the computer or microprocessor, rather than in an emulation or compatibility mode. The virtual machines are VM 2 132 and VM 1 134.)

As per claims 15-20: these are non-transitory computer readable medium claims corresponding to the method claims 8-12 and 14, and are therefore rejected based on similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM whose telephone number is (571)270-1313. The examiner can normally be reached 9:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DONG U KIM/
Primary Examiner, Art Unit 2197

Prosecution Timeline

Jun 14, 2023
Application Filed
Dec 01, 2025
Non-Final Rejection — §103, §112, §DP
Mar 31, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596564
PRE-LOADING SOFTWARE APPLICATIONS IN A CLOUD COMPUTING ENVIRONMENT
2y 5m to grant · Granted Apr 07, 2026
Patent 12596594
REINFORCEMENT LEARNING POLICY SERVING AND TRAINING FRAMEWORK IN PRODUCTION CLOUD SYSTEMS
2y 5m to grant · Granted Apr 07, 2026
Patent 12591760
CROSS-INSTANCE INTELLIGENT RESOURCE POOLING FOR DISPARATE DATABASES IN CLOUD NATIVE ENVIRONMENT
2y 5m to grant · Granted Mar 31, 2026
Patent 12591449
Merging Streams For Call Enhancement In Virtual Desktop Infrastructure
2y 5m to grant · Granted Mar 31, 2026
Patent 12586064
BLOCKCHAIN PROVISION SYSTEM AND METHOD USING NON-COMPETITIVE CONSENSUS ALGORITHM AND MICRO-CHAIN ARCHITECTURE TO ENSURE TRANSACTION PROCESSING SPEED, SCALABILITY, AND SECURITY SUITABLE FOR COMMERCIAL SERVICES
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+13.7%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
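These figures appear internally consistent if the interview lift is applied multiplicatively to the baseline allow rate: 87% × (1 + 0.137) ≈ 98.9%, which rounds to the 99% shown with an interview. That relationship is inferred from the displayed numbers rather than a documented formula.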
