DETAILED ACTION
Claims 1, 3-10, 12-18, and 21-22 are pending in this application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-6, 10, 12-16, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2015/0277779 A1 to Devarapalli et al. in view of U.S. Pub. No. 2010/0251234 A1 to Oshins.
As to claim 1, Devarapalli teaches a method comprising:
identifying (VM Migration Manager 310) a plurality of first virtual machines (VMs 320) with overlapping non-uniform memory access (NUMA) node assignments in a computing environment having a plurality of NUMA nodes (NUMA node 0 110 to NUMA node 1 120/low processor and memory usage) (“…System memory 116 further includes VM migration manager 310, as well as ordered array of VMs 320 and processor and memory usage data 322. VM migration manager 310 is software or firmware that can be loaded into processor 112 during operation of IHS 100 (FIG. 1). VM migration manager 310 manages the re-assignment of VMs from NUMA node 0 110 to NUMA node 1 120 based on ordered array of VMs 320 of the multiple VMs executing on NUMA node 0 110. The VMs are selected for migration by VM migration manager 310 based on having low processor and memory usage relative to the other VMs, which is directly correlated to the location of each VM within the ordered array of VMs 320. Ordered array of VMs 320 contains a ranked listing of the VMs executing on NUMA node 1 120. The VMs having low processor and memory usage are ranked first and the VMs having high processor and memory usage are ranked last in ordered array of VMs 320. Migration manager 310 executing on processor 112 (FIG. 1) tracks the processor and memory usage data of VM1 202 and VM2 204 (FIG. 2) over time and stores the resulting VM processor and memory usage data 322 to memory 116. VM migration manager 310 generates the ordered array of VMs 320 based on a ranking of processor and memory usage data 322…” paragraph 0037);
identifying at least one second virtual machine in the computing environment that is assigned to a NUMA node having processor and memory resources that do not meet minimum processor or memory resource requirements of the at least one second virtual machine (Ordered array of VMs 320);
ranking the first and second virtual machines based on processor or memory resource requirements thereof (Ordered array of VMs 320) (“…According to one embodiment, the relative percentages of utilization of memory resources, measured in cycles, by the individual VMs associated with NUMA Node 0 110 are tracked and recorded as the primary weighing factor utilized in ranking the VMs of NUMA Node 0 110. The utilization of processor resources can also be utilized as a secondary weighting factor. The utilization of the memory resources and the processor resources can be measured in memory cycles and processor cycles, respectively, in one embodiment. With the example of FIG. 3, the first memory usage value 332 can be given the highest weighting factor or priority in ranking the VMs. The processor usage value 338 is only used in the event the value of the first memory usage value 332 is the same for two different VMs. The VM consuming the least amount of memory (or memory cycles) is placed first into ordered array of VMs 320 and the other VMs, which consume more memory resources, are placed into ordered array of VMs 320 in ascending order of memory resource usage. For example, if there is a particular VM running on NUMA Node 1 110 that has a low first memory usage value 332, VM migration manager 310 will rank that particular VM as the highest element in the ordered array of VMs 320, regardless of the level of processor resource usage. Thus, for example, the migration manager 310 will rank the VM having the lowest first memory usage value 332 as the high entry within the ordered array of VMs 320 even if the VM has the highest processor usage value 338. VM migration manager 310 uses first memory usage value 332 to rank the VMs in the ordered array of VMs 320. 
In the event that the first memory usage value 332 is the same for two different VMs running on NUMA Node 1 110, VM migration manager 310 uses the respective processor usage value 338 for each of the two different VMs as a tie breaker in the ranking That is, as between the two VMs with the same first memory usage value 332, the VM with the lower processor usage value 338 would be ranked higher in ordered array of VMs 320…The ordering of the VMs within the NUMA node and selection of which VMs to migrate across NUMA nodes can be further influenced by the level of processor usage. Prior to performing a migration, the existing migration logic checks the processor load on each of the NUMA nodes. If the processor utilization of NUMA node 0 110 is more, the migration logic skips the VM in the ordered array 320, which is utilizing more processor resources, even though the VM is less memory intensive. In this case, the migration logic should select the VM (from the ordered array of VMs 320) which utilizes less processor resources and is comparatively less memory intensive. If the processor utilization of NUMA Node 0 110 is less, then VM migration manager 310 selects the first element in the ordered array 320 for migration, even if the processor resource utilization of the VM is more…” paragraphs 0039/0040);
selecting a highest-ranked virtual machine (Ordered array of VMs 320) from the first and second virtual machines (Block 610) (“…The ordering of the VMs within the NUMA node and selection of which VMs to migrate across NUMA nodes can be further influenced by the level of processor usage. Prior to performing a migration, the existing migration logic checks the processor load on each of the NUMA nodes. If the processor utilization of NUMA node 0 110 is more, the migration logic skips the VM in the ordered array 320, which is utilizing more processor resources, even though the VM is less memory intensive. In this case, the migration logic should select the VM (from the ordered array of VMs 320) which utilizes less processor resources and is comparatively less memory intensive. If the processor utilization of NUMA Node 0 110 is less, then VM migration manager 310 selects the first element in the ordered array 320 for migration, even if the processor resource utilization of the VM is more…In response to identifying that NUMA node 0 110 does not have the additional processing capacity requested, VM migration manager 310 reads the ordered array of the multiple ordered array of VMs 320 from memory 116 (block 610) and selects at least one VM from the ordered array of the multiple VMs 320 executing on NUMA node 0 110 to be re-assigned from the NUMA node 0 110 to NUMA node 1 120 (block 612). The selected VM/s include/s the lowest ranked VMs (i.e., the VMs having the lowest value of processor and memory usage data 322 relative to the other VMs. For example, as shown in the ordered array of VMs 320 (FIG. 4), processor 112 would select VM5 as the selected VM from among the node 0 VMs 410 as VM5 has the lowest processor and memory usage data value relative to the other VMs. VM migration manager 310 executing on processor 112 re-assigns and migrates the one or more selected VMs (i.e., VM5) from NUMA node 0 110 to NUMA node 1 120 (block 614). 
VM migration manager 310 triggers processor 122 to execute the migrated VMs (i.e., VM5) on NUMA node 1 120 (block 616)…” paragraphs 0039-0040/0047); and
determining that a first NUMA node, to which the highest-ranked virtual machine is assigned, satisfies one or more criteria for triggering migration away from the first NUMA node (Block 608) (“…Turning now to FIG. 6, a flow chart of method 600 is shown. Method 600 begins at the start block and proceeds to block 602 at which processor 112 initializes the NUMA nodes 110 and 120. The initialization of NUMA nodes 110 and 120 includes the loading of BIOS 134, O/S 136 and APPs 138 by processors 112 and 122. Block 602 further includes hypervisor 220 generating multiple VMs such as VMs 1-10 including VMs 202, 204 for execution on processor 112. Hypervisor 270 generates multiple VMs including VMs 252, 254 for execution on processor 122. At decision block 604, VM migration manager 310 executing on processor 112 determines if additional VMs are requested to be executed on NUMA node 0 110 and/or if additional processing resources are required from the existing VMs executing on NUMA node 0 110. In one embodiment, hypervisor 220 can determine that additional VMs are to be executed on NUMA node 0 110. Alternatively, the VMs 202, 204 currently executing may request additional processor and/or memory resources from hypervisor 220. In response to no additional VMs being requested or no additional processor and/or memory resources being requested, method 600 ends. In response to additional VMs being requested and/or additional processor and/or memory resources being requested, VM migration manager 310 executing on processor 112 retrieves information about the allocation and remaining availability of processing resources within NUMA node 0 110 (block 605). VM migration manager 310 executing on processor 112 determines whether there are additional processor and/or memory resources available for VM allocation on NUMA node 0 110 (block 606). 
VM migration manager 310 executing on processor 112 identifies if NUMA node 0 110 has the additional capacity requested to handle the processing resources required for the additional VM/s (block 608). Also at block 608, VM migration manager 310 executing on processor 112 identifies if NUMA node 1 120 has the additional capacity requested to handle the processing resources required for the additional VMs. NUMA node 0 110 communicates with NUMA node 1 120 in order to determine if NUMA node 1 120 has the additional capacity requested…” paragraph 0046).
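The overall flow of Devarapalli's method 600, as cited above, can be sketched in Python. This is an illustrative reconstruction only; the function and field names below are hypothetical and do not appear in the reference.

```python
def method_600_step(node0, node1, ordered_vms, cpu_req, mem_req):
    """Sketch of blocks 604-616 (illustrative names): when NUMA node 0
    lacks the requested capacity, read the ordered array, select the
    lowest-usage VM, and re-assign it to NUMA node 1."""
    if node0["cpu_free"] >= cpu_req and node0["mem_free"] >= mem_req:
        return None  # block 609: node 0 can absorb the request locally
    selected = ordered_vms[0]      # block 612: lowest processor/memory usage
    node0["vms"].remove(selected)  # block 614: re-assign and migrate
    node1["vms"].append(selected)  # block 616: execute on node 1
    return selected
```

As in the reference, the migration is triggered only by the capacity shortfall on the source node; the target node is fixed in this two-node sketch.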
Devarapalli does not explicitly teach, in response to determining that the first NUMA node satisfies the one or more criteria, selecting a virtual machine, assigned to the first NUMA node, for migration from the first NUMA node to a second NUMA node.
Oshins teaches in response to determining that the first NUMA node satisfies the one or more criteria (stressed), selecting a virtual machine (Virtual Machine 240), assigned to the first NUMA node, for migration from the first NUMA node (NUMA Node 702) to a second NUMA node (NUMA Node 704/Some time later when, for example, another virtual machine is initialized or taken offline, hypervisor 202 can be executed by a logical processor 212A-212I and the logical processor can migrate virtual machine 240 to another NUMA node in the computer system 700) (“…Continuing with the description of FIG. 11, operation 1112 shows migrating the virtual machine to one or more other NUMA nodes. For example, and referring to FIG. 7, hypervisor 202 can schedule virtual NUMA nodes 606-608 to run on NUMA node 702 and sometime later schedule virtual NUMA nodes 606-608 to run on, for example, NUMA node 704. In this example hypervisor 202 may migrate virtual machine 240 when NUMA node 702 is stressed. For example, guest operating system 220 and 222 may generate signals that indicate that virtual machine 240 is low on memory. In this example, hypervisor 202 can be configured to reduce the workload on NUMA node 702 by moving virtual machine 240 to a different NUMA node…Continuing with the description of FIG. 11, operation 1114 shows assigning the virtual machine to a first NUMA node; and migrating the virtual machine to a second NUMA node of the plurality of NUMA nodes. For example, and referring to FIG. 7, in an embodiment virtual machine 240 can be assigned to first NUMA node 606 by hypervisor 202. That is, hypervisor instructions can be executed by a logical processor 212A-212I and virtual machine 240 can be assigned to, for example, NUMA node 702. In this example, virtual processors 230A-230D may be set to execute on logical processors 212A through 212D. 
Some time later when, for example, another virtual machine is initialized or taken offline, hypervisor 202 can be executed by a logical processor 212A-212I and the logical processor can migrate virtual machine 240 to another NUMA node in the computer system 700. More specifically, and referring to the previous example, hypervisor 202 can be executed and virtual machine 240 can be moved from NUMA node 702 to NUMA node 704. For example, virtual processor 230A and B may be assigned to logical processor 212E, virtual processor 230C and D may be assigned to logical processor 212F and guest physical addresses 614 and 616 can be backed by system physical addresses 622-624…” paragraphs 0062/0063).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Devarapalli with the teaching of Oshins because the teaching of Oshins would improve the system of Devarapalli by providing a technique for reducing or eliminating computing resource contention.
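Oshins' stress-triggered migration (paragraphs 0062-0063, quoted above) can be sketched as follows; the names are hypothetical, supplied only for illustration.

```python
def relieve_stressed_node(assignments, vm, source, target, stressed):
    """Sketch of Oshins' trigger: when the source NUMA node is
    stressed (e.g. a guest OS signals the VM is low on memory), the
    hypervisor reduces that node's workload by moving the virtual
    machine to a different NUMA node; otherwise nothing changes."""
    if stressed:
        assignments[source].remove(vm)
        assignments[target].append(vm)
    return assignments
```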
As to claim 3, Devarapalli teaches the method of claim 1, wherein the resource limitations comprise processor resource limitations or memory resource limitations (Processor & Memory Usage Data 322 paragraph 0037).
As to claim 4, Devarapalli teaches the method of claim 1, wherein the one or more criteria comprises a memory availability for the highest-ranked virtual machine on the NUMA node being less than a memory requirement for the virtual machine, or a processor availability for the highest-ranked virtual machine on the NUMA node being less than a processing requirement for the first virtual machine (Ordered Array of VMs 320) (“…According to one embodiment, the relative percentages of utilization of memory resources, measured in cycles, by the individual VMs associated with NUMA Node 0 110 are tracked and recorded as the primary weighing factor utilized in ranking the VMs of NUMA Node 0 110. The utilization of processor resources can also be utilized as a secondary weighting factor. The utilization of the memory resources and the processor resources can be measured in memory cycles and processor cycles, respectively, in one embodiment. With the example of FIG. 3, the first memory usage value 332 can be given the highest weighting factor or priority in ranking the VMs. The processor usage value 338 is only used in the event the value of the first memory usage value 332 is the same for two different VMs. The VM consuming the least amount of memory (or memory cycles) is placed first into ordered array of VMs 320 and the other VMs, which consume more memory resources, are placed into ordered array of VMs 320 in ascending order of memory resource usage. For example, if there is a particular VM running on NUMA Node 1 110 that has a low first memory usage value 332, VM migration manager 310 will rank that particular VM as the highest element in the ordered array of VMs 320, regardless of the level of processor resource usage. Thus, for example, the migration manager 310 will rank the VM having the lowest first memory usage value 332 as the high entry within the ordered array of VMs 320 even if the VM has the highest processor usage value 338. 
VM migration manager 310 uses first memory usage value 332 to rank the VMs in the ordered array of VMs 320. In the event that the first memory usage value 332 is the same for two different VMs running on NUMA Node 1 110, VM migration manager 310 uses the respective processor usage value 338 for each of the two different VMs as a tie breaker in the ranking That is, as between the two VMs with the same first memory usage value 332, the VM with the lower processor usage value 338 would be ranked higher in ordered array of VMs 320…The ordering of the VMs within the NUMA node and selection of which VMs to migrate across NUMA nodes can be further influenced by the level of processor usage. Prior to performing a migration, the existing migration logic checks the processor load on each of the NUMA nodes. If the processor utilization of NUMA node 0 110 is more, the migration logic skips the VM in the ordered array 320, which is utilizing more processor resources, even though the VM is less memory intensive. In this case, the migration logic should select the VM (from the ordered array of VMs 320) which utilizes less processor resources and is comparatively less memory intensive. If the processor utilization of NUMA Node 0 110 is less, then VM migration manager 310 selects the first element in the ordered array 320 for migration, even if the processor resource utilization of the VM is more…” paragraphs 0039/0040).
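The ranking described in the passage above (memory usage as the primary weighting factor, processor usage as the tie-breaker) can be sketched in Python. This is an illustrative sketch; the class and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VMUsage:
    name: str
    memory_cycles: int     # primary weighting factor (cf. value 332)
    processor_cycles: int  # tie-breaker only (cf. value 338)

def build_ordered_array(vms):
    """Rank VMs lowest memory usage first; equal memory usage is
    broken by the lower processor usage, per the quoted passage."""
    return sorted(vms, key=lambda vm: (vm.memory_cycles, vm.processor_cycles))
```

A tuple sort key captures the two-level weighting directly: the second element is consulted only when the first elements are equal.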
As to claim 5, Devarapalli teaches the method of claim 1, wherein selecting the virtual machine assigned to the first NUMA node for migration comprises:
(a) identifying (VM Migration Manager 310) a virtual machine assigned to the first NUMA node with a lowest memory requirement (low processor and memory usage) (“…System memory 116 further includes VM migration manager 310, as well as ordered array of VMs 320 and processor and memory usage data 322. VM migration manager 310 is software or firmware that can be loaded into processor 112 during operation of IHS 100 (FIG. 1). VM migration manager 310 manages the re-assignment of VMs from NUMA node 0 110 to NUMA node 1 120 based on ordered array of VMs 320 of the multiple VMs executing on NUMA node 0 110. The VMs are selected for migration by VM migration manager 310 based on having low processor and memory usage relative to the other VMs, which is directly correlated to the location of each VM within the ordered array of VMs 320. Ordered array of VMs 320 contains a ranked listing of the VMs executing on NUMA node 1 120. The VMs having low processor and memory usage are ranked first and the VMs having high processor and memory usage are ranked last in ordered array of VMs 320. Migration manager 310 executing on processor 112 (FIG. 1) tracks the processor and memory usage data of VM1 202 and VM2 204 (FIG. 2) over time and stores the resulting VM processor and memory usage data 322 to memory 116. VM migration manager 310 generates the ordered array of VMs 320 based on a ranking of processor and memory usage data 322…” paragraph 0037);
(b) determining whether the NUMA node satisfies one or more criteria if the identified virtual machine were migrated from the first NUMA node (Block 608) (“…Turning now to FIG. 6, a flow chart of method 600 is shown. Method 600 begins at the start block and proceeds to block 602 at which processor 112 initializes the NUMA nodes 110 and 120. The initialization of NUMA nodes 110 and 120 includes the loading of BIOS 134, O/S 136 and APPs 138 by processors 112 and 122. Block 602 further includes hypervisor 220 generating multiple VMs such as VMs 1-10 including VMs 202, 204 for execution on processor 112. Hypervisor 270 generates multiple VMs including VMs 252, 254 for execution on processor 122. At decision block 604, VM migration manager 310 executing on processor 112 determines if additional VMs are requested to be executed on NUMA node 0 110 and/or if additional processing resources are required from the existing VMs executing on NUMA node 0 110. In one embodiment, hypervisor 220 can determine that additional VMs are to be executed on NUMA node 0 110. Alternatively, the VMs 202, 204 currently executing may request additional processor and/or memory resources from hypervisor 220. In response to no additional VMs being requested or no additional processor and/or memory resources being requested, method 600 ends. In response to additional VMs being requested and/or additional processor and/or memory resources being requested, VM migration manager 310 executing on processor 112 retrieves information about the allocation and remaining availability of processing resources within NUMA node 0 110 (block 605). VM migration manager 310 executing on processor 112 determines whether there are additional processor and/or memory resources available for VM allocation on NUMA node 0 110 (block 606). 
VM migration manager 310 executing on processor 112 identifies if NUMA node 0 110 has the additional capacity requested to handle the processing resources required for the additional VM/s (block 608). Also at block 608, VM migration manager 310 executing on processor 112 identifies if NUMA node 1 120 has the additional capacity requested to handle the processing resources required for the additional VMs. NUMA node 0 110 communicates with NUMA node 1 120 in order to determine if NUMA node 1 120 has the additional capacity requested…” paragraph 0046); and
(c) when the first NUMA node satisfies the one or more criteria if the identified virtual machine were migrated from the first node, selecting the identified virtual machine as the virtual machine selected for migration (Blocks 610-616) (“…In response to identifying that NUMA node 0 110 has the additional processing capacity requested, VM migration manager 310 executing on processor 112 allocates the additional VMs and/or increases the processor and memory resources to NUMA node 0 110 (block 609). Method 600 then terminates. In response to identifying that NUMA node 0 110 does not have the additional processing capacity requested, VM migration manager 310 reads the ordered array of the multiple ordered array of VMs 320 from memory 116 (block 610) and selects at least one VM from the ordered array of the multiple VMs 320 executing on NUMA node 0 110 to be re-assigned from the NUMA node 0 110 to NUMA node 1 120 (block 612). The selected VM/s include/s the lowest ranked VMs (i.e., the VMs having the lowest value of processor and memory usage data 322 relative to the other VMs. For example, as shown in the ordered array of VMs 320 (FIG. 4), processor 112 would select VM5 as the selected VM from among the node 0 VMs 410 as VM5 has the lowest processor and memory usage data value relative to the other VMs. VM migration manager 310 executing on processor 112 re-assigns and migrates the one or more selected VMs (i.e., VM5) from NUMA node 0 110 to NUMA node 1 120 (block 614). VM migration manager 310 triggers processor 122 to execute the migrated VMs (i.e., VM5) on NUMA node 1 120 (block 616)…” paragraph 0047).
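The three-step selection loop of claim 5, as mapped above, can be sketched as follows. All names are hypothetical; the memory-shortfall criterion stands in for the claimed "one or more criteria."

```python
def select_for_migration(vms, free_mem, mem_needed):
    """Claim-5 style loop (illustrative): (a) take the VM with the
    lowest memory requirement, (b) test whether the node would meet
    the demand if that VM were migrated away, and (c) select it if
    so; otherwise repeat with the next-lowest VM."""
    for vm in sorted(vms, key=lambda v: v["mem"]):
        if free_mem + vm["mem"] >= mem_needed:
            return vm["name"]
    return None  # no single VM's departure satisfies the criteria
```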
As to claim 6, Devarapalli teaches the method of claim 1, further comprising initiating a migration of the virtual machine selected for migration to the second NUMA node on a host with the first NUMA node (“…Referring specifically to FIG. 1, example IHS 100 comprises a non-uniform memory access (NUMA) machine 105 that includes NUMA node 0 110 and NUMA node 1 120. NUMA nodes 110 and 120 are interconnected such that the nodes share memory and I/O resources. NUMA node 0 110 includes processor 112 coupled to cache memory 114. Cache memory 114 stores frequently used data, and cache memory 114 is further coupled to system memory 116. In one embodiment, NUMA node 0 110 can include more than one (i.e., multiple) processors. NUMA node 1 120 includes processor 122 coupled to cache memory 124. Cache memory 124 is coupled to system memory 126. In one embodiment, NUMA node 1 120 can include more than one processor. NUMA nodes 110 and 120 are interconnected via system interconnect 115…” paragraph 0025).
As to claim 10, see the rejection of claim 1 above, except for a storage system (Storage 140) and a processing system (I/O Controllers 150).
Devarapalli teaches a storage system (Storage 140) and a processing system (Processor 112).
As to claim 12, see the rejection of claim 3 above.
As to claim 13, see the rejection of claim 4 above.
As to claim 14, see the rejection of claim 5 above.
As to claim 15, see the rejection of claim 6 above.
As to claim 16, see the rejection of claim 7 above.
As to claim 21, Devarapalli teaches the method of claim 1, wherein the highest-ranked virtual machine is the virtual machine selected for migration (Ordered Array of VMs 320).
As to claim 22, see the rejection of claim 21 above.
Claims 7, 8, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2015/0277779 A1 to Devarapalli et al. in view of U.S. Pub. No. 2010/0251234 A1 to Oshins as applied to claims 1 and 10 above, and further in view of U.S. Pat. No. 11,625,175 B1 issued to Krasilnikov et al.
As to claim 7, Devarapalli as modified by Oshins teaches the method of claim 1; however, it is silent with reference to initiating a migration of the virtual machine selected for migration to the second NUMA node, which is on a host different from the first NUMA node.
Krasilnikov teaches initiating a migration of the virtual machine selected for migration to the second NUMA node, which is on a host different from the first NUMA node (a placement service 140 or system of the service provider network 102) (“…Generally, a placement service 140 or system of the service provider network 102 will place virtual resources 120/128 on servers 110 that have available computing resources for running the virtual resources 120/128. The placement service 140 may place virtual resources 120/128 on NUMA nodes 112/114 based on those NUMA nodes have sufficient availability to run the virtual resources 120/128. In some examples, the NUMA nodes 112/114 may each have pre-allocated “slots” which are different configurations of computing resources. That is, each NUMA node 112/114 may have multiple slots where each slot has a pre-allocated portion of, CPU core, local memory, and/or other computing resources. The placement system 140 may place or deploy virtual resources 120/128 to slots that are pre-allocated appropriate or suitable amounts of computing resources to run the particular virtual resources 120/128. The placement service 140 may utilize various heuristics or rules to attempt to place virtual resources on NUMA nodes 112/114 such that the NUMA nodes 112/114 and/or server 110 do not become overcommitted…” Col. Ln. ).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Devarapalli and Oshins with the teaching of Krasilnikov because the teaching of Krasilnikov would improve the system of Devarapalli and Oshins by providing virtualization technologies that allow a single server to host and allocate multiple virtual computing resources (Krasilnikov Col. 1 Ln. 23-26).
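The slot-based placement quoted above can be sketched as follows. This is an illustrative sketch only; the function, field, and slot names are hypothetical and do not appear in the reference.

```python
def place_in_slot(nodes, cpu_req, mem_req):
    """Sketch of slot-based placement: each NUMA node carries
    pre-allocated slots of fixed CPU/memory configurations, and the
    placement service deploys a virtual resource to the first free
    slot whose pre-allocated amounts are sufficient."""
    for node in nodes:
        for slot in node["slots"]:
            if slot["free"] and slot["cpu"] >= cpu_req and slot["mem"] >= mem_req:
                slot["free"] = False
                return (node["id"], slot["name"])
    return None  # no suitable slot: placement would overcommit
```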
As to claim 8, Devarapalli as modified by Oshins teaches the method of claim 1; however, it is silent with reference to identifying resource availability at one or more additional NUMA nodes; selecting the second NUMA node from the one or more additional NUMA nodes based on the resource availability; and initiating a migration of the virtual machine selected for migration to the second NUMA node.
Krasilnikov teaches identifying resource availability at one or more additional NUMA nodes; selecting the second NUMA node from the one or more additional NUMA nodes based on the resource availability; and initiating a migration of the virtual machine selected for migration to the second NUMA node (migrate a virtual resource from the NUMA node to another NUMA node on the server that has an availability of computing resources to run the virtual resource) (“…This disclosure describes techniques for servers, or other devices, having NUMA memory architectures to migrate virtual resources between NUMA nodes in order to reduce resource contention between virtual resources running on the NUMA nodes. In some examples, the servers may monitor various metrics or operations of the NUMA nodes and/or virtual resources, and detect events that indicate that virtual resources running on a same NUMA node are contending, or are likely to contend in the future, over computing resources of the NUMA node. Upon detecting such an event, the server may determine to migrate a virtual resource from the NUMA node to another NUMA node on the server that has an availability of computing resources to run the virtual resource. In some instances, the server may determine that multiple NUMA nodes are able run the virtual resource, and select the NUMA node that has the greatest availability of computing resources for running the virtual resource. The server may then migrate the virtual resource from the overcommitted NUMA node onto the NUMA node that has availability to run the virtual resource. In this way, the server may reduce resource contention among virtual resources running on a same NUMA node...As used herein, computing resources refers to compute, memory, storage, networking, and, in some implementations, graphics processing. 
As an example, one virtual resource instance type may be allocated a larger amount of compute (e.g., processor cycles) and be optimized to support compute-heavy workloads, whereas another virtual resource instance type may be allocated a larger amount of storage (e.g., disk space) and be optimized to support storage-intensive workloads…” Col. 3 Ln. 49-67, Col. 5 ln. 25-33).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Devarapalli and Oshins with the teaching of Krasilnikov because the teaching of Krasilnikov would improve the system of Devarapalli and Oshins by providing virtualization technologies that allow a single server to host and allocate multiple virtual computing resources (Krasilnikov Col. 1 Ln. 23-26).
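The target-selection step quoted above (pick, among capable NUMA nodes, the one with the greatest resource availability) can be sketched as follows; the names are hypothetical.

```python
def choose_target_node(nodes, cpu_req, mem_req):
    """Sketch of the quoted target selection: among NUMA nodes with
    sufficient availability to run the virtual resource, pick the
    one with the greatest availability of computing resources."""
    able = [n for n in nodes
            if n["cpu_free"] >= cpu_req and n["mem_free"] >= mem_req]
    if not able:
        return None
    return max(able, key=lambda n: (n["cpu_free"], n["mem_free"]))["id"]
```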
As to claim 17, see the rejection of claim 8 above.
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2015/0277779 A1 to Devarapalli et al. in view of U.S. Pub. No. 2010/0251234 A1 to Oshins as applied to claims 1 and 10 above, and further in view of U.S. Pub. No. 2023/0305874 to Lukianov et al.
As to claim 9, Devarapalli as modified by Oshins teaches the method of claim 1; however, it is silent with reference to the following limitations, which Lukianov teaches: identifying resource availability at one or more additional NUMA nodes on a host with the first NUMA node (Container Migration Service 34);
determining whether a NUMA node of the one or more additional NUMA nodes can support the virtual machine selected for migration (Container Migration Service 34);
when the NUMA node of the one or more additional NUMA nodes can support the virtual machine selected for migration, selecting the NUMA node of the one or more additional NUMA nodes as the second node (Container Migration Service 34);
when the NUMA node of the one or more additional NUMA nodes cannot support the virtual machine selected for migration, determining whether a NUMA node on one or more additional hosts can support the virtual machine selected for migration (Container Migration Service 34);
when the NUMA node on the one or more additional hosts can support the second virtual machine, selecting the NUMA node from the one or more additional hosts for migration (Container Migration Service 34) (“…In the example above, the container migration service 34 also identifies one or more NUMA nodes of the plurality of NUMA nodes 16(0)-16(N) that have an availability of the processing resource sufficient to execute the container 26(2). Thus, according to some examples, the container migration service 34 may determine an availability of the processing resource for each of the NUMA nodes 16(0)-16(N), and may identify the one or more NUMA nodes based on a comparison of the availability of the processing resource for each of the NUMA nodes 16(0)-16(N) with the allocation 28(2) of the processing resource to the container 26(2). For purposes of illustration, assume that the container migration service 34 identifies the NUMA nodes 16(0) and 16(1) as having an availability of the processing resource sufficient to execute the container 26(2)…Next, the container migration service 34 selects a target NUMA node from among the identified one or more NUMA nodes 16(0) and 16(1) by, e.g., selecting the NUMA node having the lower node ID among the node IDs 24(0)-24(N). Assume for the sake of illustration that the container migration service 34 selects the NUMA node 16(0) as the target NUMA node 16(0). The container migration service 34 then migrates the container 26(2) from the source NUMA node 16(2) to the target NUMA node 16(0). In some examples, migrating the container 26(2) may comprise initiating execution of the container 26(2) by the target NUMA node 16(0), and terminating execution of the container 26(2) on the source NUMA node 16(2). 
In some examples, the container migration service 34 may initiate execution of the container 26(2) by the target NUMA node 16(0) and terminate execution of the container 26(2) on the source NUMA node 16(2) by accessing functionality provided by the node agents 32(0) and 32(2), respectively. Note that the container migration service 34 in some examples may opt not to migrate the container 26(2) if it is determined that the source NUMA node 16(2) is more appropriate for executing the container 26(2) than the selected target NUMA node 16(0). For instance, the container migration service 34 may opt not to migrate a container if all potential target NUMA nodes do not have lower node IDs than the source NUMA node, even if the potential target NUMA nodes have sufficient availability of the processing resource to execute the container…FIG. 2B shows the resulting state of the NUMA nodes 16(0)-16(N) after the container migration service 34 has processed the containers 26(0)-26(2) of the NUMA nodes 16(0)-16(2). In this example, the container migration service 34 has opted not to migrate the container 26(0) or the container 26(1) because, in both cases, the potential target NUMA nodes 16(2) and 16(N) have higher node IDs than the source NUMA nodes 16(0) and 16(1), as seen in FIG. 1. The container migration service 34, in processing the container 26(2), identifies the NUMA nodes 16(0), 16(1), and 16(N) as having sufficient available memory to execute the container 26(2). The container migration service 34 then selects the NUMA node 16(0) as the target NUMA node 16(0), and migrates the container 26(2) from the source NUMA node 16(2) to the target NUMA node 16(0)…” paragraphs 0028/0029/0036); and
when the NUMA node on the one or more additional hosts cannot support the second virtual machine selected for migration, generating a notification that no migration is available for the virtual machine selected for migration (Container Administrative Service 30) (“…Orchestration of the containers 26(0)-26(C) for the NUMA nodes 16(0)-16(N) of FIG. 1 is managed by a container administration service (captioned as “ADMIN SVC” in FIG. 1) 30 that executes on a master NUMA node, which in the example of FIG. 1 is the NUMA node 16(0). In addition, each of the NUMA nodes 16(0)-16(N) executes a corresponding node agent 32(0)-32(N) that communicates with the container administration service 30 regarding the status of containers executing on that NUMA node 16(0)-16(N), and that handles tasks such as execution and termination of containers….” paragraph 0024).
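For illustration only (not part of the cited references' disclosures), the target-node selection and no-migration-available fallback described in the quoted passages above can be sketched as follows; all names (`NumaNode`, `select_target_node`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class NumaNode:
    node_id: int
    available_cpu: float
    available_mem: float

def select_target_node(nodes, source_id, need_cpu, need_mem):
    """Identify nodes with sufficient availability for the workload,
    select the one with the lowest node ID as the target, and return
    None when no migration is available (the caller would then
    generate a notification, per the claim limitation)."""
    candidates = [
        n for n in nodes
        if n.node_id != source_id
        and n.available_cpu >= need_cpu
        and n.available_mem >= need_mem
    ]
    if not candidates:
        return None  # no migration available for the selected workload
    return min(candidates, key=lambda n: n.node_id)

# Example mirroring FIG. 2B of Oshins: source node 2 is overcommitted,
# and node 0 is selected as the target among sufficient candidates.
nodes = [NumaNode(0, 4.0, 8.0), NumaNode(1, 2.0, 4.0), NumaNode(2, 0.5, 1.0)]
target = select_target_node(nodes, source_id=2, need_cpu=1.0, need_mem=2.0)
```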
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Devarapalli and Oshins with the teaching of Lukianov because the teaching of Lukianov would improve the system of Devarapalli and Oshins by providing a mechanism for notifying a user or administrator of the performance of the runtime processing.
As to claim 18, see the rejection of claim 9 above.
Response to Arguments
Applicant's arguments filed 01/07/26 have been fully considered but they are not persuasive.
Applicant argued in substance that the Devarapalli prior art does not teach or suggest identifying a virtual machine that is misconfigured due to being assigned to a NUMA node whose processor and memory resources fail to meet minimum requirements of the virtual machine.
The Examiner disagrees.
The Devarapalli prior art discloses a method for allocating virtual machines (VMs) to run within a non-uniform memory access (NUMA) system that includes a first processing node and a second processing node. A request is received at the first processing node for additional capacity for at least one of (a) establishing an additional VM and (b) increasing processing resources to an existing VM on the first processing node. In response to receiving the request, a migration manager identifies whether the first processing node has the additional capacity requested. In response to identifying that the first processing node does not have the additional capacity requested, at least one VM is selected from an ordered array of the multiple VMs executing on the first processing node. The selected VM has low processor and memory usage relative to the other VMs. The selected VM is migrated from the first processing node to the second processing node for execution. Additionally, see FIG. 4 (ordered array of VMs 320).
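For illustration only, the ordered array of VMs 320 described above, in which VMs with low processor and memory usage are ranked first and selected for migration, can be sketched as follows; the function name and usage figures are hypothetical:

```python
def ordered_array_of_vms(usage):
    """Rank VMs so those with the lowest combined processor and
    memory usage come first; the migration manager selects VMs
    for migration from the front of the resulting array."""
    return sorted(usage, key=lambda vm: usage[vm]["cpu"] + usage[vm]["mem"])

# VM2 has low usage relative to VM1, so it ranks first
# and would be selected for migration to the second node.
usage = {
    "VM1": {"cpu": 0.7, "mem": 0.6},
    "VM2": {"cpu": 0.2, "mem": 0.1},
}
selected = ordered_array_of_vms(usage)[0]
```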
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES E ANYA whose telephone number is (571)272-3757. The examiner can normally be reached Mon-Fri, 9:00 am-6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KEVIN YOUNG can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES E ANYA/Primary Examiner, Art Unit 2194