Prosecution Insights
Last updated: April 19, 2026
Application No. 18/224,796

SCHEDULING INSTRUCTIONS USING LATENCY OF INTERCONNECTS OF PROCESSORS

Non-Final OA: §101, §102, §103
Filed: Jul 21, 2023
Examiner: RIGGINS, ARI FAITH COLEMA
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 0% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 1 resolved; -55.0% vs TC avg); grants only 0% of cases
Interview Lift: +0.0% (minimal lift in resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline); 38 applications currently pending
Total Applications: 39 (career history, across all art units)

Statute-Specific Performance

§101: 27.8% (-12.2% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)
Tech Center average figures are estimates. Based on career data from 1 resolved case.

Office Action

Rejections under §101, §102, and §103.
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to claims filed on 07/21/2023. Claims 1-20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception (an abstract idea), is directed to that judicial exception because it has not been integrated into a practical application, and does not recite significantly more than the judicial exception. Examiner has evaluated the claims under the framework provided in the 2019 Patent Eligibility Guidance published in the Federal Register on 01/07/2019 and has provided that analysis below.

Step 1: Claims 1-7 are directed to a processor and fall within the statutory category of machines. Claims 8-14 are directed to a system and fall within the statutory category of machines. Claims 15-20 are directed to a method and fall within the statutory category of processes. Therefore, “Are the claims to a process, machine, manufacture or composition of matter?” Yes.

To evaluate the Step 2A inquiry, “Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?”, we must determine at Step 2A Prong 1 whether the claim recites a law of nature, a natural phenomenon or an abstract idea, and at Step 2A Prong 2 whether the claim recites additional elements that integrate the judicial exception into a practical application.

Step 2A Prong 1: Claims 1, 8, and 15: The limitations of “schedule one or more instructions to be performed by one or more processors based, at least in part, on latency of one or more interconnects coupled to the one or more processors” (claims 1 and 8) and “A method comprising: … schedule one or more instructions to be performed by one or more processors based, at least in part, on latency of one or more interconnects coupled to the one or more processors” (claim 15), as drafted, recite a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe latency of one or more interconnects coupled to one or more processors and, based on these observations, can mentally schedule one or more instructions to be performed by one or more processors; scheduling may be performed through mental assignment of instructions to processors. This may also be done with pencil and paper. Therefore, Yes, claims 1, 8, and 15 recite a judicial exception.
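To make the disputed limitation concrete, here is a minimal sketch of latency-based instruction scheduling (illustrative only; the scheduler policy, names, and latency values are assumptions, not taken from the application):

    # Toy latency-aware scheduler: prefer processors on lower-latency
    # interconnects, breaking ties by current load. Values are invented.
    def schedule(instructions, processors, link_latency_ns):
        assignments = {}
        load = {p: 0 for p in processors}
        for instr in instructions:
            best = min(processors, key=lambda p: (link_latency_ns[p], load[p]))
            assignments[instr] = best
            load[best] += 1
        return assignments

    # Hypothetical per-processor interconnect latencies, in nanoseconds.
    latency = {"cpu0": 10, "cpu1": 10, "cpu2": 90, "cpu3": 90}
    print(schedule(["i0", "i1", "i2"], list(latency), latency))
    # {'i0': 'cpu0', 'i1': 'cpu1', 'i2': 'cpu0'}

The rejection's position, paraphrased above, is that an assignment of this kind could equally be carried out mentally or with pencil and paper.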
Step 2A Prong 2: Claims 1, 8, and 15: The judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements “A processor comprising: one or more circuits”, “A system comprising: one or more processors”, and “A method comprising: one or more processors”, which are merely recitations of generic computing components used as a tool to apply the abstract idea (see MPEP § 2106.05(f)), which does not integrate a judicial exception into a practical application. Therefore, “Do the claims recite additional elements that integrate the judicial exception into a practical application?” No; these additional elements do not integrate the abstract idea into a practical application, and they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. Having evaluated the inquiries set forth in Step 2A Prongs 1 and 2, it is concluded that claims 1, 8, and 15 not only recite a judicial exception but are directed to that judicial exception, as the judicial exception has not been integrated into a practical application.

Step 2B: Claims 1, 8, and 15: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than generic computing components, which do not amount to significantly more than the abstract idea. Therefore, “Do the claims recite additional elements that amount to significantly more than the judicial exception?” No; these additional elements, alone or in combination, do not amount to significantly more than the judicial exception. Having concluded the analysis within the provided framework, claims 1, 8, and 15 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 2, 9, and 16, the claims recite the additional abstract idea recitations of “wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a first node label indicative of a maximum number of the one or more processors to perform the one or more instructions and a second node label indicative of an unchangeable number of the one or more processors to perform the one or more instructions” (claim 2), “wherein the one or more processors are to schedule the one or more instructions based, at least in part, on a first node label indicative of a dynamic number of processors to perform instructions and a second node label indicative of an unchangeable number of processors to perform the instructions” (claim 9), and “wherein the scheduling the one or more instructions is based, at least in part, on node labels of one or more nodes that include the one or more processors performing the one or more instructions” (claim 16), which, as drafted, recite a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe a first node label indicative of a maximum number of the one or more processors to perform the one or more instructions and a second node label indicative of an unchangeable number of the one or more processors to perform the one or more instructions and, based on these observations, can mentally schedule one or more instructions; scheduling may be performed through mental assignment of instructions. This may also be done with pencil and paper. Further, claims 2, 9, and 16 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 2, 9, and 16 fail Step 2A Prong 2 (the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B (the claims do not amount to significantly more). Therefore, claims 2, 9, and 16 do not recite patent eligible subject matter under 35 U.S.C. § 101.
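The two node labels at issue in claims 2, 9, and 16 can be pictured with a short sketch (illustrative only; the label semantics and the admission rule below are assumptions, not the application's definitions):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class NodeLabels:
        max_processors: int    # first label: maximum (or dynamic) processor count
        fixed_processors: int  # second label: unchangeable processor count

    def pick_node(nodes, requested):
        # Schedule on the first node whose labels admit the requested width:
        # either the node is pinned at exactly the requested count, or the
        # request fits under the node's maximum.
        for name, labels in nodes.items():
            if labels.fixed_processors == requested or requested <= labels.max_processors:
                return name
        return None

    nodes = {
        "node0": NodeLabels(max_processors=4, fixed_processors=4),
        "node1": NodeLabels(max_processors=16, fixed_processors=8),
    }
    print(pick_node(nodes, 12))  # node1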
With regard to claims 3, 10, and 17, the claims recite the additional abstract idea recitations of “wherein the latency of the one or more interconnects is based, at least in part, on proximity of the one or more processors within a non-uniform memory access (NUMA) domain” and “wherein the latency of the one or more interconnects is based, at least in part, on proximity of one processor performing the one or more instructions to another processor performing the one or more instructions”, which, as drafted, recite a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe the latency of one or more interconnects, based in part on proximity of the one or more processors, and, based on these observations, can mentally schedule one or more instructions to be performed by the one or more processors. This may also be done with pencil and paper. Further, claims 3, 10, and 17 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 3, 10, and 17 fail Step 2A Prong 2 (the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B (the claims do not amount to significantly more). Therefore, claims 3, 10, and 17 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 4 and 18, the claims recite the additional abstract idea recitations of “wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a constraint on number of processors to perform the one or more instructions” and “wherein the scheduling the one or more instructions is based, at least in part, on a constraint on a number of processors to perform the one or more instructions”, which, as drafted, recite a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe a constraint on a number of processors to perform one or more instructions and, based on these observations, can mentally schedule one or more instructions. This may also be done with pencil and paper. Further, claims 4 and 18 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 4 and 18 fail Step 2A Prong 2 (the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B (the claims do not amount to significantly more). Therefore, claims 4 and 18 do not recite patent eligible subject matter under 35 U.S.C. § 101.
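The proximity-based latency recited in claims 3, 10, and 17 is conventionally tabulated as a node-to-node distance matrix (compare the ACPI SLIT discussed in the §102 rejection below). A toy lookup, with invented values:

    # SLIT-style matrix: entry [i][j] is the relative latency of a memory
    # access from NUMA node i to node j; on-node access is cheapest and
    # same-blade neighbors are closer than off-blade nodes.
    SLIT = [
        [10, 20, 40, 40],
        [20, 10, 40, 40],
        [40, 40, 10, 20],
        [40, 40, 20, 10],
    ]

    def nearest_peer(node):
        # The other node with the lowest interconnect latency to `node`.
        return min((lat, j) for j, lat in enumerate(SLIT[node]) if j != node)[1]

    print(nearest_peer(0))  # 1: the on-blade neighbor in this toy topology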
With regard to claim 5, the claim recites the additional abstract idea recitation of “wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a constraint on placement of instructions performed by certain numbers of processors”, which, as drafted, recites a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe a constraint on placement of instructions performed by certain numbers of processors and, based on these observations, can mentally schedule one or more instructions. This may also be done with pencil and paper. Further, claim 5 does not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claim 5 fails Step 2A Prong 2 (the claim is directed to the judicial exception, as it has not been integrated into a practical application) and fails Step 2B (the claim does not amount to significantly more). Therefore, claim 5 does not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claim 6, the claim recites the additional abstract idea recitation of “wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a dynamic labeling of one or more nodes that include the one or more processors”, which, as drafted, recites a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe a dynamic labeling of one or more nodes that include the one or more processors and, based on these observations, can mentally schedule one or more instructions. This may also be done with pencil and paper. Further, claim 6 does not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claim 6 fails Step 2A Prong 2 (the claim is directed to the judicial exception, as it has not been integrated into a practical application) and fails Step 2B (the claim does not amount to significantly more). Therefore, claim 6 does not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 7 and 13, the claims recite the additional abstract idea recitations of “wherein the one or more circuits are to schedule a subsequent one or more instructions based, at least in part, on a second latency that is equivalent to the latency of the one or more interconnects coupled to the one or more processors” and “wherein the one or more processors are to schedule a second one or more instructions to be performed by a second one or more processors based, at least in part, on an equivalent latency of the latency of one or more interconnects coupled to the one or more processors”, which, as drafted, recite a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe a latency that is equivalent to the latency of one or more interconnects coupled to one or more processors and, based on these observations, can mentally schedule a second or subsequent one or more instructions. This may also be done with pencil and paper. Further, claims 7 and 13 do not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claims 7 and 13 fail Step 2A Prong 2 (the claims are directed to the judicial exception, as it has not been integrated into a practical application) and fail Step 2B (the claims do not amount to significantly more). Therefore, claims 7 and 13 do not recite patent eligible subject matter under 35 U.S.C. § 101.
With regard to claim 11, the claim recites the additional abstract idea recitation of “wherein the latency of the one or more interconnects is based, at least in part, on a socket domain”, which, as drafted, recites a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe the latency of one or more interconnects, based in part on a socket domain, and, based on these observations, can mentally schedule one or more instructions to be performed by the one or more processors. This may also be done with pencil and paper. Further, claim 11 does not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claim 11 fails Step 2A Prong 2 (the claim is directed to the judicial exception, as it has not been integrated into a practical application) and fails Step 2B (the claim does not amount to significantly more). Therefore, claim 11 does not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claim 12, the claim recites the additional abstract idea recitation of “wherein a dynamic label of processors changes based, at least in part, on completion of all instructions being performed by a plurality of processors on a node”, which, as drafted, recites a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe completion of all instructions being performed by a plurality of processors on a node and, based on these observations, can mentally change a dynamic label of processors. This may also be done with pencil and paper. Further, claim 12 does not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claim 12 fails Step 2A Prong 2 (the claim is directed to the judicial exception, as it has not been integrated into a practical application) and fails Step 2B (the claim does not amount to significantly more). Therefore, claim 12 does not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claim 14, the claim recites the additional element recitation of “wherein a percentage of one or more nodes assigned the first and second node label is configurable”, which is merely a recitation of a technological environment or field of use (see MPEP § 2106.05(h)) and does not integrate a judicial exception into a practical application. Further, claim 14 does not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claim 14 fails Step 2A Prong 2 (the claim is directed to the judicial exception, as it has not been integrated into a practical application) and fails Step 2B (the claim does not amount to significantly more). Therefore, claim 14 does not recite patent eligible subject matter under 35 U.S.C. § 101.
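Claim 12's dynamic label, which changes upon completion of all instructions on a node, reduces to a small state machine. A sketch (illustrative only; the "free"/"busy" labels are invented, not the application's vocabulary):

    class Node:
        def __init__(self, name):
            self.name = name
            self.pending = 0       # instructions still running on this node
            self.label = "free"    # dynamic label read by the scheduler

        def start(self, n_instructions):
            self.pending += n_instructions
            self.label = "busy"

        def complete_one(self):
            self.pending -= 1
            if self.pending == 0:    # all instructions on the node finished
                self.label = "free"  # the dynamic label changes on completion

    node = Node("node0")
    node.start(2)
    node.complete_one()
    node.complete_one()
    print(node.label)  # free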
With regard to claim 19, the claim recites the additional element recitation of “further comprising: performing the one or more instructions based, at least in part, on the latency of one or more interconnects coupled to the one or more processors”, which is merely a recitation of generically using a computer as a tool to implement the abstract idea (see MPEP § 2106.05(f)) and does not integrate a judicial exception into a practical application. Further, claim 19 does not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claim 19 fails Step 2A Prong 2 (the claim is directed to the judicial exception, as it has not been integrated into a practical application) and fails Step 2B (the claim does not amount to significantly more). Therefore, claim 19 does not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claim 20, the claim recites the additional abstract idea recitation of “further comprising: generating a fitness score of a node that includes the one or more processors, the fitness score indicating a different value than a number of processors performing the one or more instructions;”, which, as drafted, recites a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can mentally generate a fitness score of a node such that the fitness score indicates a different value than a number of processors performing the one or more instructions. This may also be done with pencil and paper. Further, the claim recites the additional abstract idea recitation of “and scheduling a second one or more instructions to be performed by a different number of the one or more processors than the number of processors performing the one or more instructions”, which, as drafted, recites a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can mentally schedule a second one or more instructions to be performed by a different number of the one or more processors than the number of processors performing the one or more instructions. This may also be done with pencil and paper. Further, claim 20 does not recite any further additional elements, and for the same reasons as above with regard to integration into a practical application and whether additional elements amount to significantly more, claim 20 fails Step 2A Prong 2 (the claim is directed to the judicial exception, as it has not been integrated into a practical application) and fails Step 2B (the claim does not amount to significantly more). Therefore, claim 20 does not recite patent eligible subject matter under 35 U.S.C. § 101.

Therefore, claims 1-20 do not recite patent eligible subject matter under 35 U.S.C. § 101.
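The fitness score of claim 20, treated above, is a node score that intentionally differs from the count of processors running instructions. A hedged sketch (the particular blend of headroom and latency below is an invented example, not the application's formula):

    def fitness(active_processors, total_processors, mean_link_latency_ns):
        # A value distinct from the raw processor count: blend free capacity
        # with interconnect latency. Weights are arbitrary, for illustration.
        headroom = (total_processors - active_processors) / total_processors
        return 100.0 * headroom - 0.5 * mean_link_latency_ns

    score = fitness(active_processors=4, total_processors=8, mean_link_latency_ns=20.0)
    # Schedule the second batch on a different processor count than the first.
    next_count = 2 if score < 25 else 6
    print(score, next_count)  # 40.0 6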
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3-5, 7-8, 10-11, 13, 15, and 17-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Merrifield (US 2023/0012606 A1).

With regard to claim 1, Merrifield teaches:

A processor comprising: one or more circuits to schedule one or more instructions to be performed by one or more processors:

“This virtual NUMA topology includes one or more virtual NUMA nodes, each comprising a virtual CPU and associated memory, which the hypervisor "places" -- or in other words, schedules for execution -- on physical NUMA nodes of the system” [Merrifield ¶ 21]. “Each physical NUMA node 108 is a logical grouping of a compute resource 110 and a physical memory 112 of NUMA system 100 that exhibits the property of "non-uniform memory access," which means that the compute resource is able to access the physical memory of its NUMA node (referred to as local memory) faster -- or in other words, with lower latency -- than the physical memories of other NUMA nodes (referred to as remote memories)” [Merrifield ¶ 15].

based, at least in part, on latency of one or more interconnects coupled to the one or more processors:

“Large memory and compute systems are typically designed with multiple processor sockets, each directly attached to a pool of local memory and indirectly attached to the local memories of other processor sockets (i.e., remote memories) via an interconnect or bus. This architecture is known as a Non-Uniform Memory Access (NUMA) architecture because each processor socket can access data in its local memory faster (i.e., with lower latency) than data in remote memory. A grouping of a processor socket and its local memory is referred to as a NUMA node. Due to the higher costs of remote memory accesses, it is important for system software to be aware of the memory topology of a NUMA system and the memory access latencies between NUMA nodes. Among other things, this allows the system software to make more informed task placement/memory allocation decisions and thus improve system performance. The Advanced Configuration and Power Interface (ACPI) specification defines a System Locality Information Table (SLIT) that system firmware can use to provide node-to-node latency information to system software” [Merrifield ¶ 2-3].

With regard to claim 3, Merrifield teaches the processor of claim 1, as referenced above. Merrifield further teaches wherein the latency of the one or more interconnects is based, at least in part, on proximity of the one or more processors within a non-uniform memory access (NUMA) domain:

“As shown in FIG. 2, physical SLIT 200 specifies a latency (also known as "distance") value for every pair of physical NUMA nodes (i, j) that indicates the relative latency of performing a memory access from node i to node j” [Merrifield ¶ 20]. “Like nodes 0 and 1 of blade 114(1), physical NUMA nodes 2 and 3 are coupled via an inter-socket interconnect 116(2) that allows processor socket 110(3) of node 2 to remotely access DRAM 112(4) of node 3 (referred to as "on-blade remote DRAM" from the perspective of processor socket 110(3)/node 2), and allows processor socket 110(4) of node 3 to remotely access DRAM 112(3) of node 2 (referred to as "on-blade remote DRAM" from the perspective of processor socket 110(4)/node 3)” [Merrifield ¶ 17]. “Large memory and compute systems are typically designed with multiple processor sockets, each directly attached to a pool of local memory and indirectly attached to the local memories of other processor sockets (i.e., remote memories) via an interconnect or bus.
This architecture is known as a Non-Uniform Memory Access (NUMA) architecture because each processor socket can access data in its local memory faster (i.e., with lower latency) than data in remote memory. A grouping of a processor socket and its local memory is referred to as a NUMA node. Due to the higher costs of remote memory accesses, it is important for system software to be aware of the memory topology of a NUMA system and the memory access latencies between NUMA nodes. Among other things, this allows the system software to make more informed task placement/memory allocation decisions and thus improve system performance. The Advanced Configuration and Power Interface (ACPI) specification defines a System Locality Information Table (SLIT) that system firmware can use to provide node-to-node latency information to system software” [Merrifield ¶ 2-3].

With regard to claim 4, Merrifield teaches the processor of claim 1, as referenced above. Merrifield further teaches wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a constraint on number of processors to perform the one or more instructions:

“Multiple containers can share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory, and I/O” [Merrifield ¶ 47]. “In a NUMA system like system 100 of FIGS. 1A and 1B, the hypervisor can choose to expose a virtual NUMA topology to the guest OS of a VM based on various factors (e.g., the number of virtual central processing units (CPUs) provisioned for the VM, the amount of memory provisioned for the VM, etc.)” [Merrifield ¶ 21].

With regard to claim 5, Merrifield teaches the processor of claim 1, as referenced above. Merrifield further teaches wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a constraint on placement of instructions performed by certain numbers of processors:

“This virtual NUMA topology includes one or more virtual NUMA nodes, each comprising a virtual CPU and associated memory, which the hypervisor "places" -- or in other words, schedules for execution -- on physical NUMA nodes of the system. For example, FIG. 3 depicts a scenario 300 in which VM 104(1) of FIG. 1A is presented a virtual NUMA topology 302 comprising three virtual NUMA nodes 0, 1, and 2 (each with two virtual CPUs), which hypervisor 102 has placed on physical NUMA nodes 0, 1, and 2 respectively” [Merrifield ¶ 21]. “Hypervisor 102 can then expose a virtual SLIT to the VM that includes latency values from the physical SLIT in accordance with the mappings and can pin the virtual NUMA nodes to their mapped physical NUMA nodes, such that the virtual NUMA nodes remain in place throughout the VM's runtime (or in other words, are never migrated away from their mapped physical nodes)” [Merrifield ¶ 24].
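The constraints mapped for claims 4 and 5 (a cap on processor count, and placement that stays put once made) can be pictured together in a short sketch (illustrative only; the policy and names below are hypothetical, not Merrifield's algorithm):

    def place(batch, requested, cap, free_slots, pins):
        # Honor the cap on how many processors may run the batch, and honor
        # an existing pin so a placed batch is never migrated.
        count = min(requested, cap)
        if batch in pins:
            return pins[batch]
        for node, free in free_slots.items():
            if free >= count:
                pins[batch] = node       # pin: later calls keep this placement
                free_slots[node] -= count
                return node
        raise RuntimeError("no node satisfies the constraints")

    pins = {}
    free_slots = {"node0": 2, "node1": 8}
    print(place("batchA", requested=6, cap=4, free_slots=free_slots, pins=pins))  # node1
    print(place("batchA", requested=6, cap=4, free_slots=free_slots, pins=pins))  # node1 (pinned)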
With regard to claim 7, Merrifield teaches the processor of claim 1, as referenced above. Merrifield further teaches:

wherein the one or more circuits are to schedule a subsequent one or more instructions:

“Within this first loop, hypervisor 102 can determine a mapping between virtual NUMA node i and a single physical NUMA node j based on various factors present at the time of workflow execution (e.g., current compute and memory loads on physical NUMA nodes) (block 606) … Finally, at block 620, hypervisor 102 can allow one or more virtual NUMA nodes of VM 104 to be migrated (subsequent) on a temporary basis to other physical NUMA nodes (i.e., nodes different from the one on which the virtual NUMA node is initially placed) throughout the VM's runtime” [Merrifield ¶ 44]. “Further, rather than strictly pinning each virtual NUMA node to its mapped physical NUMA node, hypervisor 102 can temporarily migrate each virtual NUMA node to different physical NUMA nodes on an as-needed basis” [Merrifield ¶ 28].

based, at least in part, on a second latency that is equivalent to the latency:

“The Advanced Configuration and Power Interface (ACPI) specification defines a System Locality Information Table (SLIT) that system firmware can use to provide node-to-node latency information to system software. In cases where a NUMA system serves as a virtualization host (i.e., is configured to run a hypervisor and virtual machines (VMs)), it is useful for the hypervisor to expose the system's physical SLIT, or at least some portion thereof, in the form of a "virtual SLIT" to the guest operating system (OS) of each VM. Like the system software of a bare-metal system, the guest OS can use this SLIT information to make intelligent task placement/memory allocation decisions in accordance with the system's underlying memory characteristics” [Merrifield ¶ 3-4]. “Finally, according to the third approach (referred to as "dynamic placement" and detailed in section (6) below), hypervisor 102 can build and expose a virtual SLIT to a VM's guest OS based on one-to-one mappings in a manner that is largely similar to one-to-one static placement approach. However, rather than determining these mappings based on a static user-provided configuration, hypervisor 102 can determine the mappings dynamically at the time of VM power-on based on various runtime factors (e.g., the current compute load on each physical NUMA node, the current memory load on each physical NUMA node, etc.)” [Merrifield ¶ 27].

of the one or more interconnects coupled to the one or more processors:

“Large memory and compute systems are typically designed with multiple processor sockets, each directly attached to a pool of local memory and indirectly attached to the local memories of other processor sockets (i.e., remote memories) via an interconnect or bus. This architecture is known as a Non-Uniform Memory Access (NUMA) architecture because each processor socket can access data in its local memory faster (i.e., with lower latency) than data in remote memory. A grouping of a processor socket and its local memory is referred to as a NUMA node. Due to the higher costs of remote memory accesses, it is important for system software to be aware of the memory topology of a NUMA system and the memory access latencies between NUMA nodes” [Merrifield ¶ 2-3].
“Like nodes 0 and 1 of blade 114(1), physical NUMA nodes 2 and 3 are coupled via an inter-socket interconnect 116(2) that allows processor socket 110(3) of node 2 to remotely access DRAM 112(4) of node 3 (referred to as "on-blade remote DRAM" from the perspective of processor socket 110(3)/node 2), and allows processor socket 110(4) of node 3 to remotely access DRAM 112(3) of node 2 (referred to as "on-blade remote DRAM" from the perspective of processor socket 110(4)/node 3)” [Merrifield ¶ 17].

With regard to claim 8, Merrifield teaches:

A system comprising: one or more processors to schedule one or more instructions to be performed by one or more processors:

“This virtual NUMA topology includes one or more virtual NUMA nodes, each comprising a virtual CPU and associated memory, which the hypervisor "places" -- or in other words, schedules for execution -- on physical NUMA nodes of the system” [Merrifield ¶ 21]. “Each physical NUMA node 108 is a logical grouping of a compute resource 110 and a physical memory 112 of NUMA system 100 that exhibits the property of "non-uniform memory access," which means that the compute resource is able to access the physical memory of its NUMA node (referred to as local memory) faster -- or in other words, with lower latency -- than the physical memories of other NUMA nodes (referred to as remote memories)” [Merrifield ¶ 15].

based, at least in part, on latency of one or more interconnects coupled to the one or more processors:

“Large memory and compute systems are typically designed with multiple processor sockets, each directly attached to a pool of local memory and indirectly attached to the local memories of other processor sockets (i.e., remote memories) via an interconnect or bus. This architecture is known as a Non-Uniform Memory Access (NUMA) architecture because each processor socket can access data in its local memory faster (i.e., with lower latency) than data in remote memory. A grouping of a processor socket and its local memory is referred to as a NUMA node. Due to the higher costs of remote memory accesses, it is important for system software to be aware of the memory topology of a NUMA system and the memory access latencies between NUMA nodes. Among other things, this allows the system software to make more informed task placement/memory allocation decisions and thus improve system performance. The Advanced Configuration and Power Interface (ACPI) specification defines a System Locality Information Table (SLIT) that system firmware can use to provide node-to-node latency information to system software” [Merrifield ¶ 2-3].

With regard to claim 10, Merrifield teaches the system of claim 8, as referenced above. Merrifield further teaches wherein the latency of the one or more interconnects is based, at least in part, on proximity of one processor performing the one or more instructions to another processor performing the one or more instructions:

“As shown in FIG. 2, physical SLIT 200 specifies a latency (also known as "distance") value for every pair of physical NUMA nodes (i, j) that indicates the relative latency of performing a memory access from node i to node j” [Merrifield ¶ 20].
“Like nodes 0 and 1 of blade 114(1), physical NUMA nodes 2 and 3 are coupled via an inter-socket interconnect 116(2) that allows processor socket 110(3) of node 2 to remotely access DRAM 112(4) of node 3 (referred to as "on-blade remote DRAM" from the perspective of processor socket 110(3)/node 2), and allows processor socket 110(4) of node 3 to remotely access DRAM 112(3) of node 2 (referred to as "on-blade remote DRAM" from the perspective of processor socket 110(4)/node 3)” [Merrifield ¶ 17]. “Large memory and compute systems are typically designed with multiple processor sockets, each directly attached to a pool of local memory and indirectly attached to the local memories of other processor sockets (i.e., remote memories) via an interconnect or bus. This architecture is known as a Non-Uniform Memory Access (NUMA) architecture because each processor socket can access data in its local memory faster (i.e., with lower latency) than data in remote memory. A grouping of a processor socket and its local memory is referred to as a NUMA node. Due to the higher costs of remote memory accesses, it is important for system software to be aware of the memory topology of a NUMA system and the memory access latencies between NUMA nodes. Among other things, this allows the system software to make more informed task placement/memory allocation decisions and thus improve system performance. The Advanced Configuration and Power Interface (ACPI) specification defines a System Locality Information Table (SLIT) that system firmware can use to provide node-to-node latency information to system software” [Merrifield ¶ 2-3].

With regard to claim 11, Merrifield teaches the system of claim 8, as referenced above. Merrifield further teaches wherein the latency of the one or more interconnects is based, at least in part, on a socket domain:

“Large memory and compute systems are typically designed with multiple processor sockets, each directly attached to a pool of local memory and indirectly attached to the local memories of other processor sockets (i.e., remote memories) via an interconnect or bus. This architecture is known as a Non-Uniform Memory Access (NUMA) architecture because each processor socket can access data in its local memory faster (i.e., with lower latency) than data in remote memory. A grouping (domain) of a processor socket and its local memory is referred to as a NUMA node. Due to the higher costs of remote memory accesses, it is important for system software to be aware of the memory topology of a NUMA system and the memory access latencies between NUMA nodes. Among other things, this allows the system software to make more informed task placement/memory allocation decisions and thus improve system performance” [Merrifield ¶ 2-3].

With regard to claim 13, Merrifield teaches the system of claim 8, as referenced above.
Merrifield further teaches:

wherein the one or more processors are to schedule a second one or more instructions to be performed by a second one or more processors:

“Within this first loop, hypervisor 102 can determine a mapping between virtual NUMA node i and a single physical NUMA node j based on various factors present at the time of workflow execution (e.g., current compute and memory loads on physical NUMA nodes) (block 606) … Finally, at block 620, hypervisor 102 can allow one or more virtual NUMA nodes of VM 104 to be migrated (subsequent) on a temporary basis to other physical NUMA nodes (i.e., nodes different from the one on which the virtual NUMA node is initially placed) throughout the VM's runtime” [Merrifield ¶ 44]. “Further, rather than strictly pinning each virtual NUMA node to its mapped physical NUMA node, hypervisor 102 can temporarily migrate each virtual NUMA node to different physical NUMA nodes on an as-needed basis” [Merrifield ¶ 28].

based, at least in part, on an equivalent latency of the latency:

“The Advanced Configuration and Power Interface (ACPI) specification defines a System Locality Information Table (SLIT) that system firmware can use to provide node-to-node latency information to system software. In cases where a NUMA system serves as a virtualization host (i.e., is configured to run a hypervisor and virtual machines (VMs)), it is useful for the hypervisor to expose the system's physical SLIT, or at least some portion thereof, in the form of a "virtual SLIT" to the guest operating system (OS) of each VM. Like the system software of a bare-metal system, the guest OS can use this SLIT information to make intelligent task placement/memory allocation decisions in accordance with the system's underlying memory characteristics” [Merrifield ¶ 3-4]. “Finally, according to the third approach (referred to as "dynamic placement" and detailed in section (6) below), hypervisor 102 can build and expose a virtual SLIT to a VM's guest OS based on one-to-one mappings in a manner that is largely similar to one-to-one static placement approach. However, rather than determining these mappings based on a static user-provided configuration, hypervisor 102 can determine the mappings dynamically at the time of VM power-on based on various runtime factors (e.g., the current compute load on each physical NUMA node, the current memory load on each physical NUMA node, etc.)” [Merrifield ¶ 27].

of one or more interconnects coupled to the one or more processors:

“Large memory and compute systems are typically designed with multiple processor sockets, each directly attached to a pool of local memory and indirectly attached to the local memories of other processor sockets (i.e., remote memories) via an interconnect or bus. This architecture is known as a Non-Uniform Memory Access (NUMA) architecture because each processor socket can access data in its local memory faster (i.e., with lower latency) than data in remote memory. A grouping of a processor socket and its local memory is referred to as a NUMA node. Due to the higher costs of remote memory accesses, it is important for system software to be aware of the memory topology of a NUMA system and the memory access latencies between NUMA nodes” [Merrifield ¶ 2-3].
“Like nodes 0 and 1 of blade 114(1), physical NUMA nodes 2 and 3 are coupled via an inter-socket interconnect 116(2) that allows processor socket 110(3) of node 2 to remotely access DRAM 112(4) of node 3 (referred to as "on-blade remote DRAM" from the perspective of processor socket 110(3)/node 2), and allows processor socket 110(4) of node 3 to remotely access DRAM 112(3) of node 2 (referred to as "on-blade remote DRAM" from the perspective of processor socket 110(4)/node 3)” [Merrifield ¶ 17].

With regard to claim 15, Merrifield teaches:

A method comprising: scheduling one or more instructions to be performed by one or more processors:

“This virtual NUMA topology includes one or more virtual NUMA nodes, each comprising a virtual CPU and associated memory, which the hypervisor "places" -- or in other words, schedules for execution -- on physical NUMA nodes of the system” [Merrifield ¶ 21]. “Each physical NUMA node 108 is a logical grouping of a compute resource 110 and a physical memory 112 of NUMA system 100 that exhibits the property of "non-uniform memory access," which means that the compute resource is able to access the physical memory of its NUMA node (referred to as local memory) faster -- or in other words, with lower latency -- than the physical memories of other NUMA nodes (referred to as remote memories)” [Merrifield ¶ 15].

based, at least in part, on latency of one or more interconnects coupled to the one or more processors:

“Large memory and compute systems are typically designed with multiple processor sockets, each directly attached to a pool of local memory and indirectly attached to the local memories of other processor sockets (i.e., remote memories) via an interconnect or bus. This architecture is known as a Non-Uniform Memory Access (NUMA) architecture because each processor socket can access data in its local memory faster (i.e., with lower latency) than data in remote memory. A grouping of a processor socket and its local memory is referred to as a NUMA node. Due to the higher costs of remote memory accesses, it is important for system software to be aware of the memory topology of a NUMA system and the memory access latencies between NUMA nodes. Among other things, this allows the system software to make more informed task placement/memory allocation decisions and thus improve system performance. The Advanced Configuration and Power Interface (ACPI) specification defines a System Locality Information Table (SLIT) that system firmware can use to provide node-to-node latency information to system software” [Merrifield ¶ 2-3].

With regard to claim 17, Merrifield teaches the method of claim 15, as referenced above. Merrifield further teaches wherein the latency of the one or more interconnects is based, at least in part, on proximity of one processor performing the one or more instructions to another processor performing the one or more instructions:

“As shown in FIG. 2, physical SLIT 200 specifies a latency (also known as "distance") value for every pair of physical NUMA nodes (i, j) that indicates the relative latency of performing a memory access from node i to node j” [Merrifield ¶ 20].
“Like nodes 0 and 1 of blade 114(1), physical NUMA nodes 2 and 3 are coupled via an inter-socket interconnect 116(2) that allows processor socket 110(3) of node 2 to remotely access DRAM 112(4) of node 3 (referred to as "on-blade remote DRAM" from the perspective of processor socket 110(3)/node 2), and allows processor socket 110(4) of node 3 to remotely access DRAM 112(3) of node 2 (referred to as "on-blade remote DRAM" from the perspective of processor socket 110(4)/node 3)” [Merrifield ¶ 17]. “Large memory and compute systems are typically designed with multiple processor sockets, each directly attached to a pool of local memory and indirectly attached to the local memories of other processor sockets (i.e., remote memories) via an interconnect or bus. This architecture is known as a Non-Uniform Memory Access (NUMA) architecture because each processor socket can access data in its local memory faster (i.e., with lower latency) than data in remote memory. A grouping of a processor socket and its local memory is referred to as a NUMA node. Due to the higher costs of remote memory accesses, it is important for system software to be aware of the memory topology of a NUMA system and the memory access latencies between NUMA nodes. Among other things, this allows the system software to make more informed task placement/memory allocation decisions and thus improve system performance. The Advanced Configuration and Power Interface (ACPI) specification defines a System Locality Information Table (SLIT) that system firmware can use to provide node-to-node latency information to system software” [Merrifield ¶ 2-3].

With regard to claim 18, Merrifield teaches the method of claim 15, as referenced above. Merrifield further teaches wherein the scheduling the one or more instructions is based, at least in part, on a constraint on a number of processors to perform the one or more instructions:

“Multiple containers can share the same kernel, but each container can be constrained to only use a defined amount of resources such as CPU, memory, and I/O” [Merrifield ¶ 47]. “In a NUMA system like system 100 of FIGS. 1A and 1B, the hypervisor can choose to expose a virtual NUMA topology to the guest OS of a VM based on various factors (e.g., the number of virtual central processing units (CPUs) provisioned for the VM, the amount of memory provisioned for the VM, etc.)” [Merrifield ¶ 21].

With regard to claim 19, Merrifield teaches the method of claim 15, as referenced above. Merrifield further teaches further comprising: performing the one or more instructions based, at least in part, on the latency of one or more interconnects coupled to the one or more processors:

“Large memory and compute systems are typically designed with multiple processor sockets, each directly attached to a pool of local memory and indirectly attached to the local memories of other processor sockets (i.e., remote memories) via an interconnect or bus. This architecture is known as a Non-Uniform Memory Access (NUMA) architecture because each processor socket can access data in its local memory faster (i.e., with lower latency) than data in remote memory. A grouping of a processor socket and its local memory is referred to as a NUMA node. Due to the higher costs of remote memory accesses, it is important for system software to be aware of the memory topology of a NUMA system and the memory access latencies between NUMA nodes.
Among other things, this allows the system software to make more informed task placement/memory allocation decisions and thus improve system performance. The Advanced Configuration and Power Interface (ACPI) specification defines a System Locality Information Table (SLIT) that system firmware can use to provide node-to-node latency information to system software” [Merrifield ¶ 2-3]. “Generally speaking, the hypervisor will attempt to place virtual NUMA nodes on physical NUMA nodes in a manner that adheres to the virtual NUMA topology exposed to each VM, thereby optimizing VM performance” [Merrifield ¶ 21].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Merrifield (US 2023/0012606 A1) in view of Lu (US 2022/0075637 A1), and further in view of Bruno (US 2023/0401099 A1).

Regarding Claim 2, Merrifield teaches the processor of claim 1, as referenced above. Merrifield further teaches an unchangeable number of the one or more processors to perform the one or more instructions:

“A grouping of a processor socket and its local memory is referred to as a NUMA node” [Merrifield ¶ 2]. “Hypervisor 102 can then expose a virtual SLIT to the VM that includes latency values from the physical SLIT in accordance with the mappings and can pin the virtual NUMA nodes to their mapped physical NUMA nodes, such that the virtual NUMA nodes remain in place throughout the VM's runtime (or in other words, are never migrated away from their mapped physical nodes)” [Merrifield ¶ 24]. “Within this first loop, hypervisor 102 can determine, based on configuration provided by a user or administrator, a mapping between virtual NUMA node i and a single physical NUMA node j (block 406). Hypervisor 102 can then place virtual NUMA node i on physical NUMA node j and pin it there, such that virtual NUMA node i cannot be migrated to any other physical NUMA node throughout the VM's runtime (block 408)” [Merrifield ¶ 31].

Merrifield fails to teach wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a first node label indicative of a maximum number of the one or more processors to perform the one or more instructions. However, Lu teaches wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a first node label indicative of a maximum number of the one or more processors to perform the one or more instructions:

“With CPU and memory hot-add components 114 and 116, hypervisor 106 can turn on CPU and memory hot-add functionality for VM 108 and thereby enable a user to dynamically add vCPUs and/or memory to the VM during its runtime” [Lu ¶ 14].
“At a high level, these techniques involve computing a "virtual NUMA node size" for the VM (i.e., a maximum number of vCPUs and maximum amount of RAM to be included in each of the VM's virtual NUMA nodes), creating a virtual NUMA topology for the VM based on the computed virtual NUMA node size and the VM's provisioned vCPUs and memory, and exposing the virtual NUMA topology to the VM” [Lu ¶ 10].

Lu is considered to be analogous to the claimed invention because it is in the same field of task scheduling strategies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield to incorporate the teachings of Lu and include wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a first node label indicative of a maximum number of the one or more processors to perform the one or more instructions. Doing so would allow for the addition of virtual processors and memory supported by the system hypervisor. “However, as part of block 506, rather than simply creating and populating a set of mappings in the VM's virtual firmware that associate each existing vCPU and memory region of VM 108 with a corresponding existing virtual NUMA node in the virtual NUMA topology, hypervisor 106 can also create/populate a set of mappings that associate "placeholder" vCPUs (i.e., vCPUs that are not currently present in the virtual NUMA topology) with corresponding existing or placeholder virtual NUMA nodes, based on the maximum number of vCPUs supported by hypervisor 106” [Lu ¶ 31].

Merrifield in view of Lu fails to teach a second node label indicative of an unchangeable number of the one or more processors to perform the one or more instructions. However, Bruno teaches a second node label indicative of an unchangeable number of the one or more processors to perform the one or more instructions:

“For nodes, such as the node 110, the infrastructure or node attributes can be separated into static attributes and dynamic attributes. Static attributes include attributes that are mostly constant over time. Static attributes may include CPU core count, RAM installed, accelerators installed, or the like. In the event that these attributes change, the node may publish the information to the scheduling engine 108” [Bruno ¶ 30].

Bruno is considered to be analogous to the claimed invention because it is in the same field of task scheduling strategies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield in view of Lu to incorporate the teachings of Bruno and include a second node label indicative of an unchangeable number of the one or more processors to perform the one or more instructions. Doing so would keep track of the node capabilities for scheduling purposes. “FIG. 1 discloses aspects of automating the placement of workloads based on hierarchical sets of attributes that describe the capabilities of infrastructure (dynamic and static), the needs of services, applications, and workloads, data, and the qualities of data” [Bruno ¶ 20].

Regarding Claim 9, Merrifield teaches the system of claim 8, as referenced above. Merrifield further teaches an unchangeable number of processors to perform the instructions:

“A grouping of a processor socket and its local memory is referred to as a NUMA node” [Merrifield ¶ 2].
“Hypervisor 102 can then expose a virtual SLIT to the VM that includes latency values from the physical SLIT in accordance with the mappings and can pin the virtual NUMA nodes to their mapped physical NUMA nodes, such that the virtual NUMA nodes remain in place throughout the VM's runtime (or in other words, are never migrated away from their mapped physical nodes)” [Merrifield ¶ 24]. “Within this first loop, hypervisor 102 can determine, based on configuration provided by a user or administrator, a mapping between virtual NUMA node i and a single physical NUMA node j (block 406). Hypervisor 102 can then place virtual NUMA node i on physical NUMA node j and pin it there, such that virtual NUMA node i cannot be migrated to any other physical NUMA node throughout the VM's runtime (block 408)” [Merrifield ¶ 31].

Merrifield fails to teach wherein the one or more processors are to schedule the one or more instructions based, at least in part, on a first node label indicative of a dynamic number of processors to perform instructions. However, Lu teaches wherein the one or more processors are to schedule the one or more instructions based, at least in part, on a first node label indicative of a dynamic number of processors to perform instructions:

“With CPU and memory hot-add components 114 and 116, hypervisor 106 can turn on CPU and memory hot-add functionality for VM 108 and thereby enable a user to dynamically add vCPUs and/or memory to the VM during its runtime” [Lu ¶ 14]. “At a high level, these techniques involve computing a "virtual NUMA node size" for the VM (i.e., a maximum number of vCPUs and maximum amount of RAM to be included in each of the VM's virtual NUMA nodes), creating a virtual NUMA topology for the VM based on the computed virtual NUMA node size and the VM's provisioned vCPUs and memory, and exposing the virtual NUMA topology to the VM” [Lu ¶ 10].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield to incorporate the teachings of Lu and include wherein the one or more processors are to schedule the one or more instructions based, at least in part, on a first node label indicative of a dynamic number of processors to perform instructions. Doing so would allow for the addition of virtual processors and memory supported by the system hypervisor. “However, as part of block 506, rather than simply creating and populating a set of mappings in the VM's virtual firmware that associate each existing vCPU and memory region of VM 108 with a corresponding existing virtual NUMA node in the virtual NUMA topology, hypervisor 106 can also create/populate a set of mappings that associate "placeholder" vCPUs (i.e., vCPUs that are not currently present in the virtual NUMA topology) with corresponding existing or placeholder virtual NUMA nodes, based on the maximum number of vCPUs supported by hypervisor 106” [Lu ¶ 31].

Merrifield in view of Lu fails to teach a second node label indicative of an unchangeable number of processors to perform the instructions. However, Bruno teaches a second node label indicative of an unchangeable number of processors to perform the instructions:

“For nodes, such as the node 110, the infrastructure or node attributes can be separated into static attributes and dynamic attributes. Static attributes include attributes that are mostly constant over time.
Merrifield fails to teach wherein the one or more processors are to schedule the one or more instructions based, at least in part, on a first node label indicative of a dynamic number of processors to perform instructions. However, Lu teaches wherein the one or more processors are to schedule the one or more instructions based, at least in part, on a first node label indicative of a dynamic number of processors to perform instructions: “With CPU and memory hot-add components 114 and 116, hypervisor 106 can turn on CPU and memory hot-add functionality for VM 108 and thereby enable a user to dynamically add vCPUs and/or memory to the VM during its runtime” [Lu ¶ 14]. “At a high level, these techniques involve computing a "virtual NUMA node size" for the VM (i.e., a maximum number of vCPUs and maximum amount of RAM to be included in each of the VM's virtual NUMA nodes), creating a virtual NUMA topology for the VM based on the computed virtual NUMA node size and the VM's provisioned vCPUs and memory, and exposing the virtual NUMA topology to the VM” [Lu ¶ 10].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield to incorporate the teachings of Lu and include wherein the one or more processors are to schedule the one or more instructions based, at least in part, on a first node label indicative of a dynamic number of processors to perform instructions. Doing so would allow for the addition of virtual processors and memory supported by the system hypervisor. “However, as part of block 506, rather than simply creating and populating a set of mappings in the VM's virtual firmware that associate each existing vCPU and memory region of VM 108 with a corresponding existing virtual NUMA node in the virtual NUMA topology, hypervisor 106 can also create/populate a set of mappings that associate "placeholder" vCPUs (i.e., vCPUs that are not currently present in the virtual NUMA topology) with corresponding existing or placeholder virtual NUMA nodes, based on the maximum number of vCPUs supported by hypervisor 106” [Lu ¶ 31].

Merrifield in view of Lu fails to teach and a second node label indicative of an unchangeable number of processors to perform the instructions. However, Bruno teaches and a second node label indicative of an unchangeable number of processors to perform the instructions: “For nodes, such as the node 110, the infrastructure or node attributes can be separated into static attributes and dynamic attributes. Static attributes include attributes that are mostly constant over time. Static attributes may include CPU core count, RAM installed, accelerators installed, or the like. In the event that these attributes change, the node may publish the information to the scheduling engine 108” [Bruno ¶ 30]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield in view of Lu to incorporate the teachings of Bruno and include and a second node label indicative of an unchangeable number of processors to perform the instructions. Doing so would keep track of the node capabilities for scheduling purposes. “FIG. 1 discloses aspects of automating the placement of workloads based on hierarchical sets of attributes that describe the capabilities of infrastructure (dynamic and static), the needs of services, applications, and workloads, data, and the qualities of data” [Bruno ¶ 20].

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Merrifield (US 2023/0012606 A1) in view of Lu (US 2022/0075637 A1).

Regarding Claim 6, Merrifield teaches the processor of claim 1, as referenced above. Merrifield fails to teach wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a dynamic labeling of one or more nodes that include the one or more processors. However, Lu teaches wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a dynamic labeling of one or more nodes that include the one or more processors: “With CPU and memory hot-add components 114 and 116, hypervisor 106 can turn on CPU and memory hot-add functionality for VM 108 and thereby enable a user to dynamically add vCPUs and/or memory to the VM during its runtime” [Lu ¶ 14]. “At a high level, these techniques involve computing a "virtual NUMA node size" for the VM (i.e., a maximum (dynamic labeling) number of vCPUs and maximum amount of RAM to be included in each of the VM's virtual NUMA nodes), creating a virtual NUMA topology for the VM based on the computed virtual NUMA node size and the VM's provisioned vCPUs and memory, and exposing the virtual NUMA topology to the VM” [Lu ¶ 10]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield to incorporate the teachings of Lu and include wherein the one or more circuits are to schedule the one or more instructions based, at least in part, on a dynamic labeling of one or more nodes that include the one or more processors. Doing so would allow for the addition of virtual processors and memory supported by the system hypervisor. “However, as part of block 506, rather than simply creating and populating a set of mappings in the VM's virtual firmware that associate each existing vCPU and memory region of VM 108 with a corresponding existing virtual NUMA node in the virtual NUMA topology, hypervisor 106 can also create/populate a set of mappings that associate "placeholder" vCPUs (i.e., vCPUs that are not currently present in the virtual NUMA topology) with corresponding existing or placeholder virtual NUMA nodes, based on the maximum number of vCPUs supported by hypervisor 106” [Lu ¶ 31].
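The "dynamic labeling" reading of Lu rests on a per-node size limit consulted at hot-add time. A minimal sketch of that behavior, with a hypothetical topology and an invented ceiling of two vCPUs per virtual node:

    # Hypothetical sketch of Lu-style hot-add: a hot-added vCPU goes to the
    # first virtual NUMA node that has not reached the "virtual NUMA node
    # size". Topology contents and names are illustrative only.
    node_size = 2                                # max vCPUs per virtual node
    topology = {"vnode0": ["vcpu0", "vcpu1"],    # full
                "vnode1": ["vcpu2"]}             # has room

    def hot_add_vcpu(new_vcpu):
        for node, vcpus in topology.items():
            if len(vcpus) < node_size:
                vcpus.append(new_vcpu)
                return node                      # request fulfilled (cf. block 312)
        return None                              # every node is at its limit

    print(hot_add_vcpu("vcpu3"))   # -> "vnode1"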
Regarding Claim 16, Merrifield teaches the method of claim 15, as referenced above. Merrifield fails to teach wherein the scheduling the one or more instructions is based, at least in part, on node labels of one or more nodes that include the one or more processors performing the one or more instructions. However, Lu teaches wherein the scheduling the one or more instructions is based, at least in part, on node labels of one or more nodes that include the one or more processors performing the one or more instructions: “Upon receiving this request, hypervisor 106 can check whether any existing virtual NUMA node in the VM's virtual NUMA topology has not yet reached its maximum vCPU or memory limit, per the virtual NUMA node size (label) computed at block 302 of FIG. 3A (block 310) … If the answer at block 310 is yes, hypervisor 106 can add the new vCPU or new memory region to that existing virtual NUMA node, thereby fulfilling the hot-add request (block 312)” [Lu ¶ 22-23]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield to incorporate the teachings of Lu and include wherein the scheduling the one or more instructions is based, at least in part, on node labels of one or more nodes that include the one or more processors performing the one or more instructions. Doing so would allow for the addition of virtual processors and memory supported by the system hypervisor. “However, as part of block 506, rather than simply creating and populating a set of mappings in the VM's virtual firmware that associate each existing vCPU and memory region of VM 108 with a corresponding existing virtual NUMA node in the virtual NUMA topology, hypervisor 106 can also create/populate a set of mappings that associate "placeholder" vCPUs (i.e., vCPUs that are not currently present in the virtual NUMA topology) with corresponding existing or placeholder virtual NUMA nodes, based on the maximum number of vCPUs supported by hypervisor 106” [Lu ¶ 31].

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Merrifield (US 2023/0012606 A1) in view of Lu (US 2022/0075637 A1) in view of Jacobs (US 2014/0115593 A1).

Regarding Claim 12, Merrifield teaches the system of claim 8, as referenced above. Merrifield fails to teach wherein a dynamic label of processors changes. However, Lu teaches wherein a dynamic label of processors changes: “Upon receiving this request, hypervisor 106 can check whether any of the existing virtual NUMA nodes in the virtual NUMA topology of VM 108 include a placeholder (i.e., disabled) vCPU, per the mappings populated in the VM's virtual firmware data structure at block 506 of FIG. 5A (block 510). If the answer is yes, hypervisor 106 can enable that placeholder vCPU by changing its corresponding indicator (dynamic label) from "disabled" to "enabled," thereby causing VM 108 to see it as a newly available vCPU and fulfilling the vCPU hot-add request (block 512)” [Lu ¶ 59]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield to incorporate the teachings of Lu and include wherein a dynamic label of processors changes. Doing so would allow for the addition of virtual processors and memory supported by the system hypervisor. “However, as part of block 506, rather than simply creating and populating a set of mappings in the VM's virtual firmware that associate each existing vCPU and memory region of VM 108 with a corresponding existing virtual NUMA node in the virtual NUMA topology, hypervisor 106 can also create/populate a set of mappings that associate "placeholder" vCPUs (i.e., vCPUs that are not currently present in the virtual NUMA topology) with corresponding existing or placeholder virtual NUMA nodes, based on the maximum number of vCPUs supported by hypervisor 106” [Lu ¶ 31].
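The placeholder-vCPU passage from Lu ¶ 59 amounts to flipping one per-vCPU indicator. The sketch below is a toy rendering of that idea; the firmware-map layout is an assumption for illustration, not Lu's actual data structure:

    # Hypothetical sketch: placeholder vCPUs carry a "disabled" indicator,
    # and a hot-add request flips the first one to "enabled" (the change in
    # the "dynamic label" the rejection points to).
    firmware_map = {"vcpu0": "enabled", "vcpu1": "enabled",
                    "vcpu2": "disabled", "vcpu3": "disabled"}  # placeholders

    def fulfill_hot_add():
        for vcpu, state in firmware_map.items():
            if state == "disabled":
                firmware_map[vcpu] = "enabled"   # the indicator flips
                return vcpu                      # the VM now sees a new vCPU
        return None

    print(fulfill_hot_add())   # -> "vcpu2"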
Additionally, Lu teaches the hot removal of virtual CPUs during runtime: “Further, in certain embodiments logic component 120 of hypervisor 106 can enable the hot-removal of vCPUs, memory regions, and/or fully or partially populated virtual NUMA nodes from VM 108's virtual NUMA topology (in addition to hot-add)” [Lu ¶ 29]. However, Merrifield in view of Lu fails to explicitly teach based, at least in part, on completion of all instructions being performed by a plurality of processors on a node. Jacobs teaches based, at least in part, on completion of all instructions being performed by a plurality of processors on a node: “In response to execution of the request completing, the dispatcher 138 removes the stored record of the first virtual processor consuming a home node first physical processor” [Jacobs ¶ 51]. Jacobs is considered to be analogous to the claimed invention because it is in the same field of task scheduling strategies.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield in view of Lu to incorporate the teachings of Jacobs and include based, at least in part, on completion of all instructions being performed by a plurality of processors on a node. Doing so would allow for improvements to the performance of virtual processors. “In addition to actively removing virtual processors receiving excess capacity off of their home affinity domain (node) in favor of allowing a virtual processor of a partition to receive entitled cycles in its home affinity domain, the dispatcher takes additional steps to return virtual processors back to their home affinity domain as quickly as possible in the event no choice exists but to run them outside of their home affinity domain (node)” [Jacobs ¶ 17].
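Jacobs' completion-triggered bookkeeping can be pictured as removing a record once the last in-flight request on a node finishes. This is a loose sketch under invented assumptions (the outstanding-work table and names are not from any reference):

    # Hypothetical sketch: track in-flight work per node; when the last
    # instruction completes, the node becomes eligible for cleanup such as
    # hot-removal or rebalancing (cf. Jacobs ¶ 51, Lu ¶ 29).
    outstanding = {"node0": {"req1", "req2"}}    # in-flight work per node

    def on_completion(node, req):
        outstanding[node].discard(req)
        if not outstanding[node]:                # all instructions done
            print(f"{node}: idle; eligible for hot-removal/rebalancing")

    on_completion("node0", "req1")
    on_completion("node0", "req2")               # triggers the cleanup path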
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Merrifield (US 2023/0012606 A1) in view of Lu (US 2022/0075637 A1) in view of Bruno (US 2023/0401099 A1) in view of Loudon (US 2017/0315835 A1).

Regarding Claim 14, Merrifield in view of Lu in view of Bruno teaches the system of claim 9, as referenced above. Merrifield in view of Lu in view of Bruno fails to explicitly teach wherein a percentage of one or more nodes assigned the first and second node label is configurable. However, Loudon teaches wherein a percentage of one or more nodes assigned the first and second node label is configurable: “In embodiments, time critical virtual machines are fixed to a particular NUMA node. In embodiments, non-time critical virtual machines run applications that do not have such strict requirements” [Loudon ¶ 79]. “In embodiments, the configuring of the first virtual machine comprises pinning the first virtual machine to run only on the subset 106 of cores of the plurality. Due to this pinning, the first virtual machine is dedicated to run on cores 110A and 110B only. In embodiments, the configuring of the second virtual machine comprises not pinning the second virtual machine to run on any particular core of the plurality 110” [Loudon ¶ 54-55; Examiner notes that configuring a virtual machine configures its nodes, which determines the percentage of the one or more nodes assigned to the first and second node labels]. Loudon is considered to be analogous to the claimed invention because it is in the same field of task scheduling strategies.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield in view of Lu in view of Bruno to incorporate the teachings of Loudon and include wherein a percentage of one or more nodes assigned the first and second node label is configurable. Doing so would allow for different configurations for time critical and non-time critical applications to improve processing. “Embodiments comprise splitting virtual machines into two types. "Time critical" virtual machines run applications that require uncontended access to CPU resources (for example, those performing media packet forwarding). In embodiments, time critical virtual machines are fixed to a particular NUMA node. In embodiments, non-time critical virtual machines run applications that do not have such strict requirements” [Loudon ¶ 79].
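Loudon's pinned/unpinned split suggests one simple way a "configurable percentage" could look in code. The fraction knob below is entirely an invention for illustration, not something taken from Loudon:

    # Hypothetical sketch: assign a configurable fraction of nodes a "pinned"
    # label (time-critical, cf. Loudon) and leave the rest floating.
    def assign_labels(nodes, pinned_fraction=0.5):
        cutoff = int(len(nodes) * pinned_fraction)
        return {node: ("pinned" if i < cutoff else "floating")
                for i, node in enumerate(nodes)}

    print(assign_labels(["n0", "n1", "n2", "n3"], pinned_fraction=0.25))
    # -> {'n0': 'pinned', 'n1': 'floating', 'n2': 'floating', 'n3': 'floating'}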
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Merrifield (US 2023/0012606 A1) in view of Lu (US 2022/0075637 A1) in view of Rana (US 2017/0230733 A1).

Regarding Claim 20, Merrifield teaches the method of claim 15, as referenced above. Merrifield fails to teach further comprising: generating a fitness score of a node that includes the one or more processors, the fitness score indicating a different value than a number of processors performing the one or more instructions; and scheduling a second one or more instructions to be performed by a different number of the one or more processors than the number of processors performing the one or more instructions.

However, Lu teaches further comprising: generating (a determination) a fitness score of a node that includes the one or more processors: “For example, in a particular implementation pertaining to CPU hot-add, hypervisor 106 can determine whether any existing virtual NUMA node is associated with a "placeholder" vCPU in the virtual firmware data structure which indicates that the virtual NUMA node is not yet full (described in section (4) below). If the answer at block 310 is yes, hypervisor 106 can add the new vCPU or new memory region to that existing virtual NUMA node, thereby fulfilling the hot-add request (block 312)” [Lu ¶ 22-23]; the fitness score indicating a different value than a number of processors performing the one or more instructions: “Turning now to FIG. 3B, at block 308 hypervisor 106 can receive (from, e.g., a user or administrator of VM 108) a request to hot-add a new vCPU or a new memory region to VM 108. Upon receiving this request, hypervisor 106 can check whether any existing virtual NUMA node in the VM's virtual NUMA topology has not yet reached its maximum vCPU or memory limit, per the virtual NUMA node size computed at block 302 of FIG. 3A (block 310)” [Lu ¶ 22; Examiner notes that adding a new vCPU increases the number of processors performing the instructions]; and scheduling a second one or more instructions to be performed by a different number of the one or more processors than the number of processors performing the one or more instructions: “CPU hot-add (sometimes referred to as CPU hot-plug) and memory hot-add are features in modern hypervisors that enable a user to add virtual processing cores (i.e., vCPUs) and memory (i.e., RAM) respectively to running virtual machines (VMs)” [Lu ¶ 1]. “If the answer at block 310 is yes, hypervisor 106 can add the new vCPU or new memory region to that existing virtual NUMA node, thereby fulfilling the hot-add request (block 312)” [Lu ¶ 23].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield to incorporate the teachings of Lu and include further comprising: generating a fitness score of a node that includes the one or more processors, the fitness score indicating a different value than a number of processors performing the one or more instructions; and scheduling a second one or more instructions to be performed by a different number of the one or more processors than the number of processors performing the one or more instructions. Doing so would allow for the addition of virtual processors and memory supported by the system hypervisor. “However, as part of block 506, rather than simply creating and populating a set of mappings in the VM's virtual firmware that associate each existing vCPU and memory region of VM 108 with a corresponding existing virtual NUMA node in the virtual NUMA topology, hypervisor 106 can also create/populate a set of mappings that associate "placeholder" vCPUs (i.e., vCPUs that are not currently present in the virtual NUMA topology) with corresponding existing or placeholder virtual NUMA nodes, based on the maximum number of vCPUs supported by hypervisor 106” [Lu ¶ 31].

Merrifield in view of Lu fails to teach further comprising: generating a fitness score of a node that includes the one or more processors, the fitness score. However, Rana teaches further comprising: generating a fitness score of a node that includes the one or more processors, the fitness score: “In various embodiments, scoring logic 128 and/or orchestrator 106 may also rank nodes for workload placement or other purposes (e.g., capacity planning or rebalancing activities including killing or migrating workloads or services or tuning elements on the virtual layer) based on their availability scores and/or edge tension scores. In various embodiments, nodes may be selected for ranking based, at least in part, on available capacity and/or tensions with one or more neighboring nodes, and based, at least in part, on associated features … Selection of one or more nodes may be performed at initial placement of a workload and/or during operation, e.g., in response to a rebalancing” [Rana ¶ 78]. “For example, features associated with processors 140 or co-processors 148 may include one or more of a number of cores, processor speed, cache architecture, memory architecture (e.g., non-uniform memory access (NUMA)), instruction set architecture (ISA), etc” [Rana ¶ 32]. Rana is considered to be analogous to the claimed invention because it is in the same field of task scheduling strategies.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Merrifield in view of Lu to incorporate the teachings of Rana and include further comprising: generating a fitness score of a node that includes the one or more processors, the fitness score. Doing so would allow for further comparison of node capabilities for more efficient hardware usage through workload placement. “In various embodiments, scoring logic 128 and/or orchestrator 106 may also rank nodes for workload placement or other purposes (e.g., capacity planning or rebalancing activities including killing or migrating workloads or services or tuning elements on the virtual layer) based on their availability scores and/or edge tension scores” [Rana ¶ 78].
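Rana's score-and-rank placement can be caricatured in a few lines. The scoring function below mixes spare capacity with interconnect latency using arbitrary, invented weights; nothing in it is taken from Rana's disclosure:

    # Hypothetical sketch: rank nodes by a fitness score that is deliberately
    # not the same value as the count of busy processors, then place the next
    # batch of instructions on the top-ranked node.
    def fitness(node):
        return (node["cores"] - node["busy"]) - 0.1 * node["latency"]

    nodes = [{"name": "n0", "cores": 8, "busy": 6, "latency": 10},
             {"name": "n1", "cores": 4, "busy": 0, "latency": 21}]

    best = max(nodes, key=fitness)
    print(best["name"], fitness(best))   # schedule the second batch here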
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARI F RIGGINS whose telephone number is (571) 272-2772. The examiner can normally be reached Monday-Friday 7:00 AM-4:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bradley Teets, can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.F.R./ Examiner, Art Unit 2197
/BRADLEY A TEETS/ Supervisory Patent Examiner, Art Unit 2197

Prosecution Timeline

Jul 21, 2023
Application Filed
Jan 16, 2026
Non-Final Rejection — §101, §102, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
