DETAILED ACTION
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to the 35 U.S.C. 101 rejections (Remarks p. 7) have been fully considered and are persuasive. The 35 U.S.C. 101 rejections have been withdrawn.
Applicant’s arguments with respect to the 35 U.S.C. 103 rejections (Remarks pp. 7-9) have been fully considered but are moot in view of the Examiner’s new grounds of rejection based on added references.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Nicholas (US 20120278800 A1) in view of Wada (US 8695007 B2) and Lu (US 20220113785 A1).
Regarding Claim 1, Nicholas teaches a method, implemented at a computer system that includes a processor system, comprising (
Nicholas discloses, “Computer system 100 can include processor 102, e.g., an execution core. While one processor 102 is illustrated, in other embodiments computer system 100 may have multiple processors, e.g., multiple execution cores per processor substrate and/or multiple processor substrates that could each have multiple execution cores,” ¶ 0020.), comprising:
determining, from virtualization-stack or hypervisor configuration data, that a virtual machine (VM) (Fig. 4 410 or 418) possesses a performance entitlement indicating that processor idle states are to be disabled for a physical processor core executing the VM (
[Image: Nicholas, Fig. 4 (media_image1.png, greyscale)]
Nicholas discloses, “Virtualization system scheduler 432 can select a physical processor to run the virtual processor and set a bit in an idle physical processor map that indicates that the physical processor is running a thread as opposed to being idle. Similar to the idle virtual processor map, the idle physical processor map can be used by virtualization system scheduler 432 to determine what physical processors can be selected to run a virtual processor,” ¶ 0037.
The virtual machine associated with the virtual processor thus possesses a performance entitlement because it is entitled to use the physical processor that is being selected. Said status of possessing a performance entitlement indicates that the physical processor will not be idle while running the virtual processor, effectively disabling the processor idle states as a result (mapped to claimed “processor idle states are to be disabled”). The above Fig. 4 of Nicholas shows this in detail with virtual machines 410 and 418. This mapping is consistent with Spec. ¶ 22.
Since the physical processor map, mapped to the claimed “configuration data”, is associated with the “virtualization system scheduler”, part of a virtualization stack, it is “virtualization-stack configuration data”, and the data stored on said map is used to determine which physical processors are idle or not.);
associating a virtual processor core of the VM with the physical processor core, including disabling a processor idle state at the physical processor core based on the VM possessing the performance entitlement (
Nicholas discloses, “One hardware resource that a hypervisor time-slices is a physical processor. Generally, a physical processor is exposed within a virtual machine as a virtual processor,” ¶ 0002, and “Virtualization system scheduler 432 can select a physical processor to run the virtual processor and set a bit in an idle physical processor map that indicates that the physical processor is running a thread as opposed to being idle. Similar to the idle virtual processor map, the idle physical processor map can be used by virtualization system scheduler 432 to determine what physical processors can be selected to run a virtual processor,” ¶ 0037.),
Nicholas does not teach wherein disabling the processor idle state comprises a hypervisor overriding a host-level power-management policy based on the performance entitlement.
Nicholas also does not explicitly disclose disassociating the virtual processor core from the physical processor core, including re-enabling the processor idle state at the physical processor core.
However, Wada teaches disassociating the virtual processor core from the physical processor core, including re-enabling the processor idle state at the physical processor core (
Wada discloses, “Since the virtual processor 303 is idle, the virtual processor 303 is disallocated from the physical processor 002 when entering the suspend mode,” Col 5, Lines 64-66, “The hypervisor 100 binds the idle process 020 with the physical processor 002. In this embodiment, there are idle processes #0 and #1. The idle process #0 is bound to the physical processor #0, and the idle process #1 is bound to the physical processor #1. When a virtual processor 303 schedulable to the physical processor 002 cannot be found, the hypervisor 100 executes the idle process 020 bound to this physical processor 002 until a schedulable virtual processor 303 is found,” Col 6, Lines 4-12, and “Idle processes are scheduled when a virtual computer virtual processor that can run on a physical processor is not found during priority comparison,” Col 21, Lines 55-57.).
Nicholas and Wada are both considered to be analogous to the claimed invention because they are in the same field of virtualization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas to incorporate the teachings of Wada and provide disassociating the virtual processor core from the physical processor core, including re-enabling the processor idle state at the physical processor core. Doing so would help improve efficiency of the usage of the physical processor. (Wada discloses, “Other virtual processors 303 can be allocated to the physical processors 002 until the virtual computer 300 of this virtual processor 303 exits suspend mode. This makes it possible to use the one or more physical processors 002 efficiently,” Col 5, Lines 66-67 and Col 6, Lines 1-3.).
Nicholas in view of Wada does not teach wherein disabling the processor idle state comprises a hypervisor overriding a host-level power-management policy based on the performance entitlement.
However, Lu teaches a hypervisor overriding a host-level power-management policy based on the performance entitlement (
Lu discloses, “receiving a platform-level power management event from an operating system or hypervisor of the computing node; and overriding the advisory power management decision, and prompting the computing node to adjust performance of the processor according to the platform-level power management event,” ¶ 0082.
The claimed “host-level power-management policy” is mapped to the disclosed “advisory power management decision”.
Here, Lu’s hypervisor is responsible for overriding a power management decision in order to adjust performance of a processor. After the combination of Nicholas in view of Wada, with Lu, said overriding is now done based on the performance entitlement in order to disable a processor idle state, wherein the “performance entitlement” event is Nicholas’ attachment of the virtual processor to a physical processor.).
Nicholas in view of Wada, and Lu are both considered to be analogous to the claimed invention because they are in the same field of power management. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada to incorporate the teachings of Lu and provide a hypervisor overriding a host-level power-management policy based on the performance entitlement. Doing so would provide greater support for overclocking (Lu discloses, “In this case, the power management system 102 (or the power management agent 212) may override [the] current advisory power management decision determined based on the decision model, and attempt to select a maximum possible performance state and a minimum possible idle resiliency as a recommendation to prompt the at least computing node to perform a recommended operation on the processor,” ¶ 0068.).
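For illustration of the Examiner’s mapping only — the following sketch is not code from Lu, Nicholas, or the instant application, and all names are hypothetical — a hypervisor-side override of a host-level power-management policy based on a VM’s performance entitlement can be modeled as:

```python
# Hypothetical sketch: a host-level power-management policy (Lu's
# "advisory power management decision") that a hypervisor overrides
# when the VM being scheduled possesses a performance entitlement.

HOST_POLICY = {"idle_states_enabled": True}  # host-level default policy

def effective_power_policy(vm_has_entitlement, host_policy):
    # Start from the host-level policy, then apply the hypervisor
    # override: entitlement => processor idle states disabled for the
    # physical core that will run this VM's virtual processor.
    policy = dict(host_policy)
    if vm_has_entitlement:
        policy["idle_states_enabled"] = False
    return policy

# Entitled VM: idle states disabled despite the host default.
print(effective_power_policy(True, HOST_POLICY))
# Non-entitled VM: host-level policy applies unchanged.
print(effective_power_policy(False, HOST_POLICY))
```

The override leaves the host-level policy object itself untouched, mirroring the claim’s framing that the policy is overridden per-core rather than rewritten globally.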
Regarding Claim 2, Nicholas in view of Wada and Lu teaches the method of claim 1, wherein the processor idle state is a deep sleep idle state (
Nicholas discloses, “Referring to schedulers 416 and 426, these schedulers can give preference to unparked, i.e., active, virtual processors rather than parked virtual processors when it schedules any non-affinitized threads. This lets the parked virtual processors enter a deeper C-state. When the virtual processors idle, the corresponding physical processors may also idle and virtualization system power manager 434 can transition the physical processors to a deeper C-state,” ¶ 0041.).
Regarding Claim 4, Nicholas in view of Wada and Lu teaches the method of claim 1, wherein disabling the processor idle state comprises one of: disabling the processor idle state prior to associating the virtual processor core with the physical processor core;
disabling the processor idle state concurrent with associating the virtual processor core with the physical processor core;
or disabling the processor idle state after associating the virtual processor core with the physical processor core (
Nicholas discloses, “Virtualization system scheduler 432 can select a physical processor to run the virtual processor and set a bit in an idle physical processor map that indicates that the physical processor is running a thread as opposed to being idle. Similar to the idle virtual processor map, the idle physical processor map can be used by virtualization system scheduler 432 to determine what physical processors can be selected to run a virtual processor,” ¶ 0037.
Also note that the claim recites all three possible temporal relations between “disabling the processor idle state” and “associating the virtual processor core with the physical processor core”: “prior to,” “concurrent with,” and “after.”).
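For illustration of the cited “idle physical processor map” only — this sketch is not code from Nicholas, and all names are hypothetical — the map described in ¶ 0037 can be modeled as a per-processor bit array that the scheduler consults and updates:

```python
# Hypothetical sketch: an "idle physical processor map" where a set bit
# indicates the physical processor is running a thread, and a clear bit
# indicates it is idle and selectable by the scheduler.

class IdleProcessorMap:
    def __init__(self, num_processors):
        # All physical processors start idle (bit clear).
        self.bits = [0] * num_processors

    def mark_running(self, proc):
        # The scheduler sets the bit when it selects this physical
        # processor to run a virtual processor.
        self.bits[proc] = 1

    def mark_idle(self, proc):
        # Cleared again when the virtual processor is disassociated.
        self.bits[proc] = 0

    def idle_processors(self):
        # Processors whose bit is clear are candidates for scheduling.
        return [i for i, b in enumerate(self.bits) if b == 0]

pmap = IdleProcessorMap(4)
pmap.mark_running(0)
pmap.mark_running(2)
print(pmap.idle_processors())  # remaining idle candidates
```

Under the Examiner’s mapping, this map is the “configuration data” consulted by the virtualization stack to determine which physical processors are idle.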
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Nicholas (US 20120278800 A1) in view of Wada (US 8695007 B2), Lu (US 20220113785 A1), and Das (US 20210096896 A1).
Regarding Claim 3, Nicholas in view of Wada and Lu teaches the method of claim 2. Nicholas in view of Wada and Lu does not teach wherein the deep sleep idle state is a C3 or higher numbered C-state.
However, Das teaches wherein the deep sleep idle state is a C3 or higher numbered C-state (
Das discloses, “…allow the processor core 18 to be set to an idle C-state level of C3,” ¶ 0059.).
Nicholas in view of Wada and Lu, and Das are both considered to be analogous to the claimed invention because they are in the same field of server computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada and Lu to incorporate the teachings of Das and provide wherein the deep sleep idle state is a C3 or higher numbered C-state. Doing so would help improve power conservation. (Das discloses, “C3—L1/L2 caches flush, clocks off,” ¶ 0027.).
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Nicholas (US 20120278800 A1) in view of Wada (US 8695007 B2), Lu (US 20220113785 A1), and Qu (US 20210279107 A1).
Regarding Claim 5, Nicholas in view of Wada and Lu teaches the method of claim 1, wherein the virtual processor core is a first virtual processor core, the physical processor core is a first physical processor core, and the VM is a first VM, the method further comprising: associating a second virtual processor core of the second VM with a second physical processor core (
Nicholas discloses, “Virtualization system scheduler 432 can select a physical processor to run the virtual processor and set a bit in an idle physical processor map that indicates that the physical processor is running a thread as opposed to being idle.” ¶ 0037.
[Image: Nicholas, Fig. 4 (media_image1.png, greyscale)]
Fig. 4 shows that the process to associate a virtual processor core to a physical processor core applies to either VM 410 or 422, and any one of which could be mapped to the claimed “second VM.”).
Nicholas in view of Wada and Lu does not teach determining that a second VM lacks the performance entitlement; and based on the second VM lacking the performance entitlement, associating a second virtual processor core of the second VM with a second physical processor core without disabling the processor idle state at the second physical processor core.
However, Qu teaches determining that a second VM lacks the performance entitlement, and based on the second VM lacking the performance entitlement, not disabling the processor idle state at the second physical processor core (
Qu teaches subsequent to a physical processor core having become idle/sharable, associating a non-low-latency VMI’s virtual processor core to the physical processor core, stating:
“In an example, if the CPU pinning requirements specify that low latency throughput is not required (e.g., non-low latency), the virtual machine management service allocates available processor capacity from any of the processor cores to the VNFs or other virtual machines to be implemented using the virtual machine image. For instance, if the virtual machine management service allocates one or more processors that were previously unallocated to other VNFs or other virtual machines, the virtual machine management service may indicate that these one or more processors are shareable. Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082, and “For instance, if the VMI profile 308 specifies that a VNF or virtual machine is to have low latency throughput and that it is to include two vCPUs (e.g., a data plane vCPU and a control plane vCPU), the VMI instantiation system 310 may determine that at least two processor cores from the server 316 are required to implement the VNF or virtual machine. Alternatively, if the VMI profile 308 specifies that a VNF or virtual machine does not require low latency throughput and that it is to include two vCPUs, the VMI instantiation system 310 may determine that shareable resources may be allocated for the VNF or virtual machine,” ¶ 0052.
Qu teaches the “non-low latency” is determined based on a virtual machine’s status, stating “In an example, the user can specify, in the VMI profile, that the resulting VNF or virtual machine is to have low-latency throughput. For instance, in the VMI profile, the user may provide an entry (e.g., “low-latency=TRUE,” etc.) that, as a result of being processed by the virtual machine management service 102, causes the virtual machine management service 102 to determine that low latency throughput is required for the VNF or other virtual machine to be implemented through instantiation of the VMI 106. Alternatively, in the VMI profile, the user may indicate that low latency throughput is not required (e.g., “low-latency=FALSE,” etc.),” ¶ 0036.
A virtual machine is determined to “lack[] the performance entitlement” when the system determines that a VMI (virtual machine image) is set to “low-latency=FALSE.” The mapping is consistent with the specification, because the specification states “In embodiments, this performance entitlement is a ‘low-latency entitlement’ signaling that an associated VM is a ‘low-latency’ VM (LLVM). In embodiments, a low-latency entitlement is associated with a particular VM and indicates that idle states at a physical processor can be disabled when that VM’s virtual processor is associated therewith,” ¶ 0021.
The idle state is enabled/maintained when the VM is not shown as busy, and indicated as can be shared. This interpretation is consistent with Spec. ¶ 0021.).
Nicholas in view of Wada and Lu, and Qu are both considered to be analogous to the claimed invention because they are in the same field of device computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada and Lu to incorporate the teachings of Qu and provide determining that a second VM lacks the performance entitlement; and based on the second VM lacking the performance entitlement, associating a second virtual processor core of the second VM with a second physical processor core without disabling the processor idle state at the second physical processor core. Doing so would help optimize utilization of resources. (Qu discloses, “Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082.).
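For illustration of the Examiner’s mapping only — this sketch is not code from Qu, and all names are hypothetical — Qu’s profile-driven determination (“low-latency=TRUE”/“low-latency=FALSE”) deciding whether idle states are disabled at the associated physical core can be modeled as:

```python
# Hypothetical sketch: associating a virtual processor core with a
# physical core, disabling the core's idle state only when the VM's
# profile indicates the low-latency performance entitlement.

def associate_vcpu(vmi_profile, core_state):
    # A VM "possesses the performance entitlement" only when its
    # profile requests low-latency throughput (Qu's "low-latency=TRUE").
    if vmi_profile.get("low-latency") is True:
        core_state["idle_state_enabled"] = False  # entitlement: disable
    # "low-latency=FALSE": the idle state is left enabled (not disabled).
    core_state["associated"] = True
    return core_state

# First VM possesses the entitlement; second VM lacks it.
first_core = associate_vcpu({"low-latency": True},
                            {"idle_state_enabled": True})
second_core = associate_vcpu({"low-latency": False},
                             {"idle_state_enabled": True})
```

The second association proceeds “without disabling the processor idle state,” consistent with the claim 5 limitation mapped above.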
Regarding Claim 6, Nicholas in view of Wada, Lu, and Qu teaches the method of claim 5, further comprising, associating the second virtual processor core with the second physical processor core, including enabling the processor idle state at the second physical processor core based on the second VM lacking the performance entitlement (
Qu teaches subsequent to a physical processor core having become idle/sharable, associating a non-low-latency VMI’s virtual processor core to the physical processor core, stating:
“In an example, if the CPU pinning requirements specify that low latency throughput is not required (e.g., non-low latency), the virtual machine management service allocates available processor capacity from any of the processor cores to the VNFs or other virtual machines to be implemented using the virtual machine image. For instance, if the virtual machine management service allocates one or more processors that were previously unallocated to other VNFs or other virtual machines, the virtual machine management service may indicate that these one or more processors are shareable. Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082, and “For instance, if the VMI profile 308 specifies that a VNF or virtual machine is to have low latency throughput and that it is to include two vCPUs (e.g., a data plane vCPU and a control plane vCPU), the VMI instantiation system 310 may determine that at least two processor cores from the server 316 are required to implement the VNF or virtual machine. Alternatively, if the VMI profile 308 specifies that a VNF or virtual machine does not require low latency throughput and that it is to include two vCPUs, the VMI instantiation system 310 may determine that shareable resources may be allocated for the VNF or virtual machine,” ¶ 0052.
Qu teaches the “non-low latency” is determined based on a virtual machine’s status, stating “In an example, the user can specify, in the VMI profile, that the resulting VNF or virtual machine is to have low-latency throughput. For instance, in the VMI profile, the user may provide an entry (e.g., “low-latency=TRUE,” etc.) that, as a result of being processed by the virtual machine management service 102, causes the virtual machine management service 102 to determine that low latency throughput is required for the VNF or other virtual machine to be implemented through instantiation of the VMI 106. Alternatively, in the VMI profile, the user may indicate that low latency throughput is not required (e.g., “low-latency=FALSE,” etc.),” ¶ 0036.
A virtual machine is determined to “lack[] the performance entitlement” when the system determines that a VMI (virtual machine image) is set to “low-latency=FALSE.” The mapping is consistent with the specification, because the specification states “In embodiments, this performance entitlement is a ‘low-latency entitlement’ signaling that an associated VM is a ‘low-latency’ VM (LLVM). In embodiments, a low-latency entitlement is associated with a particular VM and indicates that idle states at a physical processor can be disabled when that VM’s virtual processor is associated therewith,” ¶ 0021.
The idle state is enabled/maintained when the VM is not shown as busy, and indicated as can be shared.).
Nicholas in view of Wada and Lu, and Qu are both considered to be analogous to the claimed invention because they are in the same field of device computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada and Lu to incorporate the teachings of Qu and provide further comprising, associating the second virtual processor core with the second physical processor core, including enabling the processor idle state at the second physical processor core based on the second VM lacking the performance entitlement. Doing so would help optimize utilization of resources. (Qu discloses, “Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082.).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Nicholas (US 20120278800 A1) in view of Wada (US 8695007 B2), Lu (US 20220113785 A1), Qu (US 20210279107 A1), and Duchesneau (US 20140183957 A1).
Regarding Claim 7, Nicholas in view of Wada, Lu, and Qu teaches the method of claim 5. Nicholas in view of Wada, Lu, and Qu does not teach wherein the first physical processor core utilizes a higher clock rate than if the processor idle state was disabled at the second physical processor core.
However, Duchesneau teaches wherein the first physical processor core utilizes a higher clock rate than if the processor idle state was disabled at the second physical processor core (
Duchesneau discloses, “Some advanced multicore processors may recognize idle cores (or may be configured with idle cores) that may be powered off or placed in a processor state that dissipates very little power, thereby creating TDP headroom for the remaining cores, which may be placed in a turbo/overclocking mode that may greatly exceed the performance rate that is possible when all cores are operating,” ¶ 0648.
“The processor idle state was disabled” is mapped to the state where cores are neither powered off nor placed in a processor state that dissipates very little power. This means that there will be less TDP headroom for remaining cores, which may cause the termination of overclocking. Therefore, the first physical processor core with overclocking utilizes a higher clock rate than when there is no overclocking (when the second physical processor core is powered off or in a low power state).).
Nicholas in view of Wada, Lu, and Qu, and Duchesneau are both considered to be analogous to the claimed invention because they are in the same field of server computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada, Lu, and Qu to incorporate the teachings of Duchesneau and provide wherein the first physical processor core utilizes a higher clock rate than if the processor idle state was disabled at the second physical processor core. Doing so would help allow improving computer performance by overclocking when appropriate. (Duchesneau discloses, “This may be particularly valuable for single-threaded tasks that cannot take advantage of multiple cores,” ¶ 0648.).
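For illustration of the Examiner’s mapping only — this sketch is not code from Duchesneau, and the numeric values are hypothetical — the TDP-headroom relationship in ¶ 0648 (idle cores freeing headroom that lets a remaining core turbo above its base rate) can be modeled as:

```python
# Hypothetical sketch: an active core's clock rate as a function of
# thermal (TDP) headroom freed by cores that are permitted to idle.

BASE_CLOCK_GHZ = 2.0                  # rate when all cores are active
TURBO_BONUS_PER_IDLE_CORE_GHZ = 0.5   # headroom freed per idle core

def turbo_clock_ghz(core_idle_flags):
    # Each core allowed to enter an idle state frees headroom that the
    # active core may spend on turbo/overclocking.
    idle_count = sum(1 for idle in core_idle_flags if idle)
    return BASE_CLOCK_GHZ + idle_count * TURBO_BONUS_PER_IDLE_CORE_GHZ

# Second core can idle: the first core turbos above base.
with_idle = turbo_clock_ghz([False, True])
# Idle state disabled at the second core: no headroom, base clock only.
without_idle = turbo_clock_ghz([False, False])
```

This captures the claimed comparison: the first physical processor core utilizes a higher clock rate than it would if the processor idle state were disabled at the second physical processor core.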
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Nicholas (US 20120278800 A1) in view of Wada (US 8695007 B2), Lu (US 20220113785 A1), and Suryanarayana (US 20230342477 A1).
Regarding Claim 8, Nicholas in view of Wada and Lu teaches the method of claim 1. Nicholas in view of Wada and Lu does not teach further comprising exposing the virtual processor core to the VM as a performance core.
However, Suryanarayana teaches further comprising exposing the virtual processor core to the VM as a performance core (
Suryanarayana discloses, “FIG. 4 illustrates a run-time OS view or resource allocation in accordance with disclosed teachings. As depicted in FIG. 4, both performance core 111 and efficiency core 111 are fully exposed to workloads of the runtime OS 401,” ¶ 0034.
After the combination of Nicholas in view of Wada and Lu with Suryanarayana, the runtime OS from Suryanarayana runs on a VM from Nicholas in view of Wada and Lu. Because the performance core is exposed to the runtime OS, it is also exposed to the VM of Nicholas in view of Wada and Lu that runs the OS.
Nicholas in view of Wada and Lu, and Suryanarayana are both considered to be analogous to the claimed invention because they are in the same field of server computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada and Lu to incorporate the teachings of Suryanarayana and provide further comprising exposing the virtual processor core to the VM as a performance core. Doing so would help allow for increased performance. (Suryanarayana discloses, “The hybrid core 101 illustrated FIG. 1 is includes one or more comparatively large, high speed performance cores 111 and one or more comparatively small efficiency cores 112 that are optimized for per watt performance,” ¶ 0023.).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Nicholas (US 20120278800 A1) in view of Wada (US 8695007 B2), Lu (US 20220113785 A1), Suryanarayana (US 20230342477 A1), and Qu (US 20210279107 A1).
Regarding Claim 9, Nicholas in view of Wada, Lu, and Suryanarayana teaches the method of claim 8, wherein the virtual processor core is a first virtual processor core and the physical processor core is a first physical processor core, the method further comprising:
exposing a second virtual processor core to the VM as an efficiency core (
Suryanarayana discloses, “As depicted in FIG. 4, both performance core 111 and efficiency core 111 are fully exposed to workloads of the runtime OS 401,” ¶ 0034.).
Nicholas in view of Wada and Lu, and Suryanarayana are both considered to be analogous to the claimed invention because they are in the same field of server computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada and Lu to incorporate the teachings of Suryanarayana and provide wherein the virtual processor core is a first virtual processor core and the physical processor core is a first physical processor core, the method further comprising: exposing a second virtual processor core to the VM as an efficiency core. Doing so would help allow for increased performance. (Suryanarayana discloses, “The hybrid core 101 illustrated FIG. 1 is includes one or more comparatively large, high speed performance cores 111 and one or more comparatively small efficiency cores 112 that are optimized for per watt performance,” ¶ 0023.).
Nicholas in view of Wada, Lu, and Suryanarayana does not teach associating a second virtual processor core of the VM with a second physical processor core without disabling the processor idle state at the second physical processor core.
However, Qu teaches associating a second virtual processor core of the VM with a second physical processor core without disabling the processor idle state at the second physical processor core (
Qu teaches subsequent to a physical processor core having become idle/sharable, associating a non-low-latency VMI’s virtual processor core to the physical processor core, stating:
“In an example, if the CPU pinning requirements specify that low latency throughput is not required (e.g., non-low latency), the virtual machine management service allocates available processor capacity from any of the processor cores to the VNFs or other virtual machines to be implemented using the virtual machine image. For instance, if the virtual machine management service allocates one or more processors that were previously unallocated to other VNFs or other virtual machines, the virtual machine management service may indicate that these one or more processors are shareable. Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082.
The idle state is enabled/maintained when the VM is not shown as busy, and indicated as can be shared.).
Nicholas in view of Wada, Lu, and Suryanarayana, and Qu are both considered to be analogous to the claimed invention because they are in the same field of device computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada, Lu, and Suryanarayana to incorporate the teachings of Qu and provide associating a second virtual processor core of the VM with a second physical processor core without disabling the processor idle state at the second physical processor core. Doing so would help optimize utilization of resources. (Qu discloses, “Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082.).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Nicholas (US 20120278800 A1) in view of Wada (US 8695007 B2), Lu (US 20220113785 A1), Suryanarayana (US 20230342477 A1), Qu (US 20210279107 A1), and Duchesneau (US 20140183957 A1).
Regarding Claim 10, Nicholas in view of Wada, Lu, Suryanarayana, and Qu teaches the method of claim 9. Nicholas in view of Wada, Lu, Suryanarayana, and Qu does not teach wherein the first physical processor core utilizes a higher clock rate than if the processor idle state was disabled at the second physical processor core.
However, Duchesneau teaches wherein the first physical processor core utilizes a higher clock rate than if the processor idle state was disabled at the second physical processor core (
Duchesneau discloses, “Some advanced multicore processors may recognize idle cores (or may be configured with idle cores) that may be powered off or placed in a processor state that dissipates very little power, thereby creating TDP headroom for the remaining cores, which may be placed in a turbo/overclocking mode that may greatly exceed the performance rate that is possible when all cores are operating,” ¶ 0648.
“The processor idle state was disabled” is mapped to the state in which cores are neither powered off nor placed in a processor state that dissipates very little power. In that state, less TDP headroom is available for the remaining cores, which ends any overclocking. Therefore, the first physical processor core, overclocked while the second physical processor core is powered off or in a low-power state, utilizes a higher clock rate than it would if the processor idle state were disabled at the second physical processor core (keeping the second core active and leaving no headroom for overclocking).).
Nicholas in view of Wada, Lu, Suryanarayana, and Qu, and Duchesneau are both considered to be analogous to the claimed invention because they are in the same field of server computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada, Lu, Suryanarayana, and Qu to incorporate the teachings of Duchesneau and provide wherein the first physical processor core utilizes a higher clock rate than if the processor idle state was disabled at the second physical processor core. Doing so would help improve computer performance by allowing overclocking when appropriate. (Duchesneau discloses, “This may be particularly valuable for single-threaded tasks that cannot take advantage of multiple cores,” ¶ 0648.).
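As an illustrative aside only (this sketch is not drawn from any cited reference; the function name and all numeric values are invented), the TDP-headroom effect Duchesneau describes in ¶ 0648 can be modeled as a toy calculation:

```python
# Toy model (invented values): power not consumed by cores that are powered
# off or in a deep idle state becomes TDP headroom, which the remaining
# active cores can spend on a turbo/overclocked frequency.

BASE_GHZ = 2.0          # hypothetical all-cores-active clock rate
WATTS_PER_CORE = 15.0   # hypothetical per-core power draw at BASE_GHZ

def turbo_clock_ghz(total_cores: int, active_cores: int) -> float:
    """Clock rate the active cores can sustain when the remaining cores
    are powered off, under a fixed package TDP (simplistic linear model)."""
    tdp = total_cores * WATTS_PER_CORE        # fixed package power budget
    per_active_budget = tdp / active_cores    # headroom is redistributed
    # Assume, simplistically, that clock scales linearly with per-core power.
    return BASE_GHZ * per_active_budget / WATTS_PER_CORE

# With all 8 cores active there is no headroom; with 6 cores powered off,
# the 2 survivors may greatly exceed the all-cores-active rate.
all_active = turbo_clock_ghz(8, 8)
two_active = turbo_clock_ghz(8, 2)
```

Under this model, disabling the idle state at the second core (keeping it active) removes the headroom and ends the overclock, consistent with the mapping above.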
Claims 11-12, 14-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nicholas (US 20120278800 A1) in view of Wada (US 8695007 B2), Qu (US 20210279107 A1), and Lu (US 20220113785 A1).
Regarding Claim 11, Nicholas teaches a computer system comprising: a processor system (
Nicholas discloses, “Computer system 100 can include processor 102, e.g., an execution core. While one processor 102 is illustrated, in other embodiments computer system 100 may have multiple processors, e.g., multiple execution cores per processor substrate and/or multiple processor substrates that could each have multiple execution cores,” ¶ 0020.);
and a computer storage medium that stores computer-executable instructions that are executable by the processor system to at least (
Nicholas discloses, “The computer-readable storage media 110 can provide non volatile and volatile storage of processor executable instructions 122, data structures, program modules and other data for the computer system 100 such as executable instructions,” ¶ 0021.):
determine, from virtualization-stack or hypervisor configuration data, that a first virtual machine (VM) possesses a performance entitlement indicating that processor idle states are to be disabled for a physical processor core of the processor system executing the VM (
Nicholas discloses, “The computer-readable storage media 110 can provide non volatile and volatile storage of processor executable instructions 122, data structures, program modules and other data for the computer system 100 such as executable instructions,” ¶ 0021.);
associate a first virtual processor core of the first VM with the physical processor core of the processor system, including disabling a processor idle state at the physical processor core based on the first VM possessing the performance entitlement (
Nicholas discloses, “One hardware resource that a hypervisor time-slices is a physical processor. Generally, a physical processor is exposed within a virtual machine as a virtual processor,” ¶ 0002, and “Virtualization system scheduler 432 can select a physical processor to run the virtual processor and set a bit in an idle physical processor map that indicates that the physical processor is running a thread as opposed to being idle. Similar to the idle virtual processor map, the idle physical processor map can be used by virtualization system scheduler 432 to determine what physical processors can be selected to run a virtual processor,” ¶ 0037.).
Nicholas does not teach wherein disabling the processor idle state comprises a hypervisor overriding a host-level power-management policy based on the performance entitlement;
subsequent to disabling the processor idle state at the physical processor core, disassociate the first virtual processor core from the physical processor core, including re-enabling the processor idle state at the physical processor core;
and subsequent to disassociating the first virtual processor core from the physical processor core, associate a second virtual processor core of a second VM with the physical processor core without disabling the processor idle state at the physical processor core based on the second VM lacking the performance entitlement.
However, Wada teaches subsequent to disabling the processor idle state at the physical processor core, disassociate the first virtual processor core from the physical processor core, including re-enabling the processor idle state at the physical processor core (
Nicholas teaches disabling the processor idle state at the physical processor core by indicating on an idle physical processor map as not idle, stating “Virtualization system scheduler 432 can select a physical processor to run the virtual processor and set a bit in an idle physical processor map that indicates that the physical processor is running a thread as opposed to being idle. Similar to the idle virtual processor map, the idle physical processor map can be used by virtualization system scheduler 432 to determine what physical processors can be selected to run a virtual processor,” ¶ 0037.
Wada teaches that subsequently the physical processor core may become idle again, stating, “Since the virtual processor 303 is idle, the virtual processor 303 is disallocated from the physical processor 002 when entering the suspend mode,” Col 5, Lines 64-66, “The hypervisor 100 binds the idle process 020 with the physical processor 002. In this embodiment, there are idle processes #0 and #1. The idle process #0 is bound to the physical processor #0, and the idle process #1 is bound to the physical processor #1. When a virtual processor 303 schedulable to the physical processor 002 cannot be found, the hypervisor 100 executes the idle process 020 bound to this physical processor 002 until a schedulable virtual processor 303 is found,” Col 6, Lines 4-12, and “Idle processes are scheduled when a virtual computer virtual processor that can run on a physical processor is not found during priority comparison,” Col 21, Lines 55-57.).
Nicholas and Wada are both considered to be analogous to the claimed invention because they are in the same field of virtualization. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas to incorporate the teachings of Wada and provide subsequent to disabling the processor idle state at the physical processor core, disassociate the first virtual processor core from the physical processor core, including re-enabling the processor idle state at the physical processor core. Doing so would help improve the efficiency of physical processor usage. (Wada discloses, “Other virtual processors 303 can be allocated to the physical processors 002 until the virtual computer 300 of this virtual processor 303 exits suspend mode. This makes it possible to use the one or more physical processors 002 efficiently,” Col 5, Lines 66-67 and Col 6, Lines 1-3.).
Nicholas in view of Wada does not teach subsequent to disassociating the first virtual processor core from the physical processor core, associate a second virtual processor core of a second VM with the physical processor core without disabling the processor idle state at the physical processor core based on the second VM lacking the performance entitlement.
However, Qu teaches subsequent to disassociating the first virtual processor core from the physical processor core, associate a second virtual processor core of a second VM with the physical processor core without disabling the processor idle state at the physical processor core based on the second VM lacking the performance entitlement (
Nicholas in view of Wada teaches disassociating the first virtual processor core from the physical processor core, and the physical processor core becoming idle.
Qu teaches that, subsequent to a physical processor core having become idle/shareable, a non-low-latency VMI’s virtual processor core may be associated with the physical processor core, stating:
“In an example, if the CPU pinning requirements specify that low latency throughput is not required (e.g., non-low latency), the virtual machine management service allocates available processor capacity from any of the processor cores to the VNFs or other virtual machines to be implemented using the virtual machine image. For instance, if the virtual machine management service allocates one or more processors that were previously unallocated to other VNFs or other virtual machines, the virtual machine management service may indicate that these one or more processors are shareable. Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082, and “For instance, if the VMI profile 308 specifies that a VNF or virtual machine is to have low latency throughput and that it is to include two vCPUs (e.g., a data plane vCPU and a control plane vCPU), the VMI instantiation system 310 may determine that at least two processor cores from the server 316 are required to implement the VNF or virtual machine. Alternatively, if the VMI profile 308 specifies that a VNF or virtual machine does not require low latency throughput and that it is to include two vCPUs, the VMI instantiation system 310 may determine that shareable resources may be allocated for the VNF or virtual machine,” ¶ 0052.
Qu teaches the “non-low latency” is determined based on a virtual machine’s status, stating “In an example, the user can specify, in the VMI profile, that the resulting VNF or virtual machine is to have low-latency throughput. For instance, in the VMI profile, the user may provide an entry (e.g., “low-latency=TRUE,” etc.) that, as a result of being processed by the virtual machine management service 102, causes the virtual machine management service 102 to determine that low latency throughput is required for the VNF or other virtual machine to be implemented through instantiation of the VMI 106. Alternatively, in the VMI profile, the user may indicate that low latency throughput is not required (e.g., “low-latency=FALSE,” etc.),” ¶ 0036.
A virtual machine is determined to “lack[] the performance entitlement” when the system determines that a VMI (virtual machine image) is set to “low-latency=FALSE.” The mapping is consistent with the specification, because the specification states “In embodiments, this performance entitlement is a ‘low-latency entitlement’ signaling that an associated VM is a ‘low-latency’ VM (LLVM). In embodiments, a low-latency entitlement is associated with a particular VM and indicates that idle states at a physical processor can be disabled when that VM’s virtual processor is associated therewith,” ¶ 0021.
The idle state is enabled/maintained when the VM is not indicated as busy and the physical processor is indicated as shareable.).
Nicholas in view of Wada, and Qu are both considered to be analogous to the claimed invention because they are in the same field of device computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada to incorporate the teachings of Qu and provide subsequent to disassociating the first virtual processor core from the physical processor core, associate a second virtual processor core of a second VM with the physical processor core without disabling the processor idle state at the physical processor core based on the second VM lacking the performance entitlement. Doing so would help optimize utilization of resources. (Qu discloses, “Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082.).
Nicholas in view of Wada and Qu does not teach wherein disabling the processor idle state comprises a hypervisor overriding a host-level power-management policy based on the performance entitlement.
However, Lu teaches a hypervisor overriding a host-level power-management policy based on the performance entitlement (
Lu discloses, “receiving a platform-level power management event from an operating system or hypervisor of the computing node; and overriding the advisory power management decision, and prompting the computing node to adjust performance of the processor according to the platform-level power management event,” ¶ 0082.
Here, Lu’s hypervisor is responsible for overriding a power management decision in order to adjust the performance of a processor. After the combination of Nicholas in view of Wada and Qu with Lu, the overriding is performed based on the performance entitlement in order to disable a processor idle state.).
Nicholas in view of Wada and Qu, and Lu are both considered to be analogous to the claimed invention because they are in the same field of power management. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada and Qu to incorporate the teachings of Lu and provide a hypervisor overriding a host-level power-management policy based on the performance entitlement. Doing so would provide greater support for overclocking (Lu discloses, “In this case, the power management system 102 (or the power management agent 212) may override the current advisory power management decision determined based on the decision model, and attempt to select a maximum possible performance state and a minimum possible idle resiliency as a recommendation to prompt the at least computing node to perform a recommended operation on the processor,” ¶ 0068.).
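As an illustrative aside only (a minimal sketch with invented names; it is neither the claimed implementation nor code from any cited reference), the combined association logic discussed for Claim 11 — disable the idle state only while an entitled VM’s virtual processor is associated, re-enable it on disassociation, and leave it enabled for a non-entitled VM — can be expressed as:

```python
# Hypothetical sketch: idle states are disabled on a physical core only
# while a vCPU of a VM holding the low-latency entitlement is associated.

class PhysicalCore:
    def __init__(self, core_id: int):
        self.core_id = core_id
        self.idle_state_enabled = True  # host power policy default
        self.vcpu = None

def associate(core: PhysicalCore, vcpu: str, low_latency: bool) -> None:
    """Bind a vCPU; override the idle state only for an entitled VM."""
    core.vcpu = vcpu
    if low_latency:
        core.idle_state_enabled = False

def disassociate(core: PhysicalCore) -> None:
    """Unbind the vCPU and restore the host-level idle-state policy."""
    core.vcpu = None
    core.idle_state_enabled = True

core = PhysicalCore(0)
associate(core, "vm1-vcpu0", low_latency=True)   # entitled: idle disabled
disabled_during_vm1 = core.idle_state_enabled    # False while vm1 runs
disassociate(core)                               # idle state re-enabled
associate(core, "vm2-vcpu0", low_latency=False)  # non-entitled: left enabled
```

The sketch mirrors the claimed sequence only at the level of bookkeeping; the mechanism by which a real hypervisor overrides host power management (as in Lu) is platform-specific.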
Claim 18 is a computer-readable hardware storage device claim corresponding to the computer system Claim 11 (¶ 0021 of Nicholas). Therefore, Claim 18 is rejected for the same reason set forth in the rejection of Claim 11.
Regarding Claim 12, Nicholas in view of Wada, Qu, and Lu teaches the computer system of claim 11, wherein the processor idle state is a deep sleep idle state (
Nicholas discloses, “Referring to schedulers 416 and 426, these schedulers can give preference to unparked, i.e., active, virtual processors rather than parked virtual processors when it schedules any non-affinitized threads. This lets the parked virtual processors enter a deeper C-state. When the virtual processors idle, the corresponding physical processors may also idle and virtualization system power manager 434 can transition the physical processors to a deeper C-state,” ¶ 0041.).
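As an illustrative aside only (the state names follow the common ACPI C-state convention; the helper and table below are invented for illustration), the deep-sleep ladder that Nicholas’s ¶ 0041 references can be sketched, together with a cap on the deepest permissible idle state:

```python
# Invented sketch of the ACPI C-state convention: C0 is active execution,
# and higher-numbered states are progressively deeper sleeps with lower
# power draw but longer exit latency.

C_STATES = {
    0: "C0 (active)",
    1: "C1 (halt)",
    2: "C2 (stop clock)",
    3: "C3 (deep sleep, caches flushed)",
}

def deepest_allowed(requested: int, max_cstate: int) -> int:
    """Clamp a requested idle state to the configured maximum; with
    max_cstate=0 the core never idles (idle states effectively disabled)."""
    return min(requested, max_cstate)
```

Under this sketch, disabling idle states for a low-latency VM corresponds to capping the core at C0, while a parked core may be allowed to sink to C3 or deeper.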
Claim 19 is a computer-readable hardware storage device claim corresponding to the computer system Claim 12. Therefore, Claim 19 is rejected for the same reason set forth in the rejection of Claim 12.
Regarding Claim 14, Nicholas in view of Wada, Qu, and Lu teaches the computer system of claim 11, wherein disabling the processor idle state comprises one of: disabling the processor idle state prior to associating the first virtual processor core with the physical processor core; disabling the processor idle state concurrent with associating the first virtual processor core with the physical processor core; or disabling the processor idle state after associating the first virtual processor core with the physical processor core (
Nicholas discloses, “Virtualization system scheduler 432 can select a physical processor to run the virtual processor and set a bit in an idle physical processor map that indicates that the physical processor is running a thread as opposed to being idle. Similar to the idle virtual processor map, the idle physical processor map can be used by virtualization system scheduler 432 to determine what physical processors can be selected to run a virtual processor,” ¶ 0037.
Also note that the claim recites all three possible temporal relations between “disabling the processor idle state” and “associating the virtual processor core with the physical processor core”: “prior to,” “concurrent with,” and “after.”).
Claim 20 is a computer-readable hardware storage device claim corresponding to the computer system Claim 14. Therefore, Claim 20 is rejected for the same reason set forth in the rejection of Claim 14.
Regarding Claim 15, Nicholas in view of Wada, Qu, and Lu teaches the computer system of claim 11, the computer-executable instructions also executable by the processor system to associate the second virtual processor core with the physical processor core, including enabling the processor idle state at the physical processor core based on the second VM lacking the performance entitlement (
Qu teaches that, subsequent to a physical processor core having become idle/shareable, a non-low-latency VMI’s virtual processor core may be associated with the physical processor core, stating:
“In an example, if the CPU pinning requirements specify that low latency throughput is not required (e.g., non-low latency), the virtual machine management service allocates available processor capacity from any of the processor cores to the VNFs or other virtual machines to be implemented using the virtual machine image. For instance, if the virtual machine management service allocates one or more processors that were previously unallocated to other VNFs or other virtual machines, the virtual machine management service may indicate that these one or more processors are shareable. Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082, and “For instance, if the VMI profile 308 specifies that a VNF or virtual machine is to have low latency throughput and that it is to include two vCPUs (e.g., a data plane vCPU and a control plane vCPU), the VMI instantiation system 310 may determine that at least two processor cores from the server 316 are required to implement the VNF or virtual machine. Alternatively, if the VMI profile 308 specifies that a VNF or virtual machine does not require low latency throughput and that it is to include two vCPUs, the VMI instantiation system 310 may determine that shareable resources may be allocated for the VNF or virtual machine,” ¶ 0052.
Qu teaches the “non-low latency” is determined based on a virtual machine’s status, stating “In an example, the user can specify, in the VMI profile, that the resulting VNF or virtual machine is to have low-latency throughput. For instance, in the VMI profile, the user may provide an entry (e.g., “low-latency=TRUE,” etc.) that, as a result of being processed by the virtual machine management service 102, causes the virtual machine management service 102 to determine that low latency throughput is required for the VNF or other virtual machine to be implemented through instantiation of the VMI 106. Alternatively, in the VMI profile, the user may indicate that low latency throughput is not required (e.g., “low-latency=FALSE,” etc.),” ¶ 0036.
A virtual machine is determined to “lack[] the performance entitlement” when the system determines that a VMI (virtual machine image) is set to “low-latency=FALSE.” The mapping is consistent with the specification, because the specification states “In embodiments, this performance entitlement is a ‘low-latency entitlement’ signaling that an associated VM is a ‘low-latency’ VM (LLVM). In embodiments, a low-latency entitlement is associated with a particular VM and indicates that idle states at a physical processor can be disabled when that VM’s virtual processor is associated therewith,” ¶ 0021.
The idle state is enabled/maintained when the VM is not indicated as busy and the physical processor is indicated as shareable.).
Nicholas in view of Wada and Qu are both considered to be analogous to the claimed invention because they are in the same field of device computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada to incorporate the teachings of Qu and provide the computer-executable instructions also executable by the processor system to associate the second virtual processor core with the physical processor core, including enabling the processor idle state at the physical processor core based on the second VM lacking the performance entitlement. Doing so would help optimize utilization of resources. (Qu discloses, “Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082.).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Nicholas (US 20120278800 A1) in view of Wada (US 8695007 B2), Qu (US 20210279107 A1), Lu (US 20220113785 A1), and Das (US 20210096896 A1).
Regarding Claim 13, Nicholas in view of Wada, Qu, and Lu teaches the computer system of claim 12. Nicholas in view of Wada, Qu, and Lu does not teach wherein the deep sleep idle state is a C3 or higher numbered C-state.
However, Das teaches wherein the deep sleep idle state is a C3 or higher numbered C-state (
Das discloses, “…allow the processor core 18 to be set to an idle C-state level of C3,” ¶ 0059.).
Nicholas in view of Wada, Qu, and Lu, and Das are both considered to be analogous to the claimed invention because they are in the same field of server computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada, Qu, and Lu to incorporate the teachings of Das and provide wherein the deep sleep idle state is a C3 or higher numbered C-state. Doing so would help improve power conservation. (Das discloses, “C3—L1/L2 caches flush, clocks off,” ¶ 0027.).
Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Nicholas (US 20120278800 A1) in view of Wada (US 8695007 B2), Qu (US 20210279107 A1), Lu (US 20220113785 A1), and Suryanarayana (US 20230342477 A1).
Regarding Claim 16, Nicholas in view of Wada, Qu, and Lu teaches the computer system of claim 11. Nicholas in view of Wada, Qu, and Lu does not teach the computer-executable instructions also executable by the processor system to expose the first virtual processor core to the first VM as a performance core.
However, Suryanarayana teaches the computer-executable instructions also executable by the processor system to expose the first virtual processor core to the first VM as a performance core (
Suryanarayana discloses, “FIG. 4 illustrates a run-time OS view or resource allocation in accordance with disclosed teachings. As depicted in FIG. 4, both performance core 111 and efficiency core 111 are fully exposed to workloads of the runtime OS 401,” ¶ 0034.
After the combination of Nicholas in view of Wada, Qu, and Lu with Suryanarayana, the runtime OS from Suryanarayana runs on a VM from Nicholas in view of Wada, Qu, and Lu. Because the performance core is exposed to the runtime OS, it is also exposed to Nicholas in view of Wada, Qu, and Lu’s VM running the OS.).
Nicholas in view of Wada, Qu, and Lu, and Suryanarayana are both considered to be analogous to the claimed invention because they are in the same field of server computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada, Qu, and Lu to incorporate the teachings of Suryanarayana and provide the computer-executable instructions also executable by the processor system to expose the first virtual processor core to the first VM as a performance core. Doing so would allow for increased performance. (Suryanarayana discloses, “The hybrid core 101 illustrated FIG. 1 is includes one or more comparatively large, high speed performance cores 111 and one or more comparatively small efficiency cores 112 that are optimized for per watt performance,” ¶ 0023.).
Regarding Claim 17, Nicholas in view of Wada, Qu, and Lu, and Suryanarayana teaches the computer system of claim 16. Nicholas in view of Wada, Qu, and Lu teaches wherein the physical processor core is a first physical processor core, the computer-executable instructions also executable by the processor system to: associate a third virtual processor core of the first VM with a second physical processor core without disabling the processor idle state at the second physical processor core (
Qu teaches that, subsequent to a physical processor core having become idle/shareable, a non-low-latency VMI’s virtual processor core may be associated with the physical processor core, stating:
“In an example, if the CPU pinning requirements specify that low latency throughput is not required (e.g., non-low latency), the virtual machine management service allocates available processor capacity from any of the processor cores to the VNFs or other virtual machines to be implemented using the virtual machine image. For instance, if the virtual machine management service allocates one or more processors that were previously unallocated to other VNFs or other virtual machines, the virtual machine management service may indicate that these one or more processors are shareable. Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082.
The idle state is enabled/maintained when the VM is not indicated as busy and the physical processor is indicated as shareable.).
Nicholas in view of Wada, and Qu are both considered to be analogous to the claimed invention because they are in the same field of device computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada to incorporate the teachings of Qu and provide wherein the physical processor core is a first physical processor core, the computer-executable instructions also executable by the processor system to: associate a third virtual processor core of the first VM with a second physical processor core without disabling the processor idle state at the second physical processor core. Doing so would help optimize utilization of resources. (Qu discloses, “Thus, in response to future requests to allocate available processor capacity for a non-low latency VNF or other virtual machine, the virtual machine management service may allocate the available capacity from these shareable processors for use by the new non-low latency VNF or other virtual machine subject to the CPU pinning requirements of the previously implemented VNF or virtual machine and the CPU pinning requirements of the new non-low latency VNF or other virtual machine,” ¶ 0082.).
Nicholas in view of Wada, Qu, and Lu does not teach expos[ing] the third virtual processor core to the first VM as an efficiency core.
However, Suryanarayana teaches expos[ing] the third virtual processor core to the first VM as an efficiency core (
Suryanarayana discloses, “As depicted in FIG. 4, both performance core 111 and efficiency core 111 are fully exposed to workloads of the runtime OS 401,” ¶ 0034.
After the combination of Nicholas in view of Wada, Qu, and Lu with Suryanarayana, the runtime OS from Suryanarayana runs on a VM from Nicholas in view of Wada, Qu, and Lu. Because the efficiency core is exposed to the runtime OS, it is also exposed to the VM running the OS. The efficiency core’s idle state is not disabled, according to the disclosure.).
Nicholas in view of Wada, Qu, and Lu, and Suryanarayana are both considered to be analogous to the claimed invention because they are in the same field of server computing. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nicholas in view of Wada, Qu, and Lu to incorporate the teachings of Suryanarayana and provide exposing the third virtual processor core to the first VM as an efficiency core. Doing so would help improve performance per watt. (Suryanarayana discloses, “The hybrid core 101 illustrated FIG. 1 is includes one or more comparatively large, high speed performance cores 111 and one or more comparatively small efficiency cores 112 that are optimized for per watt performance,” ¶ 0023.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Tsirkin (US 20210216344 A1): Managing Processor Overcommit for Virtual Machines
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW SUN whose telephone number is (571)272-6735. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW NMN SUN/Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195