DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
1. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 102(a)(2), which forms the basis for the rejections under this section made in this Office action:
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
2. Claims 1-5, 7-13, and 15-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sehgal et al. (Pub. No. US20220405135).
As per claims 1, 8, 16, Sehgal discloses a system, comprising:
one or more processors (fig.2, CPU Cores);
a memory (fig.12, data storage device 1218) having stored thereon instructions (fig.12, sets of instructions 1225) that, upon execution by the one or more processors (paragraph 17, peripheral component interconnect (PCI) devices across all of the hardware partitions of the host system), cause the one or more processors to implement a peripheral component interface (PCI) engine (fig.1, PCI device 190) to manage availability of a plurality of external resources utilized by (paragraph 35, The scheduler 143 may synchronize the available resources by transmitting a query for available resources to host system 110a.) a plurality of worker nodes (fig.2, NUMA nodes 204a, 204b) within a containerized software environment, wherein the plurality of external resources is provided to respective worker nodes through a plurality of PCI slots (paragraph 31, processing logic of processing device 160a may provide the available resources to the scheduler), the process including:
determine a usage count (paragraph 19, identify a particular hardware to execute the workload in view of the available resources and the amount to be used to execute) for each worker node within the plurality of worker nodes (fig.2, NUMA node 204a), wherein the usage count comprises a number of PCI slots of the plurality of PCI slots for a respective worker node consumed by the plurality of external resources (paragraph 34, The workload 202 includes an amount of resources that are to be used to execute workload 202.);
determine an allocability count for a first worker node of the plurality of worker nodes based on the usage count (paragraph 35, the scheduler 143 may synchronize the available resources by transmitting a query for available resources to host system 110a); and
publish a PCI availability of the first worker node to a scheduler (fig.1, scheduler 143) associated with the containerized software environment based on the allocability count (paragraph 31, The scheduler 143 may be tasked with processing (e.g., receiving, assigning, etc.) one or more workloads associated with the architecture 100).
As per claims 2, 9, 17, Sehgal discloses wherein the instructions to determine the usage count for each worker node within the plurality of worker nodes, upon execution, further cause the one or more processors to:
deploy a collector service for monitoring PCI usage counts on each respective worker node of the plurality of worker nodes, wherein the collector service, once deployed, executes on each respective worker node of the plurality of worker nodes to (paragraph 47, The information from device manager 512 may indicate the number of PCI devices available at NUMA node 504a is two and the number of PCI devices available at NUMA node 504b is three):
monitor a host path associated with a respective worker node for PCI slot usage (paragraph 47, The information from device manager 512 may indicate the number of PCI devices available at NUMA node 504a is two and the number of PCI devices available at NUMA node 504b is three);
generate a notification responsive to detecting a change to the PCI slot usage (paragraph 19, if the workload uses two CPU cores and a first hardware partition has four available CPU cores, while a second hardware partition has one available CPU core); and
transmit the notification indicating the change to the PCI slot usage to an aggregator service (paragraph 19, if the workload uses two CPU cores and a first hardware partition has four available CPU cores, while a second hardware partition has one available CPU core).
As per claims 3, 10, 18, Sehgal discloses wherein the instructions to determine the usage count for each worker node within the plurality of worker nodes, upon execution, further cause the one or more processors to:
detect addition of a first external resource at the first worker node of the plurality of worker nodes (paragraph 31, the agent 142 may provide the available resources to the scheduler 143);
determine an updated capacity count for the first worker node based on the addition of the first external resource, wherein the updated capacity count indicates a current number of PCI slots of the plurality of PCI slots that are available on the first worker node (paragraph 33, NUMA node 204a and NUMA node 204b each include different amounts of resources that are available for execution of workloads); and
update the usage count for the first worker node based on the updated capacity count (paragraph 35, Upon receiving the workload 202, the scheduler 143 may synchronize the available resources of NUMA node 204a and NUMA node 204b with host system).
As per claims 4, 11, Sehgal discloses wherein the instructions to determine the usage count for each worker node of the plurality of worker nodes, upon execution, further cause the one or more processors to:
receive, from a collector service deployed within the containerized software environment, the usage count for each of the plurality of worker nodes (paragraph 35, upon receiving the query, processing logic of host system 110a may provide a response to scheduler 143 that indicates that NUMA node); and
calculate the allocability count for each of the plurality of worker nodes based on the usage count (paragraph 35, the scheduler 143 may synchronize the available resources by transmitting a query for available resources to host system 110a).
As per claims 5, 12, 19, Sehgal discloses the system further comprising instructions that, upon execution, cause the one or more processors to:
determine a driver type associated with the plurality of PCI slots for a respective worker node of the plurality of worker nodes (paragraph 36, the scheduler 143 may compare the available resources at NUMA node 204a and NUMA node 204b to the amount of resources that are to be used to execute workload 202);
determine a capacity count for the respective worker node based on the driver type (paragraph 43, the host system 110a may include a parameter 506 that is associated with the use of a scheduling hint when assigning a container 502 to a particular NUMA node); and
determine the allocability count for the respective worker node based on the capacity count (paragraph 37, Upon comparing the available resources at NUMA node 204a and NUMA node 204b to the amount of resources to execute workload 202, the scheduler 143 may determine that NUMA node 204b has insufficient available resources to execute workload 202 because NUMA node 204b has one available CPU core).
As per claims 7, 15 and 20, Sehgal discloses wherein the instructions to publish the PCI availability of the first worker node to the scheduler associated with the containerized software environment, upon execution, further cause the one or more processors to:
update annotation metadata associated with the first worker node with the allocability count (paragraph 43, the host system 110a may include a parameter 506 that is associated with the use of a scheduling hint when assigning a container 502 to a particular NUMA node); and
update an extended resource capacity associated with the first worker node with the allocability count (paragraph 35, Upon receiving the workload 202, the scheduler 143 may synchronize the available resources of NUMA node 204a and NUMA node 204 with host system).
As per claim 13, Sehgal discloses wherein publishing, by the PCI engine, the PCI availability of the first worker node to the scheduler associated with the containerized software environment comprises:
updating, by the PCI engine, annotation metadata associated with the first worker node with the allocability count (paragraph 43, the host system 110a may include a parameter 506 that is associated with the use of a scheduling hint when assigning a container 502 to a particular NUMA node).
Claim Rejections - 35 USC § 103
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sehgal et al. (Pub. No. US20220405135) in view of Gole et al. (Pub. No. US20230195383).
As per claims 6 and 14, Sehgal discloses all the limitations above but does not explicitly disclose wherein the containerized software environment comprises a Kubernetes cluster. However, Gole discloses this limitation (paragraph 94, deployment of a production-ready Kubernetes cluster in the Azure cloud for executing the cloud storage OS 140, and the cloud manager).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gole with those of Sehgal so as to offer benefits such as high availability, resilience, and efficient resource utilization, yielding the predictable result of more efficient control and thus enhanced system performance.
5. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Einkauf (US Patent No. 12069128) discloses a service provider that may apply customer-selected or customer-defined auto-scaling policies to a cluster of resources (e.g., virtualized computing resource instances or storage resource instances in a MapReduce cluster).
Conclusion
6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIM T HUYNH whose telephone number is (571)272-3635 or via e-mail addressed to [kim.huynh3@uspto.gov]. The examiner can normally be reached on M-F 7:00 AM - 4:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henry Tsai, can be reached at (571)272-4176 or via e-mail addressed to [Henry.Tsai@USPTO.GOV].
The fax phone numbers for the organization where this application or proceeding is assigned are (571)273-8300 for regular communications and After Final communications. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is (571)272-2100.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K. T. H./
Examiner, Art Unit 2184
/HENRY TSAI/ Supervisory Patent Examiner, Art Unit 2184