DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-16 and 18-21 are pending.
Response to Arguments
Regarding the Prior Art Rejections:
Applicant’s amendments and arguments regarding the rejection of claims 1-16 and 18-21 under 35 U.S.C. 102 and 103 have been fully considered and are moot due to new grounds of rejection necessitated by amendment.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4, 5, 7, 9, 10, 12, 13, 15, 18, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. US 20160378560 A1 in view of Hussain et al. US 11032213 B1.
Lu is cited in a previous Office action.
Regarding claim 1, Lu teaches the invention substantially as claimed including:
A method for scheduling computing resources, the method comprising:
submitting a resource allocation plan by a workload scheduler to a resource scheduler ([0048] The policy engine 304A provides the final allocation plan 316 to the resource manager 114 through the scheduler 120);
allocating by the resource scheduler a first resource allocation of first resources in accordance with the resource allocation plan and notifying the workload scheduler of the first resource allocation ([0048] The policy engine 304A provides the final allocation plan 316 to the resource manager 114 through the scheduler 120. The scheduler 120 can then receive another new container from the resource manager 114 in a next iteration; [0046] scheduler 120 receives a reference 306 to a newly allocated container provided by a resource manager 114; Fig 3 316 Request for/Release Containers; Examiner notes: Scheduler 120 receives request for/release containers and request new container from resource manager 114 which sends reference to the new container);
running workloads of the workload scheduler on the first resources by the workload scheduler ([0022] Upon receiving the containers, the HNP 124 divides the user program 112 into tasks that execute on worker node computers 106 and 108. The HNP 124 assigns the tasks to the worker node computers 106 and 108, and maps the containers of the worker node computers 106 and 108 to the respective tasks. The HNP 124 then calls (132) the scheduler 120 to launch the tasks, including to launch one or more user processes once the worker node computers 106 and 108 to perform the jobs in parallel; [0051] The scheduler maps the computing resources provided by the resource manager to computing resources usable by the user program executing on the worker node computers);
allocating by the resource scheduler a second resource allocation of second resources in accordance with the resource allocation plan and notifying the workload scheduler of the second resource allocation (Fig 3; [0035] The first policy engine 202 and the second policy engine 204 may negotiate with the resource manager 114 through the scheduler 120 in multiple iterations to correct the deficiency or excess; [0046] A scheduler 120 receives a reference 306 to a newly allocated container provided by a resource manager 114; [0048] The iterations continue until a termination condition is satisfied, e.g., when sufficient amount of resources has been allocated to the user program or when allocation failed, due to time out; Examiner notes: multiple iterations involves a second iteration of the resource allocation process of Fig. 3), wherein the second resource allocation is based on the same resource allocation plan as the first resource allocation, and is allocated subsequent to the first resource allocation ([0048] a next iteration. The iterations continue until a termination condition is satisfied, e.g., when sufficient amount of resources has been allocated to the user program or when allocation failed, due to time out; Examiner notes: the next iteration represents the second resource allocation in efforts to reach the termination condition); and
running the workloads of the workload scheduler on the second resources by the workload scheduler ([0022] Upon receiving the containers, the HNP 124 divides the user program 112 into tasks that execute on worker node computers 106 and 108. The HNP 124 assigns the tasks to the worker node computers 106 and 108, and maps the containers of the worker node computers 106 and 108 to the respective tasks. The HNP 124 then calls (132) the scheduler 120 to launch the tasks, including to launch one or more user processes once the worker node computers 106 and 108 to perform the jobs in parallel).
Lu does not explicitly teach wherein the second resource allocation is based on the same resource allocation plan as the first resource allocation.
However, Hussain teaches wherein the second resource allocation is based on the same resource allocation plan as the first resource allocation (the infrastructure modeling service 120 may provide the infrastructure template 122 and/or executable code corresponding to the configuration data in the infrastructure template 122 to a secondary infrastructure modeling service to provision the computing resources 114 in the secondary service provider network 104 on behalf of the user. In this way, the service provider network 102 may provide a unified development interface 118 to allow users 108 to create or define infrastructure schemas 134, and corresponding infrastructure templates 122, for provisioning computing resources 114 in a host service provider network 102, and also in secondary service provider network(s) 104, in at least Col 11).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Hussain’s infrastructure templating/schema concept with the existing system of Lu. A person of ordinary skill in the art would have been motivated to make this combination to provide the resulting system with the advantage of standardizing resource allocation through reusable allocation schemas (see Hussain, Col 2, lines 52-60: the infrastructure templates may be written in a human-readable language, and machine-readable language, such as JSON, XML, YAML, and so forth. The infrastructure modeling service may create and configure computing resources that are described in the infrastructure templates using one or more computing resource services provided by the service provider networks (e.g., storage service, compute service, database service, etc.)).
Regarding claim 2, Lu and Hussain teach the method of claim 1.
Lu further teaches wherein the resource allocation plan includes at least one allocation plan attribute chosen from a group of attributes consisting of allocation specifications, allocation goals, scheduling hints, and time constraints ([0029] the system maps requirements on resources by the user program to requests that can be understood by the system. The requirements can include data locality, process network distance, or process topology, in addition to single process resource requirements, e.g., requirements on central processing unit (CPU) cycles or memory resources; [0030] Rules for satisfying requirements of various tasks for computing resources can be grouped as policies. Each policy includes a set of rules of allocating resources. Each user program can be associated with one or more policies that optimize performance of tasks specific to the user program).
Regarding claim 4, Lu and Hussain teach the method of claim 1.
Lu further teaches releasing at least a portion of the first resource allocation or at least a portion of the second resource allocation by the workload scheduler back to the resource scheduler when the at least a portion of the first resource allocation or the at least a portion of the second resource allocation is no longer required to run the workloads of the workload scheduler ([0038] releasing two units of resources on a third worker node computer … An analyzer may suggest releasing resources in exchange for receiving better resources).
Regarding claim 5, Lu and Hussain teach the method of claim 1.
Lu further teaches offering by the resource scheduler to the workload scheduler a third resource allocation when the resource allocation plan has not been completed and the resource scheduler has additional resources to allocate in accordance with the resource allocation plan ([0048] The policy engine 304A combines the suggestions 314 from the analyzers 310 and 312 to determine a final allocation plan 316. The final allocation plan 316 can include request for additional containers or request to release allocated containers. The policy engine 304A provides the final allocation plan 316 to the resource manager 114 through the scheduler 120. The scheduler 120 can then receive another new container from the resource manager 114 in a next iteration).
Regarding claim 7, Lu and Hussain teach the method of claim 1.
Lu further teaches in response to a notification from the resource scheduler that additional resources are available (Fig 3 RPC between scheduler 120 and resource manager 114; [0046] A scheduler 120 receives a reference 306 to a newly allocated container provided by a resource manager 114), modifying the resource allocation plan by the workload scheduler or submitting a new resource allocation plan by the workload scheduler to the resource scheduler (Fig 3 Scheduler 120 interfaces with Policy Engine 304A to determine appropriate allocation plan; [0033] Each of the policy engines 202 and 204 operates to satisfy a requirement of computing resources according to a respective policy through multiple iterations of providing input to the respective analyzers, receiving suggestions from the analyzers, deciding whether to modify the suggestions and whether to communicate to the resource manager 114 to request more resources or to release resources, until an optimal solution is reached under the respective policy or until timeout).
Regarding claims 9, 10, 12, 13, and 15, they recite the apparatus corresponding to claims 1, 2, 4, 5, and 7, respectively. Therefore, they are rejected for the same reasons as claims 1, 2, 4, 5, and 7, respectively.
Regarding claim 18, it is the non-transitory computer-readable medium of claim 1. Therefore, it is rejected for the same reasons as claim 1.
Lu further teaches a non-transitory computer-readable medium comprising instructions ([0069] Computer-readable media suitable for storing computer program instructions).
Regarding claims 19 and 21, they recite the non-transitory computer-readable media corresponding to claims 2 and 4, respectively. Therefore, they are rejected for the same reasons as claims 2 and 4, respectively.
Claims 3, 11, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. US 20160378560 A1 in view of Hussain et al. US 11032213 B1, and further in view of Blanding et al. US 9081627 B1.
Blanding is cited in a previous Office action.
Regarding claim 3, Lu and Hussain teach the method of claim 1.
Lu further teaches wherein the resource allocation plan includes a request for fusible resources ([0022] The allocated resources are designated as containers).
Lu does not explicitly teach the method further comprising fusing by the resource scheduler at least a portion of the first resource allocation with at least a portion of the second resource allocation.
However, Blanding teaches fusing by the resource scheduler at least a portion of the first resource allocation with at least a portion of the second resource allocation (The allocation resource reassignment requires that resources be removed (typically in a separate step) from containers to which they were previously assigned in order to be reassigned to other containers. This removal step can be accomplished all at once (i.e., prior to any resource increases being made) or a bit at a time as resources are needed for increases; Col 7, lines 30-36).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Blanding’s reallocation of resources from one container to another with the system of Lu and Hussain. A person of ordinary skill in the art would have been motivated to make this combination to provide Lu and Hussain’s system with the advantage of workload prioritization and improvement of the utilization of available resources to save costs (see Blanding, Col 2, lines 25-28: the most important transfers, e.g., those transfers that are associated with the highest priority resource destinations, are performed first, and are thus the least burdened by delays associated with resource transfer; Col 2, lines 29-37: reassignment can likewise be improved by selecting first for reassignment to satisfy said higher priority allocations those available resources that can be reassigned at the least cost. For example, the resource to be transferred can be selected as a function of how quickly it can be transferred. For another example, a resource to be transferred can be selected as a function of the cost associated with depriving its current owner of its use earlier rather than later in the transfer sequence).
Regarding claim 11, it is the apparatus of claim 3. Therefore, it is rejected for the same reasons as claim 3.
Regarding claim 20, it is the non-transitory computer-readable medium of claim 3. Therefore, it is rejected for the same reasons as claim 3.
Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. US 20160378560 A1 in view of Hussain et al. US 11032213 B1, and further in view of Venkatesh et al. US 20190042321 A1.
Venkatesh is cited in a previous Office action.
Regarding claim 6, Lu and Hussain teach the method of claim 5.
Lu further teaches wherein the resource allocation plan includes a request for fusible resources ([0022] The allocated resources are designated as containers), the method further comprising accepting the third resource allocation by the workload scheduler ([0048] The policy engine 304A provides the final allocation plan 316 to the resource manager 114 through the scheduler 120. The scheduler 120 can then receive another new container from the resource manager 114 in a next iteration).
Lu does not explicitly teach fusing by the resource scheduler at least a portion of the third resource allocation with at least a portion of the first resource allocation or at least a portion of the second resource allocation.
However, Venkatesh teaches fusing by the resource scheduler at least a portion of the third resource allocation with at least a portion of the first resource allocation or at least a portion of the second resource allocation ([0039] The ECMS can also operate across multiple host machines. Generally the ECMS can scale up and/or scale down containers across different host machines. To respond to high resource utilization of a given container, ECMS will respond by scaling up the resources allocated to the container. However, as utilization increases, there will be a point when the host runs out of available resources. In response, the container or containers are scaled out across different hosts that have sufficient resources available; Examiner notes: the new resources allocated as part of scaling up are fused with the container that requires scaling).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Venkatesh’s elastic container management system with the system of Lu and Hussain. A person of ordinary skill in the art would have been motivated to make this combination to provide Lu and Hussain’s system with the advantage of scalable containers improving resource utilization and allocation (see Venkatesh [0029] To address utilization, resources among demanding are dynamically increased and/or decreased. A proportional allocation of the resource requirements including, but are not limited to, CPU usage, memory usage, bandwidth allocation, network usage, and the like).
Regarding claim 14, it is the apparatus of claim 6. Therefore, it is rejected for the same reasons as claim 6.
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. US 20160378560 A1 in view of Hussain et al. US 11032213 B1, and further in view of Fuller et al. US 20140068624 A1.
Fuller is cited in a previous Office action.
Regarding claim 8, Lu and Hussain teach the method of claim 1.
Lu further teaches wherein the workload scheduler is a first workload scheduler and the resource allocation plan is a first resource allocation plan ([0022] The scheduler 120 then negotiates (128) with the resource manager 114 in YARN application master resource manager (“YARN AM-RM”) protocol to allocate required computing resources using a policy driven paradigm).
Lu does not explicitly teach the method further comprising: submitting a second resource allocation plan by a second workload scheduler to the resource scheduler to run workloads of the second workload scheduler.
However, Fuller teaches submitting a second resource allocation plan by a second workload scheduler to the resource scheduler to run workloads of the second workload scheduler ([0058] multiple task managers for handling requests for different types of tasks; [0059] task manager (351, 352 or 353) receives the request to execute a task (task instance) from the client, determines an appropriate resource set for the type of task, and requests the appropriate resource set from the resource manager (340)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Fuller’s utilization of multiple task schedulers with the system of Lu and Hussain. A person of ordinary skill in the art would have been motivated to make this combination to provide Lu and Hussain’s system with the advantage of allocating resources to execute a variety of task types (see Fuller [0041] a flexible scheduling mechanism has been proposed to address the problem mentioned above, that is, when there is a high number of hits, computing resources will be created automatically to expand the processing capacity of the system, and when there is a low number of hits and the system is idle, computing resources will be reduced automatically to save costs; [0041] the software architecture (300) works with various types of tasks, workloads and hosts, for resources budgeted into pools from which resource sets of resources are drawn).
Regarding claim 16, it is the apparatus of claim 8. Therefore, it is rejected for the same reasons as claim 8.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRISON LI whose telephone number is (703) 756-1469. The examiner can normally be reached Monday-Friday 9:00am-5:30pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li, can be reached at (571) 272-4169.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.L./
Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195