Prosecution Insights
Last updated: April 19, 2026
Application No. 18/361,688

METHOD AND SYSTEM FOR PRIORITY-BASED RESOURCE SCHEDULING WITH LOAD BALANCING

Status: Non-Final OA (§103, §112)
Filed: Jul 28, 2023
Examiner: ALAM, SHIHAB
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Waterlabs AI Technologies Private Limited
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Projected OA Rounds: 1-2
Projected Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs. Tech Center average)
Interview Lift: +0.0% (minimal; measured across resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline; 6 applications currently pending)
Total Applications: 6 (career history, across all art units)

Statute-Specific Performance

§101: 14.3% (-25.7% vs. TC avg)
§103: 61.9% (+21.9% vs. TC avg)
§112: 23.8% (-16.2% vs. TC avg)
Tech Center averages are estimates; figures are based on career data from 0 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to claims filed 07/28/2023. Claims 1-20 are pending.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in India on 07/30/2022. It is noted, however, that applicant has not filed a certified copy of the IN202211043744 application as required by 37 CFR 1.55.

Claim Objections

Claims 1, 6, 7-15, and 20 are objected to because of the following informalities: they contain part numbers in parentheses, which are being interpreted as the parts respectively detailed in the specification and drawings and must be removed. Claim 13 is objected to because of the following informality: a grammatical error in the last limitation of the claim, "and an output port configured TO communicate control information TO the first server and the second server". Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "automated processor" in claims 13-20.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. A review of the disclosure as originally filed, hereafter "disclosure", points to 202, Fig. 2 as the corresponding structure; however, this is a generic computing component, and in accordance with MPEP § 2181(II)(B), when the corresponding structure of computer-implemented means-plus-function limitations corresponds to a general purpose computer, an algorithm is required to transform the general purpose computer into a special purpose computer to be sufficient as corresponding structure. Upon further review of the disclosure, Applicant has failed to define the algorithm for each of the claimed functions and has instead only provided either verbatim support for the claimed function (which is insufficient as a step or steps of a corresponding algorithm) or exemplary language that does not make clear the metes and bounds of the algorithm. As such, see the rejections under 35 U.S.C. § 112(a) and (b) below.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 13-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claims 13-20 recite an "automated processor", which invokes 35 U.S.C. § 112(f); see the claim interpretation above. The disclosure does not recite sufficient corresponding structure (in this instance, computer + algorithm); again, see the claim interpretation above. As such, and in accordance with MPEP § 2181(II)(B), last paragraph: "When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under 35 U.S.C. 112(a)."

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 13-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim limitation "automated processor" invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The disclosure fails to disclose sufficient corresponding structure (in this instance, computer + algorithm); see the claim interpretation above. As such, and in accordance with MPEP § 2181(II)(B): "For a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b)." Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claims 14 and 15 are further rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 14 and 15 both recite a "first server" and a "second server", as well as a "round-robin technique" in Claim 14 and a "graph theory technique" in Claim 15. It is unclear whether these elements are referring to the same ones declared in independent Claim 13 or to new instances of these elements; claims 14 and 15 are therefore rejected under 35 U.S.C. 112(b). Furthermore, for the purposes of compact prosecution, the Examiner will interpret the "first server", "second server", "round-robin technique", and "graph theory technique" as referring to the same elements declared in Claim 13.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103(a) as being unpatentable over Bahramshahry et al. (US 20200026569 A1) (hereinafter Bahramshahry), in view of Rojas-Cessa et al. (WO 2009029833 A1) (hereinafter Cessa).

Regarding Claim 1, Bahramshahry teaches: A method for priority-based resource scheduling with load balancing: "A scheduler responsible for performing the scheduling processes and generally will seek to perform a variety of functions in addition to scheduling work, such as optimizing utilizing of resources through a load balancing process which thus permits multiple users to share system resources more effectively." (Bahramshahry: ¶006); "scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested from the local cache" (Bahramshahry: ¶278); "rating the pending workload tasks based on one or more of a workload type, a specified priority, and current available compute capacity" (Bahramshahry: ¶687).
The method comprising: receiving, by a processor (202) associated with a resource scheduling system (110), from a plurality of client devices (104): "a hosted computing environment 111 is communicably interfaced with a plurality of user client devices" (Bahramshahry: ¶063); "customer organizations (104A, 104B, and 104C) which utilize web services and other service offerings as provided by the host organization 150 by communicably interfacing to the host organization 150 via network 195" (Bahramshahry: ¶066); "executing a scheduler via the processor of the system, wherein the scheduler performs at least the following operations" (Bahramshahry: ¶660).

One or more client requests to execute one or more tasks on one or more servers (116) associated with one or more Virtual Machines (VMs): "identifying, via a workload discovery engine, pending workload tasks to be scheduled for execution from one or more workload queues" (Bahramshahry: ¶317); "scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested from the local cache" (Bahramshahry: ¶278); "scheduling the selected workload task for execution with the computing resource and allocating the virtual resource exclusively to the computing resource for the duration of execution of the selected workload task" (Bahramshahry: ¶678); "virtual machine 685 having mapped computing resources such as vCPU, RAM, a base image, a virtual image, IP space and network links, etc. The virtual machine 685 executes the workload tasks 641 in conjunction with memory 695" (Bahramshahry: ¶221, Figure 6).
Wherein the one or more client requests comprises request parameters: "the workload discovery engine is to further identify a plurality of associated workload task requirements for each of the pending workload tasks" (Bahramshahry: ¶290); "the scheduler is to evaluate a specified customer preference for executing workload tasks at a specified one of the plurality of computing resources as represented within the SLT for the respective workload task" (Bahramshahry: ¶295); "in which the cloud-based service receives inputs from the client device at the user interface 626 to configure use of the scheduling service" (Bahramshahry: ¶223).

Determining, by the processor (202), usage-related information of each server associated with one or more VMs, upon receiving the one or more client requests: "discovery engine 192 capable of discovering available compute resources by which to complete workloads and further capable to discover pending workloads awaiting assignment to compute resources" (Bahramshahry: ¶073); "identifying, via a compute resource discovery engine, one or more computing resources available to execute workload tasks" (Bahramshahry: ¶278); "the scheduler is to schedule the pending workload tasks based further on the associated workload task requirements and which of the plurality of computing resources available to execute workload tasks satisfies the associated workload task requirements" (Bahramshahry: ¶290); "the scheduler is to evaluate pricing data represented within the local cache by the plurality of resource characteristics identified for each of the plurality of computing resources" (Bahramshahry: ¶294); "identifying, via a workload discovery engine, pending workload tasks to be scheduled for execution" (Bahramshahry: ¶317).

Examiner Notes: because pending workload tasks arise from received client requests, this reinforces that resource determination occurs after receipt of the request.
Prioritizing, by the processor (202), the received one or more client requests, based on the request parameters associated with the one or more client requests: "based on how the scheduler allocates resources and prioritizes competing needs" (Bahramshahry: ¶007); "the SLT is identified by the policy engine based further on a customer identifier or an organizational identifier or a service tier associated with each respective workload task" (Bahramshahry: ¶292); "the SLT identified for each of the workload tasks defines a Quality of Service (QoS) expectation for each workload task; in which the scheduler does not guarantee or commit to meeting the QoS expectation for any individual workload task" (Bahramshahry: ¶293); "and associate each pending workload task within the local cache with a priority marker, a QoS indicator, and/or the SLT based on the workload queue from which the task was retrieved" (Bahramshahry: ¶289); "rating the pending workload tasks based on one or more of a workload type, a specified priority, and current available compute capacity" (Bahramshahry: ¶687).
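For context on this prioritization limitation: neither the claim nor the quoted passages spells out an ordering algorithm, so the following is only a minimal sketch of priority ordering over request parameters. The parameter names (`sla`, `deadline`, `arrival`) and the weighting are invented for illustration and are not from the application or the cited art.

```python
import heapq

# Hypothetical request-priority sketch: the application's actual
# parameters and weighting are not disclosed in the quoted passages.
def priority_key(request):
    # Lower tuple sorts first: premium SLA tier before standard,
    # then earlier deadline, then arrival order.
    sla_rank = {"premium": 0, "standard": 1}[request["sla"]]
    return (sla_rank, request["deadline"], request["arrival"])

def prioritize(requests):
    # Index i breaks ties so the dicts themselves are never compared.
    heap = [(priority_key(r), i, r) for i, r in enumerate(requests)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

requests = [
    {"id": "A", "sla": "standard", "deadline": 5, "arrival": 0},
    {"id": "B", "sla": "premium", "deadline": 9, "arrival": 1},
    {"id": "C", "sla": "premium", "deadline": 2, "arrival": 2},
]
print([r["id"] for r in prioritize(requests)])  # ['C', 'B', 'A']
```

Any comparable ordering (e.g., a weighted score instead of a lexicographic tuple) would equally satisfy the limitation as quoted.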
Assigning, by the processor (202), computing resources in the one or more servers (116) to execute one or more tasks for the one or more client devices (104) using a dynamic programming technique, based on the determined usage-related information and the prioritized one or more client requests: "identifying, via a compute resource discovery engine, one or more computing resources available to execute workload tasks" (Bahramshahry: ¶278); "scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources" (Bahramshahry: ¶278); "schedule the pending workload tasks based further on the associated workload task requirements and which of the plurality of computing resources available to execute workload tasks satisfies the associated workload task requirements" (Bahramshahry: ¶290); "scheduler will adjust one or more of re-try logic, priority, end-to-end execution time, preferred resource allocation range, and aging for each workload task" (Bahramshahry: ¶293); "dynamically allocate compute capacity (any of CPU, RAM, IP addresses, etc.) via which to perform a specific type of work according to needs" (Bahramshahry: ¶377); "Such adaptability is realized via a scheduler which determines independently where the resources should be allocated on an iteration by iteration basis, be it minute by minute, or some other time span for each iterative cycle (refer to the iterative cycle at FIG. 1C). Such a scheduler, by design, embraces the concept of eventual consistency, thus permitting a very decoupled solution" (Bahramshahry: ¶380).
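The claim's "dynamic programming technique" is not elaborated in the passages quoted above. As one generic possibility only, a textbook 0/1-knapsack DP can admit a subset of prioritized requests onto a server of fixed capacity while maximizing total priority value; the request names, demands, and values below are invented for illustration.

```python
# Illustrative only: the application's "dynamic programming technique"
# is not spelled out in the quoted passages. This is a standard
# 0/1-knapsack DP over (request, cpu_demand, priority_value) triples.
def admit_requests(requests, capacity):
    # best[i][c] = max total value using the first i requests within capacity c.
    best = [[0] * (capacity + 1) for _ in range(len(requests) + 1)]
    for i, (_, demand, value) in enumerate(requests, start=1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if demand <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - demand] + value)
    # Backtrack to recover which requests were admitted.
    chosen, c = [], capacity
    for i in range(len(requests), 0, -1):
        if best[i][c] != best[i - 1][c]:
            name, demand, _ = requests[i - 1]
            chosen.append(name)
            c -= demand
    return best[-1][capacity], sorted(chosen)

value, chosen = admit_requests(
    [("req1", 4, 10), ("req2", 3, 7), ("req3", 5, 8)], capacity=8)
print(value, chosen)  # 17 ['req1', 'req2']
```

Many other DP formulations (e.g., interval scheduling or multi-server assignment) would also read on "dynamic programming" as broadly claimed.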
Monitoring dynamically, by the processor (202), the usage-related information of the one or more servers (116): "the scheduler 125 is enabled to utilize the local cache 140 to make decisions on resource allocation while leveraging the various services to monitor external resources" ... "resource pools or third party clouds may go online and offline or may become available to perform work or be wholly consumed and therefore unavailable to perform work" ... "There are additional factors which may change such as pricing and preference and performance metrics, each of which may likewise be monitored and updated by the compute resource" (Bahramshahry: ¶077); "the workload discovery 135 component will then query that discovered compute cloud requesting all running tasks and completed tasks" (Bahramshahry: ¶079); "Such adaptability is realized via a scheduler which determines independently where the resources should be allocated on an iteration by iteration basis, be it minute by minute, or some other time span for each iterative cycle (refer to the iterative cycle at FIG. 1C). Such a scheduler, by design, embraces the concept of eventual consistency, thus permitting a very decoupled solution" (Bahramshahry: ¶380).
Migrating, by the processor (202), from a first server to a second server of the one or more servers (116), the one or more tasks using at least one of a round-robin technique and graph theory technique, based on the monitored usage-related information: "even when the service outage has ended or in the instance of the external service dependency having since been restored, the scheduling service may nevertheless migrate the workload execution to the different cloud to avoid repeating the same problem" (Bahramshahry: ¶644); "The capacity round implements a round-robin resource allocation which singularly focuses on available capacity" ... "If resources are available to allocation another instance of a pending task, then the round-robin capacity round process simply allocates that instance" (Bahramshahry: ¶110); "The scheduler's 125 planning 127 operation then proceeds to specifically delineate which task will be performed by which compute cloud from the list of selected workload tasks" ... "a first priority 1 workload task may be sent to a first third party cloud 199 with other priority 2 tasks being sent to different third-party compute clouds" (Bahramshahry: ¶126); "query various computing clouds to check whether they are accessible and available and what workload tasks they are presently executing or have completed, with such auxiliary services then updating the local cache" (Bahramshahry: ¶102); "The scheduler 125 then iterates through as many rounds as required to either exhaust all available resources or exhaust all produced tasks" (Bahramshahry: ¶110); "identifying" ... "a plurality of computing resources currently executing scheduled workload tasks" (Bahramshahry: ¶349).
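The round-robin behavior cited above is described only at the level of the quoted passages. A minimal sketch of one plausible reading, picking a migration target by cycling through servers in fixed order and skipping any whose monitored load exceeds a threshold, is below; the threshold value and load figures are invented for illustration and are not from the references.

```python
from itertools import cycle  # stdlib iterator that repeats the server list

# Sketch only: round-robin selection of a migration target based on
# monitored load. Not the actual algorithm of the application or the
# cited art; the 0.8 threshold is an assumed parameter.
class RoundRobinMigrator:
    def __init__(self, servers):
        self._ring = cycle(servers)
        self._count = len(servers)

    def next_target(self, load_by_server, threshold=0.8):
        # Visit at most one full ring; skip servers at or above threshold.
        for _ in range(self._count):
            server = next(self._ring)
            if load_by_server.get(server, 0.0) < threshold:
                return server
        return None  # every server is loaded past the threshold

migrator = RoundRobinMigrator(["s1", "s2", "s3"])
loads = {"s1": 0.95, "s2": 0.40, "s3": 0.60}
print(migrator.next_target(loads))  # s2  (s1 skipped: over threshold)
print(migrator.next_target(loads))  # s3  (ring resumes where it left off)
```

Because the ring position persists between calls, successive migrations spread across under-loaded servers rather than repeatedly hitting the first one.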
Wherein the first server is executing the one or more tasks for the one or more client devices (104): "a hosted computing environment 111 is communicably interfaced with a plurality of user client devices 106A-C (e.g., such as mobile devices, smart phones, tablets, PCs, etc.)" (Bahramshahry: ¶063); "to schedule at least a portion of the plurality of workload tasks 640 for execution via the one or more computing resources 628" (Bahramshahry: ¶224); "identifies, via a compute resource discovery engine, a plurality of computing resources currently executing scheduled workload tasks" (Bahramshahry: ¶340).

And initiating, by the processor (202), the one or more tasks on the second server, in response to migrating the one or more tasks: "scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested from the local cache" (Bahramshahry: ¶278); "The scheduler's 125 planning 127 operation then proceeds to specifically delineate which task will be performed by which compute cloud from the list of selected workload tasks" (Bahramshahry: ¶126); "scheduling one of the pending workload tasks into capacity within the plurality of computing resources freed up by the terminated workload task" (Bahramshahry: ¶346); "scheduling the workload tasks potentially affected by the failure condition of the external service for a repeated execution on the plurality of computing resources" (Bahramshahry: ¶712); "scheduling the multiple workload copies for execution on different computing resources" (Bahramshahry: ¶687); "even when the service outage has ended or in the instance of the external service dependency having since been restored, the scheduling service may nevertheless migrate the workload execution to the different cloud to avoid repeating the same problem" (Bahramshahry: ¶644).
Wherein initiating the one or more tasks on the second server is to balance a load for resource utilization of the first server and the second server: "such as optimizing utilizing of resources through a load balancing process" (Bahramshahry: ¶006); "the planner 127 may be utilized to allocate resource for the most efficient utilization or for best performance" (Bahramshahry: ¶114); "the capacity round implements a round-robin resource allocation which singularly focuses on available capacity" (Bahramshahry: ¶110); "first third party cloud 199 with other priority 2 tasks being sent to different third-party compute clouds" (Bahramshahry: ¶126); "the scheduler 125 then iterates through as many rounds as required to either exhaust all available resources or exhaust all produced tasks" (Bahramshahry: ¶110); "a scheduler 1242 to schedule one of the pending workload tasks 1239 into capacity within the plurality of computing resources 1240 freed up by the terminated workload task 1241" (Bahramshahry: ¶319).

Further regarding Claim 1, Bahramshahry fails to teach: migrating, by the processor (202), from a first server to a second server of the one or more servers (116), the one or more tasks using at least one of a round-robin technique and graph theory technique, based on the monitored usage-related information.

However, Cessa teaches: "In one embodiment, a graph coloring theory can be used to facilitate a measurement-task scheduling algorithm for network measurement system 100" (Cessa: ¶023); "This problem can be described as a vertex coloring problem. For a conflict graph G(V,E) with vertices V = V(G), each vertex can be assigned a color out of k (e.g., integers 1, ..., k) colors such that no two adjacent vertices have the same color" ... "the color set to be used in the conflict graph can represent a total number of time slots in a measurement cycle" (Cessa: ¶029).
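The vertex-coloring formulation Cessa describes (tasks as vertices, edges between tasks that cannot share a time slot, colors as time slots) can be sketched with a standard greedy coloring. Greedy coloring is an assumption here, not Cessa's disclosed algorithm, and the task names and conflicts are invented.

```python
# Standard greedy vertex coloring as one way to realize the
# conflict-graph formulation: each color index stands for a time slot,
# and no two conflicting tasks receive the same slot.
def color_tasks(conflicts):
    # conflicts: dict mapping each task to the set of tasks it conflicts with
    slot = {}
    for task in sorted(conflicts):  # deterministic visiting order
        taken = {slot[n] for n in conflicts[task] if n in slot}
        color = 0
        while color in taken:  # smallest slot not used by a neighbor
            color += 1
        slot[task] = color
    return slot

# t1/t2 and t2/t3 conflict; t1 and t3 may share a slot.
conflicts = {"t1": {"t2"}, "t2": {"t1", "t3"}, "t3": {"t2"}}
print(color_tasks(conflicts))  # {'t1': 0, 't2': 1, 't3': 0}
```

Greedy coloring does not guarantee the minimum number of slots (the K-colour set problem is NP-hard in general), but it always produces a valid conflict-free assignment.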
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine "the graph theory comprises K-colour set problem technique which is used for task scheduling" of Cessa with the methods and systems of Bahramshahry, in order to schedule tasks by assigning colors to items so that conflicting items do not receive the same color and loads are spread as evenly as possible. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, for the purpose of resolving "measurement contention and to provide efficient task processing" (Cessa: ¶023).

Regarding Claim 2, Bahramshahry teaches: the request parameters comprise at least one of a demand comprising several tasks, a timeline, a pricing category, and a Service Level Agreement (SLA): "the produce 126 phase prepares a comprehensive list of all pending work for a single workload type" (Bahramshahry: ¶109); "not all tasks will be selected and planned for execution, thus causing them to age in terms of time since submission as well as possibly increase in priority for subsequent scheduling rounds" (Bahramshahry: ¶127); "There are additional factors which may change such as pricing and preference and performance metrics" (Bahramshahry: ¶077); "Other considerations may likewise be employed, such as the lowest cost resources or the most preferred among two or more resources from competing clouds" (Bahramshahry: ¶114); "the producer 126 additionally specifies the importance or priority for every task created according to the workload type's SLT or required QoS" (Bahramshahry: ¶109).

Regarding Claim 3, Bahramshahry teaches: the usage-related information comprises at least one of an active time, a running time, and a load level.
"query various computing clouds to check whether they are accessible and available and what workload tasks they are presently executing or have completed" (Bahramshahry: ¶102); "the capacity round implements a round-robin resource allocation which singularly focuses on available capacity" (Bahramshahry: ¶110); "not all tasks will be selected and planned for execution, thus causing them to age in terms of time since submission as well as possibly increase in priority for subsequent scheduling rounds" (Bahramshahry: ¶127); "which causes the workload, while executing, to take much longer to execute than normal due to the test failures" (Bahramshahry: ¶607); "based further on execution of the workload tasks overlapping in time with a time frame associated with the failure condition" (Bahramshahry: ¶711); "allocating the virtual resource exclusively to the computing resource for the duration of execution of the selected workload task" (Bahramshahry: ¶677).

Regarding Claim 4, Bahramshahry fails to teach: the graph theory comprises K-colour set problem technique which is used for task scheduling. However, Cessa teaches: "In one embodiment, a graph coloring theory can be used to facilitate a measurement-task scheduling algorithm for network measurement system 100" (Cessa: ¶023); "This problem can be described as a vertex coloring problem. For a conflict graph G(V,E) with vertices V = V(G), each vertex can be assigned a color out of k (e.g., integers 1, ..., k) colors such that no two adjacent vertices have the same color" ... "the color set to be used in the conflict graph can represent a total number of time slots in a measurement cycle" (Cessa: ¶029).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine “the graph theory comprises K-colour set problem technique which is used for task scheduling” of Cessa with the methods and systems of Bahramshahry in order to schedule tasks by assigning colors to items so conflicting items don’t get the same color and loads are spread as evenly as possible. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, for the purpose of resolving “measurement contention and to provide efficient task processing”, (Cessa: ¶023). Regarding Claim 5, Bahramshahry teaches: the round-robin technique is used for time-based server utilization, based on a round-robin time scheduler report provided by the round-robin technique. “The capacity round implements a round-robin resource allocation which singularly focuses on available capacity”… “the scheduler 125 then iterates through as many rounds as required to either exhaust all available resources or exhaust all produced tasks”, (Bahramshahry: ¶110), “For a next capacity round, the scheduler then proceeds to calculate the next capacity round by taking into account the recently planned tasks at phase 127 and then the scheduling cycle is optionally finalized 131. A subsequent analyze 132 phase then applies post-scheduling analysis to check any decisions made during the scheduler's allocation rounds”, (Bahramshahry: ¶108). Examiner notes: the scheduler report is being interpreted as which tasks were allocated, how much capacity was consumed, what capacity remains available, and which tasks were deferred to later rounds. Regarding Claim 6, Bahramshahry teaches: the round-robin technique and the dynamic programming technique is used to prioritize Service Level Agreement (SLA) requirements of the one or more client devices (104).
“The capacity round implements a round-robin resource allocation”, (Bahramshahry: ¶110), “the producer 126 additionally specifies the importance or priority for every task created according to the workload type's SLT or required QoS”, (Bahramshahry: ¶109), “calculate an allocation route based on the service level targets and capacity that is known to be available”, (Bahramshahry: ¶124), “scheduling the pending workload tasks to execute via the one or more computing resources in compliance with the selected SLT specified for each of the pending workload tasks”, (Bahramshahry: ¶695). Regarding Claim 7, Bahramshahry teaches: A resource scheduling system (110) for priority-based resource scheduling with load balancing, “A scheduler responsible for performing the scheduling processes and generally will seek to perform a variety of functions in addition to scheduling work, such as optimizing utilizing of resources through a load balancing process which thus permits multiple users to share system resources more effectively.”, (Bahramshahry: ¶006), “scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested from the local cache”, (Bahramshahry: ¶278), “rating the pending workload tasks based on one or more of a workload type, a specified priority, and current available compute capacity”, (Bahramshahry: ¶687). the method comprising: a processor (202); a memory (206) coupled to the processor (202), wherein the memory (206) comprises processor-executable instructions, “a processor and a memory to execute instructions at the system”, (Bahramshahry: Abstract), “executing a scheduler via the processor of the system”, (Bahramshahry: ¶660), “supported by a processor and a memory to execute such functionality”, (Bahramshahry: ¶221). 
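As a non-record illustration of the round-robin capacity rounds cited above for Claims 5 and 6 (Bahramshahry ¶110: iterate "through as many rounds as required to either exhaust all available resources or exhaust all produced tasks"), the behaviour can be sketched as follows. The resource names, capacities, and task labels are assumptions for the example, not taken from the reference.

```python
# Illustrative sketch (not Bahramshahry's code) of a capacity-focused
# round-robin: each round offers one pending task to each resource that
# still has capacity, and iteration stops when either all resources or
# all produced tasks are exhausted (cf. ¶110). The returned deferred
# list corresponds to tasks pushed to later scheduling rounds (¶127).

from collections import deque

def round_robin_schedule(tasks, capacity):
    """tasks: list of task names; capacity: {resource: max tasks}.
    Returns ({resource: [assigned tasks]}, [deferred tasks])."""
    pending = deque(tasks)
    plan = {r: [] for r in capacity}
    while pending and any(len(plan[r]) < capacity[r] for r in capacity):
        for resource in capacity:             # one pass = one capacity round
            if not pending:
                break
            if len(plan[resource]) < capacity[resource]:
                plan[resource].append(pending.popleft())
    return plan, list(pending)

plan, deferred = round_robin_schedule(
    ["t1", "t2", "t3", "t4", "t5"], {"cloudA": 2, "cloudB": 1})
```

The `(plan, deferred)` pair mirrors the examiner's interpretation of the "scheduler report": which tasks were allocated, which capacity was consumed, and which tasks were deferred to later rounds.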
which on execution causes the processor (202) to: receive, from a plurality of client devices (104), “a hosted computing environment 111 is communicably interfaced with a plurality of user client devices”, (Bahramshahry: ¶063), “customer organizations (104A, 104B, and 104C) which utilize web services and other service offerings as provided by the host organization 150 by communicably interfacing to the host organization 150 via network 195”, (Bahramshahry: ¶ 066), “executing a scheduler via the processor of the system, wherein the scheduler performs at least the following operations”, (Bahramshahry: ¶660). one or more client requests to execute one or more tasks on one or more servers (116) associated with one or more Virtual Machines (VMs), “identifying, via a workload discovery engine, pending workload tasks to be scheduled for execution from one or more workload queues”, (Bahramshahry: ¶ 317), “scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested from the local cache”, (Bahramshahry: ¶278), “scheduling the selected workload task for execution with the computing resource and allocating the virtual resource exclusively to the computing resource for the duration of execution of the selected workload task”, (Bahramshahry: ¶678), “virtual machine 685 having mapped computing resources such as vCPU, RAM, a base image, a virtual image, IP space and network links, etc. The virtual machine 685 executes the workload tasks 641 in conjunction with memory 695”, (Bahramshahry: ¶221, Figure 6). 
wherein the one or more client requests comprises request parameters; “the workload discovery engine is to further identify a plurality of associated workload task requirements for each of the pending workload tasks”, (Bahramshahry: ¶290), “the scheduler is to evaluate a specified customer preference for executing workload tasks at a specified one of the plurality of computing resources as represented within the SLT for the respective workload task”, (Bahramshahry: ¶295), “in which the cloud-based service receives inputs from the client device at the user interface 626 to configure use of the scheduling service”, (Bahramshahry: ¶223). determine usage-related information of each server associated with one or more VMs, upon receiving the one or more client requests; “discovery engine 192 capable of discovering available compute resources by which to complete workloads and further capable to discover pending workloads awaiting assignment to compute resources”, (Bahramshahry: ¶073), “identifying, via a compute resource discovery engine, one or more computing resources available to execute workload tasks”, (Bahramshahry: ¶278), “the scheduler is to schedule the pending workload tasks based further on the associated workload task requirements and which of the plurality of computing resources available to execute workload tasks satisfies the associated workload task requirements”, (Bahramshahry: ¶290), “the scheduler is to evaluate pricing data represented within the local cache by the plurality of resource characteristics identified for each of the plurality of computing resources”, (Bahramshahry: ¶ 294), “identifying, via a workload discovery engine, pending workload tasks to be scheduled for execution”, (Bahramshahry: ¶317). Examiner Notes: because pending workload tasks arise from received client requests, this reinforces that resource determination occurs after receipt of the request. 
prioritize the received one or more client requests, based on the request parameters associated with the one or more client requests; “based on how the scheduler allocates resources and prioritizes competing needs”, (Bahramshahry: ¶007), “the SLT is identified by the policy engine based further on a customer identifier or an organizational identifier or a service tier associated with each respective workload task”, (Bahramshahry: ¶292), “the SLT identified for each of the workload tasks defines a Quality of Service (QoS) expectation for each workload task; in which the scheduler does not guarantee or commit to meeting the QoS expectation for any individual workload task”, (Bahramshahry: ¶293), “and associate each pending workload task within the local cache with a priority marker, a QoS indicator, and/or the SLT based on the workload queue from which the task was retrieved”, (Bahramshahry: ¶0289), “rating the pending workload tasks based on one or more of a workload type, a specified priority, and current available compute capacity”, (Bahramshahry: ¶ 687). 
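As a non-record illustration of the prioritisation limitation mapped above — ordering client requests by request parameters such as an SLA tier, with aging since submission (Bahramshahry ¶109, ¶127) — one common realisation is a weighted sort. The tier weights and request fields below are assumptions for the example; neither the claims nor the reference specifies them.

```python
# Illustrative sketch of request prioritisation by SLA tier and aging:
# higher tiers are served first, and within a tier older submissions
# come first (so deferred tasks effectively rise as they age, cf. ¶127).
# Tier names/weights and the request schema are assumed.

SLA_WEIGHT = {"gold": 3, "silver": 2, "bronze": 1}

def prioritise(requests):
    """Sort requests: higher SLA weight first, then earliest submission."""
    return sorted(requests,
                  key=lambda r: (-SLA_WEIGHT[r["sla"]], r["submitted"]))

reqs = [
    {"id": 1, "sla": "bronze", "submitted": 10},
    {"id": 2, "sla": "gold",   "submitted": 30},
    {"id": 3, "sla": "gold",   "submitted": 20},
]
ordered = [r["id"] for r in prioritise(reqs)]   # gold before bronze; older gold first
```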
assign computing resources in the one or more servers (116) to execute one or more tasks for the one or more client devices (104) using a dynamic programming technique, based on the determined usage-related information and the prioritized one or more client requests; “identifying, via a compute resource discovery engine, one or more computing resources available to execute workload tasks”, (Bahramshahry: ¶278), “scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources”, (Bahramshahry: ¶278), “schedule the pending workload tasks based further on the associated workload task requirements and which of the plurality of computing resources available to execute workload tasks satisfies the associated workload task requirements”, (Bahramshahry: ¶290), “scheduler will adjust one or more of re-try logic, priority, end-to-end execution time, preferred resource allocation range, and aging for each workload task”, (¶293), “dynamically allocate compute capacity (any of CPU, RAM, IP addresses, etc.) via which to perform a specific type of work according to needs”, (Bahramshahry ¶377), “Such adaptability is realized via a scheduler which determines independently where the resources should be allocated on an iteration by iteration basis, be it minute by minute, or some other time span for each iterative cycle (refer to the iterative cycle at FIG. 1C). Such a scheduler, by design, embraces the concept of eventual consistency, thus permitting a very decoupled solution”, (Bahramshahry: ¶380). 
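For illustration only: the claims recite "a dynamic programming technique" for assigning computing resources, but neither the claim mapping nor the cited passages spells out a particular algorithm. One textbook instance consistent with the limitation — assumed here, not drawn from the record — is a 0/1-knapsack recurrence that selects which prioritised tasks fit a server's remaining capacity while maximising total priority value.

```python
# Illustrative 0/1-knapsack sketch of a dynamic-programming assignment:
# choose tasks (name, load, priority) to fill integer server capacity
# so that total priority is maximised. All task data are assumptions.

def assign_tasks(tasks, capacity):
    """tasks: list of (name, load, priority). Returns chosen task names."""
    # dp[c] = (best total priority, chosen names) within capacity c
    dp = [(0, [])] * (capacity + 1)
    for name, load, priority in tasks:
        for c in range(capacity, load - 1, -1):   # iterate downward: each task used once
            cand = (dp[c - load][0] + priority, dp[c - load][1] + [name])
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return dp[capacity][1]

chosen = assign_tasks([("t1", 3, 4), ("t2", 2, 3), ("t3", 2, 3)], capacity=4)
```

Here two lighter tasks outscore the single heavy one, which is exactly the kind of capacity/priority trade-off the mapped passages (¶290, ¶377) attribute to the scheduler.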
monitor dynamically, the usage-related information of the one or more servers (116); “the scheduler 125 is enabled to utilize the local cache 140 to make decisions on resource allocation while leveraging the various services to monitor external resources”…” resource pools or third party clouds may go online and offline or may become available to perform work or be wholly consumed and therefore unavailable to perform work”…“There are additional factors which may change such as pricing and preference and performance metrics, each of which may likewise be monitored and updated by the compute resource”, (Bahramshahry: ¶077), “the workload discovery 135 component will then query that discovered compute cloud requesting all running tasks and completed tasks”, (Bahramshahry: ¶079), “Such adaptability is realized via a scheduler which determines independently where the resources should be allocated on an iteration by iteration basis, be it minute by minute, or some other time span for each iterative cycle (refer to the iterative cycle at FIG. 1C). Such a scheduler, by design, embraces the concept of eventual consistency, thus permitting a very decoupled solution”, (Bahramshahry: ¶380). 
migrate from a first server to a second server of the one or more servers (116), the one or more tasks using at least one of a round-robin technique and graph theory technique, based on the monitored usage-related information, “even when the service outage has ended or in the instance of the external service dependency having since been restored, the scheduling service may nevertheless migrate the workload execution to the different cloud to avoid repeating the same problem”, (Bahramshahry: ¶644), “The capacity round implements a round-robin resource allocation which singularly focuses on available capacity”… “If resources are available to allocation another instance of a pending task, then the round-robin capacity round process simply allocates that instance”, (Bahramshahry: ¶110), “The scheduler's 125 planning 127 operation then proceeds to specifically delineate which task will be performed by which compute cloud from the list of selected workload tasks”… “a first priority 1 workload task may be sent to a first third party cloud 199 with other priority 2 tasks being sent to different third-party compute clouds”, (Bahramshahry: ¶126), “query various computing clouds to check whether they are accessible and available and what workload tasks they are presently executing or have completed, with such auxiliary services then updating the local cache”, (Bahramshahry: ¶102), “The scheduler 125 then iterates through as many rounds as required to either exhaust all available resources or exhaust all produced tasks”, (Bahramshahry: ¶110), “identifying” … “a plurality of computing resources currently executing scheduled workload tasks”, (Bahramshahry: ¶349).
wherein the first server is executing the one or more tasks for the one or more client devices (104); “a hosted computing environment 111 is communicably interfaced with a plurality of user client devices 106A-C (e.g., such as mobile devices, smart phones, tablets, PCs, etc.)”, (Bahramshahry: ¶063), “to schedule at least a portion of the plurality of workload tasks 640 for execution via the one or more computing resources 628”, (Bahramshahry: ¶224), “identifies, via a compute resource discovery engine, a plurality of computing resources currently executing scheduled workload tasks”, (Bahramshahry: ¶340). and initiate the one or more tasks on the second server, in response to migrating the one or more tasks, “scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested from the local cache”, (Bahramshahry: ¶278), “The scheduler's 125 planning 127 operation then proceeds to specifically delineate which task will be performed by which compute cloud from the list of selected workload tasks”, (Bahramshahry: ¶126), “scheduling one of the pending workload tasks into capacity within the plurality of computing resources freed up by the terminated workload task”, (Bahramshahry: ¶346), “scheduling the workload tasks potentially affected by the failure condition of the external service for a repeated execution on the plurality of computing resources”, (Bahramshahry: ¶712), “scheduling the multiple workload copies for execution on different computing resources”, (Bahramshahry: ¶687), “even when the service outage has ended or in the instance of the external service dependency having since been restored, the scheduling service may nevertheless migrate the workload execution to the different cloud to avoid repeating the same problem”, (Bahramshahry: ¶644). 
wherein initiating the one or more tasks on the second server is to balance a load for resource utilization of the first server and the second server. “such as optimizing utilizing of resources through a load balancing process”, (Bahramshahry: ¶006), “the planner 127 may be utilized to allocate resource for the most efficient utilization or for best performance”, (Bahramshahry: ¶114), “the capacity round implements a round-robin resource allocation which singularly focuses on available capacity”, (Bahramshahry: ¶110), “first third party cloud 199 with other priority 2 tasks being sent to different third-party compute clouds”, (Bahramshahry: ¶126), “the scheduler 125 then iterates through as many rounds as required to either exhaust all available resources or exhaust all produced tasks”, (Bahramshahry: ¶110), “a scheduler 1242 to schedule one of the pending workload tasks 1239 into capacity within the plurality of computing resources 1240 freed up by the terminated workload task 1241”, (Bahramshahry: ¶319). Further regarding Claim 7, Bahramshahry fails to teach: migrate from a first server to a second server of the one or more servers (116), the one or more tasks using at least one of a round-robin technique and graph theory technique, based on the monitored usage-related information, However, Cessa teaches: “In one embodiment, a graph coloring theory can be used to facilitate a measurement- task scheduling algorithm for network measurement system 100”, (Cessa: ¶023), “This problem can be described as a vertex coloring problem. For a conflict graph G(V,E) with vertices V = V(G), each vertex can be assigned a color out of k (e.g., integers 1, ..., k) colors such that no two adjacent vertices have the same color”… “the color set to be used in the conflict graph can represent a total number of time slots in a measurement cycle”, (Cessa: ¶029). 
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine “the graph theory comprises K-colour set problem technique which is used for task scheduling” of Cessa with the methods and systems of Bahramshahry in order to schedule tasks by assigning colors to items so conflicting items don’t get the same color and loads are spread as evenly as possible. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, for the purpose of resolving “measurement contention and to provide efficient task processing”, (Cessa: ¶023). Regarding Claim 8, Bahramshahry teaches: the request parameters comprise at least one of a demand comprising several tasks, a timeline, a pricing category, and a Service Level Agreement (SLA). “the produce 126 phase prepares a comprehensive list of all pending work for a single workload type”, (Bahramshahry: ¶109), “not all tasks will be selected and planned for execution, thus causing them to age in terms of time since submission as well as possibly increase in priority for subsequent scheduling rounds”, (Bahramshahry: ¶127), “There are additional factors which may change such as pricing and preference and performance metrics”, (Bahramshahry: ¶077), “Other considerations may likewise be employed, such as the lowest cost resources or the most preferred among two or more resources from competing clouds”, (Bahramshahry: ¶114), “the producer 126 additionally specifies the importance or priority for every task created according to the workload type's SLT or required QoS”, (Bahramshahry: ¶109). Regarding Claim 9, Bahramshahry teaches: the usage-related information comprises at least one of an active time, a running time, and a load level. 
“query various computing clouds to check whether they are accessible and available and what workload tasks they are presently executing or have completed”, (Bahramshahry: ¶102), “the capacity round implements a round-robin resource allocation which singularly focuses on available capacity”, (Bahramshahry: ¶110), “not all tasks will be selected and planned for execution, thus causing them to age in terms of time since submission as well as possibly increase in priority for subsequent scheduling rounds”, (Bahramshahry: ¶127), “which causes the workload, while executing, to take much longer to execute than normal due to the test failures”, (Bahramshahry: ¶607), “based further on execution of the workload tasks overlapping in time with a time frame associated with the failure condition”, (Bahramshahry: ¶711), “allocating the virtual resource exclusively to the computing resource for the duration of execution of the selected workload task”, (Bahramshahry: ¶677). Regarding Claim 10 Bahramshahry fails to teach: the graph theory comprises K-colour set problem technique which is used for task scheduling. However, Cessa teaches: “In one embodiment, a graph coloring theory can be used to facilitate a measurement- task scheduling algorithm for network measurement system 100”, (Cessa: ¶023), “This problem can be described as a vertex coloring problem. For a conflict graph G(V,E) with vertices V = V(G), each vertex can be assigned a color out of k (e.g., integers 1, ..., k) colors such that no two adjacent vertices have the same color”… “the color set to be used in the conflict graph can represent a total number of time slots in a measurement cycle”, (Cessa: ¶029). 
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine “the graph theory comprises K-colour set problem technique which is used for task scheduling” of Cessa with the methods and systems of Bahramshahry in order to schedule tasks by assigning colors to items so conflicting items don’t get the same color and loads are spread as evenly as possible. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, for the purpose of resolving “measurement contention and to provide efficient task processing”, (Cessa: ¶023). Regarding Claim 11, Bahramshahry teaches: the round-robin technique is used for time-based server utilization, based on a round-robin time scheduler report provided by the round-robin technique. “The capacity round implements a round-robin resource allocation which singularly focuses on available capacity”… “the scheduler 125 then iterates through as many rounds as required to either exhaust all available resources or exhaust all produced tasks”, (Bahramshahry: ¶110), “For a next capacity round, the scheduler then proceeds to calculate the next capacity round by taking into account the recently planned tasks at phase 127 and then the scheduling cycle is optionally finalized 131. A subsequent analyze 132 phase then applies post-scheduling analysis to check any decisions made during the scheduler's allocation rounds”, (Bahramshahry: ¶108). Examiner notes: the scheduler report is being interpreted as which tasks were allocated, how much capacity was consumed, what capacity remains available, and which tasks were deferred to later rounds. Regarding Claim 12, Bahramshahry teaches: the round-robin technique and the dynamic programming technique is used to prioritize Service Level Agreement (SLA) requirements of the one or more client devices (104). 
“The capacity round implements a round-robin resource allocation”, (Bahramshahry: ¶110), “the producer 126 additionally specifies the importance or priority for every task created according to the workload type's SLT or required QoS”, (Bahramshahry: ¶109), “calculate an allocation route based on the service level targets and capacity that is known to be available”, (Bahramshahry: ¶124), “scheduling the pending workload tasks to execute via the one or more computing resources in compliance with the selected SLT specified for each of the pending workload tasks”, (Bahramshahry: ¶695). Regarding Claim 13, Bahramshahry teaches: A system for priority-based resource scheduling with load balancing, “A scheduler responsible for performing the scheduling processes and generally will seek to perform a variety of functions in addition to scheduling work, such as optimizing utilizing of resources through a load balancing process which thus permits multiple users to share system resources more effectively.”, (Bahramshahry: ¶006), “scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested from the local cache”, (Bahramshahry: ¶278), “rating the pending workload tasks based on one or more of a workload type, a specified priority, and current available compute capacity”, (Bahramshahry: ¶687). 
comprising: an input port configured to receive requests to execute one or more tasks; “request interface 176”, (Bahramshahry: Fig 1A), “requesting, at a scheduler, information from the local cache specifying the one or more computing resources available to execute workload tasks and the plurality of workload tasks to be scheduled for execution”, (Bahramshahry: ¶278), “a hosted computing environment 111 is communicably interfaced with a plurality of user client devices 106A-C (e.g., such as mobile devices, smart phones, tablets, PCs, etc.)”, (Bahramshahry: ¶063), “The computer system 800 also may include a user interface 810 (such as a video display unit, a liquid crystal display, etc.), an alphanumeric input device 812 (e.g., a keyboard), a cursor control device 814 (e.g., a mouse), and a signal generation device 816 (e.g., an integrated speaker)”, (Bahramshahry: ¶266). Examiner notes: “Input port” is being interpreted as a request interface configured to receive requests from, in this case, client devices. at least one automated processor (202) associated with a resource scheduling system (110), configured to: receive from a plurality of client devices (104), “a hosted computing environment 111 is communicably interfaced with a plurality of user client devices”, (Bahramshahry: ¶063), “customer organizations (104A, 104B, and 104C) which utilize web services and other service offerings as provided by the host organization 150 by communicably interfacing to the host organization 150 via network 195”, (Bahramshahry: ¶066), “executing a scheduler via the processor of the system, wherein the scheduler performs at least the following operations”, (Bahramshahry: ¶660).
one or more client requests to execute one or more tasks on one or more servers (116) associated with one or more Virtual Machines (VMs), “identifying, via a workload discovery engine, pending workload tasks to be scheduled for execution from one or more workload queues”, (Bahramshahry: ¶ 317), “scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested from the local cache”, (Bahramshahry: ¶278), “scheduling the selected workload task for execution with the computing resource and allocating the virtual resource exclusively to the computing resource for the duration of execution of the selected workload task”, (Bahramshahry: ¶678), “virtual machine 685 having mapped computing resources such as vCPU, RAM, a base image, a virtual image, IP space and network links, etc. The virtual machine 685 executes the workload tasks 641 in conjunction with memory 695”, (Bahramshahry: ¶221, Figure 6). wherein the one or more client requests comprises request parameters; “the workload discovery engine is to further identify a plurality of associated workload task requirements for each of the pending workload tasks”, (Bahramshahry: ¶290), “the scheduler is to evaluate a specified customer preference for executing workload tasks at a specified one of the plurality of computing resources as represented within the SLT for the respective workload task”, (Bahramshahry: ¶295), “in which the cloud-based service receives inputs from the client device at the user interface 626 to configure use of the scheduling service”, (Bahramshahry: ¶223). 
determine usage-related information of each server associated with one or more VMs, upon receiving the one or more client requests; “discovery engine 192 capable of discovering available compute resources by which to complete workloads and further capable to discover pending workloads awaiting assignment to compute resources”, (Bahramshahry: ¶073), “identifying, via a compute resource discovery engine, one or more computing resources available to execute workload tasks”, (Bahramshahry: ¶278), “the scheduler is to schedule the pending workload tasks based further on the associated workload task requirements and which of the plurality of computing resources available to execute workload tasks satisfies the associated workload task requirements”, (Bahramshahry: ¶290), “the scheduler is to evaluate pricing data represented within the local cache by the plurality of resource characteristics identified for each of the plurality of computing resources”, (Bahramshahry: ¶ 294), “identifying, via a workload discovery engine, pending workload tasks to be scheduled for execution”, (Bahramshahry: ¶317). Examiner Notes: because pending workload tasks arise from received client requests, this reinforces that resource determination occurs after receipt of the request. 
prioritize the received one or more client requests, based on the request parameters associated with the one or more client requests; “based on how the scheduler allocates resources and prioritizes competing needs”, (Bahramshahry: ¶007), “the SLT is identified by the policy engine based further on a customer identifier or an organizational identifier or a service tier associated with each respective workload task”, (Bahramshahry: ¶292), “the SLT identified for each of the workload tasks defines a Quality of Service (QoS) expectation for each workload task; in which the scheduler does not guarantee or commit to meeting the QoS expectation for any individual workload task”, (Bahramshahry: ¶293), “and associate each pending workload task within the local cache with a priority marker, a QoS indicator, and/or the SLT based on the workload queue from which the task was retrieved”, (Bahramshahry: ¶0289), “rating the pending workload tasks based on one or more of a workload type, a specified priority, and current available compute capacity”, (Bahramshahry: ¶ 687). 
assign computing resources in the one or more servers (116) to execute one or more tasks for the one or more client devices (104) using a dynamic programming technique, based on the determined usage-related information and the prioritized one or more client requests; “identifying, via a compute resource discovery engine, one or more computing resources available to execute workload tasks”, (Bahramshahry: ¶278), “scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources”, (Bahramshahry: ¶278), “schedule the pending workload tasks based further on the associated workload task requirements and which of the plurality of computing resources available to execute workload tasks satisfies the associated workload task requirements”, (Bahramshahry: ¶290), “scheduler will adjust one or more of re-try logic, priority, end-to-end execution time, preferred resource allocation range, and aging for each workload task”, (¶293), “dynamically allocate compute capacity (any of CPU, RAM, IP addresses, etc.) via which to perform a specific type of work according to needs”, (Bahramshahry ¶377), “Such adaptability is realized via a scheduler which determines independently where the resources should be allocated on an iteration by iteration basis, be it minute by minute, or some other time span for each iterative cycle (refer to the iterative cycle at FIG. 1C). Such a scheduler, by design, embraces the concept of eventual consistency, thus permitting a very decoupled solution”, (Bahramshahry: ¶380). 
dynamically monitor the usage-related information of the one or more servers (116); “the scheduler 125 is enabled to utilize the local cache 140 to make decisions on resource allocation while leveraging the various services to monitor external resources”…” resource pools or third party clouds may go online and offline or may become available to perform work or be wholly consumed and therefore unavailable to perform work”…“There are additional factors which may change such as pricing and preference and performance metrics, each of which may likewise be monitored and updated by the compute resource”, (Bahramshahry: ¶077), “the workload discovery 135 component will then query that discovered compute cloud requesting all running tasks and completed tasks”, (Bahramshahry: ¶079), “Such adaptability is realized via a scheduler which determines independently where the resources should be allocated on an iteration by iteration basis, be it minute by minute, or some other time span for each iterative cycle (refer to the iterative cycle at FIG. 1C). Such a scheduler, by design, embraces the concept of eventual consistency, thus permitting a very decoupled solution”, (Bahramshahry: ¶380). 
migrate the one or more tasks from a first server to a second server of the one or more servers (116) using at least one of a round-robin technique and graph theory technique, based on the monitored usage-related information, “even when the service outage has ended or in the instance of the external service dependency having since been restored, the scheduling service may nevertheless migrate the workload execution to the different cloud to avoid repeating the same problem”, (Bahramshahry: ¶644), “The capacity round implements a round-robin resource allocation which singularly focuses on available capacity”… “If resources are available to allocation another instance of a pending task, then the round-robin capacity round process simply allocates that instance”, (Bahramshahry: ¶110), “The scheduler's 125 planning 127 operation then proceeds to specifically delineate which task will be performed by which compute cloud from the list of selected workload tasks”… “a first priority 1 workload task may be sent to a first third party cloud 199 with other priority 2 tasks being sent to different third-party compute clouds”, (Bahramshahry: ¶126), “query various computing clouds to check whether they are accessible and available and what workload tasks they are presently executing or have completed, with such auxiliary services then updating the local cache”, (Bahramshahry: ¶102), “The scheduler 125 then iterates through as many rounds as required to either exhaust all available resources or exhaust all produced tasks”, (Bahramshahry: ¶110), “identifying” … “a plurality of computing resources currently executing scheduled workload tasks”, (Bahramshahry: ¶349).
wherein the first server executes the one or more tasks for the one or more client devices (104); “a hosted computing environment 111 is communicably interfaced with a plurality of user client devices 106A-C (e.g., such as mobile devices, smart phones, tablets, PCs, etc.)”, (Bahramshahry: ¶063), “to schedule at least a portion of the plurality of workload tasks 640 for execution via the one or more computing resources 628”, (Bahramshahry: ¶224), “identifies, via a compute resource discovery engine, a plurality of computing resources currently executing scheduled workload tasks”, (Bahramshahry: ¶340). and initiate the one or more tasks on the second server, in response to migrating the one or more tasks, “scheduling at least a portion of the plurality of workload tasks for execution via the one or more computing resources based on the information requested from the local cache”, (Bahramshahry: ¶278), “The scheduler's 125 planning 127 operation then proceeds to specifically delineate which task will be performed by which compute cloud from the list of selected workload tasks”, (Bahramshahry: ¶126), “scheduling one of the pending workload tasks into capacity within the plurality of computing resources freed up by the terminated workload task”, (Bahramshahry: ¶346), “scheduling the workload tasks potentially affected by the failure condition of the external service for a repeated execution on the plurality of computing resources”, (Bahramshahry: ¶712), “scheduling the multiple workload copies for execution on different computing resources”, (Bahramshahry: ¶687), “even when the service outage has ended or in the instance of the external service dependency having since been restored, the scheduling service may nevertheless migrate the workload execution to the different cloud to avoid repeating the same problem”, (Bahramshahry: ¶644). 
wherein the one or more tasks are initiated on the second server to balance a load for resource utilization of the first server and the second server; “such as optimizing utilizing of resources through a load balancing process”, (Bahramshahry: ¶006), “the planner 127 may be utilized to allocate resource for the most efficient utilization or for best performance”, (Bahramshahry: ¶114), “the capacity round implements a round-robin resource allocation which singularly focuses on available capacity”, (Bahramshahry: ¶110), “first third party cloud 199 with other priority 2 tasks being sent to different third-party compute clouds”, (Bahramshahry: ¶126), “the scheduler 125 then iterates through as many rounds as required to either exhaust all available resources or exhaust all produced tasks”, (Bahramshahry: ¶110), “a scheduler 1242 to schedule one of the pending workload tasks 1239 into capacity within the plurality of computing resources 1240 freed up by the terminated workload task 1241”, (Bahramshahry: ¶319). and an output port configured communicate control information the first server and the second server. “the external cloud interface 627 provides a communications link to third party private and public computing clouds 628 on behalf of the scheduling service 665”, (Bahramshahry: ¶222), “FIG. 7B shows that user system 712 may include a processor system 712A, memory system 712B, input system 712C, and output system 712D”, (Bahramshahry: ¶253), “user system 712 might include an HTTP client commonly referred to as a “browser” for sending and receiving HTTP messages to and from an HTTP server at system 716”… “such as round-robin HTTP request distributors to balance loads and distribute incoming HTTP requests evenly over a plurality of servers”, (Bahramshahry: ¶247). 
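The load-balancing limitation mapped above (migrate tasks off the first server and initiate them on the second until utilization evens out) can be sketched as follows. The per-task load figure and server dictionaries are illustrative assumptions, not taken from the claims or references.

```python
# Minimal sketch, assuming each task contributes roughly per_task_load of
# utilization. Tasks move from the overloaded first server to the second and
# are initiated there until the two load levels are balanced. Illustrative only.

def migrate_to_balance(first, second, per_task_load=0.1):
    moved = []
    while first["tasks"] and first["load"] - second["load"] > per_task_load:
        task = first["tasks"].pop()       # migrate from the first server...
        second["tasks"].append(task)      # ...and initiate on the second
        first["load"] -= per_task_load
        second["load"] += per_task_load
        moved.append(task)
    return moved

server1 = {"load": 0.8, "tasks": ["t1", "t2", "t3", "t4"]}
server2 = {"load": 0.2, "tasks": []}
moved = migrate_to_balance(server1, server2)
print(moved)  # ['t4', 't3', 't2']
```

After the call both servers sit at roughly 0.5 utilization, which is the balanced-resource outcome the claim limitation recites.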
Further regarding Claim 13, Bahramshahry fails to teach: migrate the one or more tasks from a first server to a second server of the one or more servers (116) using at least one of a round-robin technique and graph theory technique, based on the monitored usage-related information, However, Cessa teaches: “In one embodiment, a graph coloring theory can be used to facilitate a measurement- task scheduling algorithm for network measurement system 100”, (Cessa: ¶023), “This problem can be described as a vertex coloring problem. For a conflict graph G(V,E) with vertices V = V(G), each vertex can be assigned a color out of k (e.g., integers 1, ..., k) colors such that no two adjacent vertices have the same color”… “the color set to be used in the conflict graph can represent a total number of time slots in a measurement cycle”, (Cessa: ¶029). It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine “the graph theory comprises K-colour set problem technique which is used for task scheduling” of Cessa with the methods and systems of Bahramshahry in order to schedule tasks by assigning colors to items so conflicting items don’t get the same color and loads are spread as evenly as possible. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, for the purpose of resolving “measurement contention and to provide efficient task processing”, (Cessa: ¶023). Regarding Claim 14, Bahramshahry teaches: the at least one automated processor is configured to migrate the one or more tasks from a first server to a second server of the one or more servers (116) using a round-robin technique. 
“even when the service outage has ended or in the instance of the external service dependency having since been restored, the scheduling service may nevertheless migrate the workload execution to the different cloud to avoid repeating the same problem”, (Bahramshahry: ¶644), “The capacity round implements a round-robin resource allocation which singularly focuses on available capacity”… “If resources are available to allocation another instance of a pending task, then the round-robin capacity round process simply allocates that instance”, (Bahramshahry: ¶110), “The scheduler's 125 planning 127 operation then proceeds to specifically delineate which task will be performed by which compute cloud from the list of selected workload tasks”… “a first priority 1 workload task may be sent to a first third party cloud 199 with other priority 2 tasks being sent to different third-party compute clouds”, (Bahramshahry: ¶126), “query various computing clouds to check whether they are accessible and available and what workload tasks they are presently executing or have completed, with such auxiliary services then updating the local cache”, (Bahramshahry: ¶102), “The scheduler 125 then iterates through as many rounds as required to either exhaust all available resources or exhaust all produced tasks”, (Bahramshahry: ¶110), “identifying” … “a plurality of computing resources currently executing scheduled workload tasks”, (Bahramshahry: ¶349).

Regarding Claim 15, Bahramshahry fails to teach: the at least one automated processor is configured to migrate the one or more tasks from a first server to a second server of the one or more servers (116) using a graph theory technique. However, Cessa teaches: “In one embodiment, a graph coloring theory can be used to facilitate a measurement-task scheduling algorithm for network measurement system 100”, (Cessa: ¶023), “This problem can be described as a vertex coloring problem. 
For a conflict graph G(V,E) with vertices V = V(G), each vertex can be assigned a color out of k (e.g., integers 1, ..., k) colors such that no two adjacent vertices have the same color”… “the color set to be used in the conflict graph can represent a total number of time slots in a measurement cycle”, (Cessa: ¶029). It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine “the graph theory comprises K-colour set problem technique which is used for task scheduling” of Cessa with the methods and systems of Bahramshahry in order to schedule tasks by assigning colors to items so conflicting items don’t get the same color and loads are spread as evenly as possible. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, for the purpose of resolving “measurement contention and to provide efficient task processing”, (Cessa: ¶023). Regarding Claim 16, Bahramshahry teaches: the request parameters comprise at least one of a demand comprising several tasks, a timeline, a pricing category, and a Service Level Agreement (SLA). 
“the produce 126 phase prepares a comprehensive list of all pending work for a single workload type”, (Bahramshahry: ¶109), “not all tasks will be selected and planned for execution, thus causing them to age in terms of time since submission as well as possibly increase in priority for subsequent scheduling rounds”, (Bahramshahry: ¶127), “There are additional factors which may change such as pricing and preference and performance metrics”, (Bahramshahry: ¶077), “Other considerations may likewise be employed, such as the lowest cost resources or the most preferred among two or more resources from competing clouds”, (Bahramshahry: ¶114), “the producer 126 additionally specifies the importance or priority for every task created according to the workload type's SLT or required QoS”, (Bahramshahry: ¶109). Regarding Claim 17, Bahramshahry teaches: the usage-related information comprises at least one of an active time, a running time, and a load level. “query various computing clouds to check whether they are accessible and available and what workload tasks they are presently executing or have completed”, (Bahramshahry: ¶102), “the capacity round implements a round-robin resource allocation which singularly focuses on available capacity”, (Bahramshahry: ¶110), “not all tasks will be selected and planned for execution, thus causing them to age in terms of time since submission as well as possibly increase in priority for subsequent scheduling rounds”, (Bahramshahry: ¶127), “which causes the workload, while executing, to take much longer to execute than normal due to the test failures”, (Bahramshahry: ¶607), “based further on execution of the workload tasks overlapping in time with a time frame associated with the failure condition”, (Bahramshahry: ¶711), “allocating the virtual resource exclusively to the computing resource for the duration of execution of the selected workload task”, (Bahramshahry: ¶677). 
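The graph-coloring technique quoted from Cessa above (¶023, ¶029), in which conflicting tasks share an edge and each colour represents a time slot so that adjacent tasks never run in the same slot, can be sketched with a standard greedy colouring. This is a generic illustration of K-colouring for scheduling, not Cessa's own algorithm; the conflict graph is invented for the example.

```python
# Minimal sketch of K-colouring for task scheduling, assuming an invented
# conflict graph: tasks joined by an edge contend for the same resource, and
# each colour (integer) represents a time slot, so no two conflicting tasks
# share a slot. Greedy colouring shown; this is not Cessa's own algorithm.

def colour_schedule(conflicts):
    """conflicts: task -> set of conflicting tasks. Returns task -> time slot."""
    slot = {}
    for task in sorted(conflicts):                 # deterministic order
        used = {slot[n] for n in conflicts[task] if n in slot}
        slot[task] = next(c for c in range(len(conflicts)) if c not in used)
    return slot

# Three mutually conflicting tasks need three slots; the independent task reuses slot 0.
conflicts = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": set()}
schedule = colour_schedule(conflicts)
print(schedule)  # {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```

The number of distinct colours used corresponds to the "total number of time slots in a measurement cycle" in the quoted ¶029, and reusing low-numbered colours keeps the slot count small, spreading non-conflicting work into the same slots.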
Regarding Claim 18, Bahramshahry fails to teach: the graph theory comprises K-colour set problem technique which is used for task scheduling. However, Cessa teaches: “In one embodiment, a graph coloring theory can be used to facilitate a measurement- task scheduling algorithm for network measurement system 100”, (Cessa: ¶023), “This problem can be described as a vertex coloring problem. For a conflict graph G(V,E) with vertices V = V(G), each vertex can be assigned a color out of k (e.g., integers 1, ..., k) colors such that no two adjacent vertices have the same color”… “the color set to be used in the conflict graph can represent a total number of time slots in a measurement cycle”, (Cessa: ¶029). It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine “the graph theory comprises K-colour set problem technique which is used for task scheduling” of Cessa with the methods and systems of Bahramshahry in order to schedule tasks by assigning colors to items so conflicting items don’t get the same color and loads are spread as evenly as possible. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, for the purpose of resolving “measurement contention and to provide efficient task processing”, (Cessa: ¶023). Regarding Claim 19, Bahramshahry teaches: the round-robin technique is used for time-based server utilization, is based on a round-robin time scheduler report provided by the round-robin technique. 
“The capacity round implements a round-robin resource allocation which singularly focuses on available capacity”… “the scheduler 125 then iterates through as many rounds as required to either exhaust all available resources or exhaust all produced tasks”, (Bahramshahry: ¶110), “For a next capacity round, the scheduler then proceeds to calculate the next capacity round by taking into account the recently planned tasks at phase 127 and then the scheduling cycle is optionally finalized 131. A subsequent analyze 132 phase then applies post-scheduling analysis to check any decisions made during the scheduler's allocation rounds”, (Bahramshahry: ¶108). Examiner notes: the scheduler report is being interpreted as which tasks were allocated, how much capacity was consumed, what capacity remains available, and which tasks were deferred to later rounds.

Regarding Claim 20, Bahramshahry teaches: the round-robin technique and the dynamic programming technique is used to prioritize Service Level Agreement (SLA) requirements of the one or more client devices (104). “The capacity round implements a round-robin resource allocation”, (Bahramshahry: ¶110), “the producer 126 additionally specifies the importance or priority for every task created according to the workload type's SLT or required QoS”, (Bahramshahry: ¶109), “calculate an allocation route based on the service level targets and capacity that is known to be available”, (Bahramshahry: ¶124), “scheduling the pending workload tasks to execute via the one or more computing resources in compliance with the selected SLT specified for each of the pending workload tasks”, (Bahramshahry: ¶695).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIHAB ALAM whose telephone number is (571)272-8705. The examiner can normally be reached Mon - Fri 7:30am-5pm. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /S.A./Examiner, Art Unit 2197 /BRADLEY A TEETS/Supervisory Patent Examiner, Art Unit 2197

Prosecution Timeline

Jul 28, 2023: Application Filed
Feb 09, 2026: Non-Final Rejection under §103 and §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
