Prosecution Insights
Last updated: April 19, 2026
Application No. 17/826,071

DYNAMIC AUTOSCALING OF SERVER RESOURCES USING INTELLIGENT DEMAND ANALYTIC SYSTEMS

Non-Final OA — §101, §103
Filed: May 26, 2022
Examiner: XU, ZUJIA
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: PayPal, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 68% (Favorable)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (114 granted / 169 resolved); +12.5% vs TC avg — above average
Interview Lift: +81.5% for resolved cases with interview — strong
Typical Timeline: 3y 6m average prosecution; 33 currently pending
Career History: 202 total applications across all art units

Statute-Specific Performance

§101: 16.0% (-24.0% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 2.0% (-38.0% vs TC avg)
§112: 31.0% (-9.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 169 resolved cases
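The panel figures above reduce to simple arithmetic on the examiner's career counts. A sketch of that arithmetic follows, with the caveat that the Tech Center averages are back-solved from the displayed deltas (each works out to 40.0%) and are assumptions, not sourced numbers:

```python
# Sketch of the arithmetic behind the panel's figures. The Tech Center
# average estimates below are back-solved from the displayed deltas and
# are assumptions, not sourced data.

examiner_rejection_rate = {"101": 16.0, "103": 46.2, "102": 2.0, "112": 31.0}
tc_avg_estimate = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

def vs_tc_avg(statute: str) -> float:
    """Signed percentage-point delta from the Tech Center average."""
    return round(examiner_rejection_rate[statute] - tc_avg_estimate[statute], 1)

deltas = {s: vs_tc_avg(s) for s in examiner_rejection_rate}

# Career allow rate from the raw counts (the panel headline rounds to 68%).
career_allow_rate = round(114 / 169 * 100, 1)  # 67.5
```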

Office Action

§101, §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to the Request for Continued Examination and Applicant's Amendment and Arguments filed on December 10, 2025. Claims 1-20 are pending in this application.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 10, 2025 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1, Statutory Category: Yes. Claim 1 recites a server system performing a series of steps and therefore falls within the statutory category of a machine.
Step 2A, Prong 1 — Judicial Exception Recited: Yes. The claim recites: “analyzing, past computing usage of server resources of the server system during one or more computing tasks over a previous period of time, determining, based on the analyzing, a predicted future computing usage of the server resources at a first time during a future period of time; determining available server resources from the server resources during the future period of time; determining, a baseline level of the available server resources to allocate for the predicted future computing usage based on the past computing usage and one or more rule; determining an allocation of at least one machine from the plurality of machine pools that is designated for a dedicated handling of a computing task associated with the predicted future computing usage, and configuring a computing capacity of the available server resources allocated to one or more computing devices at current time based on the determined allocation of the at least one machine to the dedicated handing of the computing task”, “dynamically sizing the plurality of machine pools to allocate the at least one machine to the computing task, and allocating a reserve machine pool from the plurality of machine pools for additional computing tasks; analyzing a current computing usage of the server resources based on executing the computing tasks and a plurality of additional computing tasks executed using the available server resource at the current time; and retraining the ML engine based on the current computing usage, wherein the retraining includes adjusting the baseline level based on the current computing usage”. As drafted, the claim as a whole recites a server system performing a series of steps that could be performed in the human mind, but for the recitation of generic computing components.
But for the recitation of generic computing components, the human mind can readily perform each of these steps: evaluating past computing usage to determine a usage pattern; predicting future computing usage at a future time based on that pattern; identifying whether server resources will be available at the future time; determining a baseline level of the available server resources to allocate for the predicted future computing usage based on the past computing usage and one or more rules; determining an allocation and configuring the capacity of available computing resources based on the prediction and the determined allocation (i.e., mentally determining whether resources must be increased at a future time and then scheduling more resources at the current time); dynamically sizing the plurality of machine pools to allocate the at least one machine to the computing task; allocating a reserve machine pool from the plurality of machine pools for additional computing tasks; analyzing a current computing usage of the server resources based on executing the computing tasks and a plurality of additional computing tasks executed using the available server resources at the current time; and retraining the ML engine based on the current computing usage, including adjusting the baseline level (i.e., mentally performing a mathematical calculation on the input data to adjust the baseline level based on the current computing usage). Therefore, but for the recitation of generic computing components, these steps fall within the Mental Processes grouping (observations, evaluations, judgments, and opinions that can be performed in the human mind). Accordingly, the claims do recite judicial exceptions.
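For orientation only, the analyze/predict/allocate loop the examiner paraphrases can be sketched in a few lines. This is an illustrative toy, not the applicant's claimed implementation: the one-step trend extrapolation, the headroom factor, and all function names are invented for the example.

```python
from statistics import mean

def predict_future_usage(past_usage: list[float]) -> float:
    """Toy stand-in for the claimed ML prediction: extrapolate the most
    recent trend one step forward."""
    if len(past_usage) < 2:
        return past_usage[-1] if past_usage else 0.0
    return past_usage[-1] + (past_usage[-1] - past_usage[-2])

def baseline_level(past_usage: list[float], headroom: float = 1.2) -> float:
    """One possible 'rule': average past usage times a fixed headroom factor."""
    return mean(past_usage) * headroom

def configure_capacity(predicted: float, baseline: float,
                       available: float) -> float:
    """Allocate at least the baseline, at most what is actually available."""
    return min(max(predicted, baseline), available)

past = [40.0, 50.0, 60.0]
predicted = predict_future_usage(past)              # 70.0
capacity = configure_capacity(predicted,
                              baseline_level(past),  # ~60.0
                              available=100.0)
```

Retraining, in this toy framing, would amount to recomputing `baseline_level` once the current usage sample is appended to `past`.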
Step 2A, Prong 2 — Integration into a Practical Application: No, this judicial exception is not integrated into a practical application. In particular, the claim recites the additional limitations “a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the server system to perform operations”, which are directed to generic computing components/functions (MPEP § 2106.05(b)) merely applying the abstract idea (MPEP § 2106.05(f)). In addition, the limitation “using a machine learning (ML) engine comprising one or more ML models” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP § 2106.05(f)). Further, “wherein the one or more ML models are trained based on demand metrics for a plurality of machine pools capable of processing the one or more computing tasks using the server resources of the server system” and “wherein the available server resources include at least a portion of the plurality of machine pools each having one or more machines operable to perform the one or more computing task” are directed to an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (MPEP § 2106.05(h)). Furthermore, “executing the computing task using the at least one machine from at least one of the plurality of machine pools” merely applies the judicial exception or abstract idea (MPEP § 2106.05(f)) (i.e., merely applying the analyzing, determining, and configuring steps). The combination of these additional elements is no more than mere instructions to apply the exception using generic computer components (MPEP § 2106.05(f)).
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to the abstract idea.

Step 2B — Inventive Concept: No. The additional elements “wherein the one or more ML models are trained based on demand metrics for a plurality of machine pools capable of processing the one or more computing tasks using the server resources of the server system” and “wherein the available server resources include at least a portion of the plurality of machine pools each having one or more machines operable to perform the one or more computing task” are directed to an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (MPEP § 2106.05(h)). The limitation “using a machine learning (ML) engine comprising one or more ML models” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP § 2106.05(f)). Likewise, “executing the computing task using the at least one machine from at least one of the plurality of machine pools” merely applies the judicial exception or abstract idea (MPEP § 2106.05(f)) (i.e., merely applying the analyzing, determining, and configuring steps). Further, the limitation “a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the server system to perform operations” is directed to generic computing components/functions (MPEP § 2106.05(b)) merely applying the abstract idea (MPEP § 2106.05(f)).
Neither these additional elements nor their combination amounts to significantly more than the exception itself or provides an inventive concept under Step 2B. For these reasons, there is no inventive concept in the claim, and the claim is ineligible.

Independent claims 10 and 19 are rejected for the same reasons as claim 1 above. In addition, independent claim 19 further recites “A non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations”. These additional elements are directed to generic computing components/functions (MPEP § 2106.05(b)) merely applying the abstract idea (MPEP § 2106.05(f)).

With respect to dependent claim 2, the claim elaborates that the configuring the computing capacity comprises increasing an amount of the available server resources available to the one or more computing devices at the future period of time from a previously used amount of the server resources utilized during the previous period of time (“configuring the computing capacity comprises increasing an amount of the available computing resources” is treated as part of the abstract idea and is analogous to a mental process, such that the concept can be performed in the human mind).

With respect to dependent claim 3, the claim elaborates that the configuring the computing capacity comprises decreasing an amount of available server resources available to the one or more computing devices at the future period of time from a previously used amount of the server resources utilized during the previous period of time (“configuring the computing capacity comprises decreasing an amount of available computing resources” is treated as part of the abstract idea and is analogous to a mental process, such that the concept can be performed in the human mind).
With respect to dependent claim 4, the claim elaborates that the configuring the computing capacity further comprises preventing the amount of the server resources from falling at or below a baseline threshold of the available server resources (“configuring the computing capacity further comprises preventing the amount of the computing resources from falling…” is treated as part of the abstract idea and is analogous to a mental process: the human mind can adjust or schedule resource capacity to ensure that resources remain above a baseline threshold).

With respect to dependent claim 5, the claim elaborates that, prior to the analyzing, the operations further comprise applying a short-term bias to the past computing usage based on changes to the past computing usage over a portion of the previous period of time, wherein the short-term bias comprises a fitted curve for the past computing usage over the portion of the previous period of time, and wherein the analyzing further uses the applied short-term bias (“applying a short-term bias… wherein the short-term bias comprises a fitted curve for the past computing usage over the portion of the previous period of time, wherein the analyzing further uses the applied short-term bias” is treated as part of the abstract idea and is analogous to a mental process; in addition, the claim as a whole is a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion)).
With respect to dependent claim 6, the claim elaborates that the one or more ML models comprise at least one continuous learning model, and that the analyzing the past computing usage comprises fitting the past computing usage to a short term curve model for a first portion of the previous period of time, a medium term curve model for a second portion of the previous period of time that is longer than the first portion, and a long term curve model for a third portion of the previous period of time longer than the first portion and the second portion (“wherein the one or more ML models comprise at least one continuous learning model” is directed to an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (MPEP § 2106.05(h)); in addition, “wherein the analyzing the past computing usage comprises fitting the past computing usage to a short term curve model…a medium term curve model…and a long term curve model” is treated as part of the abstract idea and is analogous to a mental process; further, the claim as a whole is a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion)).

With respect to dependent claim 7, the claim elaborates that the computing capacity of the available server resources comprises a capacity for performing a processing of electronic transactions using the available server resources (“a capacity for performing a processing of electronic transactions…” is directed to an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (MPEP § 2106.05(h))).
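Claims 5-6 describe fitting curves over differently sized portions of the usage history. A minimal stand-in for the "fitted curve" idea is an ordinary least-squares line over equally spaced samples; the windowing, sample values, and function name below are illustrative assumptions, not the claimed models.

```python
def fit_line(samples: list[float]) -> tuple[float, float]:
    """Least-squares line over equally spaced samples -> (slope, intercept)."""
    n = len(samples)
    x_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(samples))
    sxx = sum((x - x_mean) ** 2 for x in range(n))
    slope = sxy / sxx
    return slope, y_mean - slope * x_mean

usage = [8.0, 9.0, 10.0, 10.0, 12.0, 14.0, 16.0]  # toy usage history
short_term = fit_line(usage[-4:])  # recent window: steeper trend (slope 2.0)
long_term = fit_line(usage)        # whole window: gentler trend (slope 9/7)
```

A "short-term bias" in this framing is simply the short-window fit weighing more heavily in the prediction than the long-window fit.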
With respect to dependent claim 8, the claim elaborates that processing the electronic transactions comprises assessing a risk of fraud for the electronic transactions and assessing an availability of funds to pay for the electronic transactions (this limitation is directed to an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (MPEP § 2106.05(h))).

With respect to dependent claim 9, the claim elaborates that the configuring the computing capacity comprises reserving the computing capacity on one or more external servers that are located in a different location than the server system (“wherein the configuring the computing capacity comprises reserving the computing capacity…” is treated as part of the abstract idea and is analogous to a mental process; in addition, the claim as a whole is a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion)).

With respect to dependent claim 11, the claim elaborates that the configuring the computing capacity comprises providing an increased amount of the available server resources to be accessible by the one or more computing devices at the future period of time from one of a standard amount of the available server resources set by the computer system or a previous amount of the available server resources provided by the computer system to the one or more computing devices (“providing an increased amount of the available server resources to be accessible” is treated as part of the abstract idea and is analogous to a mental process; in addition, the claim as a whole is a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion)).

With respect to dependent claim 12, the claim elaborates that the configuring the computing capacity comprises providing a decreased amount of available server resources to be accessible by the one or more computing devices at the future period of time from one of a standard amount of the available server resources set by the computer system or a previous amount of the available server resources provided by the computer system to the one or more computing devices (“providing a decreased amount of available server resources…” is treated as part of the abstract idea and is analogous to a mental process; in addition, the claim as a whole is a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion)).

With respect to dependent claim 13, the claim elaborates that the providing the decreased amount uses a minimum threshold amount of the available server resources to be provided at the future period of time (“wherein the providing the decreased amount uses a minimum threshold amount of the available computing resources to be provided at the future period of time” is treated as part of the abstract idea and is analogous to a mental process that can be performed in the human mind).
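Claims 11-13 read on bounded scale-up/scale-down: capacity moves toward a requested amount but may never fall below a minimum threshold (claim 13) nor exceed what is available. A toy illustration with invented names and bounds:

```python
def adjust_capacity(requested: float, min_threshold: float,
                    max_available: float) -> float:
    """Move capacity toward the requested amount, clamped so it never
    falls below the minimum threshold or exceeds availability."""
    return min(max(requested, min_threshold), max_available)

adjust_capacity(150.0, 20.0, 100.0)  # scale-up capped at availability
adjust_capacity(5.0, 20.0, 100.0)    # scale-down floored at the threshold
```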
With respect to dependent claim 14, the claim elaborates that the one or more ML models comprise a continuous learning model trained for pattern identification over the previous period of time based on at least one of daily network traffic or weekly network traffic (“wherein the one or more ML models comprise a continuous learning model trained for pattern identification over the previous period of time” is directed to an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (MPEP § 2106.05(h))).

With respect to dependent claim 15, the claim elaborates that the determining the predicted future computing usage of the server resources further uses one or more of a linear curve, a first degree curve, or a second degree curve fitted from past computing usage using the one or more ML models (“determining the predicted future computing usage of the server resources further uses one or more of a linear curve, a first degree curve, or a second degree curve fitted from past computing usage” is treated as part of the abstract idea and is analogous to a mental process; in addition, “using the one or more ML models” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP § 2106.05(f))).
With respect to dependent claim 16, the claim elaborates that the computing capacity is utilized by the one or more computing devices to process electronic transactions using one or more digital wallets or one or more digital accounts provided by the computing system (“wherein the computing capacity is utilized by the one or more computing devices to process electronic transactions using one or more digital wallets” amounts to adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP § 2106.05(f))).

With respect to dependent claim 17, the claim elaborates that the computing capacity comprises one or more fraud detection engines and one or more payment platforms provided by the computing system during electronic transaction processing (“wherein the computing capacity comprises one or more fraud detection engines and one or more payment platforms” is directed to an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (MPEP § 2106.05(h))).

With respect to dependent claim 18, the claim elaborates that configuring the computing capacity comprises utilizing one or more external cloud computing resources to provide the computing capacity to the one or more computing devices (“utilizing one or more external cloud computing resources to provide the computing capacity” is directed to an attempt to generally link the use of the judicial exception to a particular technological environment or field of use (MPEP § 2106.05(h))).
With respect to dependent claim 20, the claim elaborates that the configuring the computing capacity comprises one of: increasing an amount of the available server resources available to the one or more computing devices at the future period of time from a previously used amount of the server resources utilized during the previous period of time; or decreasing an amount of available server resources available to the one or more computing devices at the future period of time from a previously used amount of the server resources utilized during the previous period of time (“configuring the computing capacity comprises one of: increasing an amount of the available server resources” and “decreasing an amount of available server resources” are treated as part of the abstract idea and are analogous to mental processes that can be performed in the human mind).

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office action: (a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 10-12, 15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Radovanovic et al. (US Pub. 2021/0273858 A1) in view of Bogdany et al. (US Pub. 2013/0254374 A1), and further in view of Bryc et al. (US Patent 10,778,599 B2), Grebenisan et al. (US Pub. 2021/0303631 A1), and Vermeulen et al. (US Patent 8,589,549 B1). Radovanovic, Bryc, and Grebenisan were cited in the previous Office Action.
As per claim 1, Radovanovic teaches the invention substantially as claimed, including: A server system (Radovanovic, Fig. 1, 100) comprising: a non-transitory memory (Radovanovic, Fig. 1, 134 memory); and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the server system to perform operations comprising (Radovanovic, Fig. 1, 132 processor, 138 instructions; claim 27, lines 1-4, One or more tangible non-transitory computer-readable media storing computer-readable instructions that when executed by one or more processors cause the one or more processors to perform operations):

analyzing, using a machine learning (ML) engine comprising one or more ML models, past computing usage of server resources of the server system during one or more computing tasks over a previous period of time (Radovanovic, Fig. 1, 100 server system; Fig. 2, 204 input data (as including past computing usage of server resources), 200 machine-learned computing device, network state determination model (as ML model); [0006] lines 3-6, a machine-learned model trained to receive input data including information associated with a plurality of nodes associated with a resource availability and a resource usage; [0003] lines 2-8, Operations associated with the state of a geographic area can be implemented on a variety of computing devices. These operations can include processing data associated with the geographic area for later access and use by a user or computing system.
Further, the operations can include sending and receiving data to remote computing systems…the types of operations and the way in which the operations (as one or more computing tasks executed using the server resources) are performed can change over time; [0224] lines 6-13, the computing system 102 can use PCA to determine a set of resource costs associated with the plurality of nodes based on previously determined data (e.g., the historical training data described in 1002 and 1004 of the method 1000 that is depicted in FIG. 10) associated with the state of the network (e.g., the resource availability and resource usage in the past; also see [0123] lines 8-9, the resource usage can be associated with usage (e.g., usage of network bandwidth) (as server resources) and [0142]),

wherein the one or more ML models are trained based on demand metrics for processing the one or more computing tasks using the server resources of the server system (Radovanovic, [0071] lines 4-10, the machine-learned model can generate data indicative of future energy costs and/or prices, based on prior training using historical energy costs and/or prices. Further, the machine-learned model can generate data indicative of future network bandwidth costs and/or prices, based on prior training using historical network bandwidth costs and/or prices; [0073] lines 2-10, historical training data including historical resource availability (e.g., how much network bandwidth was available at certain days or certain hours of the day in the past), historical resource usage (e.g., how much network bandwidth was used at certain days or certain hours of the day in the past) (as a whole, demand metrics, i.e., how much resource was used in the past), and/or a ground-truth resource cost (e.g., the price of network bandwidth in the past) for a resource provided in association with a plurality of nodes over a plurality of time intervals);

determining, based on the analyzing, a predicted future computing usage of the server resources at a first time during a future period of time (Radovanovic, [0030] lines 2-18, a computing system that receives network data that includes information associated with a network that includes a plurality of nodes respectively associated with a plurality of resources…the resource usage can be associated with usage of the resource from at least the portion of nodes at the initial time interval…resource usage is total regional nodal usage.
Furthermore, the system can, through use of the network data and a machine-learned model, determine network topology information and/or predict various aspects of the network including the state of the network at a future time interval (as a first time during a future period of time) including resource costs, resource usage, and/or resource costs of a portion of the nodes of the network);

determining available server resources from the server resources during the future period of time (Radovanovic, [0005] lines 13-18, determining, by the one or more computing devices, based at least in part on the network data and a machine-learned model, the resource availability and the resource usage for at least the portion of the plurality of nodes at a time interval subsequent to the initial time interval; [0031] lines 7-10, provides a way to more accurately predict various aspects of a network including resource costs, resource availability, resource usage, whether a node in a network is active, and/or the state of connections between nodes in the network);

configuring a computing capacity of the available server resources allocated to one or more computing devices (Radovanovic, [0043] lines 2-21, generate data indicative of at least one network optimization based at least in part on the one or more predictions. In some embodiments, the network computing system may operate at least part of the network according to the at least one network optimization. For example, based on data associated with the one or more predictions (e.g., a predicted future resource availability or demand for a resource at a future time interval) the network computing system can generate one or more control signals that can be used to activate one or more devices and/or systems associated with providing and/or generating the resource. For example, based on one or more predictions that the demand for a resource (e.g., electrical power) will increase in thirty minutes, the network computing system can route more electrical power from one or more electrical power stations and/or increase the amount of electrical power that will be made available by the one or more electrical power stations. In this way, the disclosed technology can more optimally provide a resource in accordance with demand for the resource, which can result in less congestion (as configuring a computing capacity of the available server resources)).

Radovanovic fails to explicitly teach determining, using the ML engine, a baseline level of the available server resources to allocate for the predicted future computing usage based on the past computing usage and one or more rule.

However, Bogdany teaches determining, using the ML engine, a baseline level of the available server resources to allocate for the predicted future computing usage based on the past computing usage and one or more rule (Bogdany, Fig. 4, 70 computing resource allocation engine, 72 rules, 82 baseline, 84 forecasted, 76A-N; [0063] lines 1-10, engine 70 may perform multiple functions similar to a general-purpose computer.
Specifically, among other functions, engine 70 may (among other things): determine a baseline computing resource allocation 82 (e.g., a level of computing resource needed to be an expected/traditional/usual level of traffic in the networked computing environment 50) for the networked computing environment 50 based upon historical computing resource data 76A-N stored in at least one computer storage device 74A-N (e.g., by analyzing historical log(s)); receive social networking trend data 80 corresponding to usage of a set of social networking websites 78A-N; analyze the social networking trend data 80 (e.g., in real-time) to determine a forecasted computing resource allocation 84 based on social networking trends; [0065] lines 1-10, embodiments of the present invention may utilize various analytical measures (e.g., log analysis to mine for commonly or frequently occurring trends, popularity of a given topic, etc.) to determine a baseline computing resource allocation. This baseline computing resource allocation typically indicates appropriate levels of computing resources to serve expected/typical levels of network traffic. For example, in an "events" infrastructure, a baseline computing resource allocation plan/protocol for a tennis tournament may be derived through a historical log analysis that described an impact that various players may have on the infrastructure on specific days of the tournament).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic with Bogdany, because Bogdany's teaching of determining a baseline level of resources for allocation would have provided Radovanovic's system with the capability to meet dynamic customer computational demands, thereby improving system performance and resource utilization (see Bogdany, [0009]).
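Bogdany's baseline (a resource level sized for "expected/usual" traffic, derived from historical logs) could be approximated by a nearest-rank percentile over logged demand. The percentile choice, the toy log, and the function name below are assumptions for illustration, not Bogdany's disclosed method:

```python
def baseline_from_history(samples: list[float], percent: int = 95) -> float:
    """Nearest-rank percentile of historical demand samples: a baseline
    that would have covered `percent`% of the observed load."""
    ordered = sorted(samples)
    k = (percent * len(ordered) + 99) // 100  # nearest rank, 1-based
    return ordered[k - 1]

history = [float(x) for x in range(1, 101)]  # toy demand log
baseline_from_history(history)  # 95.0
```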
Radovanovic and Bogdany fail to explicitly teach the demand metrics is for a plurality of machine pools capable of processing the one or more computing tasks, wherein the available server resources include at least a portion of the plurality of machine pools each having one or more machines operable to perform the one or more computing tasks; determining an allocation of at least one machine from the plurality of machine pools that is designated for a dedicated handling of a computing task associated with the predicted future computing usage, and when configuring the computing capacity, it is configuring at a current time based on the determined allocation of the at least one machine to the dedicated handling of the computing task, and executing the computing task using the at least one machine from at least one of the plurality of machine pools, wherein the executing the computing task comprises: dynamically sizing the plurality of machine pools to allocate the at least one machine to the computing task, and allocating a reserve machine pool from the plurality of machine pools for additional computing tasks; analyzing a current computing usage of the server resources based on executing the computing tasks and a plurality of additional computing tasks executed using the available server resource at the current time; and retraining the ML engine based on the current computing usage, wherein the retraining includes adjusting the baseline level based on the current computing usage. However, Bryc teaches the demand metrics is for a plurality of machine pools capable of processing the one or more computing tasks (Bryc, Col 1, lines 6-12, Many web applications such as those that provide data services use large amounts of provisioned network resources. These applications/data services may be run on cloud computing resources to service client requests. 
For example, Amazon® Elastic Compute Cloud (Amazon EC2®) is a cloud-based service that supports enterprise data services by providing variable computing capacity for a fee (as including plurality of machine pools capable of processing); also see Col 1, lines 25-30, Amazon® has a concept of an Auto Scaling Group (ASG) in its cloud system, which automatically provisions (scales up) additional EC2® resource instances after detecting increases in certain traffic/load-related metrics, such as CPU or memory utilization. Deprovisioning is similarly automatic as load decreases), wherein the available server resources include at least a portion of the plurality of machine pools each having one or more machines operable to perform the one or more computing tasks (Bryc, Col 1, lines 6-12, Many web applications such as those that provide data services use large amounts of provisioned network resources. These applications/data services may be run on cloud computing resources to service client requests. For example, Amazon® Elastic Compute Cloud (Amazon EC2®) is a cloud-based service that supports enterprise data services by providing variable computing capacity for a fee; Col 3, lines 50-61, provision computing resource instances such as virtual machines, each virtual machine having certain processing and memory resources. However, the technology described herein may be used in an enterprise's own network hardware instead of or in addition to cloud-based services, and may be used on virtual and/or physical machines. 
Moreover, the technology is independent of any particular virtual machine or other machine configuration, and indeed, may be used to allocate more specific computing resources (memory and CPU resources) within virtual and/or physical machines, and similarly may be used to allocate storage, bandwidth, number of connections and other computing resources); determining an allocation of at least one machine from the plurality of machine pools that is designated for a dedicated handling of a computing task associated with the predicted future computing usage (Bryc, Fig. 3, services 1-3 (as including computing task). Time Tc (as future); Col 8, lines 58-62, allocate more specific computing resources (memory and CPU resources) within virtual and/or physical machines, and similarly may be used to allocate storage, bandwidth, number of connections and other computing resources; Col 5, lines 8-14, depending on the computing resource parameters available for changing the current allocation, such as the number of virtual machines, the memory and/or the processing power, the data service can modify the amount of computing resources needed. The cloud computing system 112 (e.g., the vendor) may work with the data service 102 (e.g., the customer) to meet the customer needs; Col 6, lines 31-48, Fig. 3, at some time Tb, the predictive computing resource provisioning logic 114 (FIG. 1) or an offline scheduler has predicted (e.g., based upon the historical data 116, state data 118, and possibly the current allocation data 122) that a traffic increase is likely forthcoming. Thus, starting at time Tb, (the prediction time), the predictive computing resource provisioning logic 114 instructs the cloud computing system 112 to begin spinning up new computing resources. 
Note that the increase request may be on a per-service basis, e.g., service 1 needs to increase from three to six resource instances, service 2 to increase from two to four resource instances, and service 3 from four to five instances, as represented by the services 308(1)(Tc), 308(2)(Tc) and 308(3)(Tc). Thus, by the provisioning time Tc, which is some (generally relatively short) time before the actual traffic/load increase starts, the services already have the sufficient resource instances to handle the increased traffic/load), and when configuring the computing capacity, it is configuring at a current time based on the determined allocation of the at least one machine to the dedicated handling of the computing task (Bryc, Fig. 5; Col 2, lines 38-40, FIG. 5 is an example graphical representation of how proactive provisioning of computing resources based upon predictions occurs before actual demand takes place; Col 4, lines 26-33, The technology described herein comprises predictive computing resource provisioning logic 114 that uses predictive data comprising historical data 116 and/or state data 118 to proactively provision computing resources in advance of their actual need. Example historical data 116 includes (but is not limited to) prior traffic patterns, prior load (e.g., the size and shape of the traffic) and any other metrics that may be used to predict a need for computing resources; also see Col 11, lines 35-39, smoothing is one optimization set forth above, so as to only request re-provisioning when a threshold change occurs. 
Instead of making a change every minute, smoothing can reduce the number of change events (as configuring at the current time based on the determined allocation (i.e., allocating/configuring the resources in advance (i.e., at the current time); also see Col 3, lines 50-61 for allocation of machines of machine pools)), and executing the computing task using the at least one machine from at least one of the plurality of machine pools (Bryc, Fig. 2, 108 services, 110; Col 1, lines 6-12, Many web applications such as those that provide data services use large amounts of provisioned network resources. These applications/data services may be run on cloud computing resources to service client requests. For example, Amazon® Elastic Compute Cloud (Amazon EC2®) is a cloud-based service that supports enterprise data services by providing variable computing capacity for a fee)), wherein the executing the computing task comprises: dynamically sizing the plurality of machine pools to allocate the at least one machine to the computing task (Bryc, Col 5, lines 8-14, depending on the computing resource parameters available for changing the current allocation, such as the number of virtual machines, the memory and/or the processing power, the data service can modify the amount of computing resources needed. The cloud computing system 112 (e.g., the vendor) may work with the data service 102 (e.g., the customer) to meet the customer needs; Col 6, lines 31-48, Fig. 3, at some time Tb, the predictive computing resource provisioning logic 114 (FIG. 1) or an offline scheduler has predicted (e.g., based upon the historical data 116, state data 118, and possibly the current allocation data 122) that a traffic increase is likely forthcoming. Thus, starting at time Tb, (the prediction time), the predictive computing resource provisioning logic 114 instructs the cloud computing system 112 to begin spinning up new computing resources. 
Note that the increase request may be on a per-service basis, e.g., service 1 needs to increase from three to six resource instances, service 2 to increase from two to four resource instances, and service 3 from four to five instances, as represented by the services 308(1)(Tc), 308(2)(Tc) and 308(3)(Tc). Thus, by the provisioning time Tc, which is some (generally relatively short) time before the actual traffic/load increase starts, the services already have the sufficient resource instances to handle the increased traffic/load (as dynamically sizing based on the needs), and allocating a reserve machine pool from the plurality of machine pools for additional computing tasks (Bryc, Fig. 1, 104(1) to 104(m) clients (as including additional computing tasks); Fig. 3 different services (as including additional computing tasks); Col 6, lines 14-30, Fig. 3 shows the concept of a number of (e.g., three) different services 308(1)-308(3) (as including additional computing tasks) having a number of corresponding resources (each labeled “R”) running on the cloud computing system to support that service's functionality. In FIG. 3, the number of boxes labeled “R” in each service represents the current number of resource instances allocated to that service at a given time. Thus for example, at time Ta, service 1 is labeled 308(1)(Ta) to represent service 1 at time Ta, which as can be seen, has three resource instances allocated thereto. At time Ta, the service 308(2)(Ta) has two resource instances, and the service 308(3)(Ta) has four resource instances. Note that FIG. 
3 is only for purposes of illustration, and that any practical number of services may be running concurrently; for example, any service may have any practical number of resource instances allocated to that service at a given time, such as on the order of tens, hundreds, thousands or even more); analyzing a current computing usage of the server resources based on executing the computing tasks and a plurality of additional computing tasks executed using the available server resource at the current time (Bryc, Col 6, lines 31-48, at some time Tb, the predictive computing resource provisioning logic 114 (FIG. 1) or an offline scheduler has predicted (e.g., based upon the historical data 116, state data 118, and possibly the current allocation data 122) that a traffic increase is likely forthcoming. Thus, starting at time Tb, (the prediction time), the predictive computing resource provisioning logic 114 instructs the cloud computing system 112 to begin spinning up new computing resources. Note that the increase request may be on a per-service basis, e.g., service 1 needs to increase to from three to six resource instances, service 3 to increase from two to four resource instances, and service 3 from four to five instances, as represented by the services 308(1)(Tc), 308(2)(Tc) and 308(3)(Tc). Thus, by the provisioning time Tc, which is some (generally relatively short) time before the actual traffic/load increase starts, the services already have the sufficient resource instances to handle the increased traffic/load). 
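For illustration only (not code from the Bryc reference), the proactive, per-service provisioning mapped above — scaling each service's pool to its forecast need before the load arrives, while holding back a reserve pool for additional tasks — can be sketched as below. The per-machine capacity, the reserve size, and the service names are invented parameters.

```python
import math

def provision(predicted_load, per_machine_capacity=100, reserve=2):
    """Size each service's machine pool ahead of predicted demand:
    each service gets enough machines for its forecast load, and a
    reserve pool is held back for additional computing tasks."""
    sizes = {svc: math.ceil(load / per_machine_capacity)
             for svc, load in predicted_load.items()}
    sizes["reserve"] = reserve  # machines kept for unplanned tasks
    return sizes

# Hypothetical forecasts in requests/sec; the resulting pool sizes
# (six, four, and five instances) mirror Bryc's Fig. 3 example at time Tc
print(provision({"service1": 550, "service2": 320, "service3": 410}))
```

Smoothing, as the reference describes, would additionally suppress a re-provisioning request when the newly computed sizes differ from the current allocation by less than a threshold.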
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic and Bogdany with Bryc because Bryc’s teaching of configuring the resource in advance based on a future resource prediction from historical load analysis would have provided Radovanovic and Bogdany’s system with the advantage and capability to efficiently allocate resources ahead of future needs, thereby optimizing resource allocation and system efficiency (see Bryc, Col 10, lines 38-45, “optimizing/smoothing”). Radovanovic, Bogdany and Bryc fail to explicitly teach that the machine learning (ML) device is the machine learning (ML) engine comprising one or more ML models. However, Grebenisan teaches the machine learning (ML) device is the machine learning (ML) engine comprising one or more ML models (Grebenisan, Fig. 1B, 124 machine learning engine, 126, 128 and 130 (as one or more ML models); [0109] lines 3-5, the machine learning engine 124 for performing the forecasting of expected storage demands (e.g. via the trends and behavior module 126); also see [0042] lines 1-3, trained machine learning model comprises multiple trained machine learning models). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany and Bryc with Grebenisan because Grebenisan’s teaching of a machine learning engine having different machine learning models/modules, each providing different operations for predicting future resource usage, would have provided Radovanovic, Bogdany and Bryc’s system with the advantage and capability to increase the accuracy of predicting future resource usage, thereby improving resource utilization and system performance (see Grebenisan, [0020] “creating an accurate prediction of storage usage” and [0023] “it would be advantageous if future storage usage demands of a distributed file management system could be accurately anticipated, such that storage needs may be better managed for real-time use and adaptability within a shared distributed file system of a big data platform”). Radovanovic, Bogdany, Bryc and Grebenisan fail to explicitly teach retraining the ML engine based on the current computing usage, wherein the retraining includes adjusting the baseline level based on the current computing usage. However, Vermeulen teaches retraining the ML engine based on the current computing usage, wherein the retraining includes adjusting the baseline level based on the current computing usage (Vermeulen, Col 13, lines 19-35, resource utilization during a given interval of time may be predicted according to static, invariant inputs, such as the type of the resource 100 being predicted, the identity of the interval, etc. For example, in one embodiment, utilization of a communication resource 100 may be statically predicted according to the interval of interest (e.g., may be statically predicted to be 15% during the hour 1:00-1:59 a.m., 67% during the hour 10:00-10:59 a.m., etc.). 
By contrast, a dynamic predictive model of resource utilization may take into account similar static variables as a static prediction, but may also depend on actual historical behavior of some inputs to the prediction. For example, in one embodiment a dynamic prediction of utilization of a communication resource 100 during the interval of 10:00-10:59 a.m. may include a static baseline prediction (e.g., 67%) that may be adjusted up or down depending on historical utilization of the communication resource 100 during the preceding hour (as current computing usage, since it can simply be the current-time utilization used to predict the upcoming interval, and this is a dynamic predictive model, which is retrained each time a prediction is made based on the constant feeding of current resource usage as input); also see Col 13, lines 48-49, a dynamic prediction of utilization may take into account primarily the current state of such variables). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc and Grebenisan with Vermeulen because Vermeulen’s teaching of utilizing the model to keep predicting resource usage for subsequent time intervals based on the constant feeding of current resource usage as input would have provided Radovanovic, Bogdany, Bryc and Grebenisan’s system with the advantage and capability to accurately predict future resource usage and adjust the baseline level of the resource, thereby improving system performance and efficiency. As per claim 2, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 1 above. 
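For illustration only (not code from the Vermeulen reference), the dynamic predictive model described above — a static per-interval baseline adjusted up or down by recently observed utilization — can be sketched with an exponentially weighted update. The smoothing factor and the numbers are assumptions standing in for whatever adjustment rule the reference actually uses.

```python
def adjust_baseline(static_baseline, recent_utilization, alpha=0.3):
    """Blend a static baseline prediction with currently observed
    usage, so the prediction for the next interval tracks actual
    behavior (a stand-in for the baseline adjustment described)."""
    return (1 - alpha) * static_baseline + alpha * recent_utilization

baseline = 67.0  # static prediction for the 10:00-10:59 interval (%)
for observed in (80.0, 75.0, 62.0):  # successive current-usage readings
    baseline = adjust_baseline(baseline, observed)
print(round(baseline, 2))
```

Each pass through the loop corresponds to one "retraining" step in the claim's sense: the baseline level is adjusted based on the current computing usage before the next prediction is made.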
Radovanovic further teaches wherein the configuring the computing capacity comprises increasing an amount of the available computing resources available to the one or more computing devices at the future period of time from a previously used amount of the server resources utilized during the previous period of time (Radovanovic, [0043] lines 2-21, generate data indicative of at least one network optimization based at least in part on the one or more predictions. In some embodiments, the network computing system may operate at least part of the network according to the at least one network optimization. For example, based on data associated with the one or more predictions (e.g., a predicted future resource availability or demand for a resource at a future time interval) the network computing system can generate one or more control signals that can be used to activate one or more devices (as increasing an amount of the available computing resources) and/or systems associated with providing and/or generating the resource. For example, based on one or more predictions that the demand for a resource (e.g., electrical power) will increase in thirty minutes, the network computing system can route more electrical power from one or more electrical power stations and/or increase the amount of electrical power that will be made available by the one or more electrical power stations. In this way, the disclosed technology can more optimally provide a resource in accordance with demand for the resource, which can result in less congestion). As per claim 3, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 1 above. 
Radovanovic further teaches wherein the configuring the computing capacity comprises decreasing an amount of available server resources available to the one or more computing devices at the future period of time from a previously used amount of the server resources utilized during the previous period of time (Radovanovic, [0198] lines 7-22, the network optimization can include a different combination of the resource generation mix (e.g., different combinations of solar power, hydroelectric power, and/or natural gas power) and/or changes to the available capacity of the one or more nodes. By way of further example, based on data associated with the one or more predictions (e.g., a predicted future resource usage) the computing device 102 can generate one or more control signals that can be used to activate one or more devices and/or systems associated with providing and/or generating the resource within the network. For example, based on one or more predictions that the demand for a resource (e.g., electrical power) will decrease in one hour, the network computing system can send one or more signals to begin a reduction (as decreasing an amount of available server resources) in the amount of electrical power that is provided by one or more electrical power stations). As per claim 10, Radovanovic teaches the invention substantially as claimed including A method comprising: detecting, by a computer system, past computing usage of server resources of the computer system during an execution of one or more computing tasks over a previous period of time (Radovanovic, Fig. 1, 100 (as computer system); Fig. 
2, 204 input data (as including past computing usage of server resources), 200 machine-learned computing device, network state determination model; [0006] lines 3-6, a machine-learned model trained to receive input data including information associated with a plurality of nodes associated with a resource availability and a resource usage; [0003] lines 2-8, Operations associated with the state of a geographic area can be implemented on a variety of computing devices. These operations can include processing data associated with the geographic area for later access and use by a user or computing system. Further, the operations can include sending and receiving data to remote computing systems…the types of operations and the way in which the operations (as one or more computing tasks executed using the server resources) are performed can change over time; [0224] lines 6-13, the computing system 102 can use PCA to determine a set of resource costs associated with the plurality of nodes based on previously determined data (e.g., the historical training data described in 1002 and 1004 of the method 1000 that is depicted in FIG. 10) associated with the state of the network (e.g., the resource availability and resource usage in the past; also see [0123] lines 8-9, the resource usage can be associated with usage (e.g., usage of network bandwidth) (as server resources) and [0142]); determining, by the computer system using a machine learning (ML) device comprising one or more ML models, a predicted future computing usage of the server resources at a first time during a future period of time based on the past computing usage (Radovanovic, Fig. 1, 100 (as computer system); Fig. 
2, 204 input data (as including past computing usage of server resources), 200 machine-learned computing device, network state determination model; [0030] lines 2-18, a computing system that receives network data that includes information associated with a network that includes a plurality of nodes respectively associated with a plurality of resources…the resource usage can be associated with usage of the resource from at least the portion of nodes at the initial time interval…resource usage is total regional nodal usage. Furthermore, the system can, through use of the network data and a machine-learned model, determine network topology information and/or predict various aspects of the network including the state of the network at a future time interval including resource costs, resource usage, and/or resource costs of a portion of the nodes of the network), wherein the one or more ML models are trained based on demand metrics for using the server resources of the computer system (Radovanovic, [0071] lines 4-10, the machine-learned model can generate data indicative of future energy costs and/or prices, based on prior training using historical energy costs and/or prices. 
Further, the machine-learned model can generate data indicative of future network bandwidth costs and/or prices, based on prior training using historical network bandwidth costs and/or prices; [0073] lines 2-10, historical training data including historical resource availability (e.g., how much network bandwidth was available at certain days or certain hours of the day in the past), historical resource usage (e.g., how much network bandwidth was used at certain days or certain hours of the day in the past) (as a whole, as demand metrics, i.e., how much of the resources were used in the past), and/or a ground-truth resource cost (e.g., the price of network bandwidth in the past) for a resource provided in association with a plurality of nodes over a plurality of time intervals); determining, by the computer system, available server resources from the server resources during the future period of time (Radovanovic, [0005] lines 13-18, determining, by the one or more computing devices, based at least in part on the network data and a machine-learned model, the resource availability and the resource usage for at least the portion of the plurality of nodes at a time interval subsequent to the initial time interval; [0031] lines 7-10, provides a way to more accurately predict various aspects of a network including resource costs, resource availability, resource usage, whether a node in a network is active, and/or the state of connections between nodes in the network); and configuring, by the computer system, a computing capacity of the available server resources allocated to one or more computing devices (Radovanovic, [0043] lines 2-21, generate data indicative of at least one network optimization based at least in part on the one or more predictions. In some embodiments, the network computing system may operate at least part of the network according to the at least one network optimization. 
For example, based on data associated with the one or more predictions (e.g., a predicted future resource availability or demand for a resource at a future time interval) the network computing system can generate one or more control signals that can be used to activate one or more devices and/or systems associated with providing and/or generating the resource. For example, based on one or more predictions that the demand for a resource (e.g., electrical power) will increase in thirty minutes, the network computing system can route more electrical power from one or more electrical power stations and/or increase the amount of electrical power that will be made available by the one or more electrical power stations. In this way, the disclosed technology can more optimally provide a resource in accordance with demand for the resource, which can result in less congestion (as configuring a computing capacity of the available server resources). Radovanovic fails to explicitly teach determining, by the computer system, using the ML engine, a baseline level of the available server resources to allocate for the predicted future computing usage based on the past computing usage and one or more rule. However, Bogdany teaches determining, by the computer system, using the ML engine, a baseline level of the available server resources to allocate for the predicted future computing usage based on the past computing usage and one or more rule (Bogdany, Fig. 4, 70 computing resource allocation engine, 72 rules, 82 baseline, 84 forecasted, 76A-N; [0063] lines 1-10, engine 70 may perform multiple functions similar to a general-purpose computer. 
Specifically, among other functions, engine 70 may (among other things): determine a baseline computing resource allocation 82 (e.g., a level of computing resource needed to be an expected/traditional/usual level of traffic in the networked computing environment 50) for the networked computing environment 50 based upon historical computing resource data 76A-N stored in at least one computer storage device 74A-N (e.g., by analyzing historical log(s)); receive social networking trend data 80 corresponding to usage of a set of social networking websites 78A-N; analyze the social networking trend data 80 (e.g., in real-time) to determine a forecasted computing resource allocation 84 based on social networking trends; [0065] lines 1-10, embodiments of the present invention may utilize various analytical measures (e.g., log analysis to mine for commonly or frequently occurring trends, popularity of a given topic, etc.), to determine a baseline computing resource allocation. This baseline computing resource allocation typically indicates appropriate levels of computing resources to serve expected/typical levels of network traffic. For example, in an "events" infrastructure, a baseline computing resource allocation plan/protocol for a tennis tournament may be derived through a historical log analysis that described an impact that various players may have on the infrastructure on specific days of the tournament). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic with Bogdany because Bogdany’s teaching of determining a baseline level of resource for allocation would have provided Radovanovic’s system with the advantage and capability to meet dynamic customer computational demands, thereby improving system performance and resource utilization (see Bogdany, [0009]). 
Radovanovic and Bogdany fail to explicitly teach the demand metrics is for a plurality of machine pools capable of processing the one or more computing tasks, wherein the available server resources include at least a portion of the plurality of machine pools each having one or more machines operable to perform the one or more computing tasks; determining, by the computer system, an allocation of at least one machine from the plurality of machine pools that is designated for a dedicated handling of a computing task associated with the predicted future computing usage, and when configuring the computing capacity, it is configuring at the current time based on the determined allocation of the at least one machine and executing, by the computer system, the computing task using the at least one machine from at least one of the plurality of machine pools, wherein the executing the computing task comprises: dynamically sizing the plurality of machine pools to allocate the at least one machine to the computing task, and allocating a reserve machine pool from the plurality of machine pools for additional computing tasks; analyzing, by the computer system, a current computing usage of the server resources based on executing the computing tasks and a plurality of additional computing tasks executed using the available server resource at the current time; and retraining, by the computer system, the ML engine based on the current computing usage, wherein the retraining includes adjusting the baseline level based on the current computing usage. However, Bryc teaches the demand metrics is for a plurality of machine pools capable of processing the one or more computing tasks (Bryc, Col 1, lines 6-12, Many web applications such as those that provide data services use large amounts of provisioned network resources. These applications/data services may be run on cloud computing resources to service client requests. 
For example, Amazon® Elastic Compute Cloud (Amazon EC2®) is a cloud-based service that supports enterprise data services by providing variable computing capacity for a fee (as including plurality of machine pools capable of processing); also see Col 1, lines 25-30, Amazon® has a concept of an Auto Scaling Group (ASG) in its cloud system, which automatically provisions (scales up) additional EC2® resource instances after detecting increases in certain traffic/load-related metrics, such as CPU or memory utilization. Deprovisioning is similarly automatic as load decreases), wherein the available server resources include at least a portion of the plurality of machine pools each having one or more machines operable to perform the one or more computing tasks (Bryc, Col 1, lines 6-12, Many web applications such as those that provide data services use large amounts of provisioned network resources. These applications/data services may be run on cloud computing resources to service client requests. For example, Amazon® Elastic Compute Cloud (Amazon EC2®) is a cloud-based service that supports enterprise data services by providing variable computing capacity for a fee; Col 3, lines 50-61, provision computing resource instances such as virtual machines, each virtual machine having certain processing and memory resources. However, the technology described herein may be used in an enterprise's own network hardware instead of or in addition to cloud-based services, and may be used on virtual and/or physical machines. 
Moreover, the technology is independent of any particular virtual machine or other machine configuration, and indeed, may be used to allocate more specific computing resources (memory and CPU resources) within virtual and/or physical machines, and similarly may be used to allocate storage, bandwidth, number of connections and other computing resources); determining, by the computer system, an allocation of at least one machine from the plurality of machine pools that is designated for a dedicated handling of a computing task associated with the predicted future computing usage (Bryc, Fig. 3, services 1-3 (as including computing task). Time Tc (as future); Col 8, lines 58-62, allocate more specific computing resources (memory and CPU resources) within virtual and/or physical machines, and similarly may be used to allocate storage, bandwidth, number of connections and other computing resources; Col 5, lines 8-14, depending on the computing resource parameters available for changing the current allocation, such as the number of virtual machines, the memory and/or the processing power, the data service can modify the amount of computing resources needed. The cloud computing system 112 (e.g., the vendor) may work with the data service 102 (e.g., the customer) to meet the customer needs; Col 6, lines 31-48, Fig. 3, at some time Tb, the predictive computing resource provisioning logic 114 (FIG. 1) or an offline scheduler has predicted (e.g., based upon the historical data 116, state data 118, and possibly the current allocation data 122) that a traffic increase is likely forthcoming. Thus, starting at time Tb, (the prediction time), the predictive computing resource provisioning logic 114 instructs the cloud computing system 112 to begin spinning up new computing resources. 
Note that the increase request may be on a per-service basis, e.g., service 1 needs to increase from three to six resource instances, service 2 to increase from two to four resource instances, and service 3 from four to five instances, as represented by the services 308(1)(Tc), 308(2)(Tc) and 308(3)(Tc). Thus, by the provisioning time Tc, which is some (generally relatively short) time before the actual traffic/load increase starts, the services already have the sufficient resource instances to handle the increased traffic/load), and when configuring the computing capacity, it is configuring, by the computer system, at a current time based on the determined allocation of the at least one machine to the dedicated handling of the computing task (Bryc, Fig. 5; Col 2, lines 38-40, FIG. 5 is an example graphical representation of how proactive provisioning of computing resources based upon predictions occurs before actual demand takes place; Col 4, lines 26-33, The technology described herein comprises predictive computing resource provisioning logic 114 that uses predictive data comprising historical data 116 and/or state data 118 to proactively provision computing resources in advance of their actual need. Example historical data 116 includes (but is not limited to) prior traffic patterns, prior load (e.g., the size and shape of the traffic) and any other metrics that may be used to predict a need for computing resources; also see Col 11, lines 35-39, smoothing is one optimization set forth above, so as to only request re-provisioning when a threshold change occurs.
Instead of making a change every minute, smoothing can reduce the number of change events (as configuring at the current time based on the determined allocation (i.e., allocating/configuring the resources in advance (i.e., as current time); also see Col 3, lines 50-61 for allocation of machines of machine pools)), and executing, by the computer system, the computing task using the at least one machine from at least one of the plurality of machine pools (Bryc, Fig. 2, 108 services, 110; Col 1, lines 6-12, Many web applications such as those that provide data services use large amounts of provisioned network resources. These applications/data services may be run on cloud computing resources to service client requests. For example, Amazon® Elastic Compute Cloud (Amazon EC2®) is a cloud-based service that supports enterprise data services by providing variable computing capacity for a fee)), wherein the executing the computing task comprises: dynamically sizing the plurality of machine pools to allocate the at least one machine to the computing task (Bryc, Col 5, lines 8-14, depending on the computing resource parameters available for changing the current allocation, such as the number of virtual machines, the memory and/or the processing power, the data service can modify the amount of computing resources needed. The cloud computing system 112 (e.g., the vendor) may work with the data service 102 (e.g., the customer) to meet the customer needs; Col 6, lines 31-48, Fig. 3, at some time Tb, the predictive computing resource provisioning logic 114 (FIG. 1) or an offline scheduler has predicted (e.g., based upon the historical data 116, state data 118, and possibly the current allocation data 122) that a traffic increase is likely forthcoming. Thus, starting at time Tb, (the prediction time), the predictive computing resource provisioning logic 114 instructs the cloud computing system 112 to begin spinning up new computing resources.
Note that the increase request may be on a per-service basis, e.g., service 1 needs to increase from three to six resource instances, service 2 to increase from two to four resource instances, and service 3 from four to five instances, as represented by the services 308(1)(Tc), 308(2)(Tc) and 308(3)(Tc). Thus, by the provisioning time Tc, which is some (generally relatively short) time before the actual traffic/load increase starts, the services already have the sufficient resource instances to handle the increased traffic/load (as dynamically sizing based on the needs), and allocating a reserve machine pool from the plurality of machine pools for additional computing tasks (Bryc, Fig. 1, 104(1) to 104(m) clients (as including additional computing tasks); Fig. 3 different services (as including additional computing tasks); Col 6, lines 14-30, Fig. 3 shows the concept of a number of (e.g., three) different services 308(1)-308(3) (as including additional computing tasks) having a number of corresponding resources (each labeled “R”) running on the cloud computing system to support that service's functionality. In FIG. 3, the number of boxes labeled “R” in each service represents the current number of resource instances allocated to that service at a given time. Thus for example, at time Ta, service 1 is labeled 308(1)(Ta) to represent service 1 at time Ta, which as can be seen, has three resource instances allocated thereto. At time Ta, the service 308(2)(Ta) has two resource instances, and the service 308(3)(Ta) has four resource instances. Note that FIG.
3 is only for purposes of illustration, and that any practical number of services may be running concurrently; for example, any service may have any practical number of resource instances allocated to that service at a given time, such as on the order of tens, hundreds, thousands or even more); analyzing, by the computer system, a current computing usage of the server resources based on executing the computing tasks and a plurality of additional computing tasks executed using the available server resources at the current time (Bryc, Col 6, lines 31-48, at some time Tb, the predictive computing resource provisioning logic 114 (FIG. 1) or an offline scheduler has predicted (e.g., based upon the historical data 116, state data 118, and possibly the current allocation data 122) that a traffic increase is likely forthcoming. Thus, starting at time Tb, (the prediction time), the predictive computing resource provisioning logic 114 instructs the cloud computing system 112 to begin spinning up new computing resources. Note that the increase request may be on a per-service basis, e.g., service 1 needs to increase from three to six resource instances, service 2 to increase from two to four resource instances, and service 3 from four to five instances, as represented by the services 308(1)(Tc), 308(2)(Tc) and 308(3)(Tc). Thus, by the provisioning time Tc, which is some (generally relatively short) time before the actual traffic/load increase starts, the services already have the sufficient resource instances to handle the increased traffic/load).
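For context, the per-service proactive provisioning Bryc describes (predict a traffic increase at time Tb, have capacity ready by provisioning time Tc) reduces to a simple delta computation. The sketch below is illustrative only; the function and service names are assumptions, not anything recited in the cited references.

```python
# Illustrative sketch (names are assumptions): at prediction time Tb, compare
# the predicted per-service instance counts needed by provisioning time Tc
# against the current allocation, and request the deltas in advance so each
# service already has sufficient instances before the load increase arrives.

def provisioning_deltas(current, predicted):
    """Per-service instance deltas: positive means spin up, negative means spin down."""
    return {svc: predicted[svc] - current.get(svc, 0) for svc in predicted}

# Mirrors Bryc Fig. 3: service 1 grows 3 -> 6, service 2 grows 2 -> 4,
# and service 3 grows 4 -> 5 between times Ta and Tc.
current_alloc = {"service1": 3, "service2": 2, "service3": 4}
needed_at_tc = {"service1": 6, "service2": 4, "service3": 5}
deltas = provisioning_deltas(current_alloc, needed_at_tc)
```

Bryc's smoothing optimization would sit on top of such a computation: a change request is emitted only when a delta crosses a threshold, rather than every minute.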
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic and Bogdany with Bryc because Bryc’s teaching of configuring resources in advance based on a future resource prediction with historical load analysis would have provided Radovanovic and Bogdany’s system with the advantage and capability of efficiently allocating resources ahead of future needs in order to optimize resource allocation and system efficiency (see Bryc, Col 10, lines 38-45, “optimizing/smoothing”). Radovanovic, Bogdany and Bryc fail to explicitly teach that the machine learning (ML) device is the machine learning (ML) engine comprising one or more ML models. However, Grebenisan teaches the machine learning (ML) device is the machine learning (ML) engine comprising one or more ML models (Grebenisan, Fig. 1B, 124 machine learning engine, 126, 128 and 130 (as one or more ML models); [0109] lines 3-5, the machine learning engine 124 for performing the forecasting of expected storage demands (e.g. via the trends and behavior module 126); also see [0042] lines 1-3, trained machine learning model comprises multiple machine trained machine learning models).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany and Bryc with Grebenisan because Grebenisan’s teaching of a machine learning engine having different machine learning models/modules, each providing different operations for predicting future resource usage, would have provided Radovanovic, Bogdany and Bryc’s system with the advantage and capability of increasing the accuracy of predicting future resource usage, thereby improving resource utilization and system performance (see Grebenisan, [0020] “creating an accurate prediction of storage usage” and [0023] “it would be advantageous if future storage usage demands of a distributed file management system could be accurately anticipated, such that storage needs may be better managed for real-time use and adaptability within a shared distributed file system of a big data platform”). Radovanovic, Bogdany, Bryc and Grebenisan fail to explicitly teach retraining, by the computer system, the ML engine based on the current computing usage, wherein the retraining includes adjusting the baseline level based on the current computing usage. However, Vermeulen teaches retraining, by the computer system, the ML engine based on the current computing usage, wherein the retraining includes adjusting the baseline level based on the current computing usage (Vermeulen, Fig. 2; Col 13, lines 19-35, resource utilization during a given interval of time may be predicted according to static, invariant inputs, such as the type of the resource 100 being predicted, the identity of the interval, etc. For example, in one embodiment, utilization of a communication resource 100 may be statically predicted according to the interval of interest (e.g., may be statically predicted to be 15% during the hour 1:00-1:59 a.m., 67% during the hour 10:00-10:59 a.m., etc.).
By contrast, a dynamic predictive model of resource utilization may take into account similar static variables as a static prediction, but may also depend on actual historical behavior of some inputs to the prediction. For example, in one embodiment a dynamic prediction of utilization of a communication resource 100 during the interval of 10:00-10:59 a.m. may include a static baseline prediction (e.g., 67%) that may be adjusted up or down depending on historical utilization of the communication resource 100 during the preceding hour (as current computing usage, since it can be just current-time utilization for predicting the upcoming interval, and this is a dynamic predictive model, which is retrained each time it predicts based on the constant feeding of current resource usage as input); also see Col 13, lines 48-49, a dynamic prediction of utilization may take into account primarily the current state of such variables). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc and Grebenisan with Vermeulen because Vermeulen’s teaching of utilizing the model to continually predict resource usage for subsequent intervals based on the constant feeding of current resource usage as input would have provided Radovanovic, Bogdany, Bryc and Grebenisan’s system with the advantage and capability of accurately predicting future resource usage and adjusting the baseline level of the resource in order to improve system performance and efficiency. As per claim 11, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 10 above.
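The dynamic predictive model quoted from Vermeulen above (a static per-interval baseline that is adjusted up or down by the preceding hour's actual utilization) might be sketched as follows. The interval labels, the 0.5 adjustment weight, and the adjustment rule are assumptions for illustration only.

```python
# Illustrative sketch (assumed names and weight): each hour has a static
# baseline utilization prediction; the dynamic model nudges the upcoming
# hour's baseline by how far the preceding hour's actual utilization
# deviated from its own baseline.

STATIC_BASELINE = {"01:00": 0.15, "09:00": 0.55, "10:00": 0.67}

def dynamic_prediction(interval, prev_interval, prev_actual, weight=0.5):
    """Adjust the static baseline for `interval` by the previous interval's deviation."""
    deviation = prev_actual - STATIC_BASELINE[prev_interval]
    return STATIC_BASELINE[interval] + weight * deviation

# The 09:00 hour ran 10 points hotter than its baseline, so 10:00 is nudged up.
pred = dynamic_prediction("10:00", "09:00", prev_actual=0.65)
```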
Radovanovic further teaches wherein the configuring the computing capacity comprises: providing an increased amount of the available server resources to be accessible by the one or more computing devices at the future period of time from one of a standard amount of the available server resources set by the computer system or a previous amount of the available server resources provided by the computer system to the one or more computing devices (Radovanovic, [0043] lines 2-21, generate data indicative of at least one network optimization based at least in part on the one or more predictions. In some embodiments, the network computing system may operate at least part of the network according to the at least one network optimization. For example, based on data associated with the one or more predictions (e.g., a predicted future resource availability or demand for a resource at a future time interval) the network computing system can generate one or more control signals that can be used to activate one or more devices (as increasing an amount of the available computing resources) and/or systems associated with providing and/or generating the resource. For example, based on one or more predictions that the demand for a resource (e.g., electrical power) will increase in thirty minutes, the network computing system can route more electrical power from one or more electrical power stations and/or increase the amount of electrical power that will be made available by the one or more electrical power stations. In this way, the disclosed technology can more optimally provide a resource in accordance with demand for the resource, which can result in less congestion). As per claim 12, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 10 above. 
Radovanovic further teaches wherein the configuring the computing capacity comprises: providing a decreased amount of available server resources to be accessible by the one or more computing devices at the future period of time from one of a standard amount of the available server resources set by the computer system or a previous amount of the available server resources provided by the computer system to the one or more computing devices. (Radovanovic, [0198] lines 7-22, the network optimization can include a different combination of the resource generation mix (e.g., different combinations of solar power, hydroelectric power, and/or natural gas power) and/or changes to the available capacity of the one or more nodes. By way of further example, based on data associated with the one or more predictions (e.g., a predicted future resource usage) the computing device 102 can generate one or more control signals that can be used to activate one or more devices and/or systems associated with providing and/or generating the resource within the network. For example, based on one or more predictions that the demand for a resource (e.g., electrical power) will decrease in one hour, the network computing system can send one or more signals to begin a reduction (as decreasing an amount of available server resources) in the amount of electrical power that is provided by one or more electrical power stations). As per claim 15, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 10 above. Grebenisan further teaches wherein the determining the predicted future computing usage of the server resources further uses one or more of a linear curve, a first degree curve, or a second degree curve fitted from past computing usage using the one or more ML models (Grebenisan, Fig. 6, 24, 25 (as one or more of a linear curve (i.e., Fig. 6, 25), a first degree curve (Fig. 
6, 24); [0092] lines 2-14, the trends and behavior module 126 (i.e., within the machine learning engine) may compute the expected or forecasted storage demand for each of the directories 105 by first establishing a curve (e.g. a best fitting curve) of the actual demand data 121. Based on the established curve, the forecasted storage demand may further be defined as a function of at least one of: a computed first derivative of the curve projected to at least the expected future time (e.g. metadata characterizing a time in the future when the project for the particular directory is expected to last until and thereby expected storage usage of resources for the particular directory) and a computed first derivative of a moving average of the curve projected to the expected future time). As per claim 19, it is a non-transitory machine-readable medium claim of claim 1 above. Therefore, it is rejected for the same reason as claim 1 above. As per claim 20, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 19 above. Radovanovic further teaches wherein the configuring the computing capacity comprises one of: increasing an amount of the available server resources available to the one or more computing devices at the future period of time from a previously used amount of the server resources utilized during the previous period of time (Radovanovic, [0043] lines 2-21, generate data indicative of at least one network optimization based at least in part on the one or more predictions. In some embodiments, the network computing system may operate at least part of the network according to the at least one network optimization. 
For example, based on data associated with the one or more predictions (e.g., a predicted future resource availability or demand for a resource at a future time interval) the network computing system can generate one or more control signals that can be used to activate one or more devices (as increasing an amount of the available computing resources) and/or systems associated with providing and/or generating the resource. For example, based on one or more predictions that the demand for a resource (e.g., electrical power) will increase in thirty minutes, the network computing system can route more electrical power from one or more electrical power stations and/or increase the amount of electrical power that will be made available by the one or more electrical power stations. In this way, the disclosed technology can more optimally provide a resource in accordance with demand for the resource, which can result in less congestion); or decreasing an amount of available server resources available to the one or more computing devices at the future period of time from a previously used amount of the server resources utilized during the previous period of time (Radovanovic, [0198] lines 7-22, the network optimization can include a different combination of the resource generation mix (e.g., different combinations of solar power, hydroelectric power, and/or natural gas power) and/or changes to the available capacity of the one or more nodes. By way of further example, based on data associated with the one or more predictions (e.g., a predicted future resource usage) the computing device 102 can generate one or more control signals that can be used to activate one or more devices and/or systems associated with providing and/or generating the resource within the network.
For example, based on one or more predictions that the demand for a resource (e.g., electrical power) will decrease in one hour, the network computing system can send one or more signals to begin a reduction (as decreasing an amount of available computing resources) in the amount of electrical power that is provided by one or more electrical power stations). Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen, as applied to claims 3 and 12 respectively above, and further in view of SENARATH et al. (US Pub. 2017/0085493 A1). SENARATH was cited in the previous Office Action. As per claim 4, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 3 above. Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen fail to explicitly teach wherein the configuring the computing capacity further comprises preventing the amount of the server resources from falling at or below a baseline threshold of the available server resources. However, SENARATH teaches wherein the configuring the computing capacity further comprises preventing the amount of the server resources from falling at or below a baseline threshold of the available server resources (SENARATH, Fig. 4, 430, minimum resources for next period, 440 granted resources for next period; [0044] lines 7-22, This model can be used to allow for changes to allocated resources to accommodate the slice providing an indication of a future demand. Functional resource control model 400 may be applied, for example, when Slice S1 120 comprises a soft slice having more flexibility in the receipt of resources over a given interval. RA 320 dynamically provides a grant 440 of a portion of PNI 110's resources to SP 130 (via Slice S1 120) for an upcoming operational period (for example, an upcoming TTI). 
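Radovanovic's prediction-driven increase or decrease of available capacity, as mapped to claims 11, 12, and 20 above, amounts to a signed control decision for a future interval. The sketch below uses assumed names, units, and a headroom factor that are not taken from the reference.

```python
# Illustrative sketch (assumed names/units/headroom): given a predicted demand
# for a future interval and the currently provided amount, emit a signed
# adjustment: positive routes more capacity ahead of a predicted increase,
# negative begins a reduction ahead of a predicted decrease.

def capacity_adjustment(current_provided, predicted_demand, headroom=1.1):
    """Signed change to apply for the future interval (headroom is an assumption)."""
    target = predicted_demand * headroom
    return target - current_provided

# Demand predicted to rise from 100 to 140 units in thirty minutes:
delta_up = capacity_adjustment(current_provided=100, predicted_demand=140)
# Demand predicted to fall to 60 units in one hour:
delta_down = capacity_adjustment(current_provided=100, predicted_demand=60)
```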
The RA 320 may determine the granted portion based on resource guarantees 410 (for example, from an SLA between PNI 110 and SP 130), resource usage information 420 (for example, based on current traffic over entire PNI 110, or traffic solely associated with SP 130), and the minimum resources required 430 by Slice S1 120 over the next operational period (as adjusting the computing capacity further comprises preventing the amount of the computing resources from falling at or below a baseline threshold (i.e., minimum resources required) of the available computing resources)). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen with SENARATH because SENARATH’s teaching of guaranteeing the allocation of minimum resources for a future time would have provided Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen’s system with the advantage and capability of ensuring that a minimum amount of resources is provided for later processing while allowing any surplus resources to be re-allocated, thereby improving system performance and resource utilization (see SENARATH, [0050] “any surplus resources from one of SPs 130, 135 over a given interval may be re-allocated to another SP to more efficiently utilize PNI 110's connectivity resources”). As per claim 13, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 12 above. Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen fail to specifically teach wherein the providing the decreased amount uses a minimum threshold amount of the available server resources to be provided at the future period of time. However, SENARATH teaches wherein the providing the decreased amount uses a minimum threshold amount of the available server resources to be provided at the future period of time (SENARATH, Fig.
4, 430, minimum resources for next period, 440 granted resources for next period; [0044] lines 7-22, This model can be used to allow for changes to allocated resources to accommodate the slice providing an indication of a future demand. Functional resource control model 400 may be applied, for example, when Slice S1 120 comprises a soft slice having more flexibility in the receipt of resources over a given interval. RA 320 dynamically provides a grant 440 of a portion of PNI 110's resources to SP 130 (via Slice S1 120) for an upcoming operational period (for example, an upcoming TTI). The RA 320 may determine the granted portion based on resource guarantees 410 (for example, from an SLA between PNI 110 and SP 130), resource usage information 420 (for example, based on current traffic over entire PNI 110, or traffic solely associated with SP 130), and the minimum resources required 430 by Slice S1 120 over the next operational period; [0050] lines 9-16, determines that it only requires 30% of the PNI 110's resources over the next y TTIs, PNI 110 can re-acquire control of the 20% surplus resources from SP 135. PNI 110 may then offer this 20% surplus to SP 130 over the next y TTIs. In this way, any surplus resources from one of SPs 130, 135 over a given interval may be re-allocated to another SP to more efficiently utilize PNI 110's connectivity resources (as reduced/decreased amount)). 
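SENARATH's granted-resources model (a next-period grant bounded below by the minimum resources required, with any surplus re-allocatable elsewhere) can be illustrated as a clamp. The function name and the numbers below are assumptions for illustration.

```python
# Illustrative sketch (assumed names/numbers): the grant for the next
# operational period is the predicted need, clamped so it never falls below
# the minimum resources required (the baseline threshold) and never exceeds
# the infrastructure's available resources; anything above the grant is
# surplus that can be re-allocated to another service provider.

def grant_for_next_period(predicted_need, minimum_required, available):
    """Clamp the next-period grant to the range [minimum_required, available]."""
    return max(minimum_required, min(predicted_need, available))

# A predicted dip to 20 units is still granted at the 30-unit minimum,
# leaving 70 of the 100 available units as re-allocatable surplus.
grant = grant_for_next_period(predicted_need=20, minimum_required=30, available=100)
surplus = 100 - grant
```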
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen with SENARATH because SENARATH’s teaching of guaranteeing the allocation of minimum resources for a future time would have provided Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen’s system with the advantage and capability of ensuring that a minimum amount of resources is provided for later processing while allowing any surplus resources to be re-allocated, thereby improving system performance and resource utilization (see SENARATH, [0050] “any surplus resources from one of SPs 130, 135 over a given interval may be re-allocated to another SP to more efficiently utilize PNI 110's connectivity resources”). Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen, as applied to claim 1 above, and further in view of Lubrecht et al. (US Pub. 2006/0161884 A1) and Gao et al. (US Pub. 2020/0042420 A1). Lubrecht and Gao were cited in the previous Office Action. As per claim 5, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 1 above. Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen fail to specifically teach wherein, prior to the analyzing, the operations further comprise: applying a short-term bias to the past computing usage based on changes to the past computing usage over a portion of the previous period of time, wherein the short-term bias comprises a fitted curve for the past computing usage over the portion of the previous period of time, wherein the analyzing further uses the applied short-term bias.
However, Lubrecht teaches wherein, prior to the analyzing, the operations further comprise: applying a short-term bias to the past computing usage based on changes to the past computing usage over a portion of the previous period of time (Lubrecht, [0099] lines 1-7, The use of each resource and service must be considered over short-, medium-, and long-term periods, and the minimum, maximum, and average utilization during these periods must be recorded. Typically, the short-term period covers utilization over 24 hours, the medium-term period may cover from one to four weeks, and the long-term period covers a year or more; [0092] lines 1-9, The data collected from the monitoring information should be analyzed to identify trends from which the normal utilization and service level, or baseline, can be established. By regularly monitoring and comparing this baseline with current resource usage, exception conditions in the utilization of individual components or service thresholds can be defined, and breaches or near misses in the OLAs can be reported. In addition, the data can be used to predict future resource use); wherein the analyzing further uses the applied short-term bias (Lubrecht, [0237] lines 1-2, The CDB is a valuable source of capacity information, including trend information that can be used to predict future behavior). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen with Lubrecht because Lubrecht’s teaching of applying a short-term bias for determining resource usage over a short time period in order to predict future resource usage would have provided Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen’s system with the advantage and capability of more accurately determining future resource utilization based on the different periods, thereby improving system resource utilization and efficiency (also see Lubrecht, [0046] “promoting a quality approach to achieving business effectiveness and efficiency in the use of information systems”). Although Lubrecht teaches trend information of a short-term bias of resource usage, Radovanovic, Bogdany, Bryc, Grebenisan, Vermeulen and Lubrecht fail to explicitly teach wherein the short-term bias comprises a fitted curve for the past computing usage over the portion of the previous period of time. However, Gao teaches wherein the short-term bias comprises a fitted curve for the past computing usage over the portion of the previous period of time (Gao, Fig. 4, 410 and 420 of the transaction run time; [0066] lines 1-10, To determine the processing resource wait time 420 and the storage resource wait time 440, the solution generates a processing resource time-resource model by performing curve fitting on the historical values of the processing resource service time 410, the processing resource wait time 420, and the processing resource utilization, and generates a storage resource time-resource model by performing curve fitting on the historical values of the storage resource service time 430, the storage resource wait time 440, and the storage resource utilization).
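The short-term bias of claim 5, as the Lubrecht/Gao combination is applied above (a curve fitted to past usage over only a recent portion of the previous period), could look like the least-squares sketch below. The window size, data, and function names are assumptions, not anything from the cited references.

```python
# Illustrative sketch (assumed data/window): fit a least-squares line to only
# the most recent portion of the past usage (the short-term window), so that
# recent changes bias the trend the subsequent analysis relies on.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return slope, my - slope * mx

def short_term_bias(usage, window):
    """Slope of the curve fitted over the last `window` samples."""
    recent = usage[-window:]
    slope, _ = fit_line(list(range(len(recent))), recent)
    return slope

# Usage was flat, then began climbing; a 6-sample window captures the climb.
hourly_usage = [50, 50, 50, 50, 50, 50, 52, 54, 56, 58, 60, 62]
bias = short_term_bias(hourly_usage, window=6)
```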
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan, Vermeulen and Lubrecht with Gao because Gao’s teaching of curve fitting with a fitted curve for the past computing usage over the portion of the previous period of time (i.e., Fig. 4, 410 and 420, short-term) would have provided Radovanovic, Bogdany, Bryc, Grebenisan, Vermeulen and Lubrecht’s system with the advantage and capability of allowing the system to improve prediction accuracy, thereby improving system performance and efficiency (see Gao, [0101] “predicted more accurately, and the performance bottleneck of the batch application can be found more efficiently”). As per claim 6, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 1 above. Grebenisan teaches wherein the one or more ML models comprise at least one continuous learning model (Grebenisan, Fig. 1B, 124 machine learning engine, 126, 128 and 130 (as one or more ML models); [0109] lines 3-5, the machine learning engine 124 for performing the forecasting of expected storage demands (e.g. via the trends and behavior module 126); also see [0042] lines 1-3, trained machine learning model comprises multiple machine trained machine learning models). Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen fail to specifically teach wherein the analyzing the past computing usage comprises fitting the past computing usage to a short term curve model for a first portion of the previous period of time, a medium term curve model for a second portion of the previous period of time that is longer than the first portion, and a long term curve model for a third portion of the previous period of time longer than the first portion and the second portion.
However, Lubrecht teaches wherein the analyzing the past computing usage comprises analyzing past computing usage with a short term curve model for a first portion of the previous period of time, a medium term curve model for a second portion of the previous period of time that is longer than the first portion, and a long term curve model for a third portion of the previous period of time longer than the first portion and the second portion (Lubrecht, [0099] lines 1-7, The use of each resource and service must be considered over short-, medium-, and long-term periods, and the minimum, maximum, and average utilization during these periods must be recorded. Typically, the short-term period covers utilization over 24 hours, the medium-term period may cover from one to four weeks, and the long-term period covers a year or more; [0092] lines 1-9, The data collected from the monitoring information should be analyzed to identify trends (i.e., as curve model) from which the normal utilization and service level, or baseline, can be established. By regularly monitoring and comparing this baseline with current resource usage, exception conditions in the utilization of individual components or service thresholds can be defined, and breaches or near misses in the OLAs can be reported. In addition, the data can be used to predict future resource use; also see [0007] lines 8-12, A model of the IT environment may then be created based on the collected data. The model of the IT environment may be used to create simulated conditions and to determine the performance of the IT environment; [0008] lines 6-7, creating at least one model of at least some aspects of the IT environment).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen with Lubrecht because Lubrecht's teaching of predicting the future resource usage based on the resource usage trends over a short term period, a medium term period and a long term period would have provided Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen's system with the advantage and capability to more accurately determine the future resource utilization based on the different periods, thereby improving system resource utilization and efficiency (also see Lubrecht, [0046] "promoting a quality approach to achieving business effectiveness and efficiency in the use of information systems").

Radovanovic, Bogdany, Bryc, Grebenisan, Vermeulen and Lubrecht fail to specifically teach that the analyzing of the past computing usage with a short term curve model, a medium term curve model, and a long term curve model comprises fitting the past computing usage to the short/medium/long term curve models. However, Gao teaches fitting the past computing usage to curve models (Gao, Fig. 4, 410 and 420, 430 and 440, 450; [0066] lines 1-10, To determine the processing resource wait time 420 and the storage resource wait time 440, the solution generates a processing resource time-resource model by performing curve fitting on the historical values of the processing resource service time 410, the processing resource wait time 420, and the processing resource utilization, and generates a storage resource time-resource model by performing curve fitting on the historical values of the storage resource service time 430, the storage resource wait time 440, and the storage resource utilization (please note: the short/medium/long term curve models were taught by Lubrecht)).
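As a purely illustrative aside (not part of the Office Action or any cited reference), the short/medium/long-term curve fitting that Lubrecht and Gao describe above could be sketched as follows. The window sizes loosely follow Lubrecht's example periods (24 hours short term, about four weeks medium term, everything available as long term); the polynomial form, function names, and defaults are all assumptions:

```python
import numpy as np

def fit_usage_curves(usage, short=24, medium=24 * 28, degree=2):
    """Fit a separate polynomial trend curve to the most recent short-,
    medium-, and long-term windows of an hourly usage series.
    All names and window sizes here are illustrative assumptions."""
    usage = np.asarray(usage, dtype=float)
    fits = {}
    for name, window in (("short", short), ("medium", medium), ("long", len(usage))):
        window = min(window, len(usage))
        y = usage[-window:]                      # most recent `window` samples
        x = np.arange(window)
        poly = np.poly1d(np.polyfit(x, y, deg=min(degree, window - 1)))
        fits[name] = (poly, window)
    return fits

def predict_next_hour(fits):
    """Evaluate each fitted curve one step past the end of its window."""
    return {name: float(poly(window)) for name, (poly, window) in fits.items()}

# Example: 60 days of synthetic hourly utilization with a daily cycle.
hours = np.arange(24 * 60)
utilization = 50 + 10 * np.sin(2 * np.pi * hours / 24)
fits = fit_usage_curves(utilization)
predictions = predict_next_hour(fits)   # one forecast per time horizon
```

The three fitted curves give three forecasts for the same next interval, which a scheduler could then reconcile (e.g., by weighting the horizons); that reconciliation step is beyond this sketch.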
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan, Vermeulen and Lubrecht with Gao because Gao's teaching of curve fitting, with a fitted curve for the past computing usage over the portion of the previous period of time, would have provided Radovanovic, Bogdany, Bryc, Grebenisan, Vermeulen and Lubrecht's system with the advantage and capability to improve the prediction accuracy, thereby improving system performance and efficiency (see Gao, [0101] "predicted more accurately, and the performance bottleneck of the batch application can be found more efficiently").

Claims 7-8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen, as applied to claims 1 and 10 respectively above, and further in view of Liu et al. (US Pub. 2014/0344155 A1). Liu was cited in the previous Office Action.

As per claim 7, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 1 above. Radovanovic further teaches wherein the computing capacity of the available server resources comprises a capacity for performing a processing using the available server resources (Radovanovic, [0043] lines 2-21, generate data indicative of at least one network optimization based at least in part on the one or more predictions. In some embodiments, the network computing system may operate at least part of the network according to the at least one network optimization.
For example, based on data associated with the one or more predictions (e.g., a predicted future resource availability or demand for a resource at a future time interval) the network computing system can generate one or more control signals that can be used to activate one or more devices and/or systems associated with providing and/or generating the resource; also see [0003] lines 2-8, Operations associated with the state of a geographic area can be implemented on a variety of computing devices. These operations can include processing data associated with the geographic area for later access and use by a user or computing system. Further, the operations can include sending and receiving data to remote computing systems…the types of operations and the way in which the operations (as one or more computing tasks executed using the server resources) are performed can change over time).

Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen fail to specifically teach that the processing is a processing of electronic transactions. However, Liu teaches the processing is a processing of electronic transactions (Liu, [0025] lines 1-3, "payment application" may refer to an application or software that facilitates a transaction; [0113] lines 2-4, improving the efficiency of transaction-related processes and conserving resources of entities involved in transactions).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen with Liu because Liu's teaching of conserving resources of entities involved in transactions would have provided Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen's system with the advantage and capability to improve the resource utilization for processing the transactions, thereby improving system performance and efficiency.
As per claim 8, Radovanovic, Bogdany, Bryc, Grebenisan, Vermeulen and Liu teach the invention according to claim 7 above. Liu further teaches wherein processing the electronic transactions comprises assessing a risk of fraud for the electronic transactions and assessing an availability of funds to pay for the electronic transactions (Liu, [0025] lines 1-3, "payment application" may refer to an application or software that facilitates a transaction; [0039] lines 4-13, the audit of the payment application server computer may be conducted by a payment processing network or an issuer computer. The audit may include an evaluation of the authorization request messages transmitted by the payment application server computer and the risk scores associated with each of the transmitted authorization request messages. The audit of the payment application server computer may be conducted to determine whether the payment application server computer is transmitting authorization request messages for transactions that may have a high risk of fraud (as assessing a risk of fraud for the electronic transactions); [0105] lines 10-18, determine whether the payment device provided by the user for the transaction should be allowed or rejected. The authorization process for the payment device provided by the user for the transaction may include determining an available balance or an available credit balance for a payment account associated with the payment device. If there are insufficient funds to pay for the transaction total of the transaction, the authorization process may be declined by the issuer computer 110 (as assessing an availability of funds)).

As per claim 16, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 10 above. Radovanovic further teaches wherein the computing capacity is utilized by the one or more computing devices to process operations (Radovanovic,
[0043] lines 2-21, generate data indicative of at least one network optimization based at least in part on the one or more predictions. In some embodiments, the network computing system may operate at least part of the network according to the at least one network optimization. For example, based on data associated with the one or more predictions (e.g., a predicted future resource availability or demand for a resource at a future time interval) the network computing system can generate one or more control signals that can be used to activate one or more devices and/or systems associated with providing and/or generating the resource; also see [0003] lines 2-8, Operations associated with the state of a geographic area can be implemented on a variety of computing devices. These operations can include processing data associated with the geographic area for later access and use by a user or computing system. Further, the operations can include sending and receiving data to remote computing systems…the types of operations and the way in which the operations are performed can change over time).

Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen fail to specifically teach that the operations are electronic transactions using one or more digital wallets or one or more digital accounts provided by the computing system. However, Liu teaches the operations are electronic transactions using one or more digital wallets or one or more digital accounts provided by the computing system (Liu, [0025] lines 1-8, The term "payment application" may refer to an application or software that facilitates a transaction. In some embodiments, a payment application may be a wallet application stored in a memory or secure element of a user device (e.g., a mobile phone, desktop computer, tablet computer). In other embodiments, the payment application may be an interface on a merchant's website that allows a user to enter payment data for submission for processing a transaction).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen with Liu because Liu's teaching of processing transactions using digital wallets would have provided Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen's system with the advantage and capability to improve security while processing the transactions (see Liu, [0003] "provide more secure and transaction processes").

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen, as applied to claims 1 and 10 respectively above, and further in view of KHAN et al. (US Pub. 2015/0006733 A1). KHAN was cited in the previous Office Action.

As per claim 9, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 1 above. Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen fail to specifically teach wherein the configuring the computing capacity comprises reserving the computing capacity on one or more external servers that are located in a different location than the server system. However, KHAN teaches wherein the configuring the computing capacity comprises reserving the computing capacity on one or more external servers that are located in a different location than the server system (KHAN, Fig. 2, 230-1 cloud computing environment, 240-1 cloud resources, 230-n cloud computing environment, 240-n cloud resource; Fig. 6B, 230-1 to 230-3; [0036] lines 7-10, based on information obtained from analytics component 440.
For example, prediction component 430 and analytics component 440 may interact to determine a resource utilization prediction; [0039] lines 3-5, prediction component 430 may include an algorithm for computing a period of time for which cloud and/or network resources are to be reserved; [0068] lines 11-14, determine one or more cloud computing environments 230, one or more cloud devices 240, and/or one or more cloud resources for providing services for the expected session (as reserving the computing capacity on one or more external servers that are located in a different location than the server system)).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen with KHAN because KHAN's teaching of reserving the different cloud resources at different locations based on the resource prediction would have provided Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen's system with the advantage and capability to efficiently reserve the different resources at different locations based on the location of the operations, thereby reducing latency and improving system efficiency (see KHAN, [0081] "reduce latency and/or improve metrics associated with quality of service and/or service level agreement requirements").

As per claim 18, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 10 above. Radovanovic teaches configuring the computing capacity (Radovanovic, [0043] lines 2-21, generate data indicative of at least one network optimization based at least in part on the one or more predictions. In some embodiments, the network computing system may operate at least part of the network according to the at least one network optimization.
For example, based on data associated with the one or more predictions (e.g., a predicted future resource availability or demand for a resource at a future time interval) the network computing system can generate one or more control signals that can be used to activate one or more devices and/or systems associated with providing and/or generating the resource. For example, based on one or more predictions that the demand for a resource (e.g., electrical power) will increase in thirty minutes, the network computing system can route more electrical power from one or more electrical power stations and/or increase the amount of electrical power that will be made available by the one or more electrical power stations. In this way, the disclosed technology can more optimally provide a resource in accordance with demand for the resource, which can result in less congestion).

Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen fail to specifically teach utilizing one or more external cloud computing resources to provide the computing capacity to the one or more computing devices. However, KHAN teaches utilizing one or more external cloud computing resources to provide the computing capacity to the one or more computing devices (KHAN, Fig. 2, 230-1 cloud computing environment, 240-1 cloud resources, 230-n cloud computing environment, 240-n cloud resource; Fig. 6B, 230-1 to 230-3; [0036] lines 7-10, based on information obtained from analytics component 440.
For example, prediction component 430 and analytics component 440 may interact to determine a resource utilization prediction; [0039] lines 3-5, prediction component 430 may include an algorithm for computing a period of time for which cloud and/or network resources are to be reserved; [0068] lines 11-14, determine one or more cloud computing environments 230, one or more cloud devices 240, and/or one or more cloud resources for providing services for the expected session (as utilizing one or more external cloud computing resources to provide the computing capacity to the one or more computing devices (i.e., Fig. 1, user device, service request))).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen with KHAN because KHAN's teaching of reserving the different cloud resources at different locations based on the resource prediction would have provided Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen's system with the advantage and capability to efficiently reserve the different external resources at different locations based on the location of the operations, thereby reducing latency and improving system efficiency (see KHAN, [0081] "reduce latency and/or improve metrics associated with quality of service and/or service level agreement requirements").

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen, as applied to claim 10 above, and further in view of Young et al. (US Patent 10,409,649 B1). Young was cited in the previous Office Action.

As per claim 14, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 10 above.
Radovanovic further teaches wherein the one or more ML models comprise a continuous learning model trained for usage over the previous period of time (Radovanovic, Fig. 2, 204 input data, 200 machine-learned computing device, network state determination model (as ML model); [0071] lines 4-10, the machine-learned model can generate data indicative of future energy costs and/or prices, based on prior training using historical energy costs and/or prices. Further, the machine-learned model can generate data indicative of future network bandwidth costs and/or prices, based on prior training using historical network bandwidth costs and/or prices; [0073] lines 2-10, historical training data including historical resource availability (e.g., how much network bandwidth was available at certain days or certain hours of the day in the past), historical resource usage (e.g., how much network bandwidth was used at certain days or certain hours of the day in the past), and/or a ground-truth resource cost (e.g., the price of network bandwidth in the past) for a resource provided in association with a plurality of nodes over a plurality of time intervals).

Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen fail to explicitly teach that the usage is pattern identification over the previous period of time, and that it is based on at least one of daily network traffic or weekly network traffic. However, Young teaches the usage is pattern identification over the previous period of time, and that it is based on at least one of daily network traffic or weekly network traffic (Young, Col 3, lines 23-31, the computing resource service provider may configure the particular customer's load balancer to pre-allocate additional load balancer resources based on predicted traffic and/or configuring the load balancer to be more sensitive to changes in traffic patterns.
For example, if the customer traffic patterns include a daily spike at 6:00 p.m., the load balancer may be configured to pre-allocate load balancer resources prior to the daily spike in order to process the predicted traffic spike; Col 8, lines 36-37, a predicted increase in request traffic based on daily usage patterns may be correlated).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen with Young because Young's teaching of predicting the resource usage based on the daily network traffic pattern would have provided Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen's system with the advantage and capability to efficiently allocate and deallocate the resources based on the daily peak of the traffic pattern, thereby improving resource utilization and system efficiency (see Young, Col 11, lines 29-31, "a predictive model may be used as described above to increase efficiencies in allocating and deallocating computing resources").

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen, as applied to claim 10 above, and further in view of Drapeau et al. (US Patent 11,704,673 B1). Drapeau was cited in the previous Office Action.

As per claim 17, Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen teach the invention according to claim 10 above. Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen fail to specifically teach wherein the computing capacity comprises one or more fraud detection engines and one or more payment platforms provided by the computing system during electronic transaction processing.
However, Drapeau teaches wherein the computing capacity comprises one or more fraud detection engines and one or more payment platforms provided by the computing system during electronic transaction processing (Drapeau, Fig. 1, 100 as computing system; Col 12, lines 16-22, transaction manager may employ fraud detection system 214 during transactions to use the ML engines 214A through 214B to predict whether an incoming transaction is likely fraudulent. In embodiments, transaction manager 216 may pass certain transaction attributes (e.g., card number, name on card, email used in a transaction, etc.) to fraud detection engine; Col 1, lines 28-38, This processing of payments by the commerce platform (as payment platform) may include running credit cards, crediting a merchant account for the transaction, crediting the agent responsible for the transaction, debiting a commerce system fee for processing the transaction on behalf of the merchant, interacting with authorization network systems (e.g., bank systems, credit card issuing systems, etc.), as well as performing other commerce related transactions for the merchant and/or agent such as providing payouts for products/services rendered on behalf of a merchant).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen with Drapeau because Drapeau's teaching of a fraud detection engine and a payment platform for processing the transactions would have provided Radovanovic, Bogdany, Bryc, Grebenisan and Vermeulen's system with the advantage and capability to improve fraud detection during the transactions, thereby increasing the security of transaction processing (see Drapeau, Col 5, lines 15-16, "improve fraud detection during transactions").
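As an illustrative aside (not part of the Office Action or any cited reference), the two checks the cited art describes for transaction processing, a fraud-risk assessment (Liu [0039], Drapeau Col 12) followed by an available-funds check (Liu [0105]), could be sketched as a minimal authorization gate. The names, the 0-to-1 risk scale, and the threshold are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    risk_score: float  # 0.0 (low risk) .. 1.0 (near-certain fraud); illustrative scale

def authorize(txn, available_balance, risk_threshold=0.8):
    """Toy authorization gate: decline on high fraud risk first,
    then on insufficient funds; otherwise approve."""
    if txn.risk_score >= risk_threshold:
        return "declined: high fraud risk"
    if txn.amount > available_balance:
        return "declined: insufficient funds"
    return "approved"

# A low-risk transaction within the available balance is approved:
result = authorize(Transaction(amount=25.0, risk_score=0.1), available_balance=100.0)
```

In a production system each check would be a separate service call rather than an inline predicate; the sketch only shows the ordering of the two assessments.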
Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

In the remarks, Applicant argues in substance: (a) These improvements are captured in the presently amended claims, which an intelligent ML model system to predict machines to allocate to a computing task at a future time, and dynamically resize machine pools while providing a reserve machine pool for additional computing tasks. This may be utilized to accurately provide proper computing resources to tasks in a predictive manner, minimizing downtime and/or errors and issues with computing service usage and task processing. Further, by retraining the ML models based on current computing resource usages, the ML models may be made more accurate over time to predict changing trends in resource usages are directed at a solution to the aforementioned technical problem. Consequently, Applicant asserts that the amended claims include several limitations that integrate any alleged abstract idea into a practical application. Examiner respectfully disagrees with Applicant's argument for the following reasons:

As to point (a), in response to Applicant's argument that "These improvements are captured in the presently amended claims, which an intelligent ML model system to predict machines to allocate to a computing task at a future time, and dynamically resize machine pools while providing a reserve machine pool for additional computing tasks. This may be utilized to accurately provide proper computing resources to tasks in a predictive manner, minimizing downtime and/or errors and issues with computing service usage and task processing.
Further, by retraining the ML models based on current computing usages, the ML models may be made more accurate over time to predict changing trends in resource usages are directed at a solution to the aforementioned technical problem", Examiner respectfully disagrees.

Firstly, the above-cited features and limitations of the claim are an abstract idea, which can be performed merely mentally. That is, the human mind can easily judge/evaluate/determine/analyze the past computing usages to determine the usage pattern; predict/determine the future computing usage at a future time based on the previous usage pattern; determine/identify whether there are available server resources at the future time; determine/identify a baseline level of the available server resources to allocate for the predicted future computing usage based on the past computing usage and one or more rules; determine/identify an allocation and judge/schedule/adjust/configure the capacity of available computing resources based on the previously determined prediction and determined allocation (i.e., mentally determining that if resources will be increased at a future time, then adjusting/scheduling/configuring more resources at the current time); dynamically assign/change/modify/size the plurality of machine pools to allocate the at least one machine to the computing task, and allocate/assign a reserve machine pool from the plurality of machine pools for additional computing tasks; analyze/determine a current computing usage of the server resources based on executing the computing task and a plurality of additional computing tasks executed using the available server resources at the current time; and retrain the ML engine based on the current computing usage, wherein the retraining includes adjusting the baseline level based on the current computing usage (i.e., mentally performing a mathematical calculation based on the input to adjust/change
the baseline level based on the input data/current computing usage).

MPEP 2106.05(a) discloses that "It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981) in subsection II, below. In addition, the improvement can be provided by the additional element(s) in combination with the recited judicial exception". Please see the §101 rejection above.

Secondly, the claim recites a "machine learning (ML) engine". However, under the Broadest Reasonable Interpretation (BRI), the BRI of a "machine learning engine" is a file used to recognize certain types of patterns or data. This BRI is consistent with the specification at [0013], "an intelligent machine learning (ML) engine that analyzes past computing resource usage during different time periods to predict potential future usage of computing resources". Accordingly, it is clearly reasonable for one to mentally retrain an "engine" to identify/predict patterns or adjust values from historical/current data. The "engine" could be nothing more than a single word for "text/data processing" (i.e., mentally keep learning from previous data to predict and adjust values).

Thirdly, the claim recites a "baseline level" determined and adjusted by using the ML engine, but does not specifically indicate how that baseline level is used for allocation (i.e., the claim only recites determining a baseline level of resources to allocate for the predicted future computing usage, but at a later step the claim further recites determining an allocation for that predicted future computing usage; that is, the determination of the allocation has nothing to do with the determined baseline level). So how can these improvements actually happen if that baseline level is not used?
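For readers following the dispute, a minimal sketch of the predict/allocate/resize/retrain sequence discussed above may help, with the caveat that every name, window size, capacity figure, and the moving-average update are illustrative assumptions, not anything recited in the claims or references:

```python
import math

def predict_future_usage(past_usage):
    """Stand-in for the claimed ML prediction: forecast the next step
    as the mean of the most recent 24 samples."""
    recent = past_usage[-24:]
    return sum(recent) / len(recent)

def size_machine_pools(predicted_usage, baseline_level, machine_capacity=10.0, reserve=2):
    """Allocate enough machines for the larger of the prediction and the
    baseline, plus a reserve pool for additional tasks."""
    needed = max(predicted_usage, baseline_level)
    active = max(1, math.ceil(needed / machine_capacity))
    return {"active": active, "reserve": reserve}

def retrain_baseline(baseline_level, current_usage, alpha=0.1):
    """'Retraining' reduced to an exponential-moving-average update of
    the baseline from the current usage."""
    return (1 - alpha) * baseline_level + alpha * current_usage

# One scheduling cycle over synthetic usage history:
history = [40.0] * 100 + [50.0] * 24
predicted = predict_future_usage(history)              # 50.0
pools = size_machine_pools(predicted, baseline_level=30.0)
new_baseline = retrain_baseline(30.0, current_usage=50.0)
```

Whether such steps are practically performable in the mind or instead require the recited server system is exactly the §101 question in dispute; the sketch only makes the data flow between the steps concrete.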
For the reasons above, Applicant's argument has not been found to be persuasive, and therefore the rejections are maintained.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZUJIA XU whose telephone number is (571) 272-0954. The examiner can normally be reached M-F 9:30-5:30 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee J Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZUJIA XU/
Examiner, Art Unit 2195

Prosecution Timeline

May 26, 2022
Application Filed
Jan 25, 2025
Non-Final Rejection — §101, §103
Apr 12, 2025
Interview Requested
Apr 18, 2025
Applicant Interview (Telephonic)
Apr 18, 2025
Examiner Interview Summary
Apr 30, 2025
Response Filed
Sep 03, 2025
Final Rejection — §101, §103
Oct 20, 2025
Interview Requested
Nov 07, 2025
Examiner Interview Summary
Nov 07, 2025
Applicant Interview (Telephonic)
Dec 10, 2025
Request for Continued Examination
Dec 21, 2025
Response after Non-Final Action
Jan 09, 2026
Non-Final Rejection — §101, §103
Mar 16, 2026
Interview Requested
Mar 27, 2026
Applicant Interview (Telephonic)
Mar 27, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602249
Hardware Resource Allocation System for Allocating Resources to Threads
2y 5m to grant Granted Apr 14, 2026
Patent 12541397
THREAD MANAGEMENT
2y 5m to grant Granted Feb 03, 2026
Patent 12504983
SUPERVISORY DEVICE WITH DEPLOYED INDEPENDENT APPLICATION CONTAINERS FOR AUTOMATION CONTROL PROGRAMS
2y 5m to grant Granted Dec 23, 2025
Patent 12498971
COMPUTING TASK SCHEDULING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Dec 16, 2025
Patent 12436805
COMPUTER SYSTEM WITH PROCESSING CIRCUIT THAT WRITES DATA TO BE PROCESSED BY PROGRAM CODE EXECUTED ON PROCESSOR INTO EMBEDDED MEMORY INSIDE PROCESSOR
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
68%
Grant Probability
99%
With Interview (+81.5%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 169 resolved cases by this examiner. Grant probability derived from career allow rate.
