DETAILED ACTION

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step One

The claims are directed to a method (claims 1-10) and a non-transitory storage medium (claims 11-20). Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

As to claim 1:

Step 2A, Prong One

The claim recites in part: predicting, by the workspace size predicting engine, a size of a workspace that corresponds to the workspace provisioning request; and predicting, by the datacenter host prediction engine, a datacenter and/or host that is able to support requirements of the workspace. As drafted and under their broadest reasonable interpretation, these limitations cover performance of the limitations in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example: (1) a human can review expected users, applications, and traffic and mentally predict the size of a cloud networking workspace needed to support the workload; and (2) a human can also predict which provider (host) is the best fit to supply the cloud networking workspace. Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.

Step 2A, Prong Two

The judicial exception is not integrated into a practical application.
In particular, the claim recites the additional elements of: receiving, by a workspace size predicting engine, a workspace provisioning request regarding a customer machine learning (ML) model; and receiving, by a datacenter host prediction engine from the workspace size predicting engine, the workspace size; which amount to extra-solution activity of gathering data for use in the claimed process. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application. The claim further recites a datacenter, which is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). The recitation of a workspace size predicting engine, workspace provisioning request, and datacenter host prediction engine amounts to generally linking the use of the judicial exception to a particular environment or field of use (see MPEP 2106.05(h)). Accordingly, at Step 2A, Prong Two, the additional elements, individually or in combination, do not integrate the judicial exception into a practical application.

Step 2B

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of: receiving, by a workspace size predicting engine, a workspace provisioning request regarding a customer machine learning (ML) model; and receiving, by a datacenter host prediction engine from the workspace size predicting engine, the workspace size; are recited at a high level of generality and amount to extra-solution activity of receiving data, i.e., pre-solution activity of gathering data for use in the claimed process.
The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), "receiving or transmitting data over a network," "electronic record keeping," and "storing and retrieving information in memory"). The claim further recites a datacenter, which is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). The recitation of a workspace size predicting engine, workspace provisioning request, and datacenter host prediction engine amounts to generally linking the use of the judicial exception to a particular environment or field of use (see MPEP 2106.05(h)). Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.

As to claim 2:

Step 2A, Prong One

The claim recites in part: the workspace size comprises a number of containers, and a respective amount of memory and processing capability for each of the containers. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can review a request for a cloud workspace and mentally estimate the number of containers needed and the memory and processing capability required for each container.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.
As to claim 3:

Step 2A, Prong One

The claim recites in part: the workspace size prediction engine provides the workspace size to a workspace provisioning engine that provisions the workspace using the workspace size. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can mentally determine a workspace size and communicate that size to another person responsible for setting up the workspace so that the workspace can be provisioned accordingly.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.

As to claim 4:

The recitation "the workspace size prediction engine comprises a deep neural network (DNN)-based multi-output regressor that uses multi-target regression to predict the size of the workspace" amounts to generally linking the use of the judicial exception to a particular environment or field of use (see MPEP 2106.05(h)).

Step 2A, Prong One

The claim recites in part: the workspace size prediction engine comprises a deep neural network (DNN)-based multi-output regressor that uses multi-target regression to predict the size of the workspace. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components.
As claimed, the deep neural network (DNN)-based multi-output regressor is a generic algorithmic program that runs on a generic computer and is able to predict the size of a workspace.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.

As to claim 5:

Step 2A, Prong One

The claim recites the abstract idea described above in claim 1, but does not recite any other abstract ideas or any other judicial exceptions.

Step 2A, Prong Two

The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of: the workspace size prediction engine was trained based in part using historical workspace resource metrics data; which amounts to extra-solution activity of gathering data for use in the claimed process. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application. Accordingly, at Step 2A, Prong Two, the additional elements, individually or in combination, do not integrate the judicial exception into a practical application.

Step 2B

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of: the workspace size prediction engine was trained based in part using historical workspace resource metrics data; amounts to extra-solution activity of gathering data for use in the claimed process.
As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application. Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.

As to claim 6:

Step 2A, Prong One

The claim recites in part: the host prediction engine comprises a deep neural network (DNN)-based multi-output regressor that uses multi-target regression to predict the datacenter and/or host. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. As claimed, the deep neural network (DNN)-based multi-output regressor is a generic algorithmic program that runs on a generic computer and is able to predict the datacenter and/or host.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.

As to claim 7:

Step 2A, Prong One

The claim recites the abstract idea described above in claim 1, but does not recite any other abstract ideas or any other judicial exceptions.

Step 2A, Prong Two

The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of: the host prediction engine was trained based in part using historical workspace creation data; which amounts to extra-solution activity of gathering data for use in the claimed process.
As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application. Accordingly, at Step 2A, Prong Two, the additional elements, individually or in combination, do not integrate the judicial exception into a practical application.

Step 2B

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional element of: the host prediction engine was trained based in part using historical workspace creation data; amounts to extra-solution activity of gathering data for use in the claimed process. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application. Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.

As to claim 8, the recitation "wherein the host prediction engine comprises a DNN-based multi-label classifier" amounts to generally linking the use of the judicial exception to a particular environment or field of use (see MPEP 2106.05(h)).

As to claim 9:

Step 2A, Prong One

The claim recites in part: wherein the workspace is provisioned, based on the workspace size, in a shared hybrid cloud platform. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components.
For example, a human can mentally estimate a workspace size and then provision the resources using generic computer components.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.

As to claim 10:

Step 2A, Prong One

The claim recites in part: wherein the workspace is placed in the predicted host and/or datacenter. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can mentally predict which host and/or datacenter has sufficient capacity for a workspace and then place the workspace in that predicted host or datacenter using a generic computer system.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.

Claim 11 has similar limitations as claim 1. Therefore, the claim is rejected for the same reasons as above. The claim further recites a non-transitory storage medium, one or more processors, and a datacenter, which are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components (see MPEP 2106.05(f)).

Claim 12 has similar limitations as claim 2. Therefore, the claim is rejected for the same reasons as above.

Claim 13 has similar limitations as claim 3.
Therefore, the claim is rejected for the same reasons as above.

Claim 14 has similar limitations as claim 4. Therefore, the claim is rejected for the same reasons as above.

Claim 15 has similar limitations as claim 5. Therefore, the claim is rejected for the same reasons as above.

Claim 16 has similar limitations as claim 6. Therefore, the claim is rejected for the same reasons as above.

Claim 17 has similar limitations as claim 7. Therefore, the claim is rejected for the same reasons as above.

Claim 18 has similar limitations as claim 8. Therefore, the claim is rejected for the same reasons as above.

Claim 19 has similar limitations as claim 9. Therefore, the claim is rejected for the same reasons as above.

Claim 20 has similar limitations as claim 10. Therefore, the claim is rejected for the same reasons as above.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3, 5, 7, 9, 10, 11, 13, 15, 17, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Srikanth et al. (US 2012/0198073).
As to claim 1, Srikanth et al. (figure 1) shows and teaches a method comprising: receiving, by a workspace size predicting engine, a workspace provisioning request regarding a customer machine learning (ML) model (paragraph [0034] … curates the catalog and/or knowledge base by enriching it and enabling semantic searching of the catalog; paragraph [0047] … the set of method steps 3006 receives on-demand computing queries and services them. More specifically, the set of method steps 3006 facilitates capturing consumers' queries, understanding them as a request for a computing resource, workload, or task, and providing appropriately matched resources for the query by processing and/or understanding textual queries; capturing structural inputs and forms to facilitate apprehension of advanced user queries for resources, workloads, and tasks; paragraph [0051] … facilitate modeling and indexing the automatically generated computing resources catalog, understanding and modeling consumer computing needs based on their queries, dynamically matching available computing resources in the federated on-demand computing environment to satisfy a consumer query, and automatically suggesting related resources that complement consumers' computing needs) (Examiner's Note: "catalog and/or knowledge base" reads on "workspace size predicting engine"; "facilitates capturing consumers' queries, understanding them as a request for a computing resource, workload, or task, and providing appropriately matched resources for the query by processing and/or understanding textual queries" reads on "receiving, by a workspace size predicting engine, a workspace provisioning request"; "facilitate modeling and indexing the automatically generated computing resources catalog" reads on "a customer machine learning (ML) model"); predicting, by the workspace size predicting engine, a size of a workspace that corresponds to the workspace provisioning request (paragraph [0016] … facilitate the monitoring and
management of resource utilization in the federated on-demand computing environment of the system 100, thereby providing access to appropriate computing resources for consumers and metrics for predictable capacity planning and management of data centers for computing resource providers) (Examiner's Note: "predictable capacity planning and management of data centers" reads on "predicting, by the workspace size predicting engine, a size of a workspace that corresponds to the workspace provisioning request"); receiving, by a datacenter host prediction engine from the workspace size predicting engine, the workspace size (paragraph [0038] … the method proceeds to block 3068 where the method builds a knowledge base (as a self-learning system) to align computing resources from different computing resource providers; paragraph [0055] … computing resource queries to an on-demand computing environment are expressed suitably through identification of the resource type and selected set of resource attributes. These requests are satisfied by matching the desired attributes with existing resources available from current providers. The resource matching is based on the underlying meaning and descriptor of the available resources and their match to the requested resources) (Examiner's Note: "a knowledge base (as a self-learning system)" reads on "a datacenter host prediction engine"; "requests are satisfied by matching the desired attributes with existing resources available from current providers" reads on "receiving, by a datacenter host prediction engine from the workspace size predicting engine, the workspace size"); and predicting, by the datacenter host prediction engine, a datacenter and/or host that is able to support requirements of the workspace (paragraph [0029] … the following steps predict virtual machine inventory based on capacity information on physical infrastructure available in an on-demand computing environment.
However, it would be appreciated by one skilled in the art that these steps are extensible to predicting the inventory of other types of resources based on their license and/or capacity information. The inventory is dynamically updated in the catalog and different parameters contribute to the prediction of resource types at a provider including, but not limited to, the variety of resources available at the provider, their earlier utilization rate in a federation as well as outside channels, the resource provider's location and the consumer's demand at that location, time and date of request fulfillment and duration of usage. At block 3044, the method receives current usage of physical resources (e.g., physical computing machines) and capacity of the provider at a given service location identified by service end point. The method then continues to another continuation terminal ("Terminal A5"); paragraph [0055] … certain workloads require and/or are configured to operate correctly on certain types of computing resources.
In addition, certain workloads require certain types of computing resources to be available at the desired time and in the desired configuration for the workload to complete correctly) (Examiner's Note: "The inventory is dynamically updated in the catalog and different parameters contribute to the prediction of resource types at a provider including, but not limited to, the variety of resources available at the provider" reads on "predicting, by the datacenter host prediction engine, a datacenter and/or host that is able to support requirements of the workspace"; "certain workloads require certain types of computing resources" reads on "able to support requirements of the workspace").

As to claim 3, Srikanth et al. (figure 1) shows and teaches the method, wherein the workspace size prediction engine provides the workspace size to a workspace provisioning engine that provisions the workspace using the workspace size (paragraph [0055] … Computing resource queries to an on-demand computing environment are expressed suitably through identification of the resource type and selected set of resource attributes. These requests are satisfied by matching the desired attributes with existing resources available from current providers) (Examiner's Note: "requests are satisfied by matching the desired attributes with existing resources available from current providers" reads on "the workspace size prediction engine provides the workspace size to a workspace provisioning engine that provisions the workspace using the workspace size").
As to claim 5, Srikanth et al. (figure 1) shows and teaches the method, wherein the workspace size prediction engine was trained based in part using historical workspace resource metrics data (paragraph [0014] … various embodiments of the present subject matter ease the finding of the increasing list of on-demand computing resource providers, and the capturing of different types of computing resources with their descriptions and metadata including prices that may vary over time. Additionally, usage data taken from observations of the usage of computing resources is made available, by various embodiments, for computing resource providers to appreciate their customers and the needs of their customers in terms of desired computing resources so as to better model their configurations and setups).

As to claim 7, Srikanth et al. (figure 1) shows and teaches the method, wherein the host prediction engine was trained based in part using historical workspace creation data (paragraph [0021] … An abstraction of the catalog is the knowledge base 220 which stores semantic representations of the computing resource types, their attributes, taxonomy of their values, and other categorical information about the computing resources, such as actions that are configured to be performed based on the types and the capabilities of the computing resource providers of the computing resources).

As to claim 9, Srikanth et al. (figure 1) shows and teaches the method, wherein the workspace is provisioned, based on the workspace size, in a shared hybrid cloud platform (paragraph [0019] … On-demand computing queries 202 coming from various sources, such as software developers 104a, users 104b, and enterprises 104c, are presented to the cloud organizing system 102. Computing resources 208a-208c of various computing resource providers 108a-108c yield pieces of metadata, among other pieces of information, which are gathered by various pieces of software executing on pieces of cloud organizing system 102).
As to claim 10, Srikanth et al. (figure 1) shows and teaches the method, wherein the workspace is placed in the predicted host and/or datacenter (paragraph [0055] … Computing resource queries to an on-demand computing environment are expressed suitably through identification of the resource type and selected set of resource attributes. These requests are satisfied by matching the desired attributes with existing resources available from current providers. The resource matching is based on the underlying meaning and descriptor of the available resources and their match to the requested resources. The resources matched for a given query can come from multiple providers and from multiple regions. Computing resources are used to perform certain workloads or tasks. Certain workloads require and/or are configured to operate correctly on certain types of computing resources) (Examiner's Note: "Certain workloads require and/or are configured to operate correctly on certain types of computing resources" reads on "the workspace is placed in the predicted host and/or datacenter").

Claim 11 has similar limitations as claim 1. Therefore, the claim is rejected for the same reasons as above.

Claim 13 has similar limitations as claim 3. Therefore, the claim is rejected for the same reasons as above.

Claim 15 has similar limitations as claim 5. Therefore, the claim is rejected for the same reasons as above.

Claim 17 has similar limitations as claim 7. Therefore, the claim is rejected for the same reasons as above.

Claim 19 has similar limitations as claim 9. Therefore, the claim is rejected for the same reasons as above.

Claim 20 has similar limitations as claim 10. Therefore, the claim is rejected for the same reasons as above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Srikanth et al. (US 2012/0198073) in view of Dodsley et al. (US 11,360,844).

As to claim 2, Srikanth et al. teaches the workspace size. Srikanth et al. fails to explicitly show/teach that the workspace size comprises a number of containers, and a respective amount of memory and processing capability for each of the containers. However, Dodsley et al. teaches the workspace size comprising a number of containers, and a respective amount of memory and processing capability for each of the containers (column 53, lines 25-40 … API (404) or plugin (412, 418, 422, 424) for the container storage provider (426) may be intent based, where in addition to supporting a request for a block of data of a specified size, the container storage provider (426) may enable requests specifying a storage resource intent, such as a request for a volume with a level of redundancy.
In some examples, the volume requested may be scaled up in size or down in size automatically without any additional commands, and as such, a request need not specify a volume size). Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, for Srikanth et al.'s workspace size to comprise a number of containers, and a respective amount of memory and processing capability for each of the containers, as in Dodsley et al., for the purpose of balancing work evenly across all resources regardless of client access pattern and maximizing concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking.

Claim 12 has similar limitations as claim 2. Therefore, the claim is rejected for the same reasons as above.

Claims 4, 6, 8, 14, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Srikanth et al. (US 2012/0198073) in view of Mohanty et al. (US 2024/0012667).

As to claim 4, Srikanth et al. teaches the workspace size prediction engine. Srikanth et al. fails to teach that the workspace size prediction engine comprises a deep neural network (DNN)-based multi-output regressor that uses multi-target regression to predict the size of the workspace. However, Mohanty et al. teaches a workspace size prediction engine comprising a deep neural network (DNN)-based multi-output regressor that uses multi-target regression to predict the size of the workspace (paragraph [0057] … to predict microservice instance resource sizing, the resource amount prediction layer 132 leverages historical utilization data (e.g., metrics such as CPU, memory and storage utilization) associated with each microservice instance 106 in its corresponding hosting environment as captured by the monitoring, collection and logging layer 121.
The utilization data and associated timestamps capture, for example, load, volume and seasonality, which are used by the machine learning model to predict future utilization of resources in a hosting instance for a given microservice. In illustrative embodiments, the resource amount prediction layer 132 uses a multi-target regression algorithm to predict the size of each resource for a given microservice. The orchestration engine 140, which may comprise, but is not necessarily limited to, infrastructure orchestration tools like Kubernetes®, Docker Swarm®, Amazon EKS®, Amazon ECS® and PKS®, applies the predicted resource sizes when provisioning new instances of containers, pods and/or VMs). Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, for Srikanth et al.'s workspace size prediction engine to comprise a deep neural network (DNN)-based multi-output regressor that uses multi-target regression to predict the size of the workspace, as in Mohanty et al., for the purpose of deploying in a hybrid cloud infrastructure, which allows for decoupling and reduces dependency, thus enabling each microservice to change and scale independently.

As to claim 6, Mohanty et al. teaches the host prediction engine comprising a deep neural network (DNN)-based multi-output regressor that uses multi-target regression to predict the datacenter and/or host (paragraph [0057] … to predict microservice instance resource sizing, the resource amount prediction layer 132 leverages historical utilization data (e.g., metrics such as CPU, memory and storage utilization) associated with each microservice instance 106 in its corresponding hosting environment as captured by the monitoring, collection and logging layer 121.
The utilization data and associated timestamps capture, for example, load, volume and seasonality, which are used by the machine learning model to predict future utilization of resources in a hosting instance for a given microservice. In illustrative embodiments, the resource amount prediction layer 132 uses a multi-target regression algorithm to predict the size of each resource for a given microservice. The orchestration engine 140, which may comprise, but is not necessarily limited to, infrastructure orchestration tools like Kubernetes®, Docker Swarm®, Amazon EKS®, Amazon ECS® and PKS®, applies the predicted resource sizes when provisioning new instances of containers, pods and/or VMs). It would have been obvious for the host prediction engine to comprise a deep neural network (DNN)-based multi-output regressor that uses multi-target regression to predict the datacenter and/or host, for the same reasons as above. As to claim 8, Mohanty et al teaches the host prediction comprises a DNN-based multi-label classifier (paragraph [0057]… predict microservice instance resource sizing, the resource amount prediction layer 132 leverages historical utilization data (e.g., metrics such as CPU, memory and storage utilization) associated with each microservice instance 106 in its corresponding hosting environment as captured by the monitoring, collection and logging layer 121. The utilization data and associated timestamps capture, for example, load, volume and seasonality, which are used by the machine learning model to predict future utilization of resources in a hosting instance for a given microservice. In illustrative embodiments, the resource amount prediction layer 132 uses a multi-target regression algorithm to predict the size of each resource for a given microservice.
The orchestration engine 140, which may comprise, but is not necessarily limited to, infrastructure orchestration tools like Kubernetes®, Docker Swarm®, Amazon EKS®, Amazon ECS® and PKS®, applies the predicted resource sizes when provisioning new instances of containers, pods and/or VMs). It would have been obvious for the host prediction to comprise a DNN-based multi-label classifier, for the same reasons as above. Claim 14 has similar limitations as claim 4. Therefore, the claim is rejected for the same reasons as above. Claim 16 has similar limitations as claim 6. Therefore, the claim is rejected for the same reasons as above. Claim 18 has similar limitations as claim 8. Therefore, the claim is rejected for the same reasons as above. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON S COLE whose telephone number is (571)270-5075. The examiner can normally be reached Mon - Fri 7:30am - 5pm EST (Alternate Fridays Off). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez, can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /BRANDON S COLE/ Primary Examiner, Art Unit 2128