DETAILED ACTION

This Office action is in response to claims filed 26 October 2023. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner's Note

Claims 4, 8-11, 14, and 18 were not rejected using prior art, but stand rejected under other statutes.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.

Regarding claim 1, in step 1 of the 101 analysis set forth in MPEP 2106, the claim recites a system that auto-scales cloud resources based on an output of a machine learning model. A system is one of the four statutory categories of invention. In step 2A, prong 1 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:

i. “generating a first feature input based on the first cloud architecture processing requirement and the first set of available cloud resources” (a person can mentally generate a feature input by simply evaluating a requirement and a set of resources, and making a judgment of a first feature input (MPEP 2106.04(a))).

ii. “determining, based on the first output, a first cloud architecture pattern for the first set of available cloud resources” (a person can mentally determine an architecture pattern by simply evaluating an output and making a judgment of a pattern (MPEP 2106.04(a))).

iii.
“selecting a subset of cloud resources from the first set of available cloud resources based on the first cloud architecture pattern” (a person can mentally select cloud resources by simply evaluating a set of resources and making a judgment of particular ones (MPEP 2106.04(a))).

iv. “generating a resource schedule for use of the subset of cloud resources based on the first cloud architecture pattern” (a person can mentally generate a schedule for use of resources by simply evaluating resources and making a judgment of a particular schedule for usage (MPEP 2106.04(a))).

v. “auto-scaling use of a first cloud resource of the subset of cloud resources based on the resource schedule” (a person can mentally auto-scale resources by simply evaluating a resource schedule and making a judgment of a scaled amount of resources (MPEP 2106.04(a))).

If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.

In step 2A, prong 2 of the 101 analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:

vi. “a system for optimizing cloud architectures using artificial intelligence models trained on standardized cloud architecture patterns corresponding to specific requirements, the system comprising one or more processors; and one or more non-transitory, computer-readable media having instructions recorded thereon that, when executed by the one or more processors, cause operations comprising” (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).
vii. “receiving a first cloud architecture processing requirement” (insignificant extra-solution activity of mere data gathering (MPEP 2106.05(g))).

viii. “receiving a first set of available cloud resources” (insignificant extra-solution activity of mere data gathering (MPEP 2106.05(g))).

ix. “wherein the first set of available cloud resources comprises virtual machines corresponding to storage volumes, databases, and networking components” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

x. “inputting the first feature input into a first artificial intelligence model to generate a first output, wherein the first artificial intelligence model is trained on historical usage data for cloud resources in known cloud architecture patterns” (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).

xi. “wherein the known cloud architecture patterns comprise respective arrangements of used and unused cloud resources and their interconnectivity” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

xii. “wherein outputs of the first artificial intelligence model comprise recommendations for potential cloud architecture patterns corresponding to inputted cloud architecture processing requirements” (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).

Since the claim does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
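Purely as an illustrative aid, and not as part of the claims or the prosecution record, the sequence of recited operations (the receiving steps of limitations vii-viii followed by limitations i-v above) might be sketched as follows. Every function name, threshold, and value here is hypothetical, since the claim recites no particular implementation:

```python
# Hypothetical sketch of the claimed operation sequence; not from the record.

def generate_feature_input(requirement, available_resources):
    # i. Combine the processing requirement with the available resources.
    return {"requirement": requirement, "resources": sorted(available_resources)}

def model_output(feature_input):
    # Stand-in for the first artificial intelligence model: a fixed rule
    # that maps the requirement to a recommended pattern name.
    return "high-availability" if feature_input["requirement"] >= 100 else "minimal"

def determine_pattern(output):
    # ii. Map the model output to a cloud architecture pattern.
    patterns = {"high-availability": {"vm", "db", "lb"}, "minimal": {"vm"}}
    return patterns[output]

def select_subset(available_resources, pattern):
    # iii. Keep only the available resources called for by the pattern.
    return set(available_resources) & pattern

def generate_schedule(subset):
    # iv. Produce a trivial usage schedule for the selected subset.
    return [(resource, "hour-%d" % hour) for hour, resource in enumerate(sorted(subset))]

def auto_scale(schedule):
    # v. "Scale" each scheduled resource; here, one unit per scheduled slot.
    return {resource: 1 for resource, _ in schedule}

available = ["vm", "db", "cache"]
feature = generate_feature_input(150, available)
subset = select_subset(available, determine_pattern(model_output(feature)))
print(auto_scale(generate_schedule(subset)))  # {'db': 1, 'vm': 1}
```

As the rejection notes, each step reduces to an evaluation and a judgment; the computer components serve only as the environment in which the steps run.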
In step 2B of the 101 analysis set forth in the 2019 PEG, the examiner has determined, through reanalysis of the following limitations considered in step 2A, prong 2, that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

vi. “a system for optimizing cloud architectures using artificial intelligence models trained on standardized cloud architecture patterns corresponding to specific requirements, the system comprising one or more processors; and one or more non-transitory, computer-readable media having instructions recorded thereon that, when executed by the one or more processors, cause operations comprising” (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).

vii. “receiving a first cloud architecture processing requirement” (well-understood, routine, and conventional activity of receiving data over a network (MPEP 2106.05(d)(II))).

viii. “receiving a first set of available cloud resources” (well-understood, routine, and conventional activity of receiving data over a network (MPEP 2106.05(d)(II))).

ix. “wherein the first set of available cloud resources comprises virtual machines corresponding to storage volumes, databases, and networking components” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

x.
“inputting the first feature input into a first artificial intelligence model to generate a first output, wherein the first artificial intelligence model is trained on historical usage data for cloud resources in known cloud architecture patterns” (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).

xi. “wherein the known cloud architecture patterns comprise respective arrangements of used and unused cloud resources and their interconnectivity” (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).

xii. “wherein outputs of the first artificial intelligence model comprise recommendations for potential cloud architecture patterns corresponding to inputted cloud architecture processing requirements” (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).

Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.

Regarding claim 2, it comprises limitations similar to those of claim 1, and is therefore rejected for similar rationale.
Regarding claim 3, the additional elements “receiving a second cloud architecture processing requirement” and “transmitting a second communication” do not render the claim patent eligible because, under step 2A, prong 2, they do not integrate the judicial exception into a practical application (insignificant extra-solution activity of mere data gathering/transmitting (MPEP 2106.05(g))), and under step 2B they do not amount to significantly more than the judicial exception (well-understood, routine, and conventional activity of receiving and transmitting data over a network (MPEP 2106.05(d)(II))). The additional elements “generating a second feature input based on the second cloud architecture processing requirement and the first set of available cloud resources” and “determining, based on the second output, a second cloud architecture pattern for the first set of available cloud resources” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally generate a feature input and determine a pattern by simply evaluating a requirement and a set of resources, making a judgment of a first feature input, and further evaluating an output and making a judgment of a pattern (MPEP 2106.04(a))). The additional element “wherein the second communication causes the first set of available cloud resources to adopt the second cloud architecture pattern” does not render the claim patent eligible because, under step 2A, prong 2, it does not integrate the judicial exception into a practical application (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))), and under step 2B it does not amount to significantly more than the judicial exception (generally links the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h))).
Regarding claim 4, the additional elements “the first feature input is further based on a second taxonomy, wherein the second taxonomy is generated by: determining a first taxonomy for the first set of available cloud resources; determining a standardized taxonomy of the known cloud architecture patterns; and reformatting the first taxonomy based on the standardized taxonomy to generate the second taxonomy” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally determine a taxonomy of a first set of resources and a standardized taxonomy, and make a judgment of a reformatted taxonomy (MPEP 2106.04(a))).

Regarding claim 5, the additional elements “determining the first cloud architecture pattern for the first set of available cloud resources further comprises: selecting a subset of cloud resources from the first set of available cloud resources; and determining a plurality of interconnections between the subset of cloud resources” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally select cloud resources and determine interconnections by simply evaluating cloud resources and interconnections and making a judgment of particular resources and their particular interconnections (MPEP 2106.04(a))).
Regarding claim 6, the additional elements “determining the plurality of interconnections between the subset of cloud resources further comprises: determining a first virtual switch between a first cloud resource of the subset of cloud resources and a second cloud resource of the subset of cloud resources; and managing network traffic through the first virtual switch” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally determine a virtual switch, and manage traffic through the virtual switch, by simply evaluating network topology and making a judgment of a particular virtual switch through which traffic is routed (MPEP 2106.04(a))).

Regarding claim 7, the additional elements “training the first artificial intelligence model on the historical usage data for the cloud resources in the known cloud architecture patterns further comprises: determining a first training frequency; determining to collect additional historical usage data based on the first training frequency” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally determine a training frequency and determine to collect usage data by simply making a judgment of a frequency at which to collect usage data (MPEP 2106.04(a))).
Further, the additional element “retraining the first artificial intelligence model based on the additional historical usage data” does not render the claim patent eligible because, under step 2A, prong 2, it does not integrate the judicial exception into a practical application (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f))), and under step 2B it does not amount to significantly more than the judicial exception (adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f))).

Regarding claim 8, the additional elements “determining the first training frequency further comprises: determining a number of devices in the first set of available cloud resources; determining a required training frequency based on the number of devices; and determining whether the required training frequency corresponds to the first training frequency” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally determine a training frequency by evaluating a number of devices and making a judgment of whether a frequency corresponds to a required frequency (MPEP 2106.04(a))).
Regarding claim 9, the additional elements “determining the first training frequency further comprises: determining a first application for the first set of available cloud resources; determining a required training frequency based on the first application; and determining whether the required training frequency corresponds to the first training frequency” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally determine a training frequency by evaluating an application and making a judgment of whether a frequency corresponds to a required frequency (MPEP 2106.04(a))).

Regarding claim 10, the additional elements “determining the first training frequency further comprises: determining a first reliability requirement for the first set of available cloud resources; determining a required training frequency based on the first reliability requirement; and determining whether the required training frequency corresponds to the first training frequency” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally determine a training frequency by evaluating reliability requirements and making a judgment of whether a frequency corresponds to a required frequency (MPEP 2106.04(a))).
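As an illustrative aid only (thresholds and units below are hypothetical, not drawn from the claims), the determinations recited in claims 8-10 share a common shape: derive a required training frequency from some property of the resource set, then check whether it corresponds to the first training frequency:

```python
# Hypothetical sketch of the common pattern in claims 8-10; not from the record.

def required_frequency_from_devices(num_devices):
    # Claim 8: more devices -> more frequent retraining (runs per day).
    return 4 if num_devices > 100 else 1

def required_frequency_from_reliability(reliability):
    # Claim 10: a stricter reliability requirement -> more frequent retraining.
    return 8 if reliability >= 0.999 else 2

def corresponds(required, first_training_frequency):
    # Final step common to claims 8-10: does the required frequency
    # correspond to (here, not exceed) the first training frequency?
    return first_training_frequency >= required

print(corresponds(required_frequency_from_devices(250), 4))         # True
print(corresponds(required_frequency_from_reliability(0.9999), 4))  # False
```

Each variant is an evaluation of one input followed by a comparison, consistent with the mental-process characterization above.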
Regarding claim 11, the additional elements “determining the first training frequency further comprises: determining an average processing load for the first set of available cloud resources; determining a required training frequency based on the average processing load; and determining whether the required training frequency corresponds to the first training frequency” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally determine a training frequency by evaluating average processing load and making a judgment of whether a frequency corresponds to a required frequency (MPEP 2106.04(a))).

Regarding claim 12, the additional element “retrieving a plurality of artificial intelligence models” does not render the claim patent eligible because, under step 2A, prong 2, it does not integrate the judicial exception into a practical application (insignificant extra-solution activity of mere data gathering (MPEP 2106.05(g))), and under step 2B it does not amount to significantly more than the judicial exception (well-understood, routine, and conventional activity of retrieving data from memory (MPEP 2106.05(d)(II))). Further, the additional elements “determining respective weights of the first cloud architecture processing requirement in training each of the plurality of artificial intelligence models; and selecting the first artificial intelligence model from the plurality of artificial intelligence models based on a respective weight of the first cloud architecture processing requirement in training the first artificial intelligence model” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally determine weights and select an AI model by simply evaluating requirements, making a judgment of weights, and making a judgment to select a particular AI model (MPEP 2106.04(a))).
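As an illustrative aid only (model names and weights are hypothetical), the selection steps recited in claim 12 might be sketched as: determine the weight the processing requirement carried in training each retrieved model, then select the model in which that requirement carried the greatest weight:

```python
# Hypothetical sketch of claim 12's model-selection steps; not from the record.

def select_model(models, requirement):
    # models: name -> {requirement: weight of that requirement in training}
    weights = {name: m.get(requirement, 0.0) for name, m in models.items()}
    # Select the model whose training weighted this requirement most heavily.
    return max(weights, key=weights.get)

models = {
    "model_a": {"throughput": 0.2, "latency": 0.7},
    "model_b": {"throughput": 0.6, "latency": 0.1},
}
print(select_model(models, "throughput"))  # model_b
```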
Regarding claim 13, the additional elements “training the first artificial intelligence model on the historical usage data for the cloud resources in the known cloud architecture patterns further comprises: determining a first validation requirement for the historical usage data; and validating the historical usage data based on the first validation requirement” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally determine a validation requirement and validate data by simply evaluating validation requirements and making a judgment that data is validated (MPEP 2106.04(a))).

Regarding claim 14, the additional elements “determining the first validation requirement further comprises: determining a number of devices in the first set of available cloud resources; and determining the first validation requirement based on the number of devices” do not render the claim patent eligible because, under step 2A, prong 1, they recite a judicial exception (mental process) (a person can mentally determine a number of devices and a validation requirement by simply evaluating a count of devices and making a judgment of a validation requirement (MPEP 2106.04(a))).

Regarding claim 15, it comprises limitations similar to those of claim 1, and is therefore rejected for similar rationale. Regarding claims 16-20, they comprise limitations similar to claims 2-6, and are therefore rejected for similar rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 13, 15-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over FAWCETT et al., Pub. No. US 2022/0075664 A1 (hereafter FAWCETT), in view of ORTIZ et al., Patent No. US 11,681,552 B2 (hereafter ORTIZ).

Regarding claim 1, FAWCETT teaches the invention substantially as claimed, including: A system for optimizing cloud architectures using artificial intelligence models trained on standardized cloud architecture patterns corresponding to specific requirements, the system comprising: one or more processors; and one or more non-transitory, computer-readable media having instructions recorded thereon that, when executed by the one or more processors, cause operations ([0021] The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention) comprising: receiving a first cloud architecture processing requirement ([0071] Processing begins at operation S255 (see FIG. 4), where I/O module (“mod”) 355 (see FIG. 5) receives historical processing element resource allocation information for the stream processing job.
[0073] The historical processing element resource allocation information (also referred to simply as the “historical information”) includes information pertaining to historical allocations of resources to processing elements of the stream processing job, how the processing elements performed under those allocations, and the outputs produced by the stream processing job using those allocations. For example, the historical information may include how many CPU cores were allocated to each processing element, how much random access memory (RAM) was allocated to each processing element, a frequency of input tuples (or input tuple type) for each processing element, how many processor cycles it took for each processing element to process a given number of tuples, a size or utilization percentage of a tuple queue for each processing element, and/or the output streams for each processing element over a given period of time (i.e., historical information includes resources used, or “required,” during previous execution of elements of a stream processing job)); receiving a first set of available cloud resources ([0075] [A] specified objective (i.e., specified objectives represent objectives that are “received”) may be any one or more of a wide variety of possible objectives—whether technical or business in nature.
For example, some possible technical objectives may include: (i) keeping CPU load to a minimum, or under a certain amount, (ii) keeping RAM utilization to a minimum, or under a certain amount, (iii) minimizing an amount of “wait time” for processing elements, and/or (iv) maximizing throughput (for example, tuple outflow rate) (i.e., specifying minimum amounts of CPU load, RAM utilization, and latency (wait time) represents sets of minimum cloud resources that must be available)), wherein the first set of available cloud resources comprises virtual machines corresponding to storage volumes, databases, and networking components ([0068] The processing elements of the stream processing job may be individually or collectively located on respective computing units (such as nodes 10) of cloud computing environment 50, whether physical or virtual. For purposes of this disclosure, the “computing units” (or “units of computing”) can be any computing construct capable of containing processing elements of stream processing jobs and having computing resources (such as CPU cores and memory) allocated to it for the processing of those stream processing jobs. In some embodiments, the computing units are virtual machines. [0053] System memory 28 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. Computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 34 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”) (i.e., memory 28 comprises storage volumes, like RAM 30 and cache 32, as well as storage databases, like storage system 34).
[0055] Computer system/server 12 may also communicate with one or more external devices 14 such as a keyboard, a pointing device, a display 24, etc.; one or more devices that enable a user to interact with computer system/server 12; and/or any devices (e.g., network card, modem, etc. (i.e., virtual machine computing unit is associated with storage, databases, and networking components))); generating a first feature input based on the first cloud architecture processing requirement and the first set of available cloud resources ([0074] Processing proceeds to operation S260 (see FIG. 4), where scoring mod 360 (see FIG. 5) scores the historical resource allocations according to a specified objective. Generally speaking, in this operation, scoring mod 360 analyzes the historical information and scores each historical allocation based on how well the respective allocation meets the specified objective); inputting the first feature input into a first artificial intelligence model to generate a first output, wherein the first artificial intelligence model is trained on historical usage data for cloud resources in known cloud architecture patterns ([0076] Processing proceeds to operation S265 (see FIG. 4), where ML mod 365 (see FIG. 5) trains an ML model using the historical information and the corresponding scores generated by scoring mod 360 (i.e., historical information and scores are used as input into a model to train it). For example, in some cases, ML mod 365 may train the ML model, via backpropagation, by using the historical information as training input and the corresponding scores as training output. In other cases, ML mod 365 may select the historical information that results in the best score for a given set of circumstances, and use just the selected historical information to train the ML model.
For example, the historical information may include sets of CPU core adjustments for each processing element, and ML mod 365 may select the CPU core adjustments that result in the best score for each of a given set of circumstances, such as given sets of tuple queue utilization rates), wherein the known cloud architecture patterns comprise respective arrangements of used and unused cloud resources and their interconnectivity ([0073] For example, the historical information may include how many CPU cores were allocated to each processing element, how much random access memory (RAM) was allocated to each processing element, a frequency of input tuples (or input tuple type) for each processing element, how many processor cycles it took for each processing element to process a given number of tuples, a size or utilization percentage of a tuple queue for each processing element, and/or the output streams for each processing element over a given period of time (i.e., resource utilization percentage over a period of time represents “arrangements of used and unused resources” over a periodic pattern of utilization)), and wherein outputs of the first artificial intelligence model comprise recommendations for potential cloud architecture patterns corresponding to inputted cloud architecture processing requirements ([0079] Resource allocation mod 370 retrieves setup parameters of the stream processing job, provides the setup parameters to the trained ML model, and then receives as output from the trained ML model a recommended allocation of resources (i.e., cloud architecture pattern) for the processing elements of the stream processing job.
[0112] As a result of those inputs, trained ML model 814 generates, as output, a set of optimal adjustment deltas for pod subscriptions of the stream processing application (i.e., machine learning model outputs a set of multiple recommendations))… selecting a subset of cloud resources from the first set of available cloud resources based on the first cloud architecture pattern… and auto-scaling use of a first cloud resource of the subset of cloud resources ([0082] Processing proceeds to operation S275 (see FIG. 4), where job execution mod 375 (see FIG. 5) executes the stream processing job using the allocated resources. In some cases, when the stream processing job is already executing, “executing” in this context simply means executing the stream processing job under the new resource allocations determined in operation S270 (i.e., execution of stream processing jobs is performed according to a “schedule” that defines the stream and uses a selected subset of cloud resources))… While FAWCETT teaches auto-scaling use of cloud resources based on machine learning output, FAWCETT does not explicitly teach: determining, based on the first output, a first cloud architecture pattern for the first set of available cloud resources; generating a resource schedule for use of the subset of cloud resources based on the first cloud architecture pattern; and auto-scaling use of a first cloud resource of the subset of cloud resources based on the resource schedule. However, in analogous art that similarly teaches allocation of resources based on machine learning output, ORTIZ teaches: determining, based on the first output, a first cloud architecture pattern for the first set of available cloud resources; generating a resource schedule for use of the subset of cloud resources based on the first cloud architecture pattern; and auto-scaling use of a first cloud resource of the subset of cloud resources based on the resource schedule ([Column 3, Lines 12-17] Time varying resource pools
may be computing resources, such as memory resources, computational processing resources, or the like. Such memory resources or computational processing resources may be allocated on a recurring basis (e.g., scheduled computing tasks). [Column 24, Lines 48-49] At 316, the processor may conduct operations of a learning model for determining forecasted resource allocations. [Column 30, Line 60-Column 31, Line 1] At 404, the processor may identify one or more recurring resource allocations based on recurring data entries of the time-series data set. In some embodiments, identifying one or more recurring resource allocations may be based on heuristics. In some embodiments, the heuristics may include rules-based pattern recognition operations for identifying recurring resource allocations (e.g., monetary payments, computing resource allocations, etc.) that recur on substantially periodic time-basis. [Column 33, Lines 11-15] Upon detection of a trigger condition, the processor, at 410, may generate data to display, via a user interface, a scaled resource allocation value based on the forecasted resource pool value (i.e., computational processing resources are scaled based on output from a machine learning model according to a “schedule” generated based on a pattern of recurring resource allocations)).
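As an illustrative aid only (the tolerance, data, and function names below are hypothetical and not drawn from ORTIZ), the rules-based recognition of substantially periodic recurring allocations described in the cited passage might be sketched as:

```python
# Hypothetical sketch of heuristic recurring-allocation detection over a
# time-series of (day, amount) entries; not from the record.

def find_recurring_allocations(entries, tolerance=1):
    # Group occurrence days by allocation amount.
    by_amount = {}
    for day, amount in entries:
        by_amount.setdefault(amount, []).append(day)
    recurring = {}
    for amount, days in by_amount.items():
        if len(days) < 3:
            continue  # too few occurrences to call it recurring
        gaps = [b - a for a, b in zip(days, days[1:])]
        # Substantially periodic: gaps between occurrences nearly equal.
        if max(gaps) - min(gaps) <= tolerance:
            recurring[amount] = round(sum(gaps) / len(gaps))
    return recurring  # amount -> approximate period in days

entries = [(1, 16), (8, 16), (15, 16), (3, 64), (20, 64)]
print(find_recurring_allocations(entries))  # {16: 7}
```

The detected period would then drive the kind of schedule-based scaling the rejection maps to "generating a resource schedule."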
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined ORTIZ’s teaching of scaling computational processing resources, based on an output of a machine learning model, according to a schedule based on patterns of resource allocations, with FAWCETT’s teaching of scaling computational processing resources based on output of a machine learning model, to realize, with a reasonable expectation of success, a system that scales computational processing resources based on output of a machine learning model, as in FAWCETT, according to a schedule based on patterns of resource allocations, as in ORTIZ. A person having ordinary skill would have been motivated to make this combination to provide machine learning models with patterns that improve the models’ resource allocation predictions. Regarding claim 2, it comprises limitations similar to those of claim 1 and is therefore rejected under a similar rationale. Regarding claim 3, FAWCETT further teaches: receiving a second cloud architecture processing requirement; generating a second feature input based on the second cloud architecture processing requirement and the first set of available cloud resources; inputting the second feature input into the first artificial intelligence model to generate a second output; determining, based on the second output, a second cloud architecture pattern for the first set of available cloud resources; and transmitting a second communication, wherein the second communication causes the first set of available cloud resources to adopt the second cloud architecture pattern ([0083] Processing proceeds to operation S280 (see FIG. 4), where resource allocation mod 370 (see FIG. 5) reallocates resources during execution of the stream processing job according to changed conditions of the stream processing job (i.e., conditions that have changed since a beginning of the executing of the stream processing job).
In this operation, program 300 iteratively repeats the allocating of operation S270 (i.e., performs a “re-allocating”) using an updated status of the stream processing job, where the updated status is based, at least in part, on the changed conditions (i.e., additional execution of stream processing jobs generates updated processing requirements and cloud resources, which in turn leads to a determination of updated cloud architecture allocation patterns)). Regarding claim 5, FAWCETT further teaches: determining the first cloud architecture pattern for the first set of available cloud resources further comprises: selecting a subset of cloud resources from the first set of available cloud resources; and determining a plurality of interconnections between the subset of cloud resources ([0097] Individual pods in a stream processing job can have high numbers of inter-dependencies. This can be due to the nature of how streams of tuples flow through and get processed by the pods. The dynamic complexity of the interaction and inter-dependencies that exist between the application pods of a stream processing job as tuple workloads are processed makes it difficult to achieve optimal results using more traditional methods. [0098] Some embodiments of the present invention leverage machine learning to optimize job results by managing how CPU usage is dynamically spread across pods to serve each pod's unique processing demands (i.e., dynamically spreading CPU usage across pod resources necessitates determining the inter-dependencies between the pods in the stream processing job)).
Regarding claim 13, ORTIZ further teaches: training the first artificial intelligence model on the historical usage data for the cloud resources in the known cloud architecture patterns further comprises: determining a first validation requirement for the historical usage data; and validating the historical usage data based on the first validation requirement ([Column 24, Lines 25-37] At 310, the processor may allocate a subset of the pre-processed data entries as a training data set and a subset of the pre-processed data entries as a validation data set. The training data set may include data entries for training a learning model. The validation data set may be a portion of the pre-processed transaction data that may be used to provide an unbiased evaluation of the trained model following processing of the training data set. In some examples, the processor may also tune learning model hyper-parameters based on the validation data set. At 322, the processor may determine resource allocation forecasting accuracy based on the validation data set (i.e., training a model includes evaluating the model based on a validation data set, representing a “validation requirement”)). Regarding claim 15, it comprises limitations similar to those of claim 1 and is therefore rejected under a similar rationale. Regarding claims 16, 17, and 19, they comprise limitations similar to those of claims 2, 3, and 5, and are therefore rejected under a similar rationale. Claims 6 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over FAWCETT in view of ORTIZ, as applied to claims 2 and 15 above, and further in view of MANAM et al., Pub. No. US 2014/0140221 A1 (hereafter MANAM).
Regarding claim 6, while FAWCETT and ORTIZ discuss allocating cloud resources, they do not explicitly teach: determining the plurality of interconnections between the subset of cloud resources further comprises: determining a first virtual switch between a first cloud resource of the subset of cloud resources and a second cloud resource of the subset of cloud resources; and managing network traffic through the first virtual switch. However, in analogous art that similarly allocates cloud resources, MANAM teaches: determining the plurality of interconnections between the subset of cloud resources further comprises: determining a first virtual switch between a first cloud resource of the subset of cloud resources and a second cloud resource of the subset of cloud resources; and managing network traffic through the first virtual switch ([0046] Cloud computing resources may be provided to a user on a pay-per-use basis, where users are charged only for the computing resources actually used (e.g., an amount of storage space consumed by a user or a number of virtualized systems instantiated by the user). A user can access any of the resources that reside in the cloud at any time, and from anywhere across the Internet. In context of the present disclosure, a service management component 220 could execute on a physical network switch hosting multiple virtual network switches in a data center for the cloud. Upon determining that a service module connected to the physical switch has become available, the service management component 220 could determine one of the virtual network switches to map the available service module to, and could map the available service module accordingly (i.e., virtual network switches interconnect service module resources according to Fig. 1 and manage access traffic to and from the resources)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined MANAM’s teaching of managing access traffic to and from cloud resources through a determined virtual network switch, with the combination of FAWCETT and ORTIZ’s teaching of allocating cloud resources, to realize, with a reasonable expectation of success, a system that allocates cloud resources, as in FAWCETT and ORTIZ, which are accessed via a determined virtual network switch, as in MANAM. A person having ordinary skill would have been motivated to make this combination to enable a user to access any one of a number of resources via the determined virtual network switch, enabling increased flexibility and control in resource allocation. Regarding claim 20, it comprises limitations similar to those of claim 6 and is therefore rejected under a similar rationale. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over FAWCETT in view of ORTIZ, as applied to claims 2 and 15 above, and further in view of THOMAS et al., Pub. No. US 2020/0411168 A1 (hereafter THOMAS). Regarding claim 7, while FAWCETT and ORTIZ discuss training AI models for use in resource allocation, they do not explicitly teach: training the first artificial intelligence model on the historical usage data for the cloud resources in the known cloud architecture patterns further comprises: determining a first training frequency; determining to collect additional historical usage data based on the first training frequency; and retraining the first artificial intelligence model based on the additional historical usage data.
However, in analogous art that similarly trains AI models for use in resource allocation, THOMAS teaches: training the first artificial intelligence model on the historical usage data for the cloud resources in the known cloud architecture patterns further comprises: determining a first training frequency; determining to collect additional historical usage data based on the first training frequency; and retraining the first artificial intelligence model based on the additional historical usage data ([0058] At 206, the task management model development module 110 can employ the historical state data 204 to build and/or train the one or more demand models 138, the one or more TAT models 140, and/or the one or more staffing models 142. This historical data collection and model training can be a continuous process. In this regard, after initial versions of the one or more demand models 138, the one or more TAT models 140, and/or the one or more staffing models 142 are built, the task management model development module 110 can regularly or continuously collect sequential sets of the dynamic system state data 102 over time and add them to the historical state data 204. The task management model development module 110 can further regularly or continuously employ the updated historical state data 204 to retrain the one or more demand models 138, the one or more TAT models 140, and/or the one or more staffing models 142 to generate updated versions of the respective models).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined THOMAS’s teaching of retraining machine learning models based on updated historical state data, with the combination of FAWCETT and ORTIZ’s teaching of training machine learning models based on historical usage data, to realize, with a reasonable expectation of success, a system that trains a machine learning model on historical usage data, as in FAWCETT and ORTIZ, and then retrains the model as the historical usage data is updated. A person having ordinary skill would have been motivated to make this combination to improve the accuracy of a machine learning model by periodically retraining it with updated data. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over FAWCETT in view of ORTIZ, as applied to claims 2 and 15 above, and further in view of DHAMIJA et al., Pub. No. US 2024/0259931 A1 (hereafter DHAMIJA). Regarding claim 12, while FAWCETT and ORTIZ discuss training AI models for use in resource allocation, they do not explicitly teach: retrieving a plurality of artificial intelligence models; determining respective weights of the first cloud architecture processing requirement in training each of the plurality of artificial intelligence models; and selecting the first artificial intelligence model from the plurality of artificial intelligence models based on a respective weight of the first cloud architecture processing requirement in training the first artificial intelligence model.
However, in analogous art that similarly trains an artificial intelligence model, DHAMIJA teaches: retrieving a plurality of artificial intelligence models; determining respective weights of the first cloud architecture processing requirement in training each of the plurality of artificial intelligence models; and selecting the first artificial intelligence model from the plurality of artificial intelligence models based on a respective weight of the first cloud architecture processing requirement in training the first artificial intelligence model ([0116] The model selection/training module 1006 can select one of the AI/ML models 1010 in any of a variety of ways. In some implementations of the current subject matter, the model selection/training module 1006 can select one of the AI/ML models 1010 at random. In some implementations of the current subject matter, a user (e.g., user 1014) can input initial configuration requirements (e.g., target performance/accuracy desired to achieve, etc.) to the AI/ML 804 via the NSMF 702 (FIG. 8a) or the NSC 706 (FIG. 8b). The model selection/training module 1006 can be configured to select one of the AI/ML models 1010 based on the initial configuration requirements. In implementations in which the AI/ML model repository 1012 includes only one AI/ML model, the model selection/training module 1006 can be configured to select the one of the AI/ML models 1010 without regard to any input initial configuration requirements (i.e., different configuration requirements, including performance or accuracy, are considered (and implied to be “weighted” higher than requirements that are not considered) when selecting an AI/ML model retrieved from a model repository 1012)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined DHAMIJA’s teaching of selecting an AI/ML model for training based on weighted performance requirements, with the combination of FAWCETT and ORTIZ’s teaching of training an AI/ML model, to realize, with a reasonable expectation of success, a system that trains an AI/ML model, as in FAWCETT and ORTIZ, that was selected based on weighted performance requirements, as in DHAMIJA. A person having ordinary skill would have been motivated to make this combination so that an optimal AI/ML model may be selected that better satisfies user requirements. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W AYERS whose telephone number is (571) 272-6420. The examiner can normally be reached M-F 8:30-5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL W AYERS/ Primary Examiner, Art Unit 2195