Prosecution Insights
Last updated: April 19, 2026
Application No. 17/867,181

DETERMINING VIRTUAL MACHINE CONFIGURATION BASED ON APPLICATION SOURCE CODE

Status: Non-Final OA (§103)
Filed: Jul 18, 2022
Examiner: NGUYEN, AN-AN NGOC
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allowance Rate: 83% (5 granted / 6 resolved) — above average, +28.3% vs TC avg
Interview Lift: +50.0% across resolved cases with interview (strong)
Typical Timeline: 3y 5m average prosecution; 34 applications currently pending
Career History: 40 total applications across all art units

Statute-Specific Performance

§101: 20.6% (-19.4% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 6 resolved cases
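The headline figures above follow from simple arithmetic over the examiner's resolved cases. A minimal sketch, assuming the dashboard defines allowance rate as grants over resolved cases, and assuming a Tech Center average near 55% (inferred from the +28.3% delta, not stated directly):

```python
# Illustrative sketch of the dashboard arithmetic; tc_avg is an
# assumption back-derived from the reported "+28.3% vs TC avg" delta.
granted, resolved = 5, 6
allow_rate = granted / resolved      # career allowance rate
tc_avg = 0.55                        # assumed TC 2100 average (not stated)
delta = allow_rate - tc_avg          # spread over the Tech Center baseline

print(f"Career allowance rate: {allow_rate:.1%}")  # 83.3%
print(f"Delta vs TC average:   {delta:+.1%}")      # +28.3%
```

With only six resolved cases, these percentages carry wide error bars, which is worth keeping in mind when reading the statute-specific breakdown below.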

Office Action (§103)
DETAILED ACTION

1. Claims 1-3, 6-15, 17-19, 21, and 24-26 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

2. Applicant's arguments with respect to claim(s) 1-3, 6-15, 17-19, 21, and 24-26 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3. Claims 1-3, 6, 12-15, 17-19, and 24-26 are rejected under 35 U.S.C. 103 as being unpatentable over Tripathi et al. (US 2023/0048653 A1) in view of Coster et al. (US 2020/0133707 A1).

4.
With regard to claim 1, Tripathi teaches:

A computer-implemented method comprising: obtaining a user request for deploying an application on at least one virtual machine, the user request comprising a link to source code of an application ([0037] Computing environment 140 can host, as part of a deployed application, one or more VM, (e.g., defined by a hypervisor-based VM or a container-based VM), one or more agent 268, and/or one or more tool 270.; [0049] In some use cases, a developer user in building an application deployment software code instance can use one or more prior-developed asset deployment software code instance previously referenced in registry 2121 and stored in code area 2122. A user interface displayed to the developer user can make available such software code instances to a user in the development of new application software code instances; [0056] In one embodiment, the deployment package sent at block 1103 can include a computing environment specific application deployment software code instance for installing one or more asset and in one embodiment, the computing environment specific application deployment software code instance can include a computing environment specific application deployment software code for (a) installation of a virtual machine (VM), [...] VMs specified for installation with a deployment package can include, e.g., hypervisor-based VMs or container-based VMs.);

in response to obtaining the user request, retrieving the source code from a code repository based on the link in the user request ([0049] In some use cases, a developer user in building an application deployment software code instance can use one or more prior-developed asset deployment software code instance previously referenced in registry 2121 and stored in code area 2122. A user interface displayed to the developer user can make available such software code instances to a user in the development of new application software code instances; Examiner’s Note: Previously referenced software code instances stored in a registry can be displayed to the user through a UI. Therefore, it is retrieved from the repository and displayed.);

generating an application summary comprising one or more features of the application, wherein the one or more features of the application are determined at least in part by parsing the source code of the application to be deployed on at least one virtual machine to determine one or more features of the application, and wherein the one or more features comprise an application type, wherein the parsing comprises identifying at least one of: (i) one or more keywords and (ii) one or more components in the source code, and wherein the parsing is performed prior to initiating a configuration of the at least one virtual machine for the application ([0058] On determination that deployment was successful at block 1104, orchestrator 110 can proceed to block 1105. At block 1105, orchestrator 110 can run parsing process 111 to parse the deployed application deployment software code instance. At parsing block 1105, orchestrator 110 can run parsing process 111 to parse the application deployment software code instance deployed at block 1101. Parsing at block 1105 can include tokenizing the deployed computing environment specific application deployment software code instance to identify attributes of the instance of the application deployment software code instance, e.g., identifiers, keywords, separators, operators, literals, comments, actions, entities, values, and orders of operation; [0110] FIG. 7 depicts operations of a pattern analyzer defined by parsing process 111 (FIG. 1). Input pattern data defined by application deployment software code can be input into the pattern analyzer defined by parsing process 111 having tokenizing process 111A and tree generating process 111B. The pattern analyzer defined by parsing process 111 can parse the input application deployment software code to identify attributes therein, e.g., tokens such as tokens of a variety of classifications which can include, e.g., identifiers, keywords, separators, operators, literals, comments, actions, entities, values, and orders of operation. From the identified attributes, the pattern analyzer by tree generating process 111B can generate an application deployment semantic tree data structure that specifies operations of an application deployment workflow pattern.);

wherein the method is performed by at least one processing device comprising a processor coupled to a memory ([0007] In a further aspect, a system can be provided. The system can include, for example a memory. In addition, the system can include one or more processor in communication with the memory. Further, the system can include program instructions executable by the one or more processor via the memory to perform a method.).

Tripathi teaches obtaining source code based on a user request for deployment of an application, and parsing the source code in order to obtain one or more features of an application to create an application summary.
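The parsing step Tripathi is cited for (tokenizing deployment code to identify keywords and other attributes, then summarizing them as application features such as an application type) can be sketched roughly as follows. The keyword table and feature names here are illustrative assumptions for the sketch, not taken from Tripathi or from the claims:

```python
import re

# Illustrative keyword table (an assumption for this sketch, not from the cited art).
TYPE_KEYWORDS = {
    "web": ["flask", "django", "http"],
    "batch": ["cron", "schedule"],
}

def summarize(source: str) -> dict:
    """Parse source code into an application summary: tokenize the text,
    then match keywords against known components to infer an application type."""
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source.lower())
    app_type = next(
        (t for t, kws in TYPE_KEYWORDS.items() if any(k in tokens for k in kws)),
        "unknown",
    )
    return {"app_type": app_type, "token_count": len(tokens)}

summary = summarize("from flask import Flask\napp = Flask(__name__)")
print(summary["app_type"])  # web
```

Note that this sketch happens before any VM is configured, which mirrors the claim's requirement that parsing precede initiation of the virtual machine configuration.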
However, Tripathi fails to explicitly teach providing the application summary to at least one machine learning model, wherein the at least one machine learning model is trained based at least in part on historical usage data associated with one or more virtual machines configured for one or more other applications; obtaining, from the at least one machine learning model, one of a plurality of virtual machine configurations for the application; and initiating the configuration of the at least one virtual machine for deploying the application based at least in part on the virtual machine configuration obtained from the at least one machine learning model.

However, in analogous art, Coster teaches:

providing the application summary to at least one machine learning model, wherein the at least one machine learning model is trained based at least in part on historical usage data associated with one or more virtual machines configured for one or more other applications ([0028] Once received, the data obtained by the profiling server monitor can be processed and/or otherwise analyzed by the profiling server monitor in any suitable manner. In many cases, the profiling server monitor leverages a machine learning algorithm or model (e.g., decision trees, state machines, genetic or evolutionary algorithms, machine learning algorithms, support vector machine algorithms, neural networks, Bayesian networks, gradient learning algorithms, and so on) to analyze the data received; [0042] For example, in some embodiments, the analytics engine 112 administers an algorithm configured to monitor power consumption of the physical hardware 106 as a particular workload is executed by a virtual machine of the virtual computing environment. In this example, the analytics engine 112 can track power consumption over time, correlating power usage—as reported by the hypervisor 104 and/or another power-reporting component (not shown), such as a power probe, a power meter, or an electrical utility—to one or more performance characteristics of the workload, such as processor utilization, memory utilization, storage utilization, and so on; [0049] As may be appreciated, business rules can inform and/or otherwise guide the operation of a virtual computing environment—such as the virtual computing environment associated with the system 100 depicted in FIG. 1—and may be used to determine, without limitation: what hypervisor type should be preferred or used given a particular selected workload; what virtual machine image or configuration should be preferred or used given a particular hypervisor type [...] It may be appreciated that this listing of example business rules is not exhaustive; any suitable business rule may be extracted, inferred, or otherwise created, edited, or updated by analyzing (e.g., using a machine learning algorithm or other suitable analysis technique) one or more profiles created or stored by the profiling server monitor 102; [0061] For example, the telemetry ingest module 204 can be communicably coupled to, and thus can receive telemetry and/or diagnostic data from, the hypervisor 206 via an application programming interface. Example data that can be communicated from the hypervisor 206 to the telemetry ingest module 204 includes, but may not be limited to: processor utilization data; memory utilization data; aggregate power consumption data; storage utilization data; and so on; Examiner’s Note: The analytics engine monitors power consumption over time and correlates power usage, which is analogous to the historical usage data. The telemetry data is the summary.);

obtaining, from the at least one machine learning model, one of a plurality of virtual machine configurations for the application ([0050] For example, the profiling server monitor 102 may be able to determine, by comparing two or more profiles, that a first workload executed by a virtual machine supported by a Type 1 hypervisor consumes less power than the same workload executed by the same virtual machine supported by a Type 2 hypervisor. In this example, the profiling server monitor 102 may create a business rule that causes the virtual computing environment to request or prefer a virtual machine supported by a Type 1 hypervisor when the first workload is required to be executed; Examiner’s Note: Based on the telemetry data, business rules are created by the machine learning model in order to obtain a virtual machine supported by a type of hypervisor in order to execute the workload (application).); and

initiating the configuration of the at least one virtual machine for deploying the application based at least in part on the virtual machine configuration obtained from the at least one machine learning model ([0050] In this example, the profiling server monitor 102 may create a business rule that causes the virtual computing environment to request or prefer a virtual machine supported by a Type 1 hypervisor when the first workload is required to be executed.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi with the teachings of Coster by providing the application summary to at least one machine learning model, wherein the at least one machine learning model is trained based at least in part on historical usage data associated with one or more virtual machines configured for one or more other applications; obtaining, from the at least one machine learning model, one of a plurality of virtual machine configurations for the application; and initiating the configuration of the at least one virtual machine for deploying the application based at least in part on the virtual machine configuration obtained from the at least one machine learning model. Tripathi teaches obtaining source code based on a user request for deployment of an application, and parsing the source code in order to obtain one or more features of an application to create an application summary. Similarly, Coster teaches using telemetry data to create business rules that a machine learning model uses to obtain a virtual machine supported by a type of hypervisor in order to execute the workload. The telemetry data can be processor utilization data, memory utilization data, aggregate power consumption data, storage utilization data, and so on ([0061]). The telemetry data is similar to the parsed data from Tripathi, which includes attributes of the instance of the application deployment software code instance, e.g., identifiers, keywords, separators, operators, literals, comments, actions, entities, values, and orders of operation ([0058]). Together, Tripathi and Coster teach parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract).

5. With regard to claim 2, Tripathi further teaches:

wherein the one or more features further comprise at least one of: one or more technology stacks corresponding to the application ([0059] In one embodiment, orchestrator 110 can perform parsing block 1105 using a Java Compiler Compiler (JavaCC) parser. JavaCC is an open-source project released under the BSD license 2.0.
Orchestrator 110 running parsing block 1105 can include orchestrator 110 running parsing process 111 which can include tokenizing process 111A and tree generating process 111B. Orchestrator 110 running parsing block 1105 can also run various other parsing processes, e.g., syntax check processing resulting in rejection of an input computing environment specific application deployment software code instance in the case that syntax rules for the relevant domain are violated.);

a number of components of the application; a type of one or more components of the application ([0058] Parsing at block 1105 can include tokenizing the deployed computing environment specific application deployment software code instance to identify attributes of the instance of the application deployment software code instance, e.g., identifiers, keywords, separators, operators, literals, comments, actions, entities, values, and orders of operation.); and a size of the application.

6. With regard to claim 3, Coster further teaches:

wherein the historical usage data comprises one or more of: memory usage data, storage usage data, computing usage data ([0042] For example, in some embodiments, the analytics engine 112 administers an algorithm configured to monitor power consumption of the physical hardware 106 as a particular workload is executed by a virtual machine of the virtual computing environment. In this example, the analytics engine 112 can track power consumption over time, correlating power usage—as reported by the hypervisor 104 and/or another power-reporting component (not shown), such as a power probe, a power meter, or an electrical utility—to one or more performance characteristics of the workload, such as processor utilization, memory utilization, storage utilization, and so on.), traffic data, and failure data.
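The pipeline the references are combined to reach (an application summary plus a model trained on historical usage data of the kinds just recited, returning one of a plurality of VM configurations) can be sketched as a trivial nearest-neighbour lookup. The feature vectors and configuration labels below are illustrative assumptions, not drawn from either reference:

```python
# Hedged sketch: a minimal nearest-neighbour "model" over historical usage
# records from other applications. Values and labels are assumptions.
HISTORY = [
    # (mem_gb_used, storage_gb_used, cpu_util) -> VM configuration chosen
    ((2.0, 10.0, 0.2), "small"),
    ((8.0, 50.0, 0.6), "medium"),
    ((32.0, 200.0, 0.9), "large"),
]

def predict_vm_config(features):
    """Return the VM configuration whose historical usage profile
    lies closest (squared Euclidean distance) to the new application."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(HISTORY, key=lambda rec: dist(rec[0], features))[1]

print(predict_vm_config((7.0, 40.0, 0.5)))  # medium
```

A real system would train on far richer telemetry; the point of the sketch is only the claimed data flow from summary to model to selected configuration.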
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi with the teachings of Coster wherein the historical usage data comprises one or more of: memory usage data, storage usage data, computing usage data, traffic data, and failure data. Together, Tripathi and Coster teach parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract). Additionally, Coster teaches historical usage data. As a result of this construction, the analytics engine 112 can determine and/or observe how performance of a particular workload executed on a particularly configured virtual machine affects power consumption of the physical hardware 106, as discussed in Coster ([0043]). This helps determine the best virtual machine configuration for an application.

7. With regard to claim 6, Tripathi further teaches:

further comprising: obtaining one or more additional features related to the application, wherein the one or more additional features comprise at least one of: a predicted traffic information corresponding to the application, historical traffic information corresponding to the application, one or more availability requirements of the application ([0034] Computing environment 140, in addition to having computing node stacks 10A-10Z, can include manager 210 that runs availability management process 211. Manager 210 running availability management process 211 can adjust a hosting configuration for a given application to achieve a specified Service Level Agreement (SLA) requirement.
Manager 210 running availability management process 211 can adjust an availability rating for a given application, e.g., by migrating the application to a different computing node stack of computing environment 140, adding instances of the application and/or subtracting instances of the application. In addition, manager 210 of respective ones of computing environments 140A-140Z can be in communication with orchestrator 110, e.g., for sending metrics data to orchestrator 110 and/or for responding to hosting adjusting data from orchestrator 110.), and one or more recovery requirements of the application.

Tripathi fails to explicitly teach wherein the at least one machine learning model is further trained based at least in part on the at least one of the one or more additional features. However, in analogous art, Coster further teaches:

wherein the at least one machine learning model is further trained based at least in part on the at least one of the one or more additional features ([0028] Once received, the data obtained by the profiling server monitor can be processed and/or otherwise analyzed by the profiling server monitor in any suitable manner. In many cases, the profiling server monitor leverages a machine learning algorithm or model (e.g., decision trees, state machines, genetic or evolutionary algorithms, machine learning algorithms, support vector machine algorithms, neural networks, Bayesian networks, gradient learning algorithms, and so on) to analyze the data received; [0042] For example, in some embodiments, the analytics engine 112 administers an algorithm configured to monitor power consumption of the physical hardware 106 as a particular workload is executed by a virtual machine of the virtual computing environment. In this example, the analytics engine 112 can track power consumption over time, correlating power usage—as reported by the hypervisor 104 and/or another power-reporting component (not shown), such as a power probe, a power meter, or an electrical utility—to one or more performance characteristics of the workload, such as processor utilization, memory utilization, storage utilization, and so on; [0049] As may be appreciated, business rules can inform and/or otherwise guide the operation of a virtual computing environment—such as the virtual computing environment associated with the system 100 depicted in FIG. 1—and may be used to determine, without limitation: what hypervisor type should be preferred or used given a particular selected workload; what virtual machine image or configuration should be preferred or used given a particular hypervisor type [...] It may be appreciated that this listing of example business rules is not exhaustive; any suitable business rule may be extracted, inferred, or otherwise created, edited, or updated by analyzing (e.g., using a machine learning algorithm or other suitable analysis technique) one or more profiles created or stored by the profiling server monitor 102.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi with the teachings of Coster wherein the at least one machine learning model is further trained based at least in part on the at least one of the one or more additional features. Tripathi teaches obtaining source code based on a user request for deployment of an application, and parsing the source code in order to obtain one or more features of an application to create an application summary. Additionally, Tripathi also teaches one or more availability requirements of the application: manager 210 running availability management process 211 can adjust a hosting configuration for a given application to achieve a specified Service Level Agreement (SLA) requirement, e.g., by migrating the application to a different computing node stack of computing environment 140, adding instances of the application and/or subtracting instances of the application, and manager 210 of respective ones of computing environments 140A-140Z can be in communication with orchestrator 110, e.g., for sending metrics data and/or responding to hosting adjusting data, as discussed in Tripathi ([0034]). Similarly, Coster teaches using telemetry data to create business rules that a machine learning model uses to obtain a virtual machine supported by a type of hypervisor in order to execute the workload. The telemetry data can be processor utilization data, memory utilization data, aggregate power consumption data, storage utilization data, and so on ([0061]). The telemetry data is similar to the parsed data and/or availability requirements from Tripathi. Together, Tripathi and Coster teach parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract).

8.
With regard to claim 12, Coster further teaches:

further comprising initiating a deployment of the application on the at least one virtual machine configured for the application ([0050] For example, the profiling server monitor 102 may be able to determine, by comparing two or more profiles, that a first workload executed by a virtual machine supported by a Type 1 hypervisor consumes less power than the same workload executed by the same virtual machine supported by a Type 2 hypervisor. In this example, the profiling server monitor 102 may create a business rule that causes the virtual computing environment to request or prefer a virtual machine supported by a Type 1 hypervisor when the first workload is required to be executed; [0058] The hosted hypervisor, identified as the hypervisor 206, is configured to present a virtual hardware interface for one or more guest operating system(s) 208 that, in turn, can each execute one or more computational workloads, such as one or more applications or services 210.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi with the teachings of Coster further comprising initiating a deployment of the application on the at least one virtual machine configured for the application. Tripathi teaches obtaining source code based on a user request for deployment of an application, and parsing the source code in order to obtain one or more features of an application to create an application summary. Similarly, Coster teaches using telemetry data to create business rules that a machine learning model uses to obtain a virtual machine supported by a type of hypervisor in order to execute the workload. The telemetry data can be processor utilization data, memory utilization data, aggregate power consumption data, storage utilization data, and so on ([0061]). The telemetry data is similar to the parsed data from Tripathi, which includes attributes of the instance of the application deployment software code instance, e.g., identifiers, keywords, separators, operators, literals, comments, actions, entities, values, and orders of operation ([0058]). Together, Tripathi and Coster teach parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract). Additionally, it would be obvious to one of ordinary skill in the art that deployment of the application onto the virtual machine is initiated.

9. Regarding claim 13, it is rejected under the same rationale as claim 1 above.

10. Regarding claim 14, it is rejected under the same rationale as claim 2 above.

11. Regarding claim 15, it is rejected under the same rationale as claim 3 above.

12. Regarding claim 17, it is rejected under the same rationale as claim 1 above.

13. Regarding claim 18, it is rejected under the same rationale as claim 2 above.

14. Regarding claim 19, it is rejected under the same rationale as claim 3 above.

15.
With regard to claim 24, Coster further teaches:

wherein the at least one machine learning model is trained by filtering the historical usage data, wherein the filtering comprises: determining whether virtual machine configuration data associated with a given virtual machine configuration satisfies one or more specification conditions ([0050] For example, the profiling server monitor 102 may be able to determine, by comparing two or more profiles, that a first workload executed by a virtual machine supported by a Type 1 hypervisor consumes less power than the same workload executed by the same virtual machine supported by a Type 2 hypervisor.); and

discarding the virtual machine configuration data in response to the virtual machine configuration data failing to satisfy the one or more specification conditions ([0050] For example, the profiling server monitor 102 may be able to determine, by comparing two or more profiles, that a first workload executed by a virtual machine supported by a Type 1 hypervisor consumes less power than the same workload executed by the same virtual machine supported by a Type 2 hypervisor. In this example, the profiling server monitor 102 may create a business rule that causes the virtual computing environment to request or prefer a virtual machine supported by a Type 1 hypervisor when the first workload is required to be executed.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi with the teachings of Coster wherein the at least one machine learning model is trained by filtering the historical usage data, wherein the filtering comprises: determining whether virtual machine configuration data associated with a given virtual machine configuration satisfies one or more specification conditions; and discarding the virtual machine configuration data in response to the virtual machine configuration data failing to satisfy the one or more specification conditions. Tripathi teaches obtaining source code based on a user request for deployment of an application, and parsing the source code in order to obtain one or more features of an application to create an application summary. Similarly, Coster teaches using telemetry data to create business rules that a machine learning model uses to obtain a virtual machine supported by a type of hypervisor in order to execute the workload. The telemetry data can be processor utilization data, memory utilization data, aggregate power consumption data, storage utilization data, and so on ([0061]). The telemetry data is similar to the parsed data from Tripathi, which includes attributes of the instance of the application deployment software code instance, e.g., identifiers, keywords, separators, operators, literals, comments, actions, entities, values, and orders of operation ([0058]). Moreover, Coster teaches discarding VM configurations that do not satisfy a specified condition. Together, Tripathi and Coster teach parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract).

16.
With regard to claim 25, Coster further teaches:

wherein the one or more specification conditions comprise one or more thresholds for at least a portion of a set of statistics corresponding to the virtual machine configuration data ([0024] In addition, in some embodiments, the profiling server monitor can receive or determine statistical metadata associated with the operation of the virtual computing environment. For example, statistical metadata related to power consumption (as one example) can include, but may not be limited to: average power consumption; peak power consumption; peak, average, or root-mean-squared voltage or current; minimum power consumption; power source jitter; standard deviation and/or variation in power consumption within a selected time window; and so on; [0043] As a result of this construction, the analytics engine 112 can determine and/or observe how performance of a particular workload executed on a particularly configured virtual machine affects power consumption of the physical hardware 106. For example, in some cases, the hypervisor 104 may determine a need to allocate more processing power or memory to the workload if the workload begins loading the physical hardware 106 beyond a selected threshold. In this example, the analytics engine 112, having received processor utilization, memory utilization, and/or power consumption data over time from the hypervisor 104 (via the telemetry ingest module 108) can determine a power consumption cost associated with the reallocation of resources performed by the hypervisor 104. An operation such as described in this example, in which a profiling server monitor—such as described herein—determines or characterizes one or more relationships between two or more variables associated with the operation of a virtual computing environment, is referred to herein as a “profiling operation.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi with the teachings of Coster wherein the one or more specification conditions comprise one or more thresholds for at least a portion of a set of statistics corresponding to the virtual machine configuration data. Tripathi teaches of obtaining source code based on a user request for deployment of an application, and parsing the source code in order to obtain one or more features of an application to create an application summary. Similarly, Coster teaches of using telemetry data to create business rules that a machine learning model uses to obtain a virtual machine supported by a type of hypervisor in order to execute the workload. The telemetry data can be processor utilization data; memory utilization data; aggregate power consumption data; storage utilization data; and so on ([0061]). The telemetry data is similar to the parsed data from Tripathi, which includes attributes of the instance of the application deployment software code instance, e.g., identifiers, keywords, separators, operators, literals, comments, actions, entities, values, and orders of operation ([0058]). Moreover, Coster teaches of specification conditions that comprise one or more thresholds for a set of statistics. This set of statistics helps further define for the ML model what VM configuration would work for the application. Together Tripathi and Coster teach of parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract). 17.
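The kind of statistical metadata Coster's paragraph [0024] lists (average, peak, standard deviation of power consumption) and the claimed per-statistic thresholds can be sketched as follows; the sample values and threshold numbers are hypothetical:

```python
# Illustrative sketch with hypothetical telemetry samples and thresholds.
import statistics

def summarize(power_samples):
    """Statistics of the kind Coster [0024] describes for a telemetry window."""
    return {
        "average": statistics.fmean(power_samples),
        "peak": max(power_samples),
        "stdev": statistics.pstdev(power_samples),
    }

def within_thresholds(stats, thresholds):
    """A specification condition: every statistic must stay under its threshold."""
    return all(stats[name] <= limit for name, limit in thresholds.items())

samples = [180.0, 210.0, 195.0, 205.0]  # watts, hypothetical
stats = summarize(samples)
ok = within_thresholds(stats, {"average": 220.0, "peak": 250.0, "stdev": 30.0})
# This window's statistics fall under every threshold, so the associated
# configuration data would be kept rather than discarded.
```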
With regard to claim 26, Coster further teaches: wherein the set of statistics correspond to at least one of: a consumption of memory resources ([0039] As noted with respect to other embodiments described herein, the hypervisor 104 can communicate any suitable information or data to the profiling server monitor 102 including, but not limited to: power consumption data; temperature data; humidity data; acoustic data; visual data (e.g., camera footage); processor utilization data; core utilization data; memory utilization data; storage utilization data; networking data; manufacturer data; virtual machine identifying data; hypervisor identifying data; and so on.); a consumption of storage resources; fluctuations in traffic; a number of failures; data unavailability; and data loss. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi with the teachings of Coster wherein the set of statistics correspond to at least one of: a consumption of memory resources; a consumption of storage resources; fluctuations in traffic; a number of failures; data unavailability; and data loss. Tripathi teaches of obtaining source code based on a user request for deployment of an application, and parsing the source code in order to obtain one or more features of an application to create an application summary. Similarly, Coster teaches of using telemetry data to create business rules that a machine learning model uses to obtain a virtual machine supported by a type of hypervisor in order to execute the workload. The telemetry data can be processor utilization data; memory utilization data; aggregate power consumption data; storage utilization data; and so on ([0061]).
The telemetry data is similar to the parsed data from Tripathi, which includes attributes of the instance of the application deployment software code instance, e.g., identifiers, keywords, separators, operators, literals, comments, actions, entities, values, and orders of operation ([0058]). Moreover, Coster teaches of specification conditions that comprise one or more thresholds for a set of statistics, which includes memory resource consumption and other resource consumption ([0039]). This set of statistics helps further define for the ML model what VM configuration would work for the application. Together Tripathi and Coster teach of parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract). 18. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Tripathi et al. US 20230048653 A1 and Coster et al. US 20200133707 A1, as applied in claim 1, in further view of Sreenivasan et al. US 20200012897 A1. 19. With regard to claim 7, Tripathi and Coster teach the computer-implemented method of claim 1 but fail to explicitly teach wherein the at least one machine learning model is trained using a supervised machine learning technique.
However, in analogous art, Sreenivasan teaches: wherein the at least one machine learning model is trained using a supervised machine learning technique ([0005] Embodiments include a machine learning based recommendation model, including a supervised learning classifier configured to receive input training data that includes a plurality of behavioral determinants, a supervised learning model configured to receive subject input data that includes a plurality of behavior determinants, wherein the supervised learning model outputs a predicted behavior of a subject, and a channel selection module configured to receive the subject input data and the predicted behavior and to determine a recommended communication channel for the subject to follow to achieve the predicted behavior.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi and Coster with the teachings of Sreenivasan wherein the at least one machine learning model is trained using a supervised machine learning technique. Together Tripathi and Coster teach of parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract). Similarly, Sreenivasan teaches of using a supervised machine learning model to determine the recommended communication channel, in this case, for the subject to follow to achieve the predicted behavior (Abstract). In the context of the present claimed invention, the recommended output is a recommended VM, not a communication channel. Therefore, it would be obvious to one of ordinary skill in the art to use a supervised machine learning model to output a recommended VM based on parsed source code and obtained attributes. 20.
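Supervised training in the sense discussed above means fitting a model to labeled examples: parsed application features paired with a known-good VM size. As a minimal sketch (all data hypothetical), a 1-nearest-neighbor classifier stands in for whatever supervised model the claim covers:

```python
# Illustrative sketch; the feature encoding and labels are assumptions.

def train(examples):
    """'Training' for 1-NN just stores the labeled feature vectors."""
    return list(examples)

def predict(model, features):
    """Recommend the VM label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: dist(ex[0], features))[1]

# (feature vector, VM-size label); features might be counts of parsed
# source-code attributes such as keywords or operators.
labeled = [
    ((120, 3), "small"),
    ((950, 40), "large"),
]
model = train(labeled)
choice = predict(model, (900, 35))  # closest to the "large" example
```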
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Tripathi et al. US 20230048653 A1 and Coster et al. US 20200133707 A1, as applied in claim 1, in further view of Vukmirovic et al. US 20240273198 A1. 21. With regard to claim 8, Tripathi and Coster teach the computer-implemented method of claim 1 but fail to explicitly teach wherein the at least one machine learning model comprises at least one of: a boosted gradient model and a random forest of trees model. However, in analogous art, Vukmirovic teaches: wherein the at least one machine learning model comprises at least one of: a boosted gradient model and a random forest of trees model ([0029] The machine learning algorithms used in disclosed example methods are random forest and boosted gradient models; [0063] Ensemble algorithms combine multiple machine algorithms to achieve better performance. Random forests and gradient boosted algorithms are powerful in this category. They are essentially a collection of trees with various degrees of overfitting with averaged results for better performance. Random forests perform by creating multiple decision trees at preparation time and outputting the class. There is no contact or logical contamination between the trees because they exist parallel to each other. There is randomness amongst the trees. The gradient boosted regression tree is another ensemble method that combines multiple decision trees to create a more powerful model. They can be used for regression or classification. Unlike random forest where the trees are parallel, in the gradient boosted algorithms the trees are built in a sequence. Strong pre-pruning is used in these algorithms, without randomization. Very shallow trees are often used, of depth one to five, which is economical in terms of memory and makes predictions faster. ). 
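The sequential shallow-tree behavior Vukmirovic's paragraph [0063] attributes to gradient-boosted trees (each tree built in sequence, very shallow, correcting what came before) can be sketched in a few lines. The data and the squared-error stump fitting below are illustrative assumptions, not the reference's implementation; a production system would use a library ensemble:

```python
# Illustrative gradient-boosting sketch: each stage fits a depth-1 tree
# (a stump) to the residuals of the ensemble so far.

def fit_stump(xs, residuals):
    """Best single split on one feature minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, stages=10, lr=0.5):
    """Build stumps sequentially on residuals; predict with their scaled sum."""
    stumps, preds = [], [0.0] * len(ys)
    for _ in range(stages):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Hypothetical: required memory (GB) as a function of one parsed code metric.
xs = [10, 20, 80, 90]
ys = [2.0, 2.0, 16.0, 16.0]
model = boost(xs, ys)
# model(85) converges toward 16.0 as stages accumulate.
```

A random forest differs, as the quoted paragraph notes, in building its trees in parallel on randomized samples and averaging them, rather than sequentially on residuals.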
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi and Coster with the teachings of Vukmirovic wherein the at least one machine learning model comprises at least one of: a boosted gradient model and a random forest of trees model because they can be trained in unique and useful ways. Together Tripathi and Coster teach of parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract). Similarly, Vukmirovic teaches of training machine learning models with a dataset (Abstract). Specifically, random forest and gradient boosted algorithms are used. They can specialize in different things that are specific to the problem needing to be solved, as discussed in Vukmirovic ([0029]). 22. Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Tripathi et al. US 20230048653 A1 and Coster et al. US 20200133707 A1, as applied in claim 1, in further view of Fontoura et al. US 20190163517 A1. 23. With regard to claim 9, Tripathi and Coster teach the computer-implemented method of claim 1 but fail to explicitly teach further comprising: outputting an indication of the virtual machine configuration obtained from the at least one machine learning model. 
However, in analogous art, Fontoura teaches: further comprising: outputting an indication of the virtual machine configuration obtained from the at least one machine learning model ([0006] Further, a customer may choose to opt-in to automatically execute the request to deploy the VM deployment based on the predicted rightsized deployment configuration, while another customer may simply have the predicted rightsized deployment configuration communicated to them via an interface; Examiner’s Note: A customer is notified of a VM configuration that is obtained from the rightsized deployment configuration that is ready to be deployed. The VM’s configuration is indicated to the customer.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi and Coster with the teachings of Fontoura by outputting an indication of the virtual machine configuration obtained from the at least one machine learning model. Together Tripathi and Coster teach of parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract). Similarly, Fontoura teaches of VM deployment that uses past behaviors and features to help determine deployment (Abstract). Moreover, Fontoura teaches of communicating the predicted VM deployment to a user via an interface. This allows a user to choose whether to opt into the deployment, as discussed in Fontoura ([0006]; [0027]). 24.
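The opt-in flow Fontoura's paragraph [0006] describes, where the predicted configuration is communicated to the user and deployment proceeds only on approval, can be sketched with a hypothetical API (every function name below is an assumption for illustration):

```python
# Illustrative sketch of an approval-gated deployment; names are hypothetical.

def recommend_and_deploy(predicted_config, approve, deploy):
    """Show the predicted configuration; deploy only if the user approves."""
    print(f"Recommended VM configuration: {predicted_config}")
    if approve(predicted_config):
        return deploy(predicted_config)
    return None

deployed = []
result = recommend_and_deploy(
    {"vcpus": 4, "mem_gb": 16},
    approve=lambda cfg: True,                  # stands in for the user opting in
    deploy=lambda cfg: deployed.append(cfg) or "deployed",
)
```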
With regard to claim 10, Fontoura further teaches: wherein the initiating is performed in response to one or more user inputs approving the virtual machine configuration obtained from the at least one machine learning model ([0006] Further, a customer may choose to opt-in to automatically execute the request to deploy the VM deployment based on the predicted rightsized deployment configuration, while another customer may simply have the predicted rightsized deployment configuration communicated to them via an interface.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi and Coster with the teachings of Fontoura wherein the initiating is performed in response to one or more user inputs approving the virtual machine configuration obtained from the at least one machine learning model. Together Tripathi and Coster teach of parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract). Similarly, Fontoura teaches of VM deployment that uses past behaviors and features to help determine deployment (Abstract). Moreover, Fontoura teaches of communicating the predicted VM deployment to a user via an interface. This allows a user to choose whether to opt into the deployment, as discussed in Fontoura ([0006]; [0027]). 25. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Tripathi et al. US 20230048653 A1 and Coster et al. US 20200133707 A1, as applied in claim 1, in further view of Desai US 20200034270 A1. 26.
With regard to claim 11, Tripathi and Coster teach the computer-implemented method of claim 1 but fail to explicitly teach wherein the at least one machine learning model is further trained based at least in part on application criticality data associated with at least one of the one or more other applications. However, in analogous art, Desai teaches: wherein the at least one machine learning model is further trained based at least in part on application criticality data associated with at least one of the one or more other applications ([0036] The failure simulator 204 can run simulations for various admission control policies for a particular cluster of virtual machines and generate a score for each simulation. The score can represent an availability and performance score and expresses the ability of the cluster of virtual machines to successfully failover to other physical host machines within the cloud environment 100 that are allocated to the enterprise. The various simulations can be run using different admission control policy settings for each admission control policy and generate the score for each simulation. The score represents a prediction as to how the cluster of VMs will recover from the simulated physical host machine failure. The admission control policy configuration and score associated with the simulations can be fed into the cluster analyzer 206, which employs a machine-learning process to generate recommendations as to how the admission control policies can be changed to improve the failover capabilities of the VM cluster; [0039] The telemetry training data 211 can also include other parameters that are used as a training data set for a machine learning process. The parameters can include an availability and performance score that was calculated for the deployment [...]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi and Coster with the teachings of Desai wherein the at least one machine learning model is further trained based at least in part on application criticality data associated with at least one of the one or more other applications. Together Tripathi and Coster teach of parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract). Similarly, Desai teaches of training an ML model based on historical usage data in order to predict whether a deployment is configured for optimal failover (Abstract). This helps an administrator of a hyper-converged infrastructure allocate resources in an efficient manner, as discussed in Desai ([0002]). 27. Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Tripathi et al. US 20230048653 A1 and Coster et al. US 20200133707 A1, as applied in claim 1, in further view of Anand et al. US 12007832 B2. 28. With regard to claim 21, Tripathi and Coster teach the computer-implemented method of claim 1 but fail to explicitly teach wherein the at least one machine learning model comprises a plurality of machine learning models, each trained to predict a respective resource parameter for the at least one virtual machine, and wherein the obtaining the one of the plurality of virtual machine configurations from the at least one machine learning model comprises combining outputs of the plurality of machine learning models to determine the virtual machine configuration for the application.
However, in analogous art, Anand teaches: wherein the at least one machine learning model comprises a plurality of machine learning models, each trained to predict a respective resource parameter for the at least one virtual machine, and wherein the obtaining the one of the plurality of virtual machine configurations from the at least one machine learning model comprises combining outputs of the plurality of machine learning models to determine the virtual machine configuration for the application (Claim 1, ... in response to detecting the deviation, predict an anomaly associated with the component using an iterative machine learning method based at least on the data feed of the component and the deviation of current state vector from the normal state vector, wherein the iterative machine learning method uses a plurality of machine learning models to predict the anomaly and iteratively updates training of each of the machine learning models using the data feed received for the component wherein predicting the anomaly associated with the component comprises: generating the plurality of machine learning models, wherein each machine learning model uses a different algorithm to predict an anomaly associated with the component; predicting an anomaly using each of the plurality of machine learning models based at least on the data feed received from the component; comparing results from the predicting the anomaly using each of the plurality of machine learning models; selecting, based on the results, one of the machine learning models having a highest accuracy associated with the prediction of the anomaly; and selecting the anomaly predicted by the selected machine learning model as the predicted anomaly of the component; identify a system configuration needed to run a current workload associated with the component, wherein the current workload includes processing at least one software application; search each of the plurality of cloud infrastructures for a cloud 
instance that can support the identified system configuration; identify based on the search, a cloud instance of a cloud infrastructure that can support the identified system configuration; initiate the identified cloud instance of the cloud infrastructure by creating in the cloud infrastructure a virtual machine corresponding to the identified cloud instance; and switch the current workload from an original system running the current workload to the initiated cloud instance; Examiner’s Note: Multiple machine learning models are used to determine a VM configuration for an application.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tripathi and Coster with the teachings of Anand wherein the at least one machine learning model comprises a plurality of machine learning models, each trained to predict a respective resource parameter for the at least one virtual machine, and wherein the obtaining the one of the plurality of virtual machine configurations from the at least one machine learning model comprises combining outputs of the plurality of machine learning models to determine the virtual machine configuration for the application. Together Tripathi and Coster teach of parsing source code in order to obtain attributes of an application and using those attributes to determine a VM configuration for that application using a machine learning model, which could be beneficial for features such as reducing power consumption of the virtual computing environment, as discussed in Coster (Abstract). Similarly, as discussed in Anand, having multiple machine learning models to determine a VM configuration allows for the system to choose the model with the highest accuracy (Claim 1). This helps output the best VM configuration based on user’s needs. 
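The selection step Anand's claim 1 recites, where several models each make a prediction and the one with the highest accuracy supplies the final answer, can be sketched briefly; the stand-in models and validation data below are hypothetical:

```python
# Illustrative sketch of accuracy-based model selection; data is hypothetical.

def accuracy(model, examples):
    """Fraction of (input, expected) pairs the model predicts correctly."""
    return sum(model(x) == y for x, y in examples) / len(examples)

def select_best(models, validation):
    """Pick the model with the highest validation accuracy."""
    return max(models, key=lambda m: accuracy(m, validation))

# Three stand-in "models" mapping a workload metric to a VM size.
models = [
    lambda x: "small",                           # always small
    lambda x: "large" if x > 50 else "small",    # threshold rule
    lambda x: "large",                           # always large
]
validation = [(10, "small"), (20, "small"), (80, "large"), (95, "large")]

best = select_best(models, validation)  # the threshold rule scores 4/4
choice = best(70)
```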
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN-AN N NGUYEN whose telephone number is (571)272-6147. The examiner can normally be reached Monday-Friday 8:00-5:00 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AIMEE LI can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AN-AN NGOC NGUYEN/Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195

Prosecution Timeline

Jul 18, 2022
Application Filed
Apr 22, 2025
Non-Final Rejection — §103
Aug 08, 2025
Applicant Interview (Telephonic)
Aug 08, 2025
Examiner Interview Summary
Aug 08, 2025
Response Filed
Sep 10, 2025
Final Rejection — §103
Nov 17, 2025
Response after Non-Final Action
Dec 16, 2025
Request for Continued Examination
Jan 02, 2026
Response after Non-Final Action
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561130
MAINTENANCE MODE IN HCI ENVIRONMENT
2y 5m to grant Granted Feb 24, 2026
Patent 12511156
CREDIT-BASED SCHEDULING USING LOAD PREDICTION
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+50.0%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
