Prosecution Insights
Last updated: April 19, 2026
Application No. 18/618,687

CLOUD INSTANCE SIZING AND DEPLOYMENTS USING MACHINE LEARNING

Status: Non-Final OA (§103)
Filed: Mar 27, 2024
Examiner: CHEEMA, UMAR
Art Unit: 2458
Tech Center: 2400 — Computer Networks
Assignee: DELL PRODUCTS, L.P.
OA Round: 3 (Non-Final)

Grant Probability: 66% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 5y 4m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 66% — above average (154 granted / 235 resolved; +7.5% vs TC avg)
Interview Lift: +8.4% for resolved cases with interview (moderate)
Avg Prosecution: 5y 4m typical timeline; 44 applications currently pending
Total Applications: 279 across all art units

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 14.4% (-25.6% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 235 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed on 11/26/2025 in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/19/2025 has been entered. Claims 1-6, 8-14, 16-18 and 20-23 are pending.

Response to Arguments

2. Applicant's arguments are moot in light of the new ground of rejections set forth below.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

6. Claims 1-6, 8, 10-14, 16-18, 20, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Manuel-Devadoss (US 2022/0239567, hereafter “Manuel”) in view of Ingle et al. (US 2024/0403133) and further in view of DOAN HUU (US 2020/0151588).

As to claim 1, Manuel discloses a method comprising:

receiving a request to predict a configuration of a cloud instance in which at least one application is to be executed, wherein the request includes one or more features of the at least one application (figure 5; [0005], “Embodiments configure a cloud system that includes a plurality of cloud services. Embodiments receive a user utterance that includes a natural language and extract at least a first entity from the utterance. Embodiments translate the first entity into a cloud intent definition language entity and receive user feedback in response to presenting the cloud intent definition language entity. Embodiments generate an intent based on the cloud intent definition language entity and the feedback and compile the intent into a cloud services policy to be deployed by the cloud system”, wherein the plurality of cloud services are a plurality of applications to be deployed/executed; also see abstract; [0051], “At 502, the user utterances 75 for configuring cloud system 110 and related services are received.
For example, an input utterance may be “Add storage from FSS to backend for client VM, with Rate less than 5 and 95 kb of file size, and allow FSS only”);

analyzing the one or more features using one or more machine learning algorithms ([0052], “At 504, entities are extracted from the utterances. An intelligent chatbot interface is used to extract the main actions and targets (i.e., entities of a user intent from natural language). One embodiment implements the chatbot interface using the Oracle Digital Assistant, which uses machine learning to identify key aspects in the user's utterances without the need for extensively covering every possible entity value. In this chatbot, examples of entities are the cloud computing, analytics, cost management, compute, containers, database, Internet of Things (“IoT”), identity, storage, content delivery, security compliances, network endpoints, monitoring, logging, and temporal configurations for the policy”);

and predicting, based at least in part on the analyzing, the configuration of a cloud instance in which at least one application is to be executed ([0055], “At 510, the intent is generated and compiled into a cloud services policy. The cloud services policy is according to the destination cloud resources such as computing, network, security, analytics, logging and monitoring, etc. Embodiments also make assertions to verify any conflicts between the extracted intent and the cloud configuration. For example, an intent asking for more virtual machine (“VM”) storage than is available on the required storage generates an assertion to warns the user through the chatbot interface”);

wherein the one or more machine learning algorithms comprise a neural network configured to predict a plurality of targets ([0052], “At 504, entities are extracted from the utterances. An intelligent chatbot interface is used to extract the main actions and targets (i.e., entities of a user intent from natural language). One embodiment implements the chatbot interface using the Oracle Digital Assistant, which uses machine learning to identify key aspects in the user's utterances without the need for extensively covering every possible entity value. In this chatbot, examples of entities are the cloud computing, analytics, cost management, compute, containers, database, Internet of Things (“IoT”), identity, storage, content delivery, security compliances, network endpoints, monitoring, logging, and temporal configurations for the policy”, wherein the machine learning can use neural network, see [0040], “FIG. 4 is a block diagram of a neural sequence-to-sequence learning model 400 that implements intent translator 242 in accordance to embodiments”);

wherein a first target of the plurality of targets is predicted using a classification technique ([0032], “classify named entities… into predefined categories”);

wherein the configuration comprises an amount of utilization for one or more computer resources in connection with execution of the at least one application in the cloud instance (see citation in rejection to the preceding limitations, e.g., virtual machine (“VM”) storage);

and wherein the steps of the method are executed by a processing device operatively coupled to a memory ([0022]).

However, Manuel does not expressly disclose that the configuration comprises one of a plurality of configurations of cloud instances that are collected from a plurality of cloud providers and stored in a repository.
Ingle discloses a concept for a configuration to be one of a plurality of configurations of cloud instances that are collected from a plurality of cloud providers and stored in a repository (abstract; [0040], “the cloud data collector 106 can request and/or direct the cloud-based service(s) 102 to provide information related to: (1) accounts utilizing the cloud-based service(s) 102, (2) at least one configuration of the cloud-based service(s) 102 and/or (3) services of the cloud-based service(s) 102. The request by the cloud data collector 106 to the cloud-based service(s) 102 can be driven by an occurrence of an event or performed on periodic or aperiodic timeframes and/or on a schedule. According to examples disclosed herein, the cloud-based service(s) 102 provide(s) data, requested changes, configuration information and/or updates associated with the cloud-based service(s) 102 to the cloud data collector 106 in response to a query from the cloud data collector 106 or without receiving a query from the cloud data collector 106. In some examples, the aforementioned data and/or updates provided to the cloud data collector 106 can include changes of a configuration of the cloud-based service(s) 102 and/or operational data of the cloud-based service(s) 102.”; [0063], “In examples disclosed herein, a state refers to a configuration of cloud resources. For example, a state can provide information about infrastructure components of cloud resources and provisioning of the cloud resources“; [0083], “a user can select the first state 410 by clicking on the first state 410 to view the corresponding input parameters and infrastructure management code”).

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Manuel with Ingle. The suggestion/motivation of the combination would have been to monitor, select, and/or validate configuration of cloud services (Ingle, abstract; [0040]; [0063]).
Manuel does not expressly disclose wherein remaining targets of the plurality of targets are predicted using a regression technique. DOAN HUU expressly discloses that remaining targets of a plurality of targets are predicted using a regression technique ([0041], “In some embodiments, the modeling algorithms may be declared within the catalog 350 (metadata) in a hierarchical fashion to group together implementations/algorithms which solve the same functional modeling task. In this example, the functional modeling task is the nature of the business problem to be solved, for example, a classification (predict a categorical target), a regression (predict a continuous target), a clustering (group observations based on characteristics similarity), and the like”).

Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Manuel with DOAN HUU. The suggestion/motivation of the combination would have been to group predicted targets in a hierarchical fashion (DOAN HUU, [0041]).

As to claim 14, see similar rejection to claim 1. As to claim 18, see similar rejection to claim 1.

As to claim 2, Manuel discloses the method of claim 1 further comprising selecting, based at least in part on the analyzing, a cloud platform of a plurality of cloud platforms to host the cloud instance ([0057], wherein the selected modeling algorithm based on the generated cloud resource setup is equivalent to a cloud platform).

As to claim 3, Manuel discloses the method of claim 1 wherein the cloud instance comprises one of a container and a virtual machine (see citation in rejection to claim 1, last limitation, “Virtual Machine”).
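The multi-target model recited in claim 1 (a first target predicted with a classification technique, the remaining targets with a regression technique) can be pictured as a shared trunk feeding two kinds of output heads. A minimal, untrained numpy sketch follows; the feature names, the three candidate platforms, and the three resource targets are illustrative assumptions, not taken from the application's actual disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical application features: code size (KLOC), complexity tier,
# interactivity flag, typical execution time (s).
features = np.array([120.0, 2.0, 1.0, 35.0])

# Shared trunk, then two kinds of heads: one classification target
# (which cloud platform) and several regression targets (resource amounts).
W_trunk = rng.normal(size=(8, 4)) * 0.1
W_cls   = rng.normal(size=(3, 8)) * 0.1  # 3 candidate platforms (assumed)
W_reg   = rng.normal(size=(3, 8)) * 0.1  # CPU %, memory GB, disk IOPS (assumed)

h = relu(W_trunk @ features)             # shared hidden representation
platform_probs   = softmax(W_cls @ h)    # classification technique: categorical target
resource_amounts = W_reg @ h             # regression technique: continuous targets
```

The softmax head yields the categorical prediction (which platform), while the linear outputs serve the continuous regression targets (how much of each resource); training would fit both heads jointly against historical runtime data.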
As to claim 4, Manuel discloses the method of claim 1 wherein the configuration comprises an amount for at least one of central processing unit utilization, memory utilization and disk input-output utilization ([0057], “specifying a cloud resource configuration details, extract at least one entity from the utterance, and generate parameters for the cloud resource setup based on the received information”; [0052], “At 504, entities are extracted from the utterances. An intelligent chatbot interface is used to extract the main actions and targets (i.e., entities of a user intent from natural language). One embodiment implements the chatbot interface using the Oracle Digital Assistant, which uses machine learning to identify key aspects in the user's utterances without the need for extensively covering every possible entity value. In this chatbot, examples of entities are the cloud computing, analytics, cost management, compute, containers, database, Internet of Things (“IoT”), identity, storage, content delivery, security compliances, network endpoints, monitoring, logging, and temporal configurations for the policy”, wherein the cloud computing indicates an amount for central processing unit utilization, and wherein the storage indicates an amount for memory utilization. It is to be noted that the claimed limitation is “an amount for” without requiring that the amount represents any of the processing unit utilization or memory utilization). 
As to claim 5, Manuel discloses the method of claim 1 wherein the one or more features identify at least one of a size of code for the at least one application, a language of the code for the at least one application, a complexity tier of the at least one application ([0052], “examples of entities are the cloud computing, analytics, cost management, compute, containers, database, Internet of Things (“IoT”), identity, storage, content delivery, security compliances, network endpoints, monitoring, logging, and temporal configurations for the policy”, wherein the IoT, security compliances, network endpoints, monitoring, logging, can all be considered a complexity tier of the at least one application), an interactivity determination of the at least one application ([0032], “request parental control for their kids' device”) and an execution time of the at least one application.

As to claim 6, Manuel discloses the claimed invention substantially as discussed in claim 1, but does not expressly disclose wherein the at least one application comprises at least one of a micro-frontend application and a microservice application. Ingle discloses a concept for a cloud service application to be a microservice application ([0042]). Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Manuel with Ingle. The suggestion/motivation of the combination would have been to configure cloud-based services (Ingle, [0042]).
As to claim 8, Manuel discloses the method of claim 1, wherein the first target comprises a cloud platform of a plurality of cloud platforms to host the cloud instance, and the remaining targets comprise respective amounts for central processing unit utilization, memory utilization and disk input-output utilization in connection with the execution of the at least one application in the cloud instance ([0057], “specifying a cloud resource configuration details, extract at least one entity from the utterance, and generate parameters for the cloud resource setup based on the received information”; [0052], “At 504, entities are extracted from the utterances. An intelligent chatbot interface is used to extract the main actions and targets (i.e., entities of a user intent from natural language). One embodiment implements the chatbot interface using the Oracle Digital Assistant, which uses machine learning to identify key aspects in the user's utterances without the need for extensively covering every possible entity value. In this chatbot, examples of entities are the cloud computing, analytics, cost management, compute, containers, database, Internet of Things (“IoT”), identity, storage, content delivery, security compliances, network endpoints, monitoring, logging, and temporal configurations for the policy”, wherein the cloud computing including the network endpoints indicates a cloud platform, wherein the compute indicates an amount for central processing unit utilization, wherein the storage indicates an amount for memory utilization, and wherein database and logging indicate disk input-output utilization. It is to be noted that the claimed limitation is “amounts for” without requiring that the respective amounts represent the respective targets).

As to claim 16, see similar rejection to claim 8. As to claim 20, see similar rejection to claim 8.
As to claim 10, Manuel discloses the method of claim 1 further comprising training the one or more machine learning algorithms with historical runtime feature data of a plurality of applications ([0033]-[0036]). As to claim 22, see similar rejection to claim 10.

As to claim 11, Manuel discloses the method of claim 10, wherein the historical runtime feature data specifies for respective ones of the plurality of applications at least one of: (i) a code size; (ii) a code language; (iii) a complexity tier ([0052], “examples of entities are the cloud computing, analytics, cost management, compute, containers, database, Internet of Things (“IoT”), identity, storage, content delivery, security compliances, network endpoints, monitoring, logging, and temporal configurations for the policy”, wherein the IoT, security compliances, network endpoints, monitoring, logging, can all be considered a complexity tier of the at least one application); (iv) an interactivity determination ([0032], “request parental control for their kids' device”); and (v) an execution time. As to claim 23, see similar rejection to claim 11.

As to claim 12, Manuel discloses the method of claim 1 further comprising interfacing with at least one cloud platform of a plurality of cloud platforms to collect one or more runtime metrics corresponding to execution of a plurality of applications in a plurality of cloud instances ([0033]; [0036]; [0045]-[0046]. It is to be noted that the claimed limitation is “corresponding to execution” which does not necessarily require resulting from the execution), wherein the interfacing comprises: generating one or more application programming interfaces based at least in part on one or more cloud platform application programming interfaces used by the at least one cloud platform ([0046], “API”); and invoking the one or more generated application programming interfaces to collect the one or more runtime metrics from the at least one cloud platform ([0046]).
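Claim 12's interfacing step (generating application programming interfaces from a cloud platform's own APIs, then invoking them to collect runtime metrics) could be sketched as a small wrapper generator. Everything below is a hypothetical illustration: the endpoint paths, metric names, and the fetch stub standing in for a real HTTP client are all invented for the example, not drawn from the application or the cited references:

```python
from typing import Callable, Dict

def make_metric_api(endpoints: Dict[str, str],
                    fetch: Callable[[str], float]) -> Dict[str, Callable[[], float]]:
    """Generate one collector callable per platform-specific endpoint.

    The default argument (url=url) freezes each endpoint at generation time,
    so every wrapper invokes its own platform URL.
    """
    return {metric: (lambda url=url: fetch(url)) for metric, url in endpoints.items()}

# Stand-in for a real HTTP call to a platform's monitoring API (assumed values).
fake_fetch = lambda url: {"/v1/cpu": 72.5, "/v1/mem": 4.1}.get(url, 0.0)

# Generate wrappers from the platform's (hypothetical) endpoint map, then
# invoke them to collect runtime metrics usable as training data.
api = make_metric_api({"cpu_util": "/v1/cpu", "mem_gb": "/v1/mem"}, fake_fetch)
metrics = {name: call() for name, call in api.items()}
```

The generated wrappers give the sizing service one uniform calling convention per metric, regardless of how each cloud provider names or shapes its own monitoring API.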
As to claim 17, see similar rejection to claim 12.

As to claim 13, Manuel discloses the method of claim 12 wherein the one or more runtime metrics are used for training the one or more machine learning algorithms ([0033]; [0036]; [0045]-[0046]).

7. Claims 9 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Manuel-Ingle-DOAN HUU, as applied to claim 1 above, and further in view of Banerjee et al. (“Feature Representations Using the Reflected Rectified Linear Unit (RReLU) Activation”).

As to claim 9, Manuel-Ingle discloses the method of claim 1 wherein: the neural network includes a plurality of parallel branches respectively corresponding to the plurality of targets (Manuel, [0040], “RNN decoder 404 receives thought vector 420 as input and generates a sequence of words in the intent definition language at output 425. In embodiments, decoder 404 includes multiple RNNs 430. Multiple RNNs are used in embodiments to improve the learning rates”; figure 6. Multiple outputs corresponding to respective branches. Also see Figure 4), and the respective branches of the plurality of parallel branches comprise at least two hidden layers utilizing an activation function (figure 6, “Hidden”; [0058], “each of which contains an activation function…one or more hidden layers”), but does not expressly disclose that the activation function comprises a rectified linear unit activation function. Banerjee discloses a concept of an activation function being a rectified linear unit activation function (abstract). Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine Manuel-Ingle with Banerjee. The suggestion/motivation of the combination would have been to improve validation accuracy (Banerjee, abstract).

As to claim 21, see similar rejection to claim 9.
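The claim 9 topology (parallel branches, one per target, each branch with at least two hidden layers using a rectified linear unit activation) can likewise be sketched in a few lines of numpy. The layer widths and the four-target split (one platform score plus three resource amounts) are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    """Rectified linear unit activation: max(0, x) elementwise."""
    return np.maximum(0.0, x)

def branch(x, sizes):
    """One parallel branch: ReLU hidden layers per sizes[:-1], linear output."""
    for n_out in sizes[:-1]:
        W = rng.normal(size=(n_out, x.shape[0])) * 0.1
        x = relu(W @ x)                      # hidden layer with ReLU activation
    W = rng.normal(size=(sizes[-1], x.shape[0])) * 0.1
    return W @ x                             # linear output for this target

features = np.array([1.0, 0.5, 2.0, 0.0])    # shared input features (assumed)

# Four parallel branches, one per target; each has two ReLU hidden layers
# (widths 16 and 16, chosen arbitrarily) and a scalar output.
outputs = [branch(features, [16, 16, 1]) for _ in range(4)]
```

Because the branches share only the input, each target's hidden representation is learned independently, which is one common way to realize "a plurality of parallel branches respectively corresponding to the plurality of targets."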
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUA FAN whose telephone number is (571)270-5311. The examiner can normally be reached on 9-6. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema, can be reached at 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUA FAN/
Primary Examiner, Art Unit 2458

Prosecution Timeline

Mar 27, 2024: Application Filed
May 10, 2025: Non-Final Rejection — §103
Aug 13, 2025: Response Filed
Sep 16, 2025: Final Rejection — §103
Nov 19, 2025: Response after Non-Final Action
Nov 26, 2025: Request for Continued Examination
Dec 01, 2025: Response after Non-Final Action
Feb 18, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598113: APPLYING MANAGEMENT CONSTRAINTS DURING NETWORK SLICE DESIGN (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585234: METHOD FOR ASSOCIATING ACTIONS FOR INTERNET OF THINGS, ELECTRONIC DEVICE AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12574801: OPEN RADIO ACCESS NETWORK CLOUD INTELLIGENT CONTROLLER (granted Mar 10, 2026; 2y 5m to grant)
Patent 12568521: SCHEDULING TRANSMISSION METHOD AND APPARATUS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12501491: RACH BASED ON FMCW CHANNEL SOUNDING (granted Dec 16, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66%
With Interview: 74% (+8.4%)
Median Time to Grant: 5y 4m
PTA Risk: High
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
