Prosecution Insights
Last updated: April 19, 2026
Application No. 17/443,839

PROVISIONING COMPUTING RESOURCES ACROSS COMPUTING PLATFORMS
Current round: Non-Final Office Action (§103)

Filed: Jul 28, 2021
Examiner: WU, BENJAMIN C
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nutanix, Inc.
OA Round: 5 (Non-Final)

Predictions
Grant Probability: 87% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 87%, above average (456 granted / 522 resolved; +32.4% vs TC avg)
Interview Lift: strong, +16.4% (resolved cases with vs. without interview)
Avg Prosecution: 3y 0m typical timeline; 29 applications currently pending
Total Applications: 551 (career history, across all art units)

Statute-Specific Performance

§101: 19.8% (-20.2% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 0.8% (-39.2% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)

Comparisons are against the Tech Center average estimate; based on career data from 522 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submissions filed on 10/07/2025 have been entered.

3. Claims 1–2, 4–11, 13–18, 23–24, and 26–52 are pending for examination in the request for continued examination filed on 10/07/2025. Claims 3, 12, 19–22, and 25 are cancelled. Claims 44–52 are NEW.

Examiner Notes
4. The Examiner refers to and explicitly cites particular pages, sections, figures, paragraphs, or columns and lines in the references as applied to Applicant’s claims, to the extent practicable, to streamline prosecution. Although the cited portions of the references are representative of the best teachings in the art and are applied to meet the specific limitations of the claims, other uncited but related teachings of the references may be equally applicable. It is respectfully requested that, in preparing responses to the rejections, Applicant fully consider not only the cited portions of the references but also the references in their entirety, as potentially teaching, suggesting, or rendering obvious one or more aspects of the claimed invention.

Abbreviations
5. Where appropriate, the following abbreviations will be used when referencing Applicant’s submissions and specific teachings of the reference(s):
i. figure / figures: Fig. / Figs.
ii. column / columns: Col. / Cols.
iii. page / pages: p. / pp.

References Cited
6.
(A) Castellanos et al., US 10,511,481 B1 (“Castellanos”).
(B) Schmisseur et al., US 2019/0065231 A1 (“Schmisseur”).
(C) Dasgupta et al., US 2013/0174149 A1 (“Dasgupta”).
(D) Breckenridge et al., US 2012/0191630 A1 (“Breckenridge”).
(E) Evans et al., US 9,921,827 B1 (“Evans”).
Castellanos, Schmisseur, Breckenridge, and Evans were cited in the previous Office action.

Notice re prior art available under both pre-AIA and AIA
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

A.
8. Claims 1, 4–5, 9–10, 13–14, 18, 23, 26–27, 31–32, 34, 36, and 41–49 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Castellanos in view of (B) Schmisseur, and (C) Dasgupta. See “References Cited” section, above, for full citations of references.
9.
Regarding claim 1, (A) Castellanos teaches/suggests the invention substantially as claimed, including: “A method comprising: receiving a performance indicator of a workload executing on one or more processors of a first computing environment” (Col. 7, lines 19–28: performance monitoring 205 may monitor the execution of the application 200 with the resources 140A to generate suitable performance metrics 206 that characterize the execution. The metrics 206 may include one or more processor metrics, one or more memory metrics, and one or more network metrics. The performance metrics 206 may be used by the configuration recommendation service 110 to generate the resource usage characteristics for the application description; Col. 2, lines 33–38: Resource usage characteristics (including computational characteristics) for an application may be determined based on input from a user and/or from performance monitoring of an existing deployment. The resource usage characteristics may relate to anticipated or estimated processor usage, memory usage, storage usage, network usage, and so on; Col. 6, lines 59–60: the client to characterize a workload associated with the application); “identifying one or more comparable workloads [configurations] based on the performance indicator of the workload in the first computing environment” (Col. 10, lines 12–25: the configuration analysis 120 may include nearest neighbor analysis, linear regression analysis, neural network analysis, multi-arm bandit analysis, other suitable types of analysis, and/or any suitable combination thereof. In the nearest neighbor approach, a set of neighboring configurations may be determined in a space comprising the set of potential configurations 113. The potential configurations 113 may be associated with other applications for the same client and/or other clients. 
The neighboring configurations may be associated with ones of the other applications that are similar to the current application, e.g., as determined by their resource usage characteristics; Col. 13, lines 16–44: resource usage characteristics may relate to anticipated or estimated processor usage, memory usage, storage usage, network usage, and so on … resource usage characteristics may be determined based (at least in part) on performance monitoring of an existing deployment of the application in the provider network; Col. 9, lines 13–20: The automated analysis may include scoring at least a portion of the potential configurations 113 using a scoring function 121. The scoring function 121 may determine a score that represents an estimate of the relative quality or fitness of a particular configuration for the particular application, e.g., in light of the resource utilization characteristics 112 of the application; the Examiner notes: identifying or determining (potential) configurations associated with other applications for the same client requires identifying the client’s other applications executing on other computing resources or servers, i.e. identifying one or more comparable workloads (belonging to the same client) based on the performance indicator of the workload); “generating a suggested resource allocation for the workload in a second computing environment based on characteristics of the one or more comparable workloads … the characteristics comprising a number of I/O requests, usage pattern, resource usage, or combinations thereof” (Col. 10, lines 25–33: Scoring the potential configurations may include generating scores for the neighboring configurations based (at least in part) on the scoring function 121. The recommended configuration 115 may represent a particular one of the neighboring configurations associated with a superior score. 
In one embodiment, the neighboring configuration with the best score may be selected as the recommended configuration 115; Col. 7, lines 40–50: The performance metrics 206 may include central processing unit (CPU) metrics per unit of time such as the average CPU usage, the maximum CPU usage, the minimum CPU usage, the standard deviation of CPU usage, the length of sustained CPU usage ( e.g., usage greater than 60% for at least five minutes), the length of idle CPU usage ( e.g., usage less than 5% for at least five minutes), and/or the length of heavy CPU usage ( e.g., usage greater than 90% for at least five minutes); Col. 8, lines 45–49: performance metrics 206 may also include the total number of workloads per hour, per day, and/or per account); and “CREATING, at the second computing environment … the instance of the workload” (Col. 13, lines 5–11: configuration recommendation service may recommend a configuration in a provider network for a new and undeployed application, an application that is already deployed in the provider network, or an application that has been deployed in an external environment; Col. 12, lines 34–38: the deployment to the instances 141A-141M may represent a migration from another set of resources ( e.g., of a different type) in the provider network 100; the other resources may be deprovisioned and returned to a pool of available resources). Castellanos teaches identifying one or more comparable configurations of workloads. 
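To make the cited nearest-neighbor analysis concrete, a minimal sketch follows. It is illustrative only: the metric names, the Euclidean distance, and the scoring function are assumptions, since Castellanos leaves the form of scoring function 121 open.

```python
import math

# Hypothetical resource-usage characteristics for the workload being placed
# (cf. Castellanos Col. 7: processor, memory, and network metrics from
# performance monitoring).
WORKLOAD = {"avg_cpu": 0.72, "mem_gb": 14.0, "net_mbps": 90.0}

# Potential configurations associated with other applications
# (cf. Col. 10, lines 12-25).
CANDIDATES = [
    {"name": "cfg-a", "avg_cpu": 0.70, "mem_gb": 16.0, "net_mbps": 100.0},
    {"name": "cfg-b", "avg_cpu": 0.30, "mem_gb": 4.0, "net_mbps": 20.0},
    {"name": "cfg-c", "avg_cpu": 0.90, "mem_gb": 32.0, "net_mbps": 500.0},
]

def distance(w, c):
    """Euclidean distance over the shared resource-usage metrics (assumed)."""
    keys = [k for k in w if k in c]
    return math.sqrt(sum((w[k] - c[k]) ** 2 for k in keys))

def score(w, c):
    """Toy stand-in for scoring function 121: closer configurations score
    higher. Castellanos does not specify the actual function."""
    return -distance(w, c)

def recommend(workload, candidates, k=2):
    """Nearest-neighbor analysis: restrict to the k neighboring
    configurations, then return the one with the best score (cf. Col. 10)."""
    neighbors = sorted(candidates, key=lambda c: distance(workload, c))[:k]
    return max(neighbors, key=lambda c: score(workload, c))

print(recommend(WORKLOAD, CANDIDATES)["name"])  # cfg-a
```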
Castellanos does not teach “allocating one or more resources for an instance of the workload at the second computing environment” and “creating, at the second computing environment and using the one or more allocated resources associated with the … resource allocation, the instance of the workload.” (B) Schmisseur, in the context of Castellanos’s teachings, however teaches or suggests implementing: “allocating one or more resources for an instance of the workload at the second computing environment” (¶¶ 110–112: to manage the creation, migration, and deletion of VM instances on the compute sleds 1602. To do so, the illustrative VM instance manager 1940 includes a resource identifier 1942 and a migration manager 1944. The resource identifier 1942 is configured to identify which resources to allocate for a particular purpose (e.g., a workload). Such resources may be allocated by type, amount, performance, intended use, etc., and may include network communication resources, storage resources, compute resource … to manage the migration of a VM instance in response to having detected a migration triggering event; ¶¶ 114–115: determines a compute sled 1602 (e.g., one of the compute sled (1) 1602a, the compute sled (2) 1602b, the compute sled (N) 1602c of FIG. 16) on which to launch the VM instance 1616. To do so, in block 2008, the resource manager server 1606 first identifies the available resources of each available compute sled 1602. Additionally, in block 2010, the resource manager server 1606 determines the compute sled to launch the VM instance 1616 based on the determined resources required by the workload and the identified available resources of each available compute sled 1602. ¶ 115: allocates resources of the determined compute sled for use by the VM instance. In block 2014, the resource manager server 1606 allocates a region of memory in a memory pool (e.g., the memory 1612 in the memory pool 1614 of FIG. 16) to be associated with the compute sled 1602. … In block 2020, the resource manager server 1606 creates the VM instance 1616; ¶ 120: create the VM instance on the compute sled; allocate, in response to a determination that the VM instance is to be migrated, a second set of resources of another compute sled of the plurality of compute sleds for the VM instance; migrate the VM instance to the other compute sled; associate the region of memory in the memory pool with the other compute sled; and start-up the VM instance on the other compute sled); “creating, at the second computing environment and using the one or more allocated resources associated with the … resource allocation, the instance of the workload.” (¶ 115: allocates resources of the determined compute sled for use by the VM instance. In block 2014, the resource manager server 1606 allocates a region of memory in a memory pool (e.g., the memory 1612 in the memory pool 1614 of FIG. 16) to be associated with the compute sled 1602. … In block 2020, the resource manager server 1606 creates the VM instance 1616; ¶ 120: create the VM instance on the compute sled; allocate, in response to a determination that the VM instance is to be migrated, a second set of resources of another compute sled of the plurality of compute sleds for the VM instance; migrate the VM instance to the other compute sled; associate the region of memory in the memory pool with the other compute sled; and start-up the VM instance on the other compute sled). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further combine the teachings of (B) Schmisseur with those of Castellanos, to create/deploy a new instance of the workload (via migration) using available, allocated resources of the deployment instance or server, i.e., migration target.
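The placement flow quoted from Schmisseur (blocks 2008–2020: identify each sled’s available resources, choose a sled that satisfies the workload’s requirements, allocate, then create the instance) can be sketched roughly as follows. The sled names, resource fields, and first-fit rule are illustrative assumptions, not Schmisseur’s implementation.

```python
# Hypothetical available resources per compute sled (cf. blocks 2008-2010).
SLEDS = {
    "sled-1": {"cpus": 4, "mem_gb": 8},
    "sled-2": {"cpus": 16, "mem_gb": 64},
}

def pick_sled(required, sleds):
    """Return the first sled whose free resources cover the workload's needs
    (a first-fit assumption; Schmisseur only requires that the choice be
    based on required vs. available resources)."""
    for name, free in sleds.items():
        if all(free.get(k, 0) >= v for k, v in required.items()):
            return name
    raise RuntimeError("no sled can host the workload")

def launch_instance(required, sleds):
    """Allocate resources on the chosen sled, then 'create' the VM instance
    (cf. blocks 2012-2020: allocate sled resources and a memory-pool region,
    then create the VM instance)."""
    sled = pick_sled(required, sleds)
    for k, v in required.items():
        sleds[sled][k] -= v  # reserve the allocation on that sled
    return {"sled": sled, "resources": dict(required)}

vm = launch_instance({"cpus": 8, "mem_gb": 32}, SLEDS)
print(vm["sled"])  # sled-2: the only sled with enough free capacity
```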
The motivation or advantage for this combination is to ensure sufficient capacity, a level of reliability, and resiliency of potential deployment/migration targets before migrating the workload. Castellanos and Schmisseur do not teach “the suggested resource allocation including suggested configurations for one or more additional instances of the workload at the second computing environment.” (C) Dasgupta however teaches or suggests implementing: “the suggested resource allocation including suggested configurations for one or more additional instances of the workload at the second computing environment” (¶ 36: Based upon, at least in part, the determined change in application capacity associated with the determined predicted workload, scaling process may dynamically select a scaling strategy; ¶ 37: to change the resource allocation of one or more virtual machines (e.g., VM1) executing an application (e.g., application a1). Additionally/alternatively, virtualization manager may create one or more new virtual machine instances (e.g., VM2, VM3, VM4) that may execute application a1; Fig. 3 and ¶ 44: assume that initially application a1 is only implemented on a single virtual machine (VM1). Further assume that, based upon a determined predicted workload, scaling process 10 may determine a change in capacity for the predicted workload that is greater than three times the available capacity of an optimally configured virtual machine executing application a1. Scaling process 10 may dynamically select a scaling strategy that may implement a plurality of virtual machines (e.g., VM1, VM2, VM3) having generally equal resource allocation configurations (e.g., which may collectively provide for the majority of the determined change in capacity for the predicted workload)). 
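Dasgupta’s scaling example (a predicted capacity change greater than three times one optimally configured VM’s capacity yields a plurality of equally configured instances) amounts to a ceiling division over per-VM capacity. The sketch below is one illustrative reading, not Dasgupta’s code.

```python
import math

def instances_needed(predicted_capacity, per_vm_capacity):
    """Number of equally configured VM instances needed to cover the
    predicted workload capacity (horizontal scaling, cf. Dasgupta Fig. 3)."""
    return max(1, math.ceil(predicted_capacity / per_vm_capacity))

# Predicted demand is a bit over three times one VM's capacity: scale out.
print(instances_needed(3.2, 1.0))  # 4
# Demand below one VM's capacity still needs a single instance.
print(instances_needed(0.5, 1.0))  # 1
```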
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further combine the teachings of (C) Dasgupta with those of Castellanos and Schmisseur, to provide suggested scaling (expansion) of computing resources for deployment. The motivation or advantage to do so is to provide for the right-sizing of deployed (allocated) resources based on predicted change in workload capacity or demand. (See Castellanos, Col. 13, lines 16–44: resource usage characteristics may relate to anticipated or estimated processor usage, memory usage, storage usage, network usage, and so on … resource usage characteristics may be determined based (at least in part) on performance monitoring of an existing deployment of the application in the provider network). 10. Regarding claim 4, Castellanos and Schmisseur teach or suggest: “terminating the workload at the first computing environment when creating the instance of the workload at the second computing environment” (Castellanos, Col. 13, lines 5–11: configuration recommendation service may recommend a configuration in a provider network for a new and undeployed application, an application that is already deployed in the provider network, or an application that has been deployed in an external environment; Col. 
12, lines 34–38: the deployment to the instances 141A-141M may represent a migration from another set of resources ( e.g., of a different type) in the provider network 100; the other resources may be deprovisioned and returned to a pool of available resources; the Examiner notes: migration involving copying and creating a new instance of the application/workload at a different, destination node and terminating the current instance at the source node; Schmisseur, ¶ 97: instantiating/stopping/starting a VM instance and executing a workload ( e.g., within the VM instance); ¶ 112: upon receipt, the compute sled 1602 can stop the VM instance and initiate a data flush to a mapped region of memory in a memory pool (e.g., the memory 1612 in the memory pool 1614 of FIG. 16).). 11. Regarding claim 5, Castellanos teaches or suggests: “wherein the instance of the workload at the second computing environment is generated in addition to the workload executing at the first computing environment” (Col. 13, lines 5–11: configuration recommendation service may recommend a configuration in a provider network for a new and undeployed application, an application that is already deployed in the provider network, or an application that has been deployed in an external environment). 12. Regarding claim 9, Castellanos and Schmisseur teach or suggest: “wherein the second computing environment is selected from available computing environments” (Castellanos, Col. 3, lines 40–60: configuration analysis 120 may use, as input, an application description 111 and a set of potential configurations 113. Each of the potential configurations 113 may describe one or more types of the computing resources 140 available in the provider network 100 … A recommended configuration 115 may describe one or more types of the computing resources 140 available in the provider network 100; Col. 
12, lines 4–10: additional logic may be automatically applied by the configuration recommendation service 110 or other component of the provider network 100 to approve or deny the recommended configuration 115, e.g., based on cost, availability, and/or other applicable policies; Schmisseur, ¶ 113: the resource manager server 1606 first identifies the available resources of each available compute sled 1602. Additionally, in block 2010, the resource manager server 1606 determines the compute sled to launch the VM instance 1616 based on the determined resources required by the workload and the identified available resources of each available compute sled 1602). 13. Regarding claim 32, Castellanos and Schmisseur teach or suggest: “wherein the suggested resource allocation is a memory allocation, a number and types of processors available allocation, and size of the instance allocation, or combinations thereof” (Castellanos, Col. 3, lines 40–45: The configuration analysis 120 may use, as input, an application description 111 and a set of potential configurations 113. Each of the potential configurations 113 may describe one or more types of the computing resources 140 available in the provider network 100 and, for each of the types of resources, a number (e.g., a defined quantity) of the resources to be used in the configuration; Col. 4, lines 51–58: virtual compute instance may comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size, and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor); Schmisseur, ¶ 110: resource identifier 1942 is configured to identify which resources to allocate for a particular purpose ( e.g., a workload). 
Such resources may be allocated by type, amount, performance, intended use, etc., and may include network communication resources, storage resources, compute resources, etc.). 14. Regarding claim 41, Castellanos and Schmisseur teach or suggest: “wherein said allocating the one or more resources for the instance of the workload at the second computing environment is based on the suggested resource allocation” (Castellanos, Col. 10, lines 25–33: Scoring the potential configurations may include generating scores for the neighboring configurations based (at least in part) on the scoring function 121. The recommended configuration 115 may represent a particular one of the neighboring configurations associated with a superior score. In one embodiment, the neighboring configuration with the best score may be selected as the recommended configuration 115; Schmisseur, ¶¶ 114–115: determines a compute sled 1602 (e.g., one of the compute sled (1) 1602a, the compute sled (2) 1602b, the compute sled (N) 1602c of FIG. 16) on which to launch the VM instance 1616. To do so, in block 2008, the resource manager server 1606 first identifies the available resources of each available compute sled 1602. Additionally, in block 2010, the resource manager server 1606 determines the compute sled to launch the VM instance 1616 based on the determined resources required by the workload and the identified available resources of each available compute sled 1602. ¶ 115: allocates resources of the determined compute sled for use by the VM instance. In block 2014, the resource manager server 1606 allocates a region of memory in a memory pool (e.g., the memory 1612 in the memory pool 1614 of FIG. 16) to be associated with the compute sled 1602. … In block 2020, the resource manager server 1606 creates the VM instance 1616). 15. Regarding claim 44 (new), Castellanos and Dasgupta teach or suggest: “suggested resource allocation is different from the one or more comparable workloads” (Castellanos, Col. 13, lines 16–44: resource usage characteristics may relate to anticipated or estimated processor usage, memory usage, storage usage, network usage, and so on … resource usage characteristics may be determined based (at least in part) on performance monitoring of an existing deployment of the application in the provider network; Dasgupta, ¶ 36; ¶ 37; Fig. 3 and ¶ 44, as applied in rejecting claim 1 above, teaching horizontal scaling of resources based on predicted requirement/demand). 16. Regarding claim 47 (new), Castellanos and Dasgupta teach or suggest: “wherein the second computing environment has a different architecture than the first computing environment” (Castellanos, Col. 3, lines 1–18: The computing resources 140 may include different types of resources, such as computing resources 140A of a first type and computing resources 140B of a second type through computing resources 140N of an Nth type. Any suitable types of computing resources 140 may be provided by the provider network 100; Dasgupta, ¶ 28: An example of servers s1 through sn (e.g., which may include one or more processors and one or more memory architectures; not shown) may include, but is not limited to, a blade server (such as an IBM BladeCenter PS704 Express) or other server computer). 17. Regarding claims 10, 13–14, 18, 34, 42, 45, and 48, they are the corresponding computer program product claims reciting similar limitations of commensurate scope as the method of claims 1, 4–5, 9, 32, 41, 44, and 47. Therefore, they are rejected on the same basis as claims 1, 4–5, 9, 32, 41, 44, and 47 above. 18. Regarding claims 23, 26–27, 31, 36, 43, 46, and 49, they are the corresponding system claims reciting similar limitations of commensurate scope as the method of claims 1, 4–5, 9, 32, 41, 44, and 47. 
Therefore, they are rejected on the same basis as claims 1, 4–5, 9, 32, 41, 44, and 47 above, including the following rationale: Castellanos teaches/suggests: “a central computing system comprising a workload manager” (Fig. 7 and Col. 16: computing device 3000 includes one or more processors 3010A-3010N coupled to a system memory 3020 … configured to store program instructions and data accessible by processor(s)).

B.
19. Claims 2, 6–7, 11, 15–16, 24, and 28–29 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Castellanos in view of (B) Schmisseur, and (C) Dasgupta, as applied to claims 1, 10, and 23 above, and further in view of (D) Breckenridge. 20. Regarding claim 2, Castellanos, Schmisseur, and Dasgupta do not teach “wherein identifying the one or more comparable workloads comprises using a k nearest neighbors model.” (D) Breckenridge however teaches or suggests: “wherein identifying the one or more comparable workloads comprises using a k nearest neighbors model” (¶ 37: Some examples of training functions that can be used to train a static predictive model include (without limitation): regression (e.g., linear regression, logistic regression), classification and regression tree, multivariate adaptive regression spline and other machine learning training functions (e.g., Naive Bayes, k-nearest neighbors …); ¶ 25: The selected trained model executing in the data center 112 receives the prediction request, input data and request for a predictive output, and generates the predictive output 114; ¶ 34: process 400 and system 200 can be used in various different applications. Some examples include (without limitation) making predictions relating to customer sentiment, transaction risk, species identification, … product recommendation). 
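A minimal k-nearest-neighbors lookup of comparable workloads, in the sense the rejection maps to Breckenridge’s list of training functions, might look like the following. The workload names, feature vectors, and choice of k are assumptions for illustration only.

```python
import math

def knn(query, labeled_points, k=3):
    """Return the names of the k workloads closest to the query vector
    in feature space (Euclidean distance)."""
    by_dist = sorted(labeled_points,
                     key=lambda p: math.dist(query, p["features"]))
    return [p["name"] for p in by_dist[:k]]

# Hypothetical feature vectors: (average CPU share, memory GB).
WORKLOADS = [
    {"name": "web-tier",  "features": (0.7, 16.0)},
    {"name": "batch-job", "features": (0.9, 64.0)},
    {"name": "cache",     "features": (0.2, 32.0)},
    {"name": "db",        "features": (0.8, 48.0)},
]

# The two workloads most comparable to a (0.75 CPU, 20 GB) query.
print(knn((0.75, 20.0), WORKLOADS, k=2))  # ['web-tier', 'cache']
```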
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (D) Breckenridge with those of Castellanos, Schmisseur, and Dasgupta, to train static (in addition to dynamic and updateable) models to generate predictive output. The motivation or advantage to do so is to provide multiple types of predictive models for selection and use based on their effectiveness. 21. Regarding claim 6, Castellanos and Breckenridge teach/suggest: “training a provisioning model using performance indicators for a plurality of workloads executing at the first computing environment and the second computing environment, wherein the one or more comparable workloads are identified using the provisioning model and the suggested resource allocation is generated using the provisioning model” (Castellanos, Col. 10, lines 33–46: In the regression model approach, a regression model or neural network may be determined for the set of potential configurations 113 …. For each pair of an application description and a configuration, a score may be calculated using the scoring function 121. These application-configuration pairs may be determined for existing customers and existing configurations of the provider network 100. In this manner, a training set may be generated. Using the training set, the automated analysis may attempt to fit either a logistic regression model or a neural network that learns the mapping from the application description and configuration to the scoring function; Breckenridge, Figs. 4 and 5; ¶ 46: the selected predictive model is fully trained using the training data (e.g., all K partitions) (Step 410), for example, by the model training module 212. 
A trained model (i.e., “fully trained” model) is thereby generated for use in generating predictive output, e.g., trained predictive model 218; ¶ 21: Methods and systems are described that provide a dynamic repository of trained predictive models, at least some of which can be updated as new training data becomes available. A trained predictive model from the dynamic repository can be provided and used to generate a predictive output for a given input. As a particular client entity's training data changes over time, the client entity can be provided access to a trained predictive model that has been trained with training data reflective of the changes. As such, the repository of trained predictive models from which a predictive model can be selected to use to generate a predictive output is “dynamic”, as compared to a repository of trained predictive models that are not updateable). 22. Regarding claim 7, Castellanos and Breckenridge teach/suggest: “updating the provisioning model using the performance indicator of the workload” (Castellanos, Col. 10, lines 33–46: In the regression model approach, a regression model or neural network may be determined for the set of potential configurations 113 …. For each pair of an application description and a configuration, a score may be calculated using the scoring function 121. These application-configuration pairs may be determined for existing customers and existing configurations of the provider network 100. In this manner, a training set may be generated. Using the training set, the automated analysis may attempt to fit either a logistic regression model or a neural network that learns the mapping from the application description and configuration to the scoring function; Breckenridge, Fig. 4 and ¶ 38: multiple predictive models, which can be all or a subset of the available predictive models, are trained using some or all of the training data (Step 404). 
In the example predictive modeling server system 206, a model training module 212 is operable to train the multiple predictive models. The multiple predictive models include one or more updateable predictive models and can include one or more static predictive models; ¶ 36: An updateable predictive model refers to a trained predictive model that was trained using a first set of training data (e.g., initial training data) and that can be used together with a new set of training data and a training function to generate a “retrained” predictive model. The retrained predictive model is effectively the initial trained predictive model updated with the new training data. One or more of the training functions included in the repository 216 can be used to train “static” predictive models). 23. Regarding claims 11 and 15–16, they are the corresponding computer program product claims reciting similar limitations of commensurate scope as the method of claims 2 and 6–7. Therefore, they are rejected on the same basis as claims 2 and 6–7 above. 24. Regarding claims 24 and 28–29, they are the corresponding system claims reciting similar limitations of commensurate scope as the method of claims 2 and 6–7. Therefore, they are rejected on the same basis as claims 2 and 6–7 above.

C.
25. Claims 8, 17, 30, 33, 35, and 37 are rejected under 35 U.S.C. 103 as being unpatentable over (A) Castellanos in view of (B) Schmisseur and (C) Dasgupta, as applied to claims 1, 10, and 23 above, and further in view of (E) Evans. 26. Regarding claim 8, Castellanos, Schmisseur, and Dasgupta do not teach “wherein the one or more comparable workloads are identified based on a fingerprint of the workload and fingerprints of a plurality of workloads.” (E) Evans however teaches or suggests: “wherein the one or more comparable workloads are identified based on a fingerprint of the workload and fingerprints of a plurality of workloads” (Col. 
2, lines 30–40: Each of the applications 103 has its own respective application fingerprint 106 that can function to distinguish one application 103 from another and/or to identify similarities between applications … application fingerprints 106 identifies characteristics relating to device hardware used, software libraries used, and resource consumption. In other examples, additional or different characteristics may be represented by the application fingerprints 106 such as usage or behavioral metrics associated; Col. 4, lines 35–37: The applications 103 may correspond to game applications, email applications, social network applications, mapping applications, and/or any other type of application 103; Col. 5, lines 1–5: The application fingerprint 106 may indicate resource consumption profiles 254 and/or behavioral usage profiles 257; Col. 9, lines 15–26: the application fingerprint 106 may be used in searching for applications 103 that have certain characteristics …. Further, the application fingerprint 106 may be used to determine similarities among applications 103 based upon matching of application fingerprints 106). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of (E) Evans with those of Castellanos, Schmisseur, and Dasgupta, to use an application (workload) fingerprint to compare and identify similar applications. The motivation or advantage to do so is to provide an efficient and quick search of potential configurations of applications with similar resource consumption and hardware/software requirements. 27. Regarding claim 33, Castellanos and Evans teach or suggest: “wherein identifying one or more comparable workloads is further based on using a neural network, a provisioning model, clustering, transfer learning, a workload fingerprint, or combinations thereof” (Castellanos, Col. 
10, lines 12–25: the configuration analysis 120 may include nearest neighbor analysis, linear regression analysis, neural network analysis, multi-arm bandit analysis, other suitable types of analysis, and/or any suitable combination thereof; Col. 14, lines 14–20: different approaches or combinations of approaches may be used in the automated analysis. For example, the automatic analysis may include nearest neighbor analysis, linear regression analysis, neural network analysis, multi-arm bandit analysis, other suitable types of analysis, and/or any suitable combination thereof; Evans, Col. 2, lines 30–40: Each of the applications 103 has its own respective application fingerprint 106 that can function to distinguish one application 103 from another and/or to identify similarities between applications … application fingerprints 106 identifies characteristics relating to device hardware used, software libraries used, and resource consumption. In other examples, additional or different characteristics may be represented by the application fingerprints 106 such as usage or behavioral metrics associated; Col. 5, lines 1–5: The application fingerprint 106 may indicate resource consumption profiles 254 and/or behavioral usage profiles 257; Col. 9, lines 15–26: the application fingerprint 106 may be used in searching for applications 103 that have certain characteristics …. Further, the application fingerprint 106 may be used to determine similarities among applications 103 based upon matching of application fingerprints 106). 28. Regarding claims 17 and 35, they are the corresponding computer program product claims, reciting similar limitations of commensurate scope as the method of claims 8 and 33. Therefore, they are rejected on the same basis as claims 8 and 33 above. 29. Regarding claims 30 and 37, they are the corresponding system claims, reciting similar limitations of commensurate scope as the method of claims 8 and 33. 
Therefore, they are rejected on the same basis as claims 8 and 33 above. Allowable Subject Matter 30. Claims 38–40 and 50–52 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Response to Arguments 31. Applicant’s arguments with respect to the claims have been considered but are moot because the arguments do not apply to any of the newly applied teachings or references being used in the current rejection. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN C WU whose telephone number is (571)270-5906. The examiner can normally be reached Monday through Friday, 8:30 A.M. to 5:00 P.M. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J. Li, can be reached on (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /BENJAMIN C WU/Primary Examiner, Art Unit 2195 November 13, 2025
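The fingerprint-matching technique cited from Evans above (fingerprints capturing resource consumption, compared to find similar applications/workloads) can be illustrated with a short sketch. This is not code from any cited reference: the feature set, function names, similarity measure, and threshold are all illustrative assumptions.

```python
# Illustrative sketch of fingerprint-based workload matching, in the spirit of
# the Evans citation. All names, features, and the threshold are hypothetical.
import math

def fingerprint(cpu, memory, io):
    """Build a simple resource-consumption fingerprint vector."""
    return (cpu, memory, io)

def cosine_similarity(a, b):
    """Cosine similarity between two fingerprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def comparable_workloads(target, candidates, threshold=0.95):
    """Return names of candidate workloads whose fingerprints match the target."""
    return [name for name, fp in candidates.items()
            if cosine_similarity(target, fp) >= threshold]

target = fingerprint(cpu=0.8, memory=0.6, io=0.1)
known = {
    "web-frontend": fingerprint(0.82, 0.58, 0.12),  # similar resource profile
    "batch-etl":    fingerprint(0.10, 0.30, 0.95),  # dissimilar profile
}
print(comparable_workloads(target, known))  # ['web-frontend']
```

In practice a fingerprint could also include hardware/software characteristics and behavioral metrics, as the Evans passages describe; only the comparison step is sketched here.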

Prosecution Timeline

Jul 28, 2021: Application Filed
Jan 26, 2024: Non-Final Rejection — §103
Apr 24, 2024: Examiner Interview Summary
Apr 24, 2024: Applicant Interview (Telephonic)
May 06, 2024: Response Filed
Jul 21, 2024: Final Rejection — §103
Sep 19, 2024: Applicant Interview (Telephonic)
Sep 19, 2024: Examiner Interview Summary
Nov 20, 2024: Request for Continued Examination
Nov 25, 2024: Response after Non-Final Action
Mar 08, 2025: Non-Final Rejection — §103
May 28, 2025: Examiner Interview Summary
May 28, 2025: Applicant Interview (Telephonic)
Jun 13, 2025: Response Filed
Jul 03, 2025: Final Rejection — §103
Aug 25, 2025: Applicant Interview (Telephonic)
Aug 25, 2025: Examiner Interview Summary
Oct 07, 2025: Request for Continued Examination
Oct 14, 2025: Response after Non-Final Action
Nov 13, 2025: Non-Final Rejection — §103
Mar 09, 2026: Examiner Interview Summary
Mar 09, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602258: INSTANTIATING SOFTWARE DEFINED STORAGE NODES ON EDGE INFORMATION HANDLING SYSTEMS (2y 5m to grant; granted Apr 14, 2026)
Patent 12585508: RECONSTRUCTING AND VERIFYING PROPRIETARY CLOUD BASED ON STATE TRANSITION (2y 5m to grant; granted Mar 24, 2026)
Patent 12579006: SYSTEMS AND METHODS FOR UNIVERSAL AUTO-SCALING (2y 5m to grant; granted Mar 17, 2026)
Patent 12572388: COMPUTING RESOURCE SCHEDULING BASED ON EXPECTED CYCLES (2y 5m to grant; granted Mar 10, 2026)
Patent 12566646: Accessing Critical Resource in a Non-Uniform Memory Access (NUMA) System (2y 5m to grant; granted Mar 03, 2026)
Based on this examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 87%
With Interview (+16.4%): 99%
Median Time to Grant: 3y 0m
PTA Risk: High

Based on 522 resolved cases by this examiner. Grant probability derived from career allow rate.
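As a quick sanity check, the headline figures reconcile with the examiner's stated career data (456 granted of 522 resolved; +16.4-point interview lift). Capping the interview-adjusted probability at 99% is an assumption of this sketch, not a rule stated in the report.

```python
# Sanity check of the projection figures from the examiner's career data.
granted, resolved = 456, 522           # stated career totals
allow_rate_pct = granted / resolved * 100
print(round(allow_rate_pct))           # 87, matching the stated grant probability

interview_lift = 16.4                  # stated lift, in percentage points
# Assumed 99% cap on the interview-adjusted probability.
with_interview = min(allow_rate_pct + interview_lift, 99.0)
print(round(with_interview))           # 99
```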
