Prosecution Insights
Last updated: April 19, 2026
Application No. 18/054,923

DYNAMIC AND CASCADING RESOURCE ALLOCATION

Non-Final OA: §103, §112
Filed: Nov 14, 2022
Examiner: AYERS, MICHAEL W
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (200 granted / 287 resolved); +14.7% vs TC avg (above average)
Interview Lift: +56.2%, resolved cases with an interview vs. without (strong)
Typical timeline: 3y 4m average prosecution; 37 applications currently pending
Career history: 324 total applications across all art units
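The headline figures above (career allow rate, interview lift) can be reproduced from per-case records. A minimal sketch, assuming a hypothetical record format with `granted` and `interview` flags; the page's actual data source is not shown:

```python
def allow_rate(cases):
    """Fraction of resolved cases that ended in a grant."""
    if not cases:
        return 0.0
    return sum(1 for c in cases if c["granted"]) / len(cases)

def interview_lift(cases):
    """Relative allow-rate lift for cases that had an examiner interview."""
    with_iv = [c for c in cases if c["interview"]]
    without_iv = [c for c in cases if not c["interview"]]
    base = allow_rate(without_iv)
    if base == 0.0:
        return 0.0
    return (allow_rate(with_iv) - base) / base

# Toy data only; the real figures come from the examiner's 287 resolved cases.
cases = (
    [{"granted": True, "interview": True}] * 4
    + [{"granted": True, "interview": False}] * 3
    + [{"granted": False, "interview": False}] * 3
)
print(f"allow rate: {allow_rate(cases):.0%}")        # 7 of 10 granted
print(f"interview lift: {interview_lift(cases):+.0%}")
```

With the toy data, 4 of 4 interviewed cases and 3 of 6 non-interviewed cases were granted, so the lift is the relative gap between those two rates.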

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 2.9% (-37.1% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)
Deltas computed against a Tech Center average estimate • Based on career data from 287 resolved cases
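Each "vs TC avg" delta above is a simple difference between the examiner's per-statute rate and the Tech Center average. A sketch of that arithmetic; the 40.0% baselines are back-derived from the displayed deltas (every one is consistent with a 40.0% average), not published figures:

```python
# Examiner per-statute rates from the table above (percent).
examiner = {"101": 14.8, "103": 47.3, "102": 2.9, "112": 25.6}

# Assumed Tech Center averages, back-derived from the displayed deltas.
tc_avg = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

# Delta = examiner rate minus TC average, rounded to one decimal.
deltas = {s: round(examiner[s] - tc_avg[s], 1) for s in examiner}
for statute, d in deltas.items():
    print(f"§{statute}: {d:+.1f}% vs TC avg")
```

Running this reproduces the four deltas shown in the table (-25.2, +7.3, -37.1, -14.4).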

Office Action

§103, §112
DETAILED ACTION

This office action is in response to claims filed 14 November 2022. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-8, 10, and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 5, in lines 1-2, it is not particularly pointed out or distinctly claimed what is meant by “analyzing the relationships between said resources and any internal and external clients”, as the claims only mention relationships between resources (claim 1, line 6), and not relationships between resources and internal or external clients. For examination purposes, the examiner will interpret claim 5 as reciting the relationships between resources, and further relationships between the resources and clients.

Regarding claim 6, a.
In lines 1-2, it is not particularly pointed out or distinctly claimed what is meant by “analyzing the relationships between said resources and any internal and external clients”, as the claims only mention relationships between resources (claim 1, line 6), and not relationships between resources and internal or external clients. For examination purposes, the examiner will interpret claim 5 as reciting the relationships between resources, and further relationships between the resources and clients.

Regarding claims 7, and 16 (line numbers correspond to claim 7), a. In lines 3-4, it is not particularly pointed out or distinctly claimed what is meant by “remove at least one of said resources”, because it is not clear where the resource is being removed from. Further, it is unclear as to whether “remove” a resource relates to “release” a resource as described in claim 1 (line 13).

Regarding claim 8, a. In line 2, it is not particularly pointed out or distinctly claimed what is meant by “provide an estimated request”. For examination purposes, the examiner will interpret this as providing an estimated time.

Regarding claim 10, a. In line 1, it is not particularly pointed out or distinctly claimed what is meant by “wherein metrics also include…”. For examination purposes, the examiner will interpret this as discussing “said metrics”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 12-14, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over RANJAN et al. Pub. No.: US 2021/0232440 A1 (hereafter RANJAN), in view of BLAGODUROV et al. Pub. No.: US 2020/0379814 A1 (hereafter BLAGODUROV).

Regarding claim 1, RANJAN teaches the invention substantially as claimed, comprising: A method for providing dynamic resource allocation in a computing environment, comprising: determining a database access pattern by analyzing and monitoring traffic pattern between a pool of resources and a plurality of clients ([0010] An example of a cluster (i.e., “pool of resources”) is a cluster that operates a serverless computing environment, in which custom application code is run in managed and ephemeral containers on a Functions-as-a-Service (FaaS) platform (i.e., FaaS provide cluster resources to plural customers, or clients)) during a performance of at least one task ([0011] A function (i.e., “task”) may be executed upon invocation, for example, in response to a service request. The execution of the function may involve consumption of computing resources of the cluster.
[0001] An example function is a search function that is to search for an item in a database (i.e., execution of a function, representing a database access task, results in database access “traffic”)), wherein said traffic pattern includes accessing at least one database; determining any relationships between each of said resources ([0047] To determine the resource availability information, a cluster, such as the first cluster 102 (i.e., resources within a same cluster are “related” while those not within the same cluster are not “related”), may forecast its resource consumption for a predetermined future time duration, such as one hour…Such a forecast may be performed based on an analysis of a historical workload pattern of the cluster (i.e., workloads due to database access function execution are “monitored” to produce a historical workload “pattern” utilizing related resources within a same cluster)); enabling an access to a plurality of said resources in said pool based on said database access pattern and said resource relationship, wherein said plurality of resources in said pool can be accessed but do not have to be allocated until a processing of a request ([0046] To facilitate sharing of the workload, in an example, the community manager 302 may monitor resources available with each cluster of the community and may store such information as a record 306. The information of resources available with a cluster may be referred to as resource availability information of the cluster. The resource availability information may be published by each cluster to the community manager 302 (i.e., available cluster resources are resource that are “enabled” for access and are waiting for allocation to a request))…and upon receiving of a subsequent request for processing ([0065] At block 402, an identification request may be received from a requesting cluster, such as the first cluster 102. 
The identification request may be for identification of a cluster that can spare ‘X’ number of computing units. Here, ‘X’ may be the first number if the identification request corresponds to the first service request 216 alone)…predict any resource needs so as to dynamically allocate, reallocate and release said plurality of resources in a cascading manner until a completion of said subsequent request ([0066] In response to the request, at block 404, clusters of the community that can spare the ‘X’ number of computing units may be identified. Such an identification may be performed based on the resource availability information published by each cluster, as explained earlier. If it is identified that a single cluster alone can spare the ‘X’ number of computing units, at block 408, such a cluster may be selected as the lender cluster. Further, an indication of the lender cluster may be provided to the requesting cluster. If, at block 406, it is identified that multiple clusters can spare the ‘X’ number of computing units, at block 410, one cluster is selected from among the multiple clusters (i.e., clusters predicted to have resources that are capable of supporting the received service request are dynamically selected, or allocated to the service request). [0032] A container hosting the function may be instantiated in a node of a cluster when the function is to be executed and may be terminated when the output of the execution is completed (i.e., resources of a container are terminated, or “released” for “reallocation” to other functions to be executed. Further, this technique of allocating, terminating, and reallocating, may be considered a “cascading manner”)). 
While RANJAN discusses predicting subsequent resource needs based on traffic pattern analysis, it does not explicitly teach: generating a consumption model to predict resource needs based on said resource relationships, a traffic pattern and an availability of said plurality of resources; and using said consumption model to predict any resource needs. However, in analogous art that similarly predicts future resource usage, BLAGODUROV teaches: generating a consumption model to predict resource needs based on said resource relationships, a traffic pattern and an availability of said plurality of resources; and using said consumption model to predict any resource needs ([0010] A generative adversarial network generates predicted resource utilization. An orchestrator trains the generative adversarial network and provides the predicted resource utilization from the generative adversarial network to a resource scheduler for usage when the quality of the predicted resource utilization is above a threshold (i.e., training a generative adversarial network on resource consumption data “generates” a machine learning “consumption model”). [0015] To predict future resource usage patterns based on past resource usage patterns, the resource usage prediction system 101 includes a discriminator 102 and a generator 104 that together implement a generative adversarial network… The role of the generator 104 is to generate output content given input content. The input content is real resource utilization for a time period 1, while the output content is predicted resource utilization for a subsequent time period, time period 2).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined BLAGODUROV’s teaching of predicting future resource usage patterns based on output of a generated consumption model, with RANJAN’s teaching of predicting future resource usage patterns, to realize, with a reasonable expectation of success, a system that predicts future resource usage patterns, as in RANJAN, using a trained generative adversarial network, as in BLAGODUROV. A person having ordinary skill would have been motivated to make this combination so that more accurate resource allocation can be made to improve performance of a computer system (BLAGODUROV [0001]).

Regarding claim 2, RANJAN further teaches: upon determining that said plurality of resources are not sufficient ([0068] If, at block 404, it is determined that there is no single cluster that can spare the ‘X’ number of computing units, at block 411, it is determined if computing units can be obtained from a group of clusters (i.e., there is no cluster having “sufficient” resources to fulfil the request)), at least one additional resource is added and enabled from said resource pool ([0069] If it is possible to obtain the computing units from a group of clusters, at block 413, it is determined if more than one cluster can collectively spare the ‘X’ number of computing units. For instance, it may be checked if the first number of computing units can be spared by the second cluster 202 and the second number of computing units can be spared by the third cluster 204 or vice versa. If yes, at block 414, the clusters from which the computing units are to be obtained may be selected and computing units may be obtained (i.e., at least one resource from a different cluster is added to a “pool” comprising the first cluster of the group of clusters)).
Regarding claim 3, RANJAN further teaches: other resources are enabled and added that have not been part of said resource pool and when no resource availability is found, an alert is generated so that a processing request cannot be performed due to resource availability ([0068] If it is not possible to obtain the computing units from the group of clusters, at block 412, the selection process ends, and the first cluster 102 is notified (i.e., generating an “alert”) that workload sharing is not possible (i.e. other clusters represent other “resources” that are enabled and are able to be added to a group of clusters representing said “resource pool”, but which together are unable to satisfy the computing request, and which results in a notification being generated)).

Regarding claims 12-14, and 17-18, they comprise limitations similar to those of claims 1-3, and are therefore rejected for similar rationale.

Claims 4-6, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over RANJAN, in view of BLAGODUROV, as applied to claims 1, 12, and 17 above, and in further view of KULACK et al. Pub. No.: US 2012/0310996 A1 (hereafter KULACK).

Regarding claim 4, while RANJAN and BLAGODUROV discuss accessing a database in response to a request, they do not explicitly teach: a plurality of databases is used by said computing environment. However, in analogous art that similarly teaches accessing a database in response to a request, KULACK teaches: a plurality of databases is used by said computing environment ([0004] The method, computer program product and system include analyzing the first database to determine a first set of structural characteristics of the first database.
The method, computer program product and system also include analyzing a second database to determine a second set of structural characteristics of the second database, wherein the second database is associated with a second data abstraction model (i.e., system uses a plurality of databases to create data abstraction models)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined KULACK’s teaching of executing queries to access multiple databases, with the combination of RANJAN and BLAGODUROV’s teaching of executing database access operations, to realize, with a reasonable expectation of success, a system that executes database access operations, as in RANJAN and BLAGODUROV, to multiple different databases, as in KULACK. A person of ordinary skill would have been motivated to make this combination to enable access to multiple databases to improve data locality or resilience.

Regarding claim 5, KULACK further teaches: analyzing the relationships between said resources and any internal and external clients includes at least one of identifying the relationships of same resources among components within a database or identifying the relationships of same resources among components across one or more databases ([0030] Generally, the database analysis component 135 may analyze the database 130.sub.1 to determine a first set of structural characteristics for the database 130.sub.1. These characteristics may include information related to the structure of the database, such as the tables contained in the database and the structure of those tables (i.e., each table represents a same type of “resource” that are also “components” of a database). The characteristics may further include information on the data contained in the tables.
For instance, the database analysis component 135 may analyze the database 130.sub.1 and determine that one column of data conforms to a particular industry standard. The database analysis component 135 may also examine relationships between the tables in the database (i.e., relationships between tables represents “relationships between resources”). One example of such a relationship would be if a first table of the database 130.sub.1 contains references to a second table of the database 130.sub.1 (e.g., a foreign key). [0036] The embodiments of the present invention may also be practiced in distributed computing environments in which tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. In this regard, the computer system 175 and/or one or more of the networked devices 176 may be thin clients which perform little or no processing (i.e., multiple thin clients represent at least one internal or external client having a relationship with the databases and components of those databases)).

Regarding claim 6, KULACK further teaches: analyzing the relationships between said resources and any internal and external clients includes at least one of identifying the relationships of different resources among components within said one or more databases, and identifying the relationships of different resources among components across at least one of said one or more databases ([0030] Generally, the database analysis component 135 may analyze the database 130.sub.1 to determine a first set of structural characteristics for the database 130.sub.1. These characteristics may include information related to the structure of the database, such as the tables contained in the database and the structure of those tables (i.e., each individual table represents a different “resource” that are also “components” of a database).
The characteristics may further include information on the data contained in the tables. For instance, the database analysis component 135 may analyze the database 130.sub.1 and determine that one column of data conforms to a particular industry standard. The database analysis component 135 may also examine relationships between the tables in the database (i.e., relationships between tables represents “relationships between resources”). One example of such a relationship would be if a first table of the database 130.sub.1 contains references to a second table of the database 130.sub.1 (e.g., a foreign key). [0036] The embodiments of the present invention may also be practiced in distributed computing environments in which tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. In this regard, the computer system 175 and/or one or more of the networked devices 176 may be thin clients which perform little or no processing (i.e., multiple thin clients represent at least one internal or external client having a relationship with the databases and components of those databases)).

Regarding claims 15, and 20, they comprise limitations similar to claim 4, and are therefore rejected for similar rationale.

Claims 7, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over RANJAN, in view of BLAGODUROV, as applied to claims 1, 12, and 17 above, and in further view of KRISHNAN et al. Pub. No.: US 2019/0324820 A1 (hereafter KRISHNAN).
Regarding claim 7, while RANJAN and BLAGODUROV discuss assigning resources from pools to handle requests, they do not explicitly teach: said consumption model is used by a Multi-Resource Pool Predictor to allocate and reallocate resources, wherein said Multi-Resource Pool Predictor is enabled to scan a usage of said Multi-Resource Pool Predictor and remove at least one of said resources when an idle time is larger than a defined threshold. However, in analogous art that similarly assigns resources from pools to handle requests, KRISHNAN teaches: said consumption model is used by a Multi-Resource Pool Predictor to allocate and reallocate resources, wherein said Multi-Resource Pool Predictor is enabled to scan a usage of said Multi-Resource Pool Predictor and remove at least one of said resources when an idle time is larger than a defined threshold ([0098] In some examples, the resource pool handler 460 compares a quantity of time ones of the free pool servers 310 are inactive or not utilized to an inactive time threshold specified by a policy rule based on the policy 304. For example, the resource pool handler 460 may instruct the resource deallocator 440 to decompose one or more of the free pool servers 310 when the one or more free pool servers 310 have been inactive for a period of time greater than the inactive time threshold (i.e., deallocating a server resource “removes” the resource from the pool)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined KRISHNAN’s teaching of removing idle resources from a pool, with the combination of RANJAN and BLAGODUROV’s teaching of maintaining a pool of resources for handling database access requests, to realize, with a reasonable expectation of success, a system that maintains a pool of resources, as in RANJAN and BLAGODUROV, where idle resources are dynamically removed, as in KRISHNAN.
A person having ordinary skill would have been motivated to make this combination so that unused resources may be reallocated thereby optimizing the use of resources for improved performance (KRISHNAN [0027]).

Regarding claims 16, and 19, they comprise limitations similar to claim 7, and are therefore rejected for similar rationale.

Claims 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over RANJAN, in view of BLAGODUROV, in view of KRISHNAN, as applied to claim 7 above, and in further view of NAGPAL et al. Patent No.: US 10,089,144 B1 (hereafter NAGPAL).

Regarding claim 8, while RANJAN, BLAGODUROV, and KRISHNAN discuss use of a consumption model to predict resource needs, they do not explicitly teach: said Multi-Resource Pool Predictor uses said consumption model to provide an estimated request for completing processing and a plurality of related metrics for said request processing. However, in analogous art that similarly teaches use of a consumption model to predict resource needs, NAGPAL teaches: said Multi-Resource Pool Predictor uses said consumption model to provide an estimated request for completing processing and a plurality of related metrics for said request processing ([Abstract Lines 1-10] Measurements comprising time-series stimuli and time-series responses of a computing platform that has executed a first set of jobs are collected over a first time period. The measurements are used to form a query-able predictive model pertaining to resource usage demand predictions (i.e., “consumption model”) for the first set of jobs. A second set of job records describe a second set of jobs to be invoked in a second time period.
The predictive model is queried to determine a likelihood to complete by the predicted finish time (i.e., estimated “time” for completing processing) based on resource usage demand predictions for the first set of jobs (i.e., other related metrics include “likelihood to complete by the predicted finish time”, as well as “resource usage demand predictions”)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined NAGPAL’s teaching of a predictive model that predicts a finish time for a job, as well as other related metrics, with the combination of RANJAN, BLAGODUROV, and KRISHNAN’s teaching of a predictive model used to allocate resources, to realize, with a reasonable expectation of success, a system that uses a predictive model to allocate resources, as in RANJAN, BLAGODUROV, and KRISHNAN, which also provides an estimated time for completion and other related metrics, as in NAGPAL. A person having ordinary skill would have been motivated to make this combination to provide accurate information for better workload planning.

Regarding claim 9, NAGPAL further teaches: said metrics include at least one of: an expected arrival rate of at least one [job] and a running time of each [job] ([Column 9, Lines 55-57] This schedule is formed over a set of predicted foreground demands, which predicted foreground demands may or may not be accurate in fact during the timeframe T.sub.0 to T.sub.3 (i.e., a predicted set of demands over a timeframe represents a predicted demand arrival rate)). RANJAN further teaches: at least one query ([0011] A function (i.e., “task”) may be executed upon invocation, for example, in response to a service request. The execution of the function may involve consumption of computing resources of the cluster.
[0001] An example function is a search (i.e., “query”) function that is to search for an item in a database).

Regarding claim 10, NAGPAL further teaches: metrics also include a next transaction of a new [job] ([Column 9, Lines 35-54] Multiple backup jobs can be scheduled on top of one another, and further on top of a set of predictions of foreground demands. FIG. 2C depicts a front-to-back greedy scheduling regime 210. As shown, all 10 units of Job1 are allocated into the T.sub.0 to T.sub.1 time slot, leaving a predicted 50 units available in that time slot, which is consumed when 50 units are allocated to Job2 in that time slot, leaving 30 more units to be allocated to Job2. Even after allocating greedily to the remaining demands of the remaining jobs, Job3 cannot be completed by its end time of T.sub.3. As shown, there are five units of resource utilization that Job3 needs to complete. Depending on the SLA it might be late (overage indication 218). For example, if the SLA specifies “100% completion of backup jobs by the scheduled time” (e.g., referring to a Gold SLA), then job J3 would be deemed to be late. However, if the SLA specifies a more relaxed specification such as “80% of the time the backup jobs are to complete by the scheduled time” (e.g., a Silver SLA), then even though job J3 runs over its scheduled completion time, it still might fall into the acceptable relaxed range) and a memory usage associated with each of said queries ([Column 1, Lines 27-31] In many cases such resources (e.g., CPU cycles, network I/O (input/output or IO), storage (i.e., “memory”) space, etc) can be consumed by backup jobs).
Regarding claim 11, RANJAN further teaches: said metrics can include a structural and a periodic shift in a workload or a prediction of the workload shift associated with a processing of said request ([0047] To determine the resource availability information, a cluster, such as the first cluster 102, may forecast its resource consumption for a predetermined future time duration, such as one hour. The forecast may be performed, for example, by the computing agent of the cluster, such as the first computing agent 307. Such a forecast may be performed based on an analysis of a historical workload pattern of the cluster. In an example, the forecast may utilize machine learning. Here, historical workload pattern may include a pattern in which service requests are received by the cluster and a pattern in which resources of the cluster are consumed to handle the service requests).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W AYERS whose telephone number is (571)272-6420. The examiner can normally be reached M-F 8:30-5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL W AYERS/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Nov 14, 2022
Application Filed
Nov 09, 2023
Response after Non-Final Action
Jan 07, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547446
Computing Device Control of a Job Execution Environment Based on Performance Regret of Thread Lifecycle Policies
Granted Feb 10, 2026 • 2y 5m to grant
Patent 12498950
SIGNAL PROCESSING DEVICE AND DISPLAY APPARATUS FOR VEHICLE USING SHARED MEMORY TO TRANSMIT ETHERNET AND CONTROLLER AREA NETWORK DATA BETWEEN VIRTUAL MACHINES
Granted Dec 16, 2025 • 2y 5m to grant
Patent 12493497
DETECTION AND HANDLING OF EXCESSIVE RESOURCE USAGE IN A DISTRIBUTED COMPUTING ENVIRONMENT
Granted Dec 09, 2025 • 2y 5m to grant
Patent 12461768
CONFIGURING METRIC COLLECTION BASED ON APPLICATION INFORMATION
Granted Nov 04, 2025 • 2y 5m to grant
Patent 12423149
LOCK-FREE WORK-STEALING THREAD SCHEDULER
Granted Sep 23, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview (+56.2%): 99%
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 287 resolved cases by this examiner. Grant probability derived from career allow rate.
