Prosecution Insights
Last updated: April 19, 2026
Application No. 18/365,922

PREDICTING WORKER INSTANCE COUNT FOR CLOUD-BASED COMPUTING PLATFORMS

Non-Final OA: §101, §103
Filed
Aug 04, 2023
Examiner
AQUINO, WYNUEL S
Art Unit
2199
Tech Center
2100 — Computer Architecture & Software
Assignee
Microsoft Technology Licensing, LLC
OA Round
1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (340 granted / 433 resolved; +23.5% vs TC avg, above average)
Interview Lift: +20.6% among resolved cases with an interview (strong)
Typical Timeline: 3y 5m average prosecution; 36 applications currently pending
Career History: 469 total applications across all art units

Statute-Specific Performance

§101: 17.5% (-22.5% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§102: 5.9% (-34.1% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Tech Center averages are estimates; based on career data from 433 resolved cases.
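The headline figures above follow directly from the raw career counts on this page. A minimal sketch of the recomputation (truncation to a whole percent is an assumption about how the dashboard rounds):

```python
# Sketch: recompute the dashboard's headline examiner stats from raw counts.
# Counts come from the page itself (340 granted of 433 resolved); truncating
# to a whole percent is an assumption about the dashboard's rounding.
granted, resolved = 340, 433

allow_rate = granted / resolved            # ~0.7852
headline = int(allow_rate * 100)           # 78 (truncated, assumed)

# The page reports the rate as +23.5 points above the Tech Center average,
# which implies a TC average around 55%.
implied_tc_avg = round(allow_rate * 100 - 23.5, 1)

print(headline, implied_tc_avg)            # 78 55.0
```

The statute-specific deltas above are presumably computed the same way, per rejection type.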

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Regarding the independent claims, the limitations "output a time forecast," "computes a predicted number," "generates a recommendation," and "determine if instances are sufficient," as drafted, recite functions that, under their broadest reasonable interpretation, cover functions that could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components. That is, the limitations cited above, as drafted, recite the abstract idea of a mental process. Thus, these limitations fall within the “Mental Processes” grouping of abstract ideas under Prong 1.

Under Prong 2, this judicial exception is not integrated into a practical application. The claims recite the following additional limitations: memory, processors, machine learning model, worker instances. These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer and/or mere computer components (MPEP 2106.05(f)), and the steps of providing an input do nothing more than add insignificant extra-solution activity, namely mere data gathering, to the judicial exception.
Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claims are therefore directed to the judicial exception. See MPEP 2106.05(g) (Ex. v. Consulting and updating an activity log, Ultramercial, 772 F.3d at 715, 112 USPQ2d at 1754).

Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of memory, processors, machine learning model, and worker instances amount to no more than mere instructions, or generic computer/computer components, to carry out the exception. Furthermore, as to the limitations directed to providing an input, the courts have identified mere data gathering as well-understood, routine, and conventional activity. See MPEP 2106.05(d) (Ex. iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). The recitation of generic computer instructions and computer components to apply the judicial exception, and mere data gathering, do not amount to significantly more and thus cannot provide an inventive concept. Accordingly, the claims are not patent eligible under 35 USC 101.

Regarding claims 2, 3, 8, 9, 11, 13-18, and 20, the limitations of predicting a peak, computing a threshold, wherein the forecast is based on history, the time series is based on a time model or history distribution, the forecast is based on availability zone or history of allocation requests, and generating further recommendations are functions that can reasonably be performed in the human mind, and thus are additional mental processes recited in the claims.
These claims do not include any additional elements; thus, there is no limitation to analyze under Prong 2 for practical application or under Step 2B for significantly more.

Regarding claims 4, 5, 6, and 7, the limitation of provisioning and the further description of parameters are considered mere instructions, or generic computer/computer components, to carry out the exception. Accordingly, the additional element recited in claim 3 fails to provide a practical application under Prong 2 or to amount to significantly more under Step 2B.

Regarding claim 10, the limitations of providing workload data to a model for use in subsequent predictions are nothing more than insignificant extra-solution activity, which is not a practical application under Prong 2.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim/s 1, 5, 6, 12, 16, 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Agrahari (Pub. No. US 2025/0021388) in view of Labute (Pub. No. US 2019/0243686).
Claim 1, 12, 19 Agrahari teaches “a device for recommending increase in worker instance count for an availability zone in a cloud-based computing platform, comprising: one or more memories storing instructions; and one or more processors coupled to the one or more memories and configured to execute the instructions to ([0070] processors, memory): provide resource allocation information as input to a machine learning (ML) model ([0017] In some embodiments, the computer-implemented method further includes: generating a training dataset using training data related to a historical batch job of a certain type; and training an input model using the training dataset to generate the ML model for processing the batch job of the certain type. [0082] In some embodiments, the training dataset generation subsystem 110 may obtain the training data 108 from the storage subsystem 106 or another memory location (e.g., an external storage medium accessible to the resource estimation system 100). The training data 108 may be a collection of historical data that is labeled and provides information regarding resources used to process a plurality of historical batch jobs of different types.) to receive an output of a time series forecast of a workload for the availability zone in a future time period; compute a predicted number of worker instances in the availability zone for handling the workload in the future time period ([0134] In FIG. 2G, reference numeral 312 designates an output provided by the thread prediction subsystem 103. 
For example, for test batch job data (A) having 84072 records, the thread prediction subsystem 103, e.g., the thread estimation algorithm, may output three possible solutions, e.g., combination of a number of threads and completion times: a combination (1) having a first data item corresponding to a number of threads (128) and a second data item corresponding to the completion time (2990 seconds), a combination (2) having a first data item corresponding to a number of threads (150) and a second data item corresponding to the completion time (2800 seconds), and a combination (3) having a first data item corresponding to a number of threads (256) and a second data item corresponding to the completion time (1800 seconds). The list 140 of threads and completion times, which is described above, may include one combination, e.g., as for the test batch job data (B), or a plurality of combinations similar to the combinations of the test batch job data (A); see also [0142-0185]); and when a number of worker instances in the availability zone is less than the predicted number of worker instances, generate a recommendation to add a number of servers to the availability zone to increase the number of worker instances in the availability zone ([0077] In embodiments, the resource estimation system 100 can, based on the batch job, determine a number of threads, determine how many virtual machines are needed to run the threads, and determine which geographic region or which data center has the virtual machines so that the job can be executed in a real-time. [0065] In various embodiments, the described techniques may estimate a number of threads to execute a batch job within a maximum completion time and select, from a list of cloud virtual machines available for use, an optimal virtual machine or an optimal virtual machine combination for executing the batch job within the maximum completion time using the estimated number of threads.
[0123] In embodiments, the thread prediction subsystem 103 may, based on the maximum completion time and the number of records, use the thread estimation algorithm to determine (e.g., predict or estimate), a number of threads needed to process the given number of the records of the batch job and the actual completion time that is within the maximum completion time…. add (no_thread, actual_completion_time) to List_threads_completiontime)”. However, Agrahari may not explicitly state that the current number of threads is less than the number of threads needed at a future time, from which the number of VMs required is generated. Labute supplies this teaching, evidencing that a system such as Agrahari's increases the number of threads and servers to meet completion times ([0038] FIG. 5 is a flow diagram illustrating an embodiment of a process for provisioning a computing system according to upcoming usage data. In some embodiments, the process of FIG. 5 implements 304 of FIG. 3. In the example shown, in 500, a set of jobs indicated by the usage data is determined. The usage data comprises a set of jobs at each of a set of times. Determining a set of jobs indicated by the usage data comprises determining the set of jobs indicated by the usage data at a next time, determining the set of jobs indicated by the usage data at all times, determining the set of jobs indicated by the usage data for each time, etc. In 502, a set of computing systems for processing the jobs is determined. For example, computers, servers, worker machines, virtualized computing systems, or other computing resources are determined sufficient to process the jobs in the desired period of time. The Monte Carlo simulation is used to convert the task workload requirements to a resource requirement (i.e., the number of threads/servers needed to accomplish that workload in a given amount of time as well as the data instances that are required to be loaded).
[0039] The simulation of task parallelization comprises a set of simulations of task parallelization including an increasing number of threads, until a simulation comprising enough threads to complete the tasks of the bin in the desired period of time is performed.)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Labute with the teachings of Agrahari in order to provide a system that teaches implementing additional threads and servers. The motivation for applying Labute teaching with Agrahari teaching is to provide a system that allows for evidence of scaling resources. Agrahari, Labute are analogous art directed towards increasing resources. Together Agrahari, Labute teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Labute with the teachings of Agrahari by known methods before the effective filing date of the claimed invention and gained expected results. Claim 5, 16 the combination teaches the claim, wherein Agrahari teaches “the device of claim 1, wherein the time series forecast of the workload is based on a history of workload data, statistical trend of the workload data, and one or more properties of a time period related to the workload data ([0118] FIG. 2F shows an example of a regression model 300 (e.g., a regression graph) for the historical batch job type of FIG. 2C. The regression model 300 may correspond to the ML model 116. Although FIG. 2F illustrates a linear regression, this is not intended to be limiting. In some embodiments, the regression may be non-linear. [0093] Graph 224 of FIG. 2C relates to a historical batch job of a third type. As shown, the completion time increases as a number of records increases, i.e., the historical batch jobs were processed consistently and, as such, the graph 224 exhibits sufficient correlation.)”. 
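Stripped of the legal framing, the independent-claim logic at issue above (forecast a zone's workload, size the worker pool, and recommend added servers when the current pool falls short) can be sketched as follows. This is purely illustrative: the naive forecast stub, the per-instance throughput figure, and the instances-per-server packing ratio are all hypothetical stand-ins, not details from the application or the cited references.

```python
import math

def forecast_peak_workload(history):
    """Stand-in for the claimed ML time-series forecast: naively carry the
    recent observed peak forward. A real system would use a trained model."""
    return max(history[-24:])

def recommend_servers(history, current_instances,
                      per_instance_throughput=100,  # hypothetical req/s per worker
                      instances_per_server=4):      # hypothetical packing ratio
    """If the predicted worker count exceeds the current pool, recommend
    how many servers to add to the availability zone."""
    peak = forecast_peak_workload(history)
    predicted_instances = math.ceil(peak / per_instance_throughput)
    if current_instances >= predicted_instances:
        return 0                                    # pool already sufficient
    shortfall = predicted_instances - current_instances
    return math.ceil(shortfall / instances_per_server)

# Example: demand peaking at 1,750 req/s against a 12-worker pool
print(recommend_servers([400, 900, 1300, 1750, 1600], current_instances=12))  # 2
```

The §101 rejection's framing is visible here: absent the ML model and worker-instance hardware, each step is arithmetic a person could do on paper.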
Claim 6, the combination teaches the claim, wherein Agrahari teaches “the device of claim 1, wherein the time series forecast is based on a time series prediction model of historical workload data for the availability zone ([0084] For example, the training data 108 may contain, for each of the historical batch jobs, historical information including batch job attributes. For example, the batch job attributes include one or more of a number of threads, a completion time, a number of virtual machines, a number of CPUs, the geographic region used, costs, etc. In certain embodiments, the historical batch jobs may be arranged in groups, where each batch job group corresponds to the same batch job type, e.g., interest calculation, and includes historical batch jobs having diverse information, e.g., having at least one different batch job attribute. As an example, one batch job group may include two or more batch jobs having different number of records, different number of threads, and/or different completion time.)”. Claim/s 2, 3, 4, 13, 14, 15, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Agrahari, Labute in view of Armangau (Pub. No. US 2023/0401089). Claim 2, 13, 20 the combination teaches the claim, wherein Agrahari teaches “the device of claim 1, wherein the one or more processors are further configured to execute the instructions to predict a peak workload over the future time period based on the predicted time series forecast of the workload…, wherein the predicted number of worker instances is based on the peak workload ([0133] FIG. 2G shows examples of possible batch job data for the batch job having a certain job code. The possible batch job data may be referred to as test batch job data, for simplicity of explanation. As illustrated by a reference numeral 310, the batch job of a same type (e.g., having the same job code) may have different batch job data. 
In an example, the test batch job data has three examples: (A) number of records=84072; maximum completion time=3000 seconds; (B) number of records=10013; maximum completion time=500 seconds; and (C) number of records=164264 (i.e. peak); maximum completion time=800 seconds.)”. However, the combination does not explicitly teach details of utilizing a ratio. Armangau teaches “a daily to minute peak workload ratio ([0024] To support the use of credit 162, the host load predictor 150 is configured to predict speed-critical tasks 132 in the future based on a history of speed-critical tasks 132 in the past. The term “host load” as used herein is synonymous with “speed-critical tasks 132.” In an example, the host load predictor 150 is configured to observe host load during a training period that extends over multiple past intervals and to predict, based on the host load observed during those past intervals, the host load during a corresponding time interval in the future. Host load may be measured based on any number of factors, such as IOPS (I/O requests per second), CPU busyness, memory consumption, and/or cache fullness, for example. The host load predictor 150 may sample host load every minute, every 5 minutes, every 10 minutes, or the like, over the course of every day, every Monday, or any other repeating interval. The host load predictor 150 may then predict the host load during the next repeat of that interval, under the assumption that past patterns predict future behavior.)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Armangau with the teachings of Agrahari, Labute in order to provide a system that teaches ratios. The motivation for applying Armangau teaching with Agrahari, Labute teaching is to provide a system that allows for design choice. Agrahari, Labute, Armangau are analogous art directed towards forecasting resources.
Together Agrahari, Labute, Armangau teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Armangau with the teachings of Agrahari, Labute by known methods before the effective filing date of the claimed invention and gained expected results. Claim 3, 14 the combination teaches the claim, wherein Agrahari teaches “the device of claim 2, wherein the one or more processors are further configured to execute the instructions to compute a throughput threshold for the number of worker instances, wherein the predicted number of worker instances is based on a comparison of the throughput threshold with the peak workload ([0133] FIG. 2G shows examples of possible batch job data for the batch job having a certain job code. The possible batch job data may be referred to as test batch job data, for simplicity of explanation. As illustrated by a reference numeral 310, the batch job of a same type (e.g., having the same job code) may have different batch job data. In an example, the test batch job data has three examples: (A) number of records=84072; maximum completion time=3000 seconds; (B) number of records=10013; maximum completion time=500 seconds; and (C) number of records=164264; maximum completion time=800 seconds (i.e. throughput).)”. Claim 4, 15, the combination teaches the claim, wherein Agrahari teaches “the device of claim 3, wherein the one or more processors are further configured to execute the instructions to compute the throughput threshold based on one or more of a throttling metric ([0133] FIG. 2G shows examples of possible batch job data for the batch job having a certain job code. The possible batch job data may be referred to as test batch job data, for simplicity of explanation. As illustrated by a reference numeral 310, the batch job of a same type (e.g., having the same job code) may have different batch job data. 
In an example, the test batch job data has three examples: (A) number of records=84072; maximum completion time=3000 seconds; (B) number of records=10013; maximum completion time=500 seconds; and (C) number of records=164264; maximum completion time=800 seconds (i.e. throughput).), a total allocation time, or a number of timeout exceptions observed from a history of workload data for the availability zone”. Claim/s 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Agrahari, Labute in view of Martin (Pub. No. US 2020/0074323). Claim 7, the combination may not explicitly teach the limitation of the claim. Martin teaches “the device of claim 1, wherein the time series forecast is based on fitting an empirical statistical distribution of a history of resource allocation requests of the availability zone ([0005] The forecasts are generated based on historical data about the user requests for network resources. In one scenario, multivariate k-nearest neighbor (k-NN) forecasting is used based on variant grouping of the requests that are grouped based on their metrics according to correlation analysis so that dependencies between metrics within the same group is high. A multivariate k-NN algorithm is then performed on each group to generate multi-step ahead predictions.)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Martin with the teachings of Agrahari, Labute in order to provide a system that teaches utilizing distribution of resource requests. The motivation for applying Martin teaching with Agrahari, Labute teaching is to provide a system that allows for design choice. Agrahari, Labute, Martin are analogous art directed towards forecasting resources. Together Agrahari, Labute, Martin teaches every limitation of the claimed invention. 
Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Martin with the teachings of Agrahari, Labute by known methods before the effective filing date of the claimed invention and gained expected results. Claim/s 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Agrahari, Labute in view of Tripathi (Pub. No. US 2023/0342658). Claim 8, the combination may not explicitly teach the limitation of the claim. Tripathi teaches “the device of claim 1, wherein the time series forecast for the availability zone is based on the one or more processors execute the instructions to determine the availability zone is one of multiple availability zones having a threshold confidence for accuracy of predicting the time series forecast ([0087] At step 602, the service system 404 generates a topology for each of the cloud providers 406A-406C based on the received provider data, resource data, and determined dependencies, and saves the topologies in the knowledge base 412′ with classification data. [0109] At step 704C, the service system 404 generates a confidence score regarding a likelihood of successfully implementing the deployment request based on steps 704A and 704B, and the ML model. In implementations, the service system 404 utilizes active learning feedback from the end user, historic valid deployment configurations, and historic deployment configuration failures, to validate or invalidate the deployment request. In embodiments, the confidence score represents a feasibility of successful deployment of requested resources based on cost and time required to deploy the resources, without incurring costs, and while saving cost liability related to failed or partial deployments that result in rollback, causing deployment and invoicing of rolled back resources.)”. 
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Tripathi with the teachings of Agrahari, Labute in order to provide a system that teaches confidence scores. The motivation for applying Tripathi teaching with Agrahari, Labute teaching is to provide a system that allows for improved resource allocation. Agrahari, Labute, Tripathi are analogous art directed towards forecasting resources. Together Agrahari, Labute, Tripathi teaches every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Tripathi with the teachings of Agrahari, Labute by known methods before the effective filing date of the claimed invention and gained expected results. Claim/s 9, 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Agrahari, Labute in view of Chandrasekaran (Pub. No. US 2024/0289111) in view of Arasaratnam (Pub. No. US 2011/0029970). Claim 9, 17, the combination teaches the claim, wherein Agrahari teaches “the device of claim 1, wherein the time series forecast is based on a history of resource allocation requests including initial requests ([0086] FIG. 2A shows the training data 108 corresponding to the historical batch job with a code TXNTIP_BJ_100_01, e.g., a historical batch job among a plurality of batch jobs that were previously executed. The batch job with a job code TXNTIP_BJ_100_01 had 147 records processed by 10 threads in 156 seconds. In some embodiments, the total time associated with processing of the historical batch job is considered as maximum completion time of that historical batch job, e.g., 156 seconds in the example of FIG. 2A.)”. However, the combination may not explicitly teach further details. Chandrasekaran teaches “retries ([0068] In some cases, node pool operator 302 may get stuck due to node failures. 
A rollback can be triggered if the node pool operator 302 has been retrying a same phase or operation more than a maximum number of retries. A number of retries on a given phase or operation may be tracked and checked whether the number of retries meets a threshold. An error can be logged, and an alert can be triggered based on the logged error, and the alert can be transmitted to a cluster operator.)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Chandrasekaran with the teachings of Agrahari, Labute in order to provide a system that teaches retries. The motivation for applying Chandrasekaran teaching with Agrahari, Labute teaching is to provide a system that allows for improved resource prediction. Agrahari, Labute, Chandrasekaran are analogous art directed towards forecasting resources. Together Agrahari, Labute, Chandrasekaran teaches every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Chandrasekaran with the teachings of Agrahari, Labute by known methods before the effective filing date of the claimed invention and gained expected results. However, the combination may not explicitly teach throttled workloads. Arasaratnam teaches “requests due to throttled workload ([0025] Based on predictive data in the predictive/historical database 115 and/or ratio information of the workload manager 110, the workload manager 110 determines which virtual machines should be allocated to the preallocation pool 150 and which virtual machines should be destroyed at 210. For example, after the virtual machines in the active virtual machine pool 130 are taken offline, the workload manager 110 determines what should happen to the virtual machines. 
For example, the workload manager 110 may allocate 1 web server virtual machine 155 and 1 database server virtual machine 165 to the preallocation pool 150 and destroy the other virtual machines not allocated to the preallocation pool 150, and these virtual machines 155 and 165 may be added by the workload manager 110 to maintain the constant ratio and/or to prepare for the expected load of workload requests 105 based on the database 115.)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Arasaratnam with the teachings of Agrahari, Labute, Chandrasekaran in order to provide a system that teaches throttled workloads. The motivation for applying Arasaratnam teaching with Agrahari, Labute, Chandrasekaran teaching is to provide a system that allows for improved resource prediction. Agrahari, Labute, Chandrasekaran, Arasaratnam are analogous art directed towards forecasting resources. Together Agrahari, Labute, Chandrasekaran, Arasaratnam teaches every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Arasaratnam with the teachings of Agrahari, Labute, Chandrasekaran by known methods before the effective filing date of the claimed invention and gained expected results. Claim/s 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Agrahari, Labute in view of Higginson (Pub. No. US 2023/0153165). Claim 10, the combination may not explicitly teach the limitation. 
Higginson teaches “the device of claim 1, wherein the one or more processors are further configured to execute the instructions to provide, to the ML model, recent workload data for the availability zone and associated performance metrics for use in predicting subsequent time series forecasts of the workload for the availability zone ([0081] The time-series model is then applied to additional time-series data for the entity to generate a forecast of the utilization of the computational resources by the entity (operation 208). For example, recently collected utilization metrics for the entity are inputted into the time-series model, and the time-series model generates output representing predictions of future values for the utilization metrics.)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Higginson with the teachings of Agrahari, Labute in order to provide a system that teaches feeding recent workload data back into the forecast. The motivation for applying Higginson teaching with Agrahari, Labute teaching is to provide a system that allows for improved resource allocation. Agrahari, Labute, Higginson are analogous art directed towards forecasting resources. Together Agrahari, Labute, Higginson teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Higginson with the teachings of Agrahari, Labute by known methods before the effective filing date of the claimed invention and gained expected results. Claim/s 11, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Agrahari, Labute in view of Nampally (Pub. No. US 2015/0207851). Claim 11, 18 the combination may not explicitly teach the limitation.
Nampally teaches “the device of claim 1, wherein the one or more processors are further configured to execute the instructions to generate the recommendation to increase the number of worker instances further based on one or more of a customer priority of a customer corresponding to the workload ([0053] For instance, upon reception of a higher load of data for transmission, the system may gauge the load and scale-up the data broker nodes on-demand, signal the client to automatically establish more channels of data transmission, etc. It thus will be appreciated that the priority-based data flow management may be introduced into the system. For example, the system may be fed with the priority of the clients to cater to and, thus, the system will be able to optimize or otherwise improve resource allocation based on priority and help ensure that more important client requests do not wait for lower priority client request), a scalability of the cloud-based computing platform, or a number of throttling failures during deployment of virtual machines in the availability zone”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Nampally with the teachings of Agrahari, Labute in order to provide a system that teaches customer priority. The motivation for applying Nampally teaching with Agrahari, Labute teaching is to provide a system that allows for improved resource allocation. Agrahari, Labute, Nampally are analogous art directed towards allocation of resources. Together Agrahari, Labute, Nampally teaches every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Nampally with the teachings of Agrahari, Labute by known methods before the effective filing date of the claimed invention and gained expected results. 
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WYNUEL S AQUINO whose telephone number is (571)272-7478. The examiner can normally be reached 9AM-5PM EST M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WYNUEL S AQUINO/
Primary Examiner, Art Unit 2199

Prosecution Timeline

Aug 04, 2023
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §103
Mar 24, 2026
Interview Requested
Apr 02, 2026
Examiner Interview Summary
Apr 02, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596570
OPTIMIZED STORAGE CACHING FOR COMPUTER CLUSTERS USING METADATA
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596567
HIGH AVAILABILITY CONTROL PLANE NODE FOR CONTAINER-BASED CLUSTERS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12585568
METHODS AND APPARATUS TO PERFORM INSTRUCTION-LEVEL GRAPHICS PROCESSING UNIT (GPU) PROFILING BASED ON BINARY INSTRUMENTATION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12572675
ACCESSING FILE SYSTEMS IN A VIRTUAL ENVIRONMENT
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566639
TECHNIQUES FOR AUTO-TUNING COMPUTE LOAD RESOURCES
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 99% (+20.6%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 433 resolved cases by this examiner. Grant probability derived from career allow rate.
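The with-interview figure is consistent with simple arithmetic on the examiner stats shown above. A sketch, assuming (this is an assumption about the dashboard, not a documented formula) that the interview lift is added to the base rate and the result is capped at 99%:

```python
# Sketch: reproduce the projection card's "with interview" figure, assuming
# the dashboard simply adds the reported interview lift to the base grant
# probability and caps the result at 99%.
base_grant_probability = 78.0   # career allow rate, from the page
interview_lift = 20.6           # reported lift for cases with an interview

with_interview = min(base_grant_probability + interview_lift, 99.0)
print(round(with_interview))    # 99
```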
