Prosecution Insights
Last updated: April 19, 2026
Application No. 18/370,535

CLOUD FITNESS ENGINEERING

Non-Final OA: §103, §112
Filed: Sep 20, 2023
Examiner: KIM, DONG U
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Accenture Global Solutions Limited
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Grants 87% — above average
Career Allow Rate: 87% (610 granted / 702 resolved; +31.9% vs TC avg)
Interview Lift: +13.7% (moderate; resolved cases with vs. without interview)
Typical Timeline: 2y 10m avg prosecution; 35 applications currently pending
Career History: 737 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§112: 28.0% (-12.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 702 resolved cases

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 (similarly claims 10, 11 and 20) recites the limitation "the selected computing system". There is insufficient antecedent basis for this limitation in the claim. It is unclear if "the selected computing system" is referring to the selected at least one of the candidate computing systems or some other system.

Claim 7 (similarly claim 17) recites the limitation "the selected candidate computing system". There is insufficient antecedent basis for this limitation in the claim. It is unclear if "the selected candidate computing system" is referring to the selected at least one of the candidate computing systems or some other system.

Claims 2-10 and 12-19 are rejected based on the rejection of their corresponding independent claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Zhu et al. (Pub 20220164327) (hereafter Zhu) in view of Poornachandran et al. (Pub 20220413943) (hereafter Poornachandran).

As per claim 1, Zhu teaches: A computer implemented method comprising: obtaining, at one or more processing devices, a set of target performance criteria associated with an application executable at least in part on a cloud-computing system; generating, using one or more machine-learning models from the set of target performance criteria, a set of target process parameters related to one or more processes associated with execution of the application; ([Paragraph 2], Cloud infrastructures have empowered users with unprecedented ability to store and process large amounts of data, paving the road for revolutions in the areas of web search, analytics, machine learning (ML), artificial intelligence (AI), and the translation of nearly every facet of modern life into digital systems. [Paragraph 6], Examples and implementations disclosed herein are directed to a data-driven tuning service for automatically tuning large-scale (e.g., exabyte) cloud infrastructures. The tuning service performs automated and fully data/model-driven configuration from learning various real-time performance of the cloud infrastructure. Such performance is identified through monitoring various telemetric data of the cloud infrastructure. [Paragraph 48], The job-level performance closely relates to task-level metrics. And if performance requirements at the task-level are being met, the job-level performance requirements can be automatically satisfied. [Paragraph 57], Again, efficiency of the operational parameters 216 being tested may be dictated by the SLOs of an organization or a client. For example, if a particular customer has certain requirements for GPU usage, the operational parameters 216 may be modeled based on such SLO criteria, and the experimenter 220 identifies which group of modeled operational parameters 216 produce such update (or SLO criteria) in the test group of servers 201.)

performing simulations, by the one or more processing devices, of the execution of the application on respective ones of a plurality of candidate computing systems, wherein each candidate computing system includes a corresponding sequence of multiple architecture components at least a portion of which represents components of the cloud-computing system; ([Paragraph 43], The experimenter 220 runs the different combinations of operational parameters 216 that are generated by the modeler 222 on a test group of machines, or servers 201, in one or more clusters of the cloud environment 220. The experimenter 220 may also be configured to evaluate the performance of the test group of machines for the various combinations of operational parameters 216 that are modeled by the modeler 222. This largely becomes an optimization problem where the experimenter 220 identifies the most efficient combination of operational parameters 216 to use based on how well the test groups of machines function. This most efficient group of operational parameters 216 may then be pre-processed by the flighting tool 224 and deployed by the deployment tool 226 to the cloud environment 228. In other words, the cloud environment 228 is tuned with the modeled, tested, and optimized set of operational parameters 216. [Paragraph 163], The order of execution or performance of the operations in examples of the disclosure illustrated and described herein is not essential, and may be performed in different sequential manners in various examples. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the disclosure.)

selecting, based on results of the simulations, at least one of the candidate computing systems that satisfies the set of target process parameters; and providing instructions for deploying the selected computing system for execution of the application, the selected computing system being deployed at least in part on the cloud-computing system. ([Paragraph 63], Four representative applications are presented that focus on significantly different aspects of the tuning service 214, the tuning of which is deployed in production and on track to produce substantial savings for operating the cloud environment 228: YARN Configuration Tuning, Machine Configuration Design, Power Capping, and Select Software Configurations (SSCs). Each is described in detail below. [Paragraph 70], In some implementations and examples, the Observational Tuning approach comprises two modules: (1) a prediction engine to predict the performance metrics given different configurations and (2) an Optimizer to select the optimal solution. Both are implemented as executable instructions (code), hardware, firmware, or a combination thereof. [Paragraph 134], In some embodiments, the tuning service 214 includes three main components: the performance monitor 218, the experimenter 220, the modeler 222, the flighting tool 224, and the deployment tool 226. In some implementations, the performance module 218 joins the data from various sources and calculates the performance metrics of interest, providing a fundamental building block for all the analysis. An end-to-end data orchestration pipeline is developed and deployed in production to collect data on a daily basis. [Paragraph 58], The flighting tool 304 may be used as a safety check before the full-cluster deployment and also to deploy experiments to collect evaluation data for the analysis. In some implementations, users may specify the machine names and the starting/ending time of each flighting through a UI of the flighting tool 224 and create new builds to deploy to the selected machines. The modeling module and flighting tool vary across different applications, as discussed in more detail below.)

Although Zhu implicitly discloses performing a simulation (i.e., experiment) and utilizing Monte-Carlo simulations to estimate an objective function, Zhu does not explicitly recite performing simulation. Poornachandran teaches performing simulation. ([Paragraph 67], In some examples, the analysis tools 116 can instantiate simulator(s) to simulate the behavior, the configuration, etc., of a composable ML compute node to generate and/or otherwise output one or more evaluation parameters. For example, the analysis tools 116 can execute a model (e.g., a simulation model, an AI/ML model, etc.) based on the composable ML compute node. In some such examples, the analysis tools 116 can execute the model to estimate, predict, and/or otherwise determine a throughput of the composable ML compute node when the composable ML compute node executes a particular AI/ML model having a particular configuration. [Paragraph 196], For example, the ML system configurator 1702 can evolve the composable ML compute node by evaluating the hardware and/or the software when executing a workload and/or based on a simulation of the hardware and/or software executing the workload. In some such examples, the composable ML compute node can be composable because hardware and/or software components can be selected and assembled in various combinations to satisfy specific or pre-defined requirements (e.g., an accuracy requirement, a latency requirement, a throughput requirement, etc.).)

Poornachandran similarly also discloses generating ML model from a set of target performance criteria, a set of target process parameters; plurality of candidate computing systems; selecting, based on result of the simulation, the candidate computing systems that satisfies the set of target process parameters; and deploying the selected computing system. ([Paragraph 60, 61, 66, 196, 419, 425])

It would have been obvious to a person with ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Zhu, wherein machine learning (ML) model(s) is/are used to generate target process parameters to ensure job/application performance requirements are met based on experimentations and candidate computing systems of a cloud-computing system, target candidate computing system(s) is/are selected and target computing system for executing the application is deployed, into teachings of Poornachandran, wherein simulation(s) is/are performed using various configurations to monitor, analyze, and evaluate the various configurations to optimize job/application deployment, because this would enhance the teachings of Zhu, wherein performing various simulations allows different sets of target process parameters to be tested based on the target performance criteria, leveraging ML models to predict an optimal configuration for deployment and dynamic tuning of the ML model and the set of target process parameters. [Poornachandran paragraph 62, 67, 196, 200]

As per claim 2, rejection of claim 1 is incorporated: Poornachandran teaches receiving, at the one or more computing devices, information pertaining to execution of the application on the deployed computing system; determining, based on the information, that execution of the application on the deployed computing system does not satisfy the set of target performance criteria; responsive to determining that the execution of the application on the deployed computing system does not satisfy the set of target performance criteria, determining, by the one or more computing devices, adjustments to be made to the deployed computing system; and providing instructions for adjusting the deployed computing system. ([Paragraph 189], In some examples, output of the deployed model may be captured and provided as feedback. By analyzing the feedback, an accuracy of the deployed model can be determined. If the feedback indicates that the accuracy of the deployed model is less than a threshold or other criterion, training of an updated model can be triggered using the feedback and an updated training data set, hyperparameters, etc., to generate an updated, deployed model.) Zhu also teaches ([Paragraph 2], Cloud infrastructures have empowered users with unprecedented ability to store and process large amounts of data, paving the road for revolutions in the areas of web search, analytics, machine learning (ML), artificial intelligence (AI), and the translation of nearly every facet of modern life into digital systems. Such cloud infrastructures require numerous pieces of hardware, services, and operational parameters to be tuned for them to operate as efficiently and reliably as possible. [Paragraph 23], The disclosed implementations and examples define a methodology to cope with cloud infrastructure complexity and create compact, sound, and explainable models of a cloud infrastructure based on a set of tractable metrics. The tuning service also provides an end-to-end architecture for automated tuning and provide details for the three types of tuning that are enabled: (i) observational tuning, which employs models for picking the right parameters and avoiding costly rounds of experiments; (ii) hypothetical tuning, an ML-assisted methodology for planning; and (iii) experimental tuning, a fallback approach that judiciously performs experiments when it is not possible to predict the system behavior otherwise. Moreover, the tuning service also continuously tunes database clusters, which improves cloud efficiency and prolongs hardware resources, saving a substantial amount of money.)

As per claim 3, rejection of claim 1 is incorporated: Zhu teaches wherein the set of target process parameters include parameters representing one or more of: throughput, latency, response time, error rates, fault tolerance, or data security. ([Paragraph 48], The job-level performance closely relates to task-level metrics. And if performance requirements at the task-level are being met, the job-level performance requirements can be automatically satisfied. For instance, during tuning, in order to maintain the same job-level performance, one may expect that, in general, the distribution for the task execution time will shift towards the lower end, indicating a general improvement for the task-level latency.) Poornachandran also teaches ([Paragraph 393], In some such examples, the composable ML compute node can be composable because hardware and/or software components can be selected and assembled in various combinations to satisfy specific or pre-defined requirements (e.g., an accuracy requirement, a latency requirement, a throughput requirement, etc.).)

As per claim 4, rejection of claim 1 is incorporated: Zhu teaches wherein the multiple architecture components include one or more of: a web server, a virtual machine, an application programming interface (API) gateway, a load balancer, a storage component, or a database. ([Paragraph 129], Data center 1016 illustrates a data center comprising a plurality of nodes, such as node 1032 and node 1034. One or more virtual machines may run on nodes of data center 1016, such as virtual machine 1036 of node 1034 for example. Although FIG. 10 depicts a single virtual node on a single node of data center 1016, any number of virtual nodes may be implemented on any number of nodes of the data center in accordance with illustrative embodiments of the disclosure.) Poornachandran also teaches ([Paragraph 272], Moreover, in some examples, some or all of the circuitry of FIG. 22 may be implemented by one or more virtual machines and/or containers executing on the microprocessor.)

As per claim 5, rejection of claim 1 is incorporated: Zhu teaches wherein the multiple architecture components represent a combination of computational resources, database resources, and storage resources. ([Paragraph 23], Moreover, the tuning service also continuously tunes database clusters, which improves cloud efficiency and prolongs hardware resources, saving a substantial amount of money. [Paragraph 129], Data center 1016 illustrates a data center comprising a plurality of nodes, such as node 1032 and node 1034. One or more virtual machines may run on nodes of data center 1016, such as virtual machine 1036 of node 1034 for example. Although FIG. 10 depicts a single virtual node on a single node of data center 1016, any number of virtual nodes may be implemented on any number of nodes of the data center in accordance with illustrative embodiments of the disclosure. [Paragraph 22], The term “operational parameter” is defined herein as the set of system configurations that impact the operation of the cloud environment. Examples of operational parameters include, without limitation, YARN configurations, hardware design configurations, such as RAM/SSD per machine, power provision limits, and software configuration for storage mapping. Also, the terms “cloud,” “cloud environment,” “cloud-computing environment,” and “cloud infrastructure” are used interchangeably and all mean a remote computing environment deployed across one or more regions, such as around the globe.)

As per claim 6, rejection of claim 5 is incorporated: Zhu teaches wherein the multiple architecture components are representative of one or more policies associated with the execution of the application, including one or more of: load balancing policies, data replication policies, data partitioning policies, or virtual machine policies. ([Paragraph 44], Efficiency of the operational parameters 216 being tested may be dictated by the SLOs of an organization or a client, which may be specified in a service-level agreement (SLA). For example, if a particular customer has certain uptimes for a given application, the operational parameters 216 may be modeled based on such SLO criteria, and the experimenter 220 identifies which group of modeled operational parameters 216 produce such update (or SLO criteria) in the test group of servers 201. Numerous other SLO criteria may be considered.) Poornachandran also teaches ([Paragraph 125], For example, the negotiation may include making policy-based decisions using the identified resource tolerance thresholds and dynamically migrating existing workloads between IPUs to utilize all resources efficiently. Each IPU may include two portions, i) a data plane, and ii) a control plane. The control plane handles resource allocation, monitoring and policy enforcement, and the data plane handles the data flow between IPUs and the logical units associated with the IPU. An example process for negotiation is described in conjunction with FIG. 11. [Paragraph 60], The control plane handles resource allocation, monitoring and policy enforcement, and the data plane handles the data flow between IPUs and the logical units associated with the IPU. [Paragraph 165], The example application SLA manager 1406 then determines if the selected kernel meets the requested SLA (block 1512) in a sandbox configuration based on configured policies.)

As per claim 7, rejection of claim 1 is incorporated: Poornachandran teaches wherein performing the simulations comprises: selecting a candidate computing system from the plurality of candidate computing systems; and performing simulations, by the one or more processing devices, of the execution of the application using the selected candidate computing system. ([Paragraph 67], In some examples, the analysis tools 116 can instantiate simulator(s) to simulate the behavior, the configuration, etc., of a composable ML compute node to generate and/or otherwise output one or more evaluation parameters. For example, the analysis tools 116 can execute a model (e.g., a simulation model, an AI/ML model, etc.) based on the composable ML compute node. In some such examples, the analysis tools 116 can execute the model to estimate, predict, and/or otherwise determine a throughput of the composable ML compute node when the composable ML compute node executes a particular AI/ML model having a particular configuration. [Paragraph 196], For example, the ML system configurator 1702 can evolve the composable ML compute node by evaluating the hardware and/or the software when executing a workload and/or based on a simulation of the hardware and/or software executing the workload. In some such examples, the composable ML compute node can be composable because hardware and/or software components can be selected and assembled in various combinations to satisfy specific or pre-defined requirements (e.g., an accuracy requirement, a latency requirement, a throughput requirement, etc.).)

As per claim 8, rejection of claim 7 is incorporated: Poornachandran teaches wherein performing the simulations comprises: identifying, stimuli conditions associated with the execution of the application; providing the stimuli conditions to a simulation of the execution of the application by the identified candidate computing system; and generating the results of the simulation affected by the stimuli condition. ([Paragraph 186], Training is performed using training data. In examples disclosed herein, the training data may be any type of dataset of features (e.g., AI features). [Paragraph 67], In some examples, the analysis tools 116 can instantiate simulator(s) to simulate the behavior, the configuration, etc., of a composable ML compute node to generate and/or otherwise output one or more evaluation parameters. For example, the analysis tools 116 can execute a model (e.g., a simulation model, an AI/ML model, etc.) based on the composable ML compute node. In some such examples, the analysis tools 116 can execute the model to estimate, predict, and/or otherwise determine a throughput of the composable ML compute node when the composable ML compute node executes a particular AI/ML model having a particular configuration. [Paragraph 196], For example, the ML system configurator 1702 can evolve the composable ML compute node by evaluating the hardware and/or the software when executing a workload and/or based on a simulation of the hardware and/or software executing the workload. In some such examples, the composable ML compute node can be composable because hardware and/or software components can be selected and assembled in various combinations to satisfy specific or pre-defined requirements (e.g., an accuracy requirement, a latency requirement, a throughput requirement, etc.). [Paragraph 189], In some examples, output of the deployed model may be captured and provided as feedback. By analyzing the feedback, an accuracy of the deployed model can be determined. If the feedback indicates that the accuracy of the deployed model is less than a threshold or other criterion, training of an updated model can be triggered using the feedback and an updated training data set, hyperparameters, etc., to generate an updated, deployed model.)

As per claim 9, rejection of claim 8 is incorporated: Poornachandran teaches wherein the stimuli conditions include one or more of: network outage conditions, delay conditions due to a component malfunction or disconnection, configuration changes, security breaches, or load changes. ([Paragraph 186], Training is performed using training data. In examples disclosed herein, the training data may be any type of dataset of features (e.g., AI features). [Paragraph 67], In some examples, the analysis tools 116 can instantiate simulator(s) to simulate the behavior, the configuration, etc., of a composable ML compute node to generate and/or otherwise output one or more evaluation parameters. For example, the analysis tools 116 can execute a model (e.g., a simulation model, an AI/ML model, etc.) based on the composable ML compute node. In some such examples, the analysis tools 116 can execute the model to estimate, predict, and/or otherwise determine a throughput of the composable ML compute node when the composable ML compute node executes a particular AI/ML model having a particular configuration. [Paragraph 196], For example, the ML system configurator 1702 can evolve the composable ML compute node by evaluating the hardware and/or the software when executing a workload and/or based on a simulation of the hardware and/or software executing the workload. In some such examples, the composable ML compute node can be composable because hardware and/or software components can be selected and assembled in various combinations to satisfy specific or pre-defined requirements (e.g., an accuracy requirement, a latency requirement, a throughput requirement, etc.). [Paragraph 189], In some examples, output of the deployed model may be captured and provided as feedback. By analyzing the feedback, an accuracy of the deployed model can be determined. If the feedback indicates that the accuracy of the deployed model is less than a threshold or other criterion, training of an updated model can be triggered using the feedback and an updated training data set, hyperparameters, etc., to generate an updated, deployed model. [Paragraph 226], If any of the utilization metrics is established to be lower than a pre-determined threshold value, the model scheduler circuitry 1820 then adjusts the model selection to produce another model for use by the target hardware platform. For example, if the first model 1812A begins to produce low utilization metrics on the hardware platform, the model scheduler circuitry 1820 selects the second model 1812B as the new model for use. If the second model 1812B begins to yield low utilization metrics after some time, the model scheduler circuitry 1820 may determine that the first model 1812A is better for use by the hardware platform.)

As per claim 10, rejection of claim 1 is incorporated: Poornachandran teaches wherein providing the instructions for deploying the selected computing system for execution of the application comprises: providing instructions for selecting portions of the cloud-computing system for deploying at least a portion of the selected computing system; and providing, to a client device, a notification indicating deployment of the selected computing system for executing the application. ([Paragraph 188], Once trained, the deployed model may be operated in an inference phase to process data. In the inference phase, data to be analyzed (e.g., live data) is input to the model, and the model executes to create an output. This inference phase can be thought of as the AI “thinking” to generate the output based on what it learned from the training (e.g., by executing the model to apply the learned patterns and/or associations to the live data). In some examples, input data undergoes pre-processing before being used as an input to the machine learning model. [Paragraph 491], For example, the ML system configurator 3402 can execute the first operation 3818 by optimizing and/or otherwise improving a heterogeneous system solution (e.g., an example implementation of the ML compute node 3517) given a candidate AI model architecture (e.g., the software 3519 of FIG. 35, portion(s) of the proposed HW/SW instance 3522 of FIG. 35, etc.). [Paragraph 98], For example, the client hardware device may be implemented by an endpoint client hardware device (e.g., a hardware device associated with a user) or an intermediate client hardware device (e.g., a radio access network (RAN)) gateway that may facilitate communication between a server and an endpoint client hardware device). [Paragraph 124], The machine readable instructions and/or the operations 1000 of FIG. 10 begin at block 1002, at which the example orchestrator 904 detects a new instance/application (e.g., workload 902) capable of running in a heterogenous IPU-based datacenter platform along with resource and migration tolerance SLAs. For example, the resource requirements and tolerance may be established by a user/administrator when creating the new instance/application (e.g., using an SLA template). The orchestrator 904 determines if validation of the device and resource requirements is successful (block 1004). For example, the resource requirements may be analyzed to determine if they are feasible without the constraints of the computing system. If the resource requirements are not valid and/or not feasibly met by the computing system, the orchestrator 904 returns control to block 1002. [Paragraph 146], The orchestrator 904 validates the request for validity (block 1104). If the request is not valid, the user is prompted to provide a valid request and control returns to block 1102. If the request is valid (block 1104), the orchestrator 904 determines availability of computing resources (block 1106). If available computing resources (e.g., IPU resources) that are willing to negotiate are not available, control returns to block 1102.) Zhu also teaches ([Paragraph 38], The client computing devices 200a-d represent any type of client computing device 100 configured to access online resources (e.g., webpage, cloud-based application, or the like); run a deep learning (DL) job; migrate a private cloud database; or others. As depicted, the client computing devices 200a-d include a laptop 200a, a smartphone 200b, an Internet of Things (IoT) device 200c, and a wearable 200d. This is just a sample of different types of client computing devices 200, as a myriad others may access the cloud environment 228 for various reasons. [Paragraph 57], Again, efficiency of the operational parameters 216 being tested may be dictated by the SLOs of an organization or a client. For example, if a particular customer has certain requirements for GPU usage, the operational parameters 216 may be modeled based on such SLO criteria, and the experimenter 220 identifies which group of modeled operational parameters 216 produce such update (or SLO criteria) in the test group of servers 201. [Paragraph 65], For Machine Configuration Design, the configuration of operational parameters is designed without leaving one of the resources to be idle or to be the bottleneck of throughput. The most cost-efficient configuration tailored to current customer workloads is then selected. [Paragraph 128], By way of example, the fabric controller 1018 may rely on a service model (e.g., designed by a customer that owns the distributed application) to provide guidance on how, where, and when to configure server 1022 and how, where, and when to place application 1026 and application 1028 thereon. One or more role instances of a distributed application may be placed on one or more of the servers 1020 and 1024 of data center 1014, where the one or more role instances may represent the portions of software, component programs, or instances of roles that participate in the distributed application. In other examples, one or more of the role instances may represent stored data that are accessible to the distributed application.)

As per claims 11-19, these are system claims corresponding to the method claims 1-9. Therefore, they are rejected based on similar rationale.

As per claim 20, this is a non-transitory computer-readable medium claim corresponding to the method claim 1. Therefore, it is rejected based on similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM whose telephone number is (571) 270-1313. The examiner can normally be reached 9:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DONG U KIM/
Primary Examiner, Art Unit 2197
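
For readers mapping the rejection to the claim language, the following is a minimal, purely hypothetical sketch of the claim 1 pipeline as the Office Action characterizes it: target performance criteria feed a machine-learning step that derives target process parameters, candidate computing systems are simulated, a candidate that satisfies the parameters is selected, and deployment instructions are issued. Every name, number, and function below is an illustrative assumption rather than the application's actual method or the cited references' implementations; the ML and simulation steps are simple stand-ins.

# Hypothetical illustration only: claim 1 pipeline as the Office Action characterizes it.
# All names, numbers, and logic are illustrative assumptions, not the applicant's method.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class CandidateSystem:
    name: str
    components: List[str]                 # e.g., API gateway, load balancer, VM, database
    simulated_metrics: Dict[str, float]   # stand-in for real simulation output

def derive_target_parameters(criteria: Dict[str, float]) -> Dict[str, float]:
    # Stand-in for the claimed machine-learning step: apply a fixed safety margin.
    return {
        "latency_ms": criteria["latency_ms"] * 0.9,
        "throughput_rps": criteria["throughput_rps"] * 1.1,
    }

def simulate(candidate: CandidateSystem) -> Dict[str, float]:
    # Stand-in for the claimed simulation step; a real system would model the
    # architecture components under stimuli such as outages or load changes.
    return candidate.simulated_metrics

def select_candidate(candidates: List[CandidateSystem],
                     targets: Dict[str, float]) -> Optional[CandidateSystem]:
    # "selecting ... at least one of the candidate computing systems that satisfies
    # the set of target process parameters" -- the antecedent the §112 rejection
    # says "the selected computing system" should point back to.
    for candidate in candidates:
        metrics = simulate(candidate)
        if (metrics["latency_ms"] <= targets["latency_ms"]
                and metrics["throughput_rps"] >= targets["throughput_rps"]):
            return candidate
    return None

criteria = {"latency_ms": 200.0, "throughput_rps": 1000.0}
targets = derive_target_parameters(criteria)
candidates = [
    CandidateSystem("baseline", ["vm", "database"],
                    {"latency_ms": 250.0, "throughput_rps": 900.0}),
    CandidateSystem("scaled", ["api_gateway", "load_balancer", "vm", "database"],
                    {"latency_ms": 150.0, "throughput_rps": 1400.0}),
]
selected = select_candidate(candidates, targets)
if selected is not None:
    print(f"Deploy '{selected.name}' with components: {selected.components}")
else:
    print("No candidate satisfies the target process parameters")

Note how the §112 issue flagged above turns on the same point the selection step makes explicit here: "the selected computing system" needs a clear antecedent in the "at least one of the candidate computing systems" recited earlier in the claim.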

Prosecution Timeline

Sep 20, 2023: Application Filed
Feb 10, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596564
PRE-LOADING SOFTWARE APPLICATIONS IN A CLOUD COMPUTING ENVIRONMENT
2y 5m to grant • Granted Apr 07, 2026
Patent 12596594
REINFORCEMENT LEARNING POLICY SERVING AND TRAINING FRAMEWORK IN PRODUCTION CLOUD SYSTEMS
2y 5m to grant • Granted Apr 07, 2026
Patent 12591760
CROSS-INSTANCE INTELLIGENT RESOURCE POOLING FOR DISPARATE DATABASES IN CLOUD NATIVE ENVIRONMENT
2y 5m to grant • Granted Mar 31, 2026
Patent 12591449
Merging Streams For Call Enhancement In Virtual Desktop Infrastructure
2y 5m to grant • Granted Mar 31, 2026
Patent 12586064
BLOCKCHAIN PROVISION SYSTEM AND METHOD USING NON-COMPETITIVE CONSENSUS ALGORITHM AND MICRO-CHAIN ARCHITECTURE TO ENSURE TRANSACTION PROCESSING SPEED, SCALABILITY, AND SECURITY SUITABLE FOR COMMERCIAL SERVICES
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+13.7%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
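
The headline figures are internally consistent with the counts shown above; a quick back-of-the-envelope check (assuming the interview lift is a relative multiplier on the career allow rate, since the report does not state its exact model) reproduces them:

# Back-of-the-envelope check of the dashboard's headline figures.
# Assumption (not stated by the report): the +13.7% interview lift is a
# relative multiplier on the career allow rate, not an additive percentage.
granted, resolved = 610, 702
career_allow_rate = granted / resolved                # ~0.869 -> shown as 87%
with_interview = career_allow_rate * (1 + 0.137)      # ~0.988 -> shown as 99%
print(f"career allow rate: {career_allow_rate:.1%}")  # 86.9%
print(f"with interview:    {with_interview:.1%}")     # 98.8%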
