Prosecution Insights
Last updated: April 19, 2026
Application No. 18/403,057

DISTRIBUTED COMPUTING MODEL EXECUTION SYSTEM

Final Rejection: §103, §112
Filed: Jan 03, 2024
Examiner: HUARACHA, WILLY W
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 4 (Final)

Grant Probability: 73% (Favorable)
OA Rounds: 5-6
To Grant: 4y 5m
With Interview: 99%
Examiner Intelligence

Career Allow Rate: 73% — above average (300 granted / 410 resolved; +18.2% vs TC avg)
Interview Lift: +53.4% among resolved cases with interview
Typical Timeline: 4y 5m avg prosecution; 28 applications currently pending
Career History: 438 total applications across all art units

Statute-Specific Performance

§101: 12.5% (-27.5% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 26.3% (-13.7% vs TC avg)

Tech Center averages are estimates • Based on career data from 410 resolved cases

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-3, 5-6, 8-14, 16, 18-19 and 21-25 are currently pending and have been examined.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 19, 21 and 25 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

The following claim language is unclear and indefinite: Claim 19, line 6, recites "data to execute to fine-tune the computing model". It is unclear and indefinite what it means to execute data, as data does not 'execute' (instructions execute and manipulate data, but the data itself does not execute). Claims 21 and 25 are rejected as being dependent on rejected claim 19.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 9-10, 14, 16, 18 and 23-24 are rejected under 35 U.S.C.
103 as being unpatentable over Perumalla et al. (U.S. Pub. No. 20230082680 A1) in view of Ragha et al. (U.S. Pub. No. 20250110784 A1), and further in view of Poorna et al. (U.S. Patent No. 12353971 B1). Perumalla and Ragha were cited in a previous office action.

As per claim 1, Perumalla teaches the invention substantially as claimed, including a gateway comprising:

receive, from a client machine, a client request message that includes: information identifying a first computing … [task]; and … data to be executed … (par. 0002 a computing device receives a request to execute a computing task; par. 0056 computing device receives (block 401) a request to execute a computing task. The computing task may be any variety of tasks. Examples include image rendering, data analysis, data mining or any variety of operation [type of output to produce]);

obtain a worker registry that indicates a first distributed computation worker [node] … capable of executing the first computing … [task] and a second distributed computation worker [node] … capable of executing a second computing … [task] (par. 0076 The system (500) may include a database (502) [registry] of capabilities of registered nodes (516) to which computing assignments of a computing task are to be assigned);

select, from the worker registry and based at least on the client request message, the first distributed computation worker [node] based at least on the first distributed computation worker [node] being identified in the worker registry as being capable of executing the first computing … [task] (par. 0099 … receive request instructions (724), when executed by the processor, cause the processor to receive a request to execute a computing task, which computing task includes parameters for the computing task. Identify nodes instructions (726), when executed by the processor, may cause the processor to identify, based on the parameters for the computing task, a set of assigned nodes from a pool (FIG. 5, 514) of registered nodes (FIG. 5, 516) amongst which the computing task is to be distributed. Thus, the identifying, by the processor, of a set of assigned nodes from a pool corresponds to selecting from the computation node registry a distributed computing node);

send, to the first distributed computation worker and based at least on selecting the first distributed computation worker, a worker request message that identifies the first computing … [task] and includes the … data (par. 0099 … Transmit assignment instructions (728), when executed by the processor, may cause the processor to transmit to a secure and isolated container on each of the assigned nodes, a computing assignment of the computing task);

receive, from the first distributed computation worker, a worker response message that includes a result obtained based at least on executing the … data using the first computing … [task] (par. 0099 … Receive completed assignment instructions (730), when executed by the processor, may cause the processor to receive from each of the assigned nodes, an associated completed computing assignment); and

send, based at least on the worker response message, a client response message that includes at least a portion of the result to the client machine (par. 0099 … Distribute completed task instructions (734), when executed by the processor, may cause the processor to distribute the completed computing task to a requesting device).

Perumalla does not expressly describe: receive, from a client machine, a client request message that includes: information identifying a first computing model; and inference data to be executed using the first computing model. However, Ragha further teaches:

receive, from a client machine, a client request message that includes: information identifying a first computing model (par. 0022 a host 132 receives a request to perform an inference operation with a model; par. 0024 The hash function can take, as input, a combination of a model identifier included in the request and each of the “active” host identifiers, such as their network addresses); and

inference data to be executed using the first computing model (par. 0001 the model receives new data [inference data] that was not in its training data set and provides an output based on its learned parameters; par. 0011 According to some examples, a group of cloud provider network hosts, sometimes referred to collectively as an endpoint [workers], execute ML models to service inference requests).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of servicing inference requests for particular models of Ragha with the system/method of Perumalla, resulting in a system that provides for servicing inference requests including a particular model and inference data to be executed, as in Ragha. One of ordinary skill in the art would have been motivated to make this combination for the purpose of mitigating and potentially eliminating performance penalties by pre-loading models to hosts such that when a host is introduced, the host receives requests for models that are likely pre-loaded [par. 0011] and capable of executing.

Perumalla and Ragha do not expressly disclose: obtain a worker registry that indicates a first distributed computation worker located in a first cloud computing network associated with a first cloud computing provider is capable of executing the first computing model and a second distributed computation worker located in a second cloud computing network associated with a second cloud computing provider is capable of executing a second computing model.
However, Poorna teaches: obtain a worker registry that indicates a first distributed computation worker located in a first cloud computing network associated with a first cloud computing provider is capable of executing the first computing model and a second distributed computation worker located in a second cloud computing network associated with a second cloud computing provider is capable of executing a second computing model (col. 3, lines 29-32 a provider network 100 that adapts ML models for deployment to edge devices in a manner that ensures the edge devices are able to run the models; col. 4, lines 45-49 For example a user 128 may install or configure software [ML model] on the edge devices 102A-102N and “register” these devices with an edge device management service 110 [registry]; col. 5, lines 26-33 the device-characteristic map 118 may include an entry for each edge device that has been registered (e.g., with the edge device management service 110 [registry]), and may include a device identifier 202 … and one or more sets of device characteristics 204 [model capability]. The one or more sets of device characteristics 204 may, in many cases, be provided by a … provider of an edge device; col. 21, lines 36-38 model hosting system … could implement various … hosted or “cloud” computing environments).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of registering nodes capable of executing ML models in an edge device management service of Poorna with the system/method of Perumalla and Ragha, resulting in a system that provides a registry for registering nodes capable of executing different ML models located in various networks or cloud computing environments, as in Poorna. One of ordinary skill in the art would have been motivated to make this combination for the purpose of adapting ML models for deployment to edge devices in a manner that ensures the edge devices are able to run the models efficiently without significantly affecting the performance (e.g., accuracy, precision) of these models [col. 3, lines 29-33].

As per claim 2, Ragha further teaches: wherein the first computing model includes at least one of a pretrained machine learning model or an artificial intelligence model (par. 0011 According to some examples, a group of cloud provider network hosts [workers] … execute [machine learning] ML models to service inference requests).

As per claim 3, Ragha further teaches: wherein: the executing the inference data using the first computing model comprises applying the inference data to the at least one of the pretrained machine learning model or the artificial intelligence model; and the result includes an output value generated using the at least one of the pretrained machine learning model or the artificial intelligence model processing the inference data (par. 0001 the model receives new data [inference data] that was not in its training data set and provides an output based on its learned parameters).

As per claim 5, Perumalla further teaches: wherein the one or more processors are further to: expose an application procedure interface to one or more client machines, the one or more client machines including the client, wherein the client request message is received using the application procedure interface (par. 0021 … a broker which provides an interface wherein a requesting device [client] can request additional computing resources to complete a task; Fig. 1, via network 50).
As per claim 9, Perumalla further teaches: wherein the one or more processors are further to: determine information associated with the first distributed computation worker, the information including at least one of model availability information, geographic location information, communication latency information, or computation cost information; and register the first distributed computation worker using at least the information (par. 0021 the computing device in this example may be a broker which provides an interface wherein a requesting device [client] can request additional computing resources to complete a task; Fig. 1, via network 50; par. 0002 computing device receives a request to execute a computing task. The computing task includes parameters for the computing task and where nodes, … can register to provide the additional capacities that will be used to accomplish the task; par. 0066 the brokering device presents a user interface wherein the node user may register the computing device in a pool for decentralized processing).

As per claim 10, Ragha further teaches: receive information from a system storing at least the first computing model and the second computing model, wherein the designated first computing model is retrieved from the system by the first distributed computation worker (Fig. 1 describes a model hosting service 110 configured to store models; par. 0012 A model hosting service 110 of a cloud provider network can provide ML model hosting services).

As per claim 14, it is a method having similar limitations as claim 1. Thus, claim 14 is rejected for the same rationale as applied to claim 1.

As per claim 16, it is a method having similar limitations as claim 3. Thus, claim 16 is rejected for the same rationale as applied to claim 3.

As per claim 18, it is a method having similar limitations as claim 5. Thus, claim 18 is rejected for the same rationale as applied to claim 5.
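As an editorial aid only (not part of the record), the gateway flow the rejection maps for claims 1 and 9 — a client request naming a model, a registry lookup for a capable worker, dispatch, and relay of the result — can be sketched in a few lines. All class, field, and function names below are hypothetical illustrations, not taken from the cited references or the claims.

```python
# Minimal, hypothetical sketch of the claimed gateway flow: request in,
# registry lookup, worker selection by model capability, result out.
from dataclasses import dataclass, field

@dataclass
class Worker:
    worker_id: str
    cloud_provider: str                       # e.g. the "first cloud computing provider"
    models: set = field(default_factory=set)  # models this worker can execute

    def execute(self, model_id, inference_data):
        # Stand-in for remote execution of the inference data with the model.
        return {"model": model_id, "output": f"result({inference_data})"}

class Gateway:
    def __init__(self, registry):
        self.registry = registry  # the "worker registry"

    def handle(self, request):
        model_id = request["model_id"]    # information identifying the model
        data = request["inference_data"]  # inference data to be executed
        # Select a worker identified in the registry as capable of the model.
        worker = next(w for w in self.registry if model_id in w.models)
        result = worker.execute(model_id, data)  # worker request/response pair
        return {"result": result}                # client response message

registry = [
    Worker("w1", "provider-a", {"model-x"}),
    Worker("w2", "provider-b", {"model-y"}),
]
gw = Gateway(registry)
resp = gw.handle({"model_id": "model-y", "inference_data": [1, 2, 3]})
print(resp["result"]["model"])  # model-y
```

In this sketch the registry is a plain list; per the claim 9 discussion, each entry could additionally carry latency, cost, and location fields used during selection.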
As per claim 23, Perumalla further teaches: selecting, from the worker registry, the second distributed computation worker based at least on the first computing … [task parameters] also being identified in the worker … [database] as being executable at the second distributed computation worker (par. 0076 The system (500) may include a database (502) [registry] of capabilities of registered nodes (516) to which computing assignments of a computing task are to be assigned; par. 0099 … when executed by the processor, may cause the processor to identify, based on the parameters for the computing task, a set of assigned nodes from a pool (FIG. 5, 514) of registered nodes (FIG. 5, 516) amongst which the computing task is to be distributed); send, to the second distributed computation worker, a third request message that identifies the computing … [task] (par. 0099 … Transmit assignment instructions (728), when executed by the processor, may cause the processor to transmit to a secure and isolated container on each of the assigned nodes, a computing assignment of the computing task); and receive, from the second distributed computation worker, a third response message that includes a second result obtained based at least on executing the computing … [task], wherein the third response message further includes at least a portion of the second result (par. 0099 … Receive completed assignment instructions (730), when executed by the processor, may cause the processor to receive from each of the assigned nodes, an associated completed computing assignment).

Ragha further teaches: select, from the worker registry, a second distributed computation worker of the one or more distributed computation workers based at least on the computing model (par. 0023 The model hosting service 110 can maintain a registry 124 of MMEs and associated hosts. For example, an entry for MME “ABC” can have an associated set of host identifiers that identify each of its associated hosts; par. 0022 a host 132 receives a request to perform an inference operation with a model; par. 0024 The hash function can take, as input, a combination of a model identifier included in the request and each of the “active” host identifiers, such as their network addresses).

As per claim 24, Ragha further teaches: storing a model registry that indicates at least one of a name of the computing model, a type of the computing model, a version of the computing model, or a source of the computing model; and determining the computing model for executing the inference data based at least on the model registry (Fig. 1 describes data stores 120 maintaining a registry 124 of MMEs and associated hosts; par. 0023 For example, an entry for MME “ABC” can have an associated set of host identifiers).

Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Perumalla in view of Ragha and Poorna, and further in view of Guo et al., “When Network Operation Meets Blockchain: An Artificial-Intelligence-Driven Customization Service for Trusted Virtual Resources of IoT”. Guo was cited in a previous office action.

As per claim 6, Perumalla, Ragha and Poorna teach the limitations of claim 1. Perumalla, Ragha and Poorna do not expressly describe: wherein the first distributed computation worker includes a signal node in a peer-to-peer artificial intelligence network. However, Guo teaches: wherein the first distributed computation worker includes a signal node in a peer-to-peer artificial intelligence network (Fig. 3 describes a peer-to-peer AI network comprising a service requester/client node, an ENT resource blockchain [equiv. signal node], resource nodes and a resource prover node; see also Fig. 2).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of implementing AI-driven customization services for virtual resources via use of an ETN resource blockchain of Guo with the system/method of Perumalla, Ragha and Poorna, resulting in a system for implementing trusted virtual resource sharing access to services/resources in a peer-to-peer AI network. One of ordinary skill in the art would have been motivated to make this combination for the purpose of achieving intelligent and trusted sharing of heterogeneous IoT resources and improving dynamic allocation (p. 47, left col., lines 16-20).

As per claim 8, Guo further teaches: wherein: the first distributed computation workers are controlled by an entity; and the gateway, the first distributed computation worker, and the client machine are controlled by different entities (pg. 48, left column, lines 30-39, As shown in Fig. 2, several network resource management domains exist in the blockchain network. In each network resource management domain, the resource provider provides network resources to form a resource pool, which is managed regionally by the resource management organization. In the resource blockchain network, there are three types of basic functional entities: the registration authority (RA), the blockchain node (BN), and the network controller (NC)).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Perumalla in view of Ragha and Poorna, and further in view of Gharibi et al. (U.S. Pub. No. 20220029971 A1). Gharibi was cited in a previous office action.

As per claim 11, Perumalla, Ragha and Poorna teach the limitations of claim 1. Perumalla, Ragha and Poorna do not expressly describe: wherein the first computing model is provided by the first computing model provider and the second computing model is provided by the second computing model provider.
However, Gharibi teaches: wherein the first computing model is provided by the first computing model provider and the second computing model is provided by the second computing model provider (par. 0082 process might involve receiving a first model from a first entity 318 and receiving a second model from a second entity 314 and utilizing the approach described above, performing a secure multi-party computation which can involve exchanging portions (i.e., less than the full amount of) a respective one-way encrypted version of respective models from the model providers).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Perumalla, Ragha and Poorna by incorporating the technique of receiving models as set forth by Gharibi because it would facilitate efficiently obtaining models from different entities in order to perform operations on data based at least on models obtained from different entities, with predictable results.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Perumalla in view of Ragha and Poorna, and further in view of Polacek et al. (U.S. Pub. No. 20170230295 A1). Polacek was cited in a previous office action.

As per claim 12, Perumalla, Ragha and Poorna teach the limitations of claim 1. Perumalla, Ragha and Poorna do not expressly teach: determine a dynamic latency value based at least on a timing between when the worker request message is sent and the worker response message is received; and deregister the first distributed computation worker from the worker registry based at least on determining that the dynamic latency value is inconsistent with a registered latency value stored in the worker registry.

However, Polacek teaches: determine a dynamic latency value based at least on a timing between when the worker request message is sent and the worker response message is received; and deregister the first distributed computation worker from the worker registry based at least on determining that the dynamic latency value is inconsistent with a registered latency value stored in the worker registry (par. 0060 circuit breaker can be used to monitor the latency and failure rate of various servers, and remove a server from rotation if that server's latency or failure rate is too high. Servers can be added back into the rotation if their metrics improve).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Perumalla, Ragha and Poorna by incorporating the method of removing a server from a rotation as set forth by Polacek because it would allow for deregistering poorly performing registered nodes relative to a specified latency level in order to maintain a desired level of performance required by requests.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Perumalla in view of Ragha and Poorna, and further in view of Hallenstal et al. (U.S. Pub. No. 20110299682 A1). Hallenstal was cited in a previous office action.

As per claim 13, Perumalla, Ragha and Poorna teach the limitations of claim 1. Perumalla, Ragha and Poorna do not expressly teach: send a validation request identifying validation input data to the first distributed computation worker, receive a validation response including a validation output value, and deregister the distributed computation worker from the worker registry based at least on determining the validation output value does not match a predetermined validation value determined by executing the validation input data using the first computing model.
However, Hallenstal teaches: send a validation request identifying validation input data to the first distributed computation worker, receive a validation response including a validation output value, and deregister the distributed computation worker from the worker registry based at least on determining the validation output value does not match a predetermined validation value determined by executing the validation input data using the first computing model (page 5, claim 3, de-registering the UE [worker node] if the 3GPP circuit switched node rejects the authentication of the UE).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Perumalla, Ragha and Poorna by incorporating the method of de-registering a UE as set forth by Hallenstal because it would allow for de-registering based at least on a determination that validation/authentication was not successful or matching, with predictable results.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Perumalla in view of Ragha and Poorna, and further in view of Kraker et al. (U.S. Pub. No. 20210249003 A1).

As per claim 22, Perumalla, Ragha and Poorna do not expressly describe: the first request further includes additional information indicating one or more criteria, the one or more criteria including at least one of a latency, a cost, or a geographic location; and the selecting the first distributed computation worker is further based at least on the one or more criteria.

However, Kraker teaches: wherein: the first request further includes additional information indicating one or more criteria, the one or more criteria including at least one of a latency, a cost, or a geographic location; and the selecting the first distributed computation worker is further based at least on the one or more criteria (par. 0051 The data processing system 102 can identify the request by processing an input audio signal detected by a microphone of the client computing device 104. The request can include selection criteria of the request, such as the device type, location; par. 0052 Responsive to the request, the data processing system 102 can select a digital component object).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Perumalla, Ragha and Poorna by incorporating the technique of selecting a digital component object based on specified geographic location criteria as set forth by Kraker, implemented in the manner of selecting a worker for servicing inference requests based at least on the geographical location of the worker. This would have resulted in reduced latency and improved performance.

Claims 19, 21 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Perumalla et al. (U.S. Pub. No. 20230082680 A1) in view of Ragha et al. (U.S. Pub. No. 20250110784 A1), and further in view of Passban et al. (U.S. Patent No. 12400104 B1). Perumalla and Ragha were cited in a previous office action.

As per claim 19, it is one or more processors comprising: processing circuitry (Fig. 5, Processor 504) to:

receive, from a client machine, a client request message that includes: information identifying a first computing … [task]; and … data to be executed … (par. 0002 a computing device receives a request to execute a computing task; par. 0056 computing device receives (block 401) a request to execute a computing task. The computing task may be any variety of tasks. Examples include image rendering, data analysis, data mining or any variety of operation [type of output to produce]);

obtain a worker registry that indicates a first distributed computation … [node] and a second distributed computation worker … [node] (par. 0076 The system (500) may include a database (502) [registry] of capabilities of registered nodes (516) to which computing assignments of a computing task are to be assigned);

select, from the worker registry and based at least on the client request message, the first distributed computation worker [node] based at least on the first distributed computation worker [node] being identified in the worker registry as being capable of executing the first computing … [task] (par. 0099 … receive request instructions (724), when executed by the processor, cause the processor to receive a request to execute a computing task, which computing task includes parameters for the computing task. Identify nodes instructions (726), when executed by the processor, may cause the processor to identify, based on the parameters for the computing task, a set of assigned nodes from a pool (FIG. 5, 514) of registered nodes (FIG. 5, 516) amongst which the computing task is to be distributed);

send, to the first distributed computation worker and based at least on selecting the first distributed computation worker, a worker request message that identifies the first computing … [task] and includes the … data (par. 0099 … Transmit assignment instructions (728), when executed by the processor, may cause the processor to transmit to a secure and isolated container on each of the assigned nodes, a computing assignment of the computing task);

receive, from the first distributed computation worker, a worker response message that includes a result obtained based at least on executing the … data using the first computing … [task] (par. 0099 … Receive completed assignment instructions (730), when executed by the processor, may cause the processor to receive from each of the assigned nodes, an associated completed computing assignment); and

send, based at least on the worker response message, a client response message that includes at least a portion of the result to the client machine (par. 0099 … Distribute completed task instructions (734), when executed by the processor, may cause the processor to distribute the completed computing task to a requesting device).

However, Ragha further teaches: receive, from a client machine, a client request message that includes: information identifying a first computing model (par. 0022 a host 132 receives a request to perform an inference operation with a model; par. 0024 The hash function can take, as input, a combination of a model identifier included in the request and each of the “active” host identifiers, such as their network addresses).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of servicing inference requests for particular models of Ragha with the system/method of Perumalla, resulting in a system that provides for servicing inference requests including a particular model and inference data to be executed, as in Ragha. One of ordinary skill in the art would have been motivated to make this combination for the purpose of mitigating and potentially eliminating performance penalties by pre-loading models to hosts such that when a host is introduced, the host receives requests for models that are likely pre-loaded [par. 0011] and capable of executing.

Perumalla and Ragha do not expressly describe: indication … to fine-tune the computing model. However, Passban teaches: indication … to fine-tune the computing model (col. 14, lines 3-5 The first device 110 may determine, based on the respective output data, to fine-tune the received model update data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the technique of determining to fine-tune received model update data of Passban with the system/method of Perumalla and Ragha, resulting in a system that provides for determining requests including particular model data to fine-tune, as in Passban. One of ordinary skill in the art would have been motivated to make this combination for the purpose of improving results of the NN with respect to a dataset by adjusting some or all of the parameters [col. 2, lines 42-43].

As per claim 21, Perumalla further discloses: the first request message further indicates information associated with a type of result; the second request message further indicates the information associated with the type of result; and the result is generated based at least on the information (par. 0002 computing device receives a request to execute a computing task. The computing task includes parameters for the computing task; par. 0056 computing device receives (block 401) a request to execute a computing task. The computing task may be any variety of tasks. Examples include image rendering, data analysis, data mining or any variety of operation [type of output to produce]).

As per claim 25, Passban further teaches: wherein the result includes at least one or more layers of the computing model that were fine-tuned (col. 3, lines 36-41 FIG. 2A illustrates a first example selection and transformation of first model 115 data to generate first model update data 155a. In this example, the first model 115 may include a plurality of layers L1-L5, and a protocol 150a may specify that the first model update data 155a is to include parameters from layers L3 and L4).

Response to Arguments

Applicant's arguments with respect to claims 1, 14 and 19 have been considered but are moot in view of the new ground(s) of rejection.
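As a further editorial aid (again, not part of the record), the two worker health behaviors at issue in claims 12 and 13 — deregistering a worker whose observed latency is inconsistent with its registered latency, and deregistering a worker whose output on known validation input does not match a precomputed expected value — could be sketched as follows. Every name and threshold here is a hypothetical illustration, not drawn from Polacek, Hallenstal, or the claims.

```python
# Hypothetical sketch of the claim 12 / claim 13 deregistration behaviors.

def check_latency(registry, worker_id, send_ts, recv_ts, tolerance=2.0):
    """Claim 12 sketch: deregister if the observed round-trip latency is
    inconsistent with the latency stored in the worker registry."""
    observed = recv_ts - send_ts
    registered = registry[worker_id]["latency"]
    if observed > registered * tolerance:
        del registry[worker_id]  # remove the worker from rotation
        return False
    return True

def check_validation(registry, worker_id, run_model, validation_input, expected):
    """Claim 13 sketch: deregister if the worker's validation output value
    does not match the predetermined expected value."""
    if run_model(validation_input) != expected:
        del registry[worker_id]
        return False
    return True

registry = {"w1": {"latency": 0.05}, "w2": {"latency": 0.05}}
check_latency(registry, "w1", send_ts=0.0, recv_ts=0.5)  # 0.5s exceeds the 0.1s cap
check_validation(registry, "w2", run_model=lambda x: x * 2,
                 validation_input=3, expected=6)          # output matches; w2 kept
print(sorted(registry))  # ['w2']
```

A production gateway would typically re-admit workers whose metrics recover, as the cited circuit-breaker passage notes; this sketch only shows the removal path.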
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Pub. No. 20240177049 A1 teaches techniques for confidential tuning of pre-trained machine learning models.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Willy W. Huaracha, whose telephone number is (571) 270-5510. The examiner can normally be reached M-F, 8:30 AM to 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bradley Teets, can be reached at (571) 272-3338. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
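The reply-period paragraph above combines three dates: a three-month shortened statutory period, a possible extension to the advisory action's mailing date, and a hard six-month statutory maximum. A minimal sketch of that deadline arithmetic — the helper names are hypothetical, and this models only the reply deadline, not the extension-fee calculation under 37 CFR 1.136(a):

```python
import calendar
from datetime import date
from typing import Optional


def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    m = d.month - 1 + months
    year, month = d.year + m // 12, m % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)


def reply_deadline(mailed: date, advisory_mailed: Optional[date] = None) -> date:
    """Shortened statutory period runs 3 months from the action's mailing
    date; if an advisory action mails after that date, the period runs to
    the advisory's mailing date, capped at the 6-month statutory maximum."""
    shortened = add_months(mailed, 3)
    statutory_max = add_months(mailed, 6)
    if advisory_mailed is not None and advisory_mailed > shortened:
        return min(advisory_mailed, statutory_max)
    return shortened


# For this action (mailed Mar 16, 2026), the shortened period ends Jun 16, 2026.
assert reply_deadline(date(2026, 3, 16)) == date(2026, 6, 16)
```

If an advisory action mailed, say, July 1, 2026, the period would run to that date; an advisory mailed after September 16, 2026 could not extend the period past the six-month statutory cap.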
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WH/
Examiner, Art Unit 2195

/BRADLEY A TEETS/
Supervisory Patent Examiner, Art Unit 2197

Prosecution Timeline

Jan 03, 2024
Application Filed
Apr 19, 2024
Non-Final Rejection — §103, §112
Sep 25, 2024
Response Filed
Oct 30, 2024
Final Rejection — §103, §112
Dec 23, 2024
Response after Non-Final Action
Jan 29, 2025
Request for Continued Examination
Feb 07, 2025
Response after Non-Final Action
Aug 13, 2025
Non-Final Rejection — §103, §112
Nov 18, 2025
Response Filed
Mar 16, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547427
DESERIALIZATION METHOD AND APPARATUS, AND COMPUTING DEVICE
2y 5m to grant Granted Feb 10, 2026
Patent 12541390
SYSTEM SUPPORT REPLICATOR
2y 5m to grant Granted Feb 03, 2026
Patent 12504993
HIGH-THROUGHPUT CONFIDENTIAL COMPUTING METHOD AND SYSTEM BASED ON RISC-V ARCHITECTURE
2y 5m to grant Granted Dec 23, 2025
Patent 12455753
CLOUD BASED AUDIO / VIDEO OPERATING SYSTEMS
2y 5m to grant Granted Oct 28, 2025
Patent 12443440
METHOD FOR EXECUTING DATA PROCESSING TASK IN CLUSTER MIXED DEPLOYMENT SCENARIO, ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Oct 14, 2025
Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+53.4%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 410 resolved cases by this examiner. Grant probability derived from career allow rate.
