Prosecution Insights
Last updated: April 19, 2026
Application No. 18/187,399

METHODS AND SYSTEMS FOR DEPLOYING AN ARTIFICIAL WORKFORCE

Non-Final OA: §102, §103, §112
Filed: Mar 21, 2023
Examiner: XIE, THEODORE L
Art Unit: 3623
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Artificial Compute, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 50% (Moderate)
OA Rounds: 1-2
To Grant: 1y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (2 granted / 4 resolved; -2.0% vs TC avg)
Interview Lift: +100.0% (strong) across resolved cases with an interview
Avg Prosecution: 1y 7m (fast prosecutor)
Career History: 42 total applications across all art units; 38 currently pending

Statute-Specific Performance

§101: 36.6% (-3.4% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 4 resolved cases.
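The headline examiner metrics above are simple ratios. A minimal sketch of how they could be derived from resolved-case counts (the helper names are hypothetical, and the per-group rates in the lift example are illustrative values chosen to show a +100% lift, not the tool's actual inputs):

```python
def allow_rate(granted, resolved):
    """Career allowance rate as a fraction of resolved cases."""
    return granted / resolved

def interview_lift(rate_with, rate_without):
    """Relative change in allowance rate when an interview was held."""
    return (rate_with - rate_without) / rate_without

# 2 granted / 4 resolved, matching the career figure reported above.
career = allow_rate(2, 4)
print(f"{career:.0%}")   # 50%

# Illustrative per-group rates: 80% allowance with an interview vs 40% without.
lift = interview_lift(0.8, 0.4)
print(f"{lift:+.1%}")    # +100.0%
```

With only 4 resolved cases, both figures carry wide error bars, which is presumably why the dashboard flags the Tech Center comparison as an estimate.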

Office Action

Rejections under §102, §103, and §112
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Application

Claims 1-20 have been examined in this application. This communication is the first action on the merits.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The limitations identified below lack sufficient antecedent basis.

Claim 9 recites the limitation "the AI model" in Line 2. For purposes of examination, Claim 9 will be understood as "The medium of claim [[1]] 8…" in light of similar Claim 17.

Claim 10 recites the limitation "the AI model" in Line 2. For purposes of examination, Claim 10 will be understood as "The medium of claim [[1]] 8…" in light of similar Claim 17.

Claim 11 recites the limitation "the AI model" in Line 2. For purposes of examination, Claim 11 will be understood as "The medium of claim [[1]] 8…" in light of similar Claim 18.

Claim 12 recites the limitation "the AI model" in Line 1. For purposes of examination, Claim 12 will be understood as "The medium of claim [[1]] 8…" in light of similar Claim 19.
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 4, 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhang (US 20180052664 A1).

Claims 1, 13

A tangible, non-transitory, machine-readable medium storing instructions that when executed by one or more processors effectuate operations, comprising: receiving, from a computing device, a request for an agent, the request including a set of one or more tasks to be automatically completed by the agent;

In [0007], "In one example, a method implemented on a computer having at least one processor, a storage, and a communication platform for developing a virtual agent is disclosed...The set of modules are then integrated in the order to generate the virtual agent which, when deployed, performs actions corresponding to the set of modules in the order".
In [0045], "The virtual agent development engine 170 in this example may develop a customized virtual agent for a developer via a bot design programming interface provided to the developer...The virtual agent development engine 170 may also store the customized tasks into the customized task database 139, which can provide previously generated tasks as a template for future task generation or customization during virtual agent development". We understand these stored templates to be particular commands that can execute a task.

Additionally, the live update of tasks via user requests derived from usage is supported in [0044], "During the conversation between the virtual agent and the user, the virtual agent can analyze dialog states of the dialog and manage real-time tasks related to the dialog, based on data stored in various databases, e.g. a knowledge database 134, a publisher database 136, and a customized task database 139".

obtaining, from a database associated with the computing device, one or more preconfigured commands for performing the set of one or more tasks; generating, using the one or more preconfigured commands,

In [0045], "The virtual agent development engine 170 in this example may develop a customized virtual agent for a developer via a bot design programming interface provided to the developer...The virtual agent development engine 170 may also store the customized tasks into the customized task database 139, which can provide previously generated tasks as a template for future task generation or customization during virtual agent development". We understand these stored templates to be particular commands that can execute a task.
Additionally, the live update of tasks and derivative commands via user requests derived from usage is supported in [0044], "During the conversation between the virtual agent and the user, the virtual agent can analyze dialog states of the dialog and manage real-time tasks related to the dialog, based on data stored in various databases, e.g. a knowledge database 134, a publisher database 136, and a customized task database 139. The virtual agent may also perform product/service recommendation to the user based on a user database 132. In one embodiment, when the virtual agent determines that the user's intent has changed or the user is unsatisfied with the current dialog, the virtual agent may redirect the user to a different agent based on a virtual agent database 1".

a virtual agent configured to perform the one or more preconfigured commands; and providing an indication that the agent is configured to perform the set of one or more tasks and available for deployment.

In [0096], "The modified program codes are integrated at 1118 to generate a customized virtual agent. The customized virtual agent is stored and sent at 1120 to the developer". The delivery of the generated agent serves as an indication of readiness for deployment. Intermediate development status updates also appear in [0095], "One or more virtual agent modules are determined at 1108 based on the inputs. The development status of the virtual agent is stored or updated at 1110".

Claim 13 is rejected as disclosing substantially similar limitations as Claim 1.

Claim 2

The medium of claim 1, the operations further comprising: receiving a request for changing performance of the agent; and updating at least one of the set of one or more tasks based upon the received request.

In [0065], "FIG. 4 depicts an exemplary high level system diagram of a dynamic dialog state analyzer 210 in a service virtual agent, e.g. the service virtual agent 1 142 in FIG. 2, according to an embodiment of the present teaching. The dynamic dialog state analyzer 210 can keep track of the dialog state of the conversation with the user and the user's intent based on continuously received user input. The dialog state and user intent are also continuously updated based on the new input from the user. As shown in FIG. 4, the dynamic dialog state analyzer 210 comprises a parser 402, one or more natural language models 404, a dictionary 406, a dialog state generator 408, and a dialog log recorder 410".

In [0068], "For example, upon receiving all related answers of the user extracted from the user input regarding a selling product, the dialog state generator 408 may retrieve a dialog state from the dialog log database 212 and update the dialog state to indicate that the user is ready to buy the product, and it is time to provide payment method or platform to the user".

Claim 4

The medium of claim 1, the operations further comprising generating an executable set of computer code for one or more commands in accordance with a determination that the database does not include one or more preconfigured commands configured to perform at least one of the one or more tasks.

With respect to generating source code in response to a need for customized program execution, in [0090], "Upon receiving the instruction for integrating, the visual input based program integrator 1012 in this example may integrate the modules obtained from the virtual agent module determiner 1006. For each of the modules, the visual input based program integrator 1012 may retrieve program source code for the module from the virtual agent program database 1014. For modules that have parameters customized based on inputs of the developer, the visual input based program integrator 1012 may modify the obtained source codes for the module based on the customized parameters.
In one embodiment, the visual input based program integrator 1012 may invoke the machine learning engine 1016 to further modify the codes based on machine learning".

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 20180052664 A1) in view of Ravichandran (US 10963754 B1).

Claim 3

As to Claim 3, Zhang teaches all the limitations of Claim 1 as discussed above. Zhang does not expressly disclose the remaining limitations. However, Ravichandran teaches:

The medium of claim 1, wherein the request for the agent further comprises a sample output of the set of one or more tasks, the sample output including one or more text, audio, image, and video.

In Col 7 Lines 8-33, "FIG. 6 illustrates embodiments of a few-shot image classification service graphical user interface (GUI).
In particular, this is a part of an overall GUI that allows a user to upload and classify images and/or train a model of his/her choosing with a small (on a relative scale) data set images. While this description is geared toward image classification, the GUI may be modified to fit the needs of other classifications (such as video, audio, etc.). In some implementations, the user interacts with this service (such as the few-shot classification service 506) via intermediate networks and interfaces such as those detailed herein. In this illustration, GUI 600 allows the user to request several different types of actions be performed by the few-shot image classification service. In particular, through this interface, a user may upload one or more images through an add image block 603 and then provide a label for the one or more images using an input box 602. Images may be added using a drop box 605 and/or by providing an image location 607 for the image (such as a URL or a pointer to a location in storage of a provider network). Once an image is "added," in some embodiments, the user uploads the image via an upload button 609. In other embodiments, the upload occurs automatically. In this GUI, the user may also cause a training of the model via a request using train model input box 601".

Zhang discloses a system for developing artificial agents. Ravichandran discloses a system meant to leverage few-shot learning in the application of AI models. Each reference discloses means of utilizing and interfacing with artificial intelligence. Extending the few-shot usage of Ravichandran to the system of Zhang is applicable as they share the field of endeavor of artificial intelligence. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to leverage the few-shot prompting taught in Ravichandran and apply it to the system of Zhang.
Motivation to do so comes from the fact that the claim is plainly directed to the predictable result of combining known items in the prior art, with the expected benefit that adopting said few-shot learning would aid users in calibrating agents to perform tasks that may not have extensive representation in the training data.

Claims 5, 8, 11-12, 16, 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 20180052664 A1) in view of O'Malia (US 20220036153 A1).

Claim 5

As to Claim 5, Zhang teaches all the limitations of Claim 1 as discussed above. Zhang does not expressly disclose the remaining limitations. However, O'Malia teaches:

The medium of claim 1, the operations further comprising generating a text document including text describing each of the one or more tasks of the set of one or more tasks to be performed by the agent.

See [0056] regarding the intended purpose of ULLM output: "The AI Agent Controller 102 may provide a system for the translation or transformation of visual and/or other data from the AI Agent environment into a format which may be optimized for and processed by the ULLM 114 to provide outputs which may, through presentation to the AI Agent 112, shape AI Agent action selection in a favorable manner, and which may enable the AI Agent 112 to generalize its abilities to more diverse environments than it has seen in the past". Specifying that its output is in text, in [0069], "The AI Agent Controller 102 may provide (208) the output text to the AI Agent 112. The AI Agent 112 may map semantics provided in the output text to actions". See [0068] for an example of ULLM output producing text instructions.

Zhang discloses a system for developing artificial agents. O'Malia discloses a system meant to enhance artificial agent performance. Each reference discloses means of utilizing and interfacing with AI agents.
Extending the AI-guided agent management recorded in O'Malia to the system of Zhang is applicable as they share the AI agent field of endeavor. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to apply the AI-guided agent management of O'Malia to the system of Zhang. Motivation to do so comes from the fact that the claim is plainly directed to the predictable result of combining known items in the prior art, with the expected benefit of adopting the AI-guided agent management of O'Malia found in [0023] of O'Malia: "The provided methods and/or systems enabling an AI Agent to benefit from the encoded mental models and or knowledge in the AI Agent Controller/ULLM may enable the AI Agent to incorporate, or make use of, internal or external knowledge bases, or facts encoded in the AI agent's past training, or other fact-related techniques as part of the AI Agent's capability set".

Claims 8, 16

As to Claim 8, Zhang teaches all the limitations of Claim 1 as discussed above. Zhang teaches:

The medium of claim 1, the operations further comprising: determining the database does not have at least one command for performing the set of tasks; and based on the determining, generating a command for performing the task of the set of tasks

With respect to generating source code in response to a need for customized program execution, in [0090], "Upon receiving the instruction for integrating, the visual input based program integrator 1012 in this example may integrate the modules obtained from the virtual agent module determiner 1006. For each of the modules, the visual input based program integrator 1012 may retrieve program source code for the module from the virtual agent program database 1014.
For modules that have parameters customized based on inputs of the developer, the visual input based program integrator 1012 may modify the obtained source codes for the module based on the customized parameters. In one embodiment, the visual input based program integrator 1012 may invoke the machine learning engine 1016 to further modify the codes based on machine learning".

Zhang does not expressly disclose the remaining limitations. However, O'Malia teaches:

generating a command for performing the task of the set of tasks using an artificial intelligence (AI) model.

The ULLM inherently shapes agent behavior by means of its LLM functionality, in [0056], "The AI Agent Controller 102 may provide a system for the translation or transformation of visual and/or other data from the AI Agent environment into a format which may be optimized for and processed by the ULLM 114 to provide outputs which may, through presentation to the AI Agent 112, shape AI Agent action selection in a favorable manner, and which may enable the AI Agent 112 to generalize its abilities to more diverse environments than it has seen in the past".

It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to apply the AI-guided agent management of O'Malia to the system of Zhang. Motivation to do so comes from the same rationale as outlined above with respect to Claim 5.

Claim 16 is rejected as disclosing substantially similar limitations as Claim 8.

Claims 11, 18

As to Claim 11, Zhang combined with O'Malia teaches all the limitations of Claim 8 as discussed above. Zhang teaches:

The medium of claim 1, the operations further comprising generating an updated version of the command by providing inputs of human interactions with computer software into the AI model.
Command updating occurs in part from live interaction with the user, as outlined above in the rejection of Claim 1; see [0037], "More specifically, based on machine learning and AI technique, the disclosed system can learn how to strategically ask user questions, present intermediate candidates to the users based on historical human-human or human-machine or machine-machine conversation data, together with human or machine action data that involves calling third party applications, services or databases…The disclosed system can use the knowledge base and historical conversations for recommending high quality response messages for future conversation".

Claim 18 is rejected as disclosing substantially similar limitations as Claim 11.

Claims 12, 19

As to Claim 12, Zhang combined with O'Malia teaches all the limitations of Claim 8 as discussed above. Zhang teaches:

The medium of claim 1, the operations further comprising training the AI model by providing human user input to generate updated versions of commands.

Command updating occurs in part from live interaction with the user, as outlined above in the rejection of Claim 1; see [0037], "More specifically, based on machine learning and AI technique, the disclosed system can learn how to strategically ask user questions, present intermediate candidates to the users based on historical human-human or human-machine or machine-machine conversation data, together with human or machine action data that involves calling third party applications, services or databases". Given that this human user data is salient to guiding actions of the agent, we consider this relevant to updating commands.

Claim 19 is rejected as disclosing substantially similar limitations as Claim 12.

Claim 20

As to Claim 20, Zhang combined with O'Malia teaches all the limitations of Claim 16 as discussed above. Zhang teaches:

The method of claim 16, further comprising steps for providing artificial compute agents.
In [0096], "The modified program codes are integrated at 1118 to generate a customized virtual agent. The customized virtual agent is stored and sent at 1120 to the developer". The delivery of the generated agent serves as an indication of readiness for deployment. Intermediate development status updates also appear in [0095], "One or more virtual agent modules are determined at 1108 based on the inputs. The development status of the virtual agent is stored or updated at 1110".

Claims 9-10, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 20180052664 A1) in view of O'Malia (US 20220036153 A1) in further view of Ravichandran (US 10963754 B1).

Claim 9

As to Claim 9, Zhang combined with O'Malia teaches all the limitations of Claim 8 as discussed above. Zhang teaches:

The medium of claim 1, the operations further comprising generating an updated version of the command

Command updating occurs in part from live interaction with the user, as outlined above in the rejection of Claim 1; see [0065], "FIG. 4 depicts an exemplary high level system diagram of a dynamic dialog state analyzer 210 in a service virtual agent, e.g. the service virtual agent 1 142 in FIG. 2, according to an embodiment of the present teaching. The dynamic dialog state analyzer 210 can keep track of the dialog state of the conversation with the user and the user's intent based on continuously received user input. The dialog state and user intent are also continuously updated based on the new input from the user. As shown in FIG. 4, the dynamic dialog state analyzer 210 comprises a parser 402, one or more natural language models 404, a dictionary 406, a dialog state generator 408, and a dialog log recorder 410".
In [0068], "For example, upon receiving all related answers of the user extracted from the user input regarding a selling product, the dialog state generator 408 may retrieve a dialog state from the dialog log database 212 and update the dialog state to indicate that the user is ready to buy the product, and it is time to provide payment method or platform to the user".

Zhang combined with O'Malia does not expressly disclose the remaining limitations. However, Ravichandran teaches:

by providing a video input into the AI model.

We consider the few-shot prompting that can be used to shape the functionality of the AI models to disclose this limitation; in Col 7 Lines 8-15, "While this description is geared toward image classification, the GUI may be modified to fit the needs of other classifications (such as video, audio, etc.). In some implementations, the user interacts with this service (such as the few-shot classification service 506) via intermediate networks and interfaces such as those detailed herein".

Zhang combined with O'Malia discloses a system for developing artificial agents. Ravichandran discloses a system meant to leverage few-shot learning in the application of AI models. Each reference discloses means of utilizing and interfacing with artificial intelligence. Extending the few-shot usage of Ravichandran to the system of Zhang combined with O'Malia is applicable as they share the field of endeavor of artificial intelligence. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to leverage the few-shot prompting taught in Ravichandran and apply it to the system of Zhang combined with O'Malia.
Motivation to do so comes from the fact that the claim is plainly directed to the predictable result of combining known items in the prior art, with the expected benefit that adopting said few-shot learning would aid users in calibrating agents to perform tasks that may not have extensive representation in the training data.

Claim 10

As to Claim 10, Zhang combined with O'Malia teaches all the limitations of Claim 8 as discussed above. Zhang teaches:

The medium of claim 1, the operations further comprising generating an updated version of the command

Command updating occurs in part from live interaction with the user, as outlined above in the rejection of Claim 1; see [0065], "FIG. 4 depicts an exemplary high level system diagram of a dynamic dialog state analyzer 210 in a service virtual agent, e.g. the service virtual agent 1 142 in FIG. 2, according to an embodiment of the present teaching. The dynamic dialog state analyzer 210 can keep track of the dialog state of the conversation with the user and the user's intent based on continuously received user input. The dialog state and user intent are also continuously updated based on the new input from the user. As shown in FIG. 4, the dynamic dialog state analyzer 210 comprises a parser 402, one or more natural language models 404, a dictionary 406, a dialog state generator 408, and a dialog log recorder 410". In [0068], "For example, upon receiving all related answers of the user extracted from the user input regarding a selling product, the dialog state generator 408 may retrieve a dialog state from the dialog log database 212 and update the dialog state to indicate that the user is ready to buy the product, and it is time to provide payment method or platform to the user".

Zhang combined with O'Malia does not expressly disclose the remaining limitations. However, Ravichandran teaches:

by providing an audio input into the AI model.
We consider the few-shot prompting that can be used to shape the functionality of the AI models to disclose this limitation; in Col 7 Lines 8-15, "While this description is geared toward image classification, the GUI may be modified to fit the needs of other classifications (such as video, audio, etc.). In some implementations, the user interacts with this service (such as the few-shot classification service 506) via intermediate networks and interfaces such as those detailed herein".

It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to leverage the few-shot prompting taught in Ravichandran and apply it to the system of Zhang combined with O'Malia. Motivation to do so comes from the same rationale as outlined above with respect to Claim 9.

Claim 17

As to Claim 17, Zhang combined with O'Malia teaches all the limitations of Claim 16 as discussed above. Zhang teaches:

The medium of claim 1, further comprising generating an updated version of the command

Command updating occurs in part from live interaction with the user, as outlined above in the rejection of Claim 1; see [0065], "FIG. 4 depicts an exemplary high level system diagram of a dynamic dialog state analyzer 210 in a service virtual agent, e.g. the service virtual agent 1 142 in FIG. 2, according to an embodiment of the present teaching. The dynamic dialog state analyzer 210 can keep track of the dialog state of the conversation with the user and the user's intent based on continuously received user input. The dialog state and user intent are also continuously updated based on the new input from the user. As shown in FIG. 4, the dynamic dialog state analyzer 210 comprises a parser 402, one or more natural language models 404, a dictionary 406, a dialog state generator 408, and a dialog log recorder 410".
In [0068], "For example, upon receiving all related answers of the user extracted from the user input regarding a selling product, the dialog state generator 408 may retrieve a dialog state from the dialog log database 212 and update the dialog state to indicate that the user is ready to buy the product, and it is time to provide payment method or platform to the user".

Zhang combined with O'Malia does not expressly disclose the remaining limitations. However, Ravichandran teaches:

by providing a video input or an audio input into the AI model.

We consider the few-shot prompting that can be used to shape the functionality of the AI models to disclose this limitation; in Col 7 Lines 8-15, "While this description is geared toward image classification, the GUI may be modified to fit the needs of other classifications (such as video, audio, etc.). In some implementations, the user interacts with this service (such as the few-shot classification service 506) via intermediate networks and interfaces such as those detailed herein".

It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to leverage the few-shot prompting taught in Ravichandran and apply it to the system of Zhang combined with O'Malia. Motivation to do so comes from the same rationale as outlined above with respect to Claim 9.

Claims 6-7, 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (US 20180052664 A1) in view of Davidson (US 20220289537 A1).

Claims 6, 14

As to Claim 6, Zhang teaches all the limitations of Claim 1 as discussed above. Zhang does not expressly disclose the remaining limitations. However, Davidson teaches:

The medium of claim 1, the operations further comprising assigning a score associated with an automation level for each of the one or more tasks.
We understand this automation level to correspond to a prediction of agent capability to autonomously perform a task; in [0014], "For each of at least a subset of the predicted actions for a task, the robot agent 102 uses the learned model and various signals to predict whether the robot agent will fail to perform the action in accordance with one or more policies. Such signals may include an indication of the quality of the action (e.g., a qualitative or quantitative evaluation of one or both the utility of the action and the risks of the action) (block 507), a qualitative or quantitative evaluation of the probability of failure in performing the action (block 508), and the like. If the resulting failure predictor indicates that the robot agent 102 is predicted to succeed at performing the task, then the robot agent 102 operates in an unguided mode to perform the task without seeking guidance from any of the guidance sources 110".

Davidson discloses a system for coordinating robot agents. Zhang discloses a system meant to create and deploy AI agents. Each reference discloses means of governing agentic operatives. Extending the failure prediction recorded in Davidson to the system of Zhang is applicable as both pertain to managing agents. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to apply the task success prediction of Davidson to the system of Zhang. Motivation to do so comes from the fact that the claim is plainly directed to the predictable result of combining known items in the prior art, with the expected benefit that adopting said prediction would enable intelligent routing and/or assistance to human operatives. See [0072] of Zhang; Davidson improves upon this functionality by allowing predictions to be made as to whether rerouting is needed.

Claim 14 is rejected as disclosing substantially similar limitations as Claim 6.
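As a rough illustration of the thresholding-style failure prediction Davidson's [0014] describes, the sketch below combines per-action signals into a single failure score and compares it to a threshold to choose between unguided and guided operation. The signal names, weights, and threshold are assumptions made for this sketch, not values taken from the reference.

```python
def failure_score(quality, p_failure, w_quality=0.5, w_failure=0.5):
    # Low action quality and high failure probability both raise the score.
    # Weights are illustrative; Davidson leaves the combination unspecified.
    return w_quality * (1.0 - quality) + w_failure * p_failure

def select_mode(quality, p_failure, threshold=0.5):
    """Return 'guided' when the agent is predicted to fail, else 'unguided'."""
    if failure_score(quality, p_failure) >= threshold:
        return "guided"    # seek guidance from a guidance source
    return "unguided"      # perform the task autonomously

print(select_mode(quality=0.9, p_failure=0.1))  # unguided
print(select_mode(quality=0.3, p_failure=0.8))  # guided
```

The same score could serve as the claimed "automation level" for a task: a low failure score corresponds to a high predicted capability to complete the task without human intervention.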
Claims 7, 15

As to Claim 7, Zhang teaches all the limitations of Claim 1 as discussed above. Zhang does not expressly disclose the remaining limitations. However, Davidson teaches: The medium of claim 1, the operations further comprising providing, for each task of the set of one or more tasks, a prediction score associated with a probability that completion of the task requires human intervention.

In [0028]: "The ultimate decision on whether performance of the predicted action is likely to fail may include a straightforward thresholding approach (e.g., by comparing each signal to a corresponding threshold, or by generating a final failure score and comparing this single score to a corresponding threshold)". In [0014]: "Conversely, if the resulting failure predictor indicates that the robot agent 102 is predicted to fail in performing the task, then the robot agent 102 operates in a guided mode in which the robot agent 102 seeks guidance input from one or more of the guidance sources 110 by transmitting a guidance request 114 to the agent coordination subsystem 106." Guidance sources encompass human operators, per [0012]: "The guidance sources 110 operate as “experts” or “teachers” for the robot agents 102, and may be implemented as human operators providing feedback or other guidance through a computer interface".

It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to take the task success prediction of Davidson and apply it to the system of Zhang. Motivation to do so comes from the same rationale as outlined above with respect to Claim 6. Claim 15 is rejected as disclosing substantially similar limitations as Claim 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THEODORE L XIE whose telephone number is (571) 272-7102. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rutao Wu, can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THEODORE XIE/
Examiner, Art Unit 3623

/WILLIAM S BROCKINGTON III/
Primary Examiner, Art Unit 3623

Prosecution Timeline

Mar 21, 2023
Application Filed
Jan 22, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by the same examiner involving similar technology

Patent 12591576
DRILLING PERFORMANCE ASSISTED WITH AN ARTIFICIAL INTELLIGENCE ENGINE
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

1-2
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+100.0%)
1y 7m
Median Time to Grant
Low
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
