Prosecution Insights
Last updated: April 19, 2026
Application No. 18/084,753

SYSTEMS, DEVICES, AND METHODS FOR DEVELOPING ROBOT AUTONOMY

Status: Non-Final OA (§103)
Filed: Dec 20, 2022
Examiner: KASPER, BYRON XAVIER
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Sanctuary Cognitive Systems Corporation
OA Round: 4 (Non-Final)

Grant Probability: 70% (Favorable)
Projected OA Rounds: 4-5
Projected Time to Grant: 3y 0m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 70%, above average (72 granted / 103 resolved; +17.9% vs TC avg)
Interview Lift: +18.4%, strong (allow rate without vs. with interview, across resolved cases with interview)
Avg Prosecution: 3y 0m (typical timeline)
Career History: 139 total applications across all art units; 36 currently pending

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 103 resolved cases
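As a sanity check, the headline figures above can be reproduced from the raw counts. This is a minimal sketch, assuming the interview lift is the simple difference between the with-interview and baseline allow rates; the 88% with-interview figure is taken from the summary card, not derived from raw counts shown here.

```python
# Re-derive the dashboard's headline metrics from its raw counts.
granted, resolved = 72, 103  # career totals shown above

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~69.9%, displayed as 70%

# Assumed definition: lift = allow rate with interview - baseline allow rate.
with_interview = 0.884  # from the summary card (displayed as 88%)
print(f"Interview lift: {with_interview - career_allow_rate:+.1%}")
```

Note the rounding: 72/103 is 69.9%, which the dashboard displays as 70%, so the displayed lift (+18.4%) is consistent with an 88.4% with-interview rate rather than exactly 88%.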

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This communication is responsive to Application No. 18/084,753 and the amendments filed on 1/12/2026.

3. Claims 21-32 are presented for examination.

Information Disclosure Statement

4. The information disclosure statement (IDS) submitted on 3/31/2023 has been fully considered by the Examiner.

Response to Arguments

5. Applicant's arguments filed 1/12/2026 with respect to new independent claim 21, arguing that the previously cited prior art fails to teach a method of increasing an autonomy of a robot, have been fully considered but they are not persuasive. Regarding independent claim 21, the Applicant argues that new independent claim 21 is not disclosed, in any combination, within any of the previously cited references. However, the Examiner respectfully disagrees. After updated searching/consideration in view of the new claim, the Examiner has determined that the previous combination of US 20140163730 A1 to Mian, US 9802317 B1 to Watts, and US 8639644 B1 to Hickman still teaches all of the limitations of the claim, and therefore, claim 21 is rejected under 35 U.S.C. 103, as will be described later. Regarding dependent claims 22-32, updated searching/consideration has also been made in view of these claims, and it has been determined that all of these claims are rejected, as will be described later.

Claim Rejections - 35 USC § 103

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

7.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claim(s) 21, 23, 24, 29, 30, and 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hickman et al. (US 8639644 B1 hereinafter Hickman) in view of Watts et al. (US 9802317 B1 hereinafter Watts) and Mian (US 20140163730 A1 hereinafter Mian).

Regarding Claim 21, Hickman teaches a method of developing robot autonomy, the method comprising: transmitting, by a first robot in a fleet of robots, a request for instructions to other robots in the fleet of robots (Col. 11 lines 13-30, where “Because multiple robots can share the information in the shared robot knowledge base 306, information learned about a particular object by one robot, such as robot 301, can be shared with another robot authorized to access information about that particular object, such as robot 302. … In a very simple example, if robot 301 learns that a particular type of hammer weighs 3 pounds, robot 301 can update the information about the weight of the hammer that is stored in the shared robot knowledge base 306.
If robot 302 is authorized to access information about hammers, then the next time robot 302 queries the cloud computing system 204 for information about the same type of hammer, then robot 302 will know that the hammer weighs 3 pounds (without having to independently weigh the hammer) based on the data about the type of hammer that was previously uploaded to the shared robot knowledge base 306 by robot 301.”), (Col. 28 lines 25-26, where “The method 600 begins at block 601 where a cloud computing system receives a first query from a first robot.”), the request for instructions comprising ancillary data collected by the first robot for each candidate action in the first set of candidate actions (Col. 28 lines 25-32, where “The method 600 begins at block 601 where a cloud computing system receives a first query from a first robot. … The first query may include identification information associated with an object. The identification information may include any type of object identification described herein, e.g., image data, textual data, sound data, etc.”); in response to transmitting the request for instructions to the other robots in the fleet of robots, receiving, by the first robot, a set of instructions for at least one candidate action in the first set of candidate actions from a second robot in the fleet of robots having a higher level of autonomy compared to the first robot (Col. 11 lines 9-30, where “One important feature of the shared robot knowledge base 306 is that many different robots may access the shared robot knowledge base 306, download information from the shared robot knowledge base 306, and upload information to the shared robot knowledge base 306. Because multiple robots can share the information in the shared robot knowledge base 306, information learned about a particular object by one robot, such as robot 301, can be shared with another robot authorized to access information about that particular object, such as robot 302. 
… In a very simple example, if robot 301 learns that a particular type of hammer weighs 3 pounds, robot 301 can update the information about the weight of the hammer that is stored in the shared robot knowledge base 306. If robot 302 is authorized to access information about hammers, then the next time robot 302 queries the cloud computing system 204 for information about the same type of hammer, then robot 302 will know that the hammer weighs 3 pounds (without having to independently weigh the hammer) based on the data about the type of hammer that was previously uploaded to the shared robot knowledge base 306 by robot 301.”); and executing, by the first robot, the set of instructions to obtain a result (Col. 26 line 66 – Col. 27 line 26, where “After determining the object is a cup 414 and obtaining information about the cup 414, the cloud computing system 401 can send object data 418 associated with the cup 414 to the robot 413 in response to the identification query 415 received from the robot 413. The object data 418 may include both the identity of the cup 414 and instructions for interacting with the cup 414 based on the application or task that the robot 413 has been instructed to execute or perform. … As a result, when the robot 413 applies the "grasp" task to the type of "cup" object here (i.e., when robot 413 grasps cup 414), the robot 413 successfully grasps the cup without crushing the cup 419. Thus, the second robot 413 has in effect learned from the experience of the earlier robot 404. After successfully grasping the cup 419, the robot 413 may send feedback 420 to the cloud processing engine 402 that confirms the accuracy of the instructions that the robot 413 received in the object data 418 from the cloud computing system 401.”). 
Hickman is silent on the request for instructions comprising a first set of candidate actions identified by the first robot; and updating, by the first robot, a control model of the first robot in a data repository of the first robot based at least in part on the result. However, Watts teaches the request for instructions comprising a first set of candidate actions identified by the first robot (Col. 5 lines 7-19, where “As an example task, the control system may attempt to determine, from a model of various boxes present in the robotic manipulator's workspace, various “box hypotheses” (e.g., hypothesized edges, corners, borders, etc. of the boxes that correspond to the actual edges, corners, borders, etc. of the boxes in the workspace) so as to segment the model. If the control system is not confident that a particular box hypothesis is accurate, the control system may request remote assistance with confirming, rejecting, or adjusting the particular box hypothesis. Whereas, when the control system's confidence level for a particular box hypothesis is high, the control system may determine that no remote assistance is necessary.”), (Col. 6 lines 16-24, where “In scenarios where the control system requests human assistance with distinguishing objects, the interface may enable the human user to select or otherwise interact with, via the interface, virtual features important for robotic operation, such as virtual/visual indications of edges, corners, and/or surfaces of the objects, so that the human user (and thereby the control system) can detect edges, corners, etc. that correspond to locations of actual boundary lines between objects in the environment.”). 
Further, Mian teaches updating, by the first robot, a control model of the first robot in a data repository of the first robot based at least in part on the result ([0037] via “During teleoperation by the human user 12, the computer system 21 can monitor the actions performed by the human user 12 to: learn how to address a similar situation in the future; preempt actions that will result in an unsafe condition; identify a point at which the human user 12 has provided sufficient assistance to allow the computer system 21 to retake control of the robotic device 20; and/or the like.”), ([0043] via “In an embodiment, while the human user 12 is performing some level of control, the robotic device 20 can monitor progress of the action to learn how to subsequently perform the action autonomously based on the human user's 12 actions. Furthermore, the robotic device 20 can automatically identify when the human user 12 has performed sufficient acts to enable the robotic device 20 to continue with the task without further human assistance. Subsequently, the robotic device 20 can adjust the current operating state 42A-42C in the hybrid control architecture 42 and commence autonomous operations.”), (Note: The Examiner interprets the robotic device of Mian learning the operations from the human tele-operation as the updating of the first control model in this instance.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Watts wherein the request for instructions comprises a first set of candidate actions identified by the first robot. Doing so acknowledges that the robot has a predetermined amount of action plans, and requests assistance for determining the best course of action based on the predetermined action plans, as stated above by Watts in Col. 5 lines 7-19. 
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Mian of updating, by the first robot, a control model of the first robot in a data repository of the first robot based at least in part on the result. Doing so increases the autonomy of the robot to perform tasks such that the robot no longer needs assistance from outside sources, as stated above by Mian in paragraph [0043].

Regarding Claim 23, modified reference Hickman teaches the method of claim 21, wherein the request for instructions is transmitted to the other robots in the fleet of robots indirectly via a tele-operation system communicatively coupled to the fleet of robots (Col. 11 lines 9-30, where “One important feature of the shared robot knowledge base 306 is that many different robots may access the shared robot knowledge base 306, download information from the shared robot knowledge base 306, and upload information to the shared robot knowledge base 306. Because multiple robots can share the information in the shared robot knowledge base 306, information learned about a particular object by one robot, such as robot 301, can be shared with another robot authorized to access information about that particular object, such as robot 302. … In a very simple example, if robot 301 learns that a particular type of hammer weighs 3 pounds, robot 301 can update the information about the weight of the hammer that is stored in the shared robot knowledge base 306.
If robot 302 is authorized to access information about hammers, then the next time robot 302 queries the cloud computing system 204 for information about the same type of hammer, then robot 302 will know that the hammer weighs 3 pounds (without having to independently weigh the hammer) based on the data about the type of hammer that was previously uploaded to the shared robot knowledge base 306 by robot 301.”), (Note: See Figure 3 of Hickman as well.).

Regarding Claim 24, modified reference Hickman teaches the method of claim 21, further comprising receiving, by the first robot, an objective from a tele-operation system communicatively coupled to the fleet of robots and identifying the first set of candidate actions based on the objective (Col. 26 lines 47-56, where “When an accident happens, a human can "coach" the robot on how much force to use by, for example, manually controlling the robot's hand to grasp a new cup. The robot can capture the grasping force that it used to successfully grasp the cup under the manual control of the human, and then send feedback 411 to the cloud processing system 402 with the modified grasping force. The cloud processing system 402 can then update 412 task and object data in the shared robot knowledge base 403 to improve how the "grasp" task is applied this particular "cup" object.”).

Regarding Claim 29, modified reference Hickman teaches the method of claim 21, but is silent on wherein updating the control model of the first robot comprises adjusting one or more of a classifier, a model, a policy, an algorithm, a rule, or a parameter of the control model of the first robot based on the result.
However, Mian teaches wherein updating the control model of the first robot comprises adjusting one or more of a classifier, a model, a policy, an algorithm, a rule, or a parameter of the control model of the first robot based on the result ([0037] via “During teleoperation by the human user 12, the computer system 21 can monitor the actions performed by the human user 12 to: learn how to address a similar situation in the future; preempt actions that will result in an unsafe condition; identify a point at which the human user 12 has provided sufficient assistance to allow the computer system 21 to retake control of the robotic device 20; and/or the like.”), ([0043] via “In an embodiment, while the human user 12 is performing some level of control, the robotic device 20 can monitor progress of the action to learn how to subsequently perform the action autonomously based on the human user's 12 actions. Furthermore, the robotic device 20 can automatically identify when the human user 12 has performed sufficient acts to enable the robotic device 20 to continue with the task without further human assistance. Subsequently, the robotic device 20 can adjust the current operating state 42A-42C in the hybrid control architecture 42 and commence autonomous operations.”), (Note: The Examiner interprets the robotic device of Mian learning how to address the task as updating at least the model of the control model of the robot.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Mian wherein updating the control model of the first robot comprises adjusting one or more of a classifier, a model, a policy, an algorithm, a rule, or a parameter of the control model of the first robot based on the result. Doing so increases the autonomy of the robot to perform tasks such that the robot no longer needs assistance from outside sources, as stated above by Mian in paragraph [0043]. 
Regarding Claim 30, modified reference Hickman teaches the method of claim 21, wherein executing the set of instructions comprises causing an actuator of the first robot to execute a movement (Col. 8 lines 36-38, where “The robot 200 may also have electromechanical actuation devices 208 configured to enable the robot 200 to move about its environment or interact with objects in its environment.”).

Regarding Claim 31, modified reference Hickman teaches the method of claim 21, the method further comprising: transmitting, by the first robot, a second request for instructions to a tele-operation system communicatively coupled to the fleet of robots (Col. 11 lines 13-30 and Col. 28 lines 25-26 recited above in claim 21), (Col. 10 lines 43-49, where “In the example shown in FIG. 3, the cloud computing system 304 includes a cloud processing engine 305 and a shared robot knowledge base 306. However, the cloud computing system 304 may have additional components as well. In some embodiments, individual robots, such as robots 301, 302, and 303, may send queries to the cloud computing system 304.”), (Note: The Examiner interprets Hickman using the plural “queries” and the nature of the invention of Hickman as a whole, that individual robots are able to transmit multiple requests for instructions to the tele-operation system.), the second request for instructions comprising ancillary data collected by the first robot for each candidate action in the second set of candidate actions (Col. 28 lines 25-32, where “The method 600 begins at block 601 where a cloud computing system receives a first query from a first robot. … The first query may include identification information associated with an object.
The identification information may include any type of object identification described herein, e.g., image data, textual data, sound data, etc.”); in response to transmitting the second request for instructions to the tele-operation system, receiving, by the first robot, a second set of instructions for at least one candidate action in the second set of candidate actions from the tele-operation system (Col. 11 lines 9-30, where “One important feature of the shared robot knowledge base 306 is that many different robots may access the shared robot knowledge base 306, download information from the shared robot knowledge base 306, and upload information to the shared robot knowledge base 306. Because multiple robots can share the information in the shared robot knowledge base 306, information learned about a particular object by one robot, such as robot 301, can be shared with another robot authorized to access information about that particular object, such as robot 302. … In a very simple example, if robot 301 learns that a particular type of hammer weighs 3 pounds, robot 301 can update the information about the weight of the hammer that is stored in the shared robot knowledge base 306. If robot 302 is authorized to access information about hammers, then the next time robot 302 queries the cloud computing system 204 for information about the same type of hammer, then robot 302 will know that the hammer weighs 3 pounds (without having to independently weigh the hammer) based on the data about the type of hammer that was previously uploaded to the shared robot knowledge base 306 by robot 301.”); and executing, by the first robot, the second set of instructions to obtain a second result (Col. 26 line 66 – Col. 
27 line 26, where “After determining the object is a cup 414 and obtaining information about the cup 414, the cloud computing system 401 can send object data 418 associated with the cup 414 to the robot 413 in response to the identification query 415 received from the robot 413. The object data 418 may include both the identity of the cup 414 and instructions for interacting with the cup 414 based on the application or task that the robot 413 has been instructed to execute or perform. … As a result, when the robot 413 applies the "grasp" task to the type of "cup" object here (i.e., when robot 413 grasps cup 414), the robot 413 successfully grasps the cup without crushing the cup 419. Thus, the second robot 413 has in effect learned from the experience of the earlier robot 404. After successfully grasping the cup 419, the robot 413 may send feedback 420 to the cloud processing engine 402 that confirms the accuracy of the instructions that the robot 413 received in the object data 418 from the cloud computing system 401.”). Hickman is silent on the second request for instructions comprising a second set of candidate actions identified by the first robot; and updating, by the first robot, the control model of the first robot based at least in part on the second result. However, Watts teaches the second request for instructions comprising a second set of candidate actions identified by the first robot (Col. 5 lines 7-19, where “As an example task, the control system may attempt to determine, from a model of various boxes present in the robotic manipulator's workspace, various “box hypotheses” (e.g., hypothesized edges, corners, borders, etc. of the boxes that correspond to the actual edges, corners, borders, etc. of the boxes in the workspace) so as to segment the model. 
If the control system is not confident that a particular box hypothesis is accurate, the control system may request remote assistance with confirming, rejecting, or adjusting the particular box hypothesis. Whereas, when the control system's confidence level for a particular box hypothesis is high, the control system may determine that no remote assistance is necessary.”), (Col. 6 lines 16-24, where “In scenarios where the control system requests human assistance with distinguishing objects, the interface may enable the human user to select or otherwise interact with, via the interface, virtual features important for robotic operation, such as virtual/visual indications of edges, corners, and/or surfaces of the objects, so that the human user (and thereby the control system) can detect edges, corners, etc. that correspond to locations of actual boundary lines between objects in the environment.”). Further, Mian teaches updating, by the first robot, the control model of the first robot based at least in part on the second result ([0037] via “During teleoperation by the human user 12, the computer system 21 can monitor the actions performed by the human user 12 to: learn how to address a similar situation in the future; preempt actions that will result in an unsafe condition; identify a point at which the human user 12 has provided sufficient assistance to allow the computer system 21 to retake control of the robotic device 20; and/or the like.”), ([0043] via “In an embodiment, while the human user 12 is performing some level of control, the robotic device 20 can monitor progress of the action to learn how to subsequently perform the action autonomously based on the human user's 12 actions. Furthermore, the robotic device 20 can automatically identify when the human user 12 has performed sufficient acts to enable the robotic device 20 to continue with the task without further human assistance. 
Subsequently, the robotic device 20 can adjust the current operating state 42A-42C in the hybrid control architecture 42 and commence autonomous operations.”), (Note: The Examiner interprets the robotic device of Mian learning the operations from the human tele-operation as the updating of the first control model in this instance.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Watts wherein the second request for instructions comprises a second set of candidate actions identified by the first robot. Doing so acknowledges that the robot has a predetermined amount of action plans, and requests assistance for determining the best course of action based on the predetermined action plans, as stated above by Watts in Col. 5 lines 7-19. In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Mian of updating, by the first robot, the control model of the first robot based at least in part on the second result. Doing so increases the autonomy of the robot to perform tasks such that the robot no longer needs assistance from outside sources, as stated above by Mian in paragraph [0043].

9. Claim(s) 22 and 32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hickman et al. (US 8639644 B1 hereinafter Hickman) in view of Watts et al. (US 9802317 B1 hereinafter Watts) and Mian (US 20140163730 A1 hereinafter Mian), and further in view of Borne-Pons (US 20220147059 A1 hereinafter Borne-Pons).

Regarding Claim 22, modified reference Hickman teaches the method of claim 21, but is silent on wherein the request for instructions is transmitted to the other robots in the fleet of robots directly via peer-to-peer communication.
However, Borne-Pons teaches wherein the request for instructions is transmitted to the other robots in the fleet of robots directly via peer-to-peer communication ([0038] via “At a first time (t=1), the robot 420 may determine that a task is available and may broadcast a task available message 422 to neighboring robot 410 and a task available message 424 to neighboring robot 430. … In an aspect, task proposals may be transmitted to a subset of the fleet of robots, rather than all of the robots.”), (Note: See Figure 4 of Borne-Pons as well.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Borne-Pons wherein the request for instructions is transmitted to the other robots in the fleet of robots directly via peer-to-peer communication. Doing so allows the robots of the fleet of robots to directly communicate with each other, such that they are able to efficiently cooperate with each other, as stated by Borne-Pons ([0017] via “The above-described concepts enable the fleet of robots to continue operations in an efficient manner despite failures of robots as well as in environments where connectivity among the fleet of robots may be limited to ensure that tasks are completed in a timely manner.”).

Regarding Claim 32, modified reference Hickman teaches the method of claim 31, but is silent on the method further comprising: in response to transmitting the second request for instructions to the tele-operation system, broadcasting, by the tele-operation system, the second request for instructions and the second set of instructions to at least one robot in the fleet of robots other than the first robot.
However, Borne-Pons teaches in response to transmitting the second request for instructions to the tele-operation system, broadcasting, by the tele-operation system, the second request for instructions and the second set of instructions to at least one robot in the fleet of robots other than the first robot ([0026] via “The communication interfaces may enable the robot to participate in bidirectional communication nearby robots, such as to broadcast available tasks, receive transmissions related to tasks from other robots, transmit status information associated with tasks the robot 200 is performing to other robots in the vicinity of the robot 200 or, when connectivity is available, to remote devices accessible via a network (e.g., the one or more networks 140 of FIG. 1).”), ([0038] via “At a first time (t=1), the robot 420 may determine that a task is available and may broadcast a task available message 422 to neighboring robot 410 and a task available message 424 to neighboring robot 430. In an aspect, the task available messages 422, 424 may be generated based at least in part on the specification associated with the task. In an aspect, task proposals may be transmitted to a subset of the fleet of robots, rather than all of the robots.”), (Note: See Figure 4 of Borne-Pons as well.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Borne-Pons wherein the method further comprises: in response to transmitting the second request for instructions to the tele-operation system, broadcasting, by the tele-operation system, the second request for instructions and the second set of instructions to at least one robot in the fleet of robots other than the first robot. 
Doing so allows the robots of the fleet of robots to directly communicate with each other, such that they are able to efficiently cooperate with each other, as stated by Borne-Pons ([0017] via “The above-described concepts enable the fleet of robots to continue operations in an efficient manner despite failures of robots as well as in environments where connectivity among the fleet of robots may be limited to ensure that tasks are completed in a timely manner.”).

10. Claim(s) 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hickman et al. (US 8639644 B1 hereinafter Hickman) in view of Watts et al. (US 9802317 B1 hereinafter Watts) and Mian (US 20140163730 A1 hereinafter Mian), and further in view of Rembisz et al. (US 11426885 B1 hereinafter Rembisz).

Regarding Claim 25, modified reference Hickman teaches the method of claim 21, but is silent on the method further comprising receiving, by the first robot, an objective from one of the other robots in the fleet of robots and identifying the first set of candidate actions based on the objective. However, Rembisz teaches receiving, by the first robot, an objective from one of the other robots in the fleet of robots and identifying the first set of candidate actions based on the objective (Col. 7 lines 3-13, where “During operation, control system 118 may communicate with other systems of robotic system 100 via wired or wireless connections, and may further be configured to communicate with one or more users of the robot. As one possible illustration, control system 118 may receive an input (e.g., from a user or from another robot) indicating an instruction to perform a requested task, such as to pick up and move an object from one location to another location. Based on this input, control system 118 may perform operations to cause the robotic system 100 to make a sequence of movements to perform the requested task.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Rembisz wherein the method further comprises receiving, by the first robot, an objective from one of the other robots in the fleet of robots and identifying the first set of candidate actions based on the objective. Doing so communicates specific information between robots such that the robots are able to act accordingly, as stated above by Rembisz.

11. Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hickman et al. (US 8639644 B1 hereinafter Hickman) in view of Watts et al. (US 9802317 B1 hereinafter Watts) and Mian (US 20140163730 A1 hereinafter Mian), and further in view of Davidson et al. (US 20220289537 A1 hereinafter Davidson).

Regarding Claim 26, modified reference Hickman teaches the method of claim 21, further comprising: receiving, by the first robot, an objective from the tele-operation system communicatively coupled to the fleet of robots or from one of the other robots in the fleet of robots (Col. 28 lines 41-49, where “At block 603, the cloud computing system sends data associated with the object of the first query to the first robot in response to the first query received from the first robot at block 601. The data sent to the first robot at block 603 may include … and/or (iii) instructions for interacting with the object of the first query.”). Hickman is silent on determining, by the first robot, a second set of candidate actions based on the objective; and identifying, by the first robot, the first set of candidate actions from the second set of candidate actions based on a level of autonomy of the first robot.
However, Davidson teaches determining, by the first robot, a second set of candidate actions based on the objective ([0023] via “In at least one embodiment, the robot agent 102 employs a sequential model of actions to perform the assigned task; that is, the assigned task is transformed by the robot agent 102 into a sequence of predicted actions, each predicted action to be performed in turn after performance of the previous action is complete. Accordingly, at block 404 the action prediction module 214 of the configuration 200 for the robot agent 102 predicts the next action to be performed by the robot agent 102 in furtherance of the task (or, if the robot agent 102 is starting this process from a newly-received task, predicting the first action to be performed).”); and identifying, by the first robot, the first set of candidate actions from the second set of candidate actions based on a level of autonomy of the first robot ([0027] via “With a predicted action identified, at block 406 the action failure prediction module 216 uses the learned model 222 to analyze the predicted action … to predict whether the robot agent 102 is likely to fail to adequately perform the action. In this context, “failure” can be specified in various ways, depending on the goals and parameters of the system 100. 
… In other contexts, “failure” could be considered performing an action, or performing the overall task, over a duration that exceeds a maximum threshold, or performing the action or overall task in a manner that introduces uncertainty in the result (e.g., successfully picking and transporting a pallet, but placing it in a destination location that is outside of a threshold margin of the intended destination location).”), ([0028] via “The signals considered in making this prediction can include, for example, … familiarity of the circumstances surrounding the action to other actions and circumstances encountered previously, unfamiliar or anomalous circumstances, status or capability indicia for the components of the robot agent 102 itself, status or capability indicia for other robot agents 102 or other elements in the operating environment 104, and the like.”), (Note: The Examiner interprets the predicted actions of Davidson being classified as “failure” and non-failure as identifying the first set of candidate actions.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Davidson wherein the method further comprises: determining, by the first robot, a second set of candidate actions based on the objective; and identifying, by the first robot, the first set of candidate actions from the second set of candidate actions based on a level of autonomy of the first robot. Doing so prevents the first robot from executing a set of candidate actions that the first robot is not equipped to properly execute, as stated by Davidson ([0029] via “In the event that the robot agent 102 is predicted to not fail (that is, succeed) in adequately performing the predicted action, then the robot agent 102 implements an unguided mode for performing the action. 
In the unguided mode, the robot agent 102 directly performs the action without seeking guidance from one or more teachers or experts in the form of guidance sources 110.”), ([0030] via “Returning to block 406, in the event that the robot agent 102 is predicted to fail to adequately perform the predicted action, then the robot agent 102 implements a guided mode for performing the action. In the guided mode, the robot agent 102 seeks and implements guidance from one or more of the guidance sources 110 to identify a guided solution for performing the action in a manner that reduces the risk of failure, and then performs the action on the basis of the guided solution.”).

12. Claim(s) 27 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hickman et al. (US 8639644 B1 hereinafter Hickman) in view of Watts et al. (US 9802317 B1 hereinafter Watts) and Mian (US 20140163730 A1 hereinafter Mian), and further in view of Davidson et al. (US 20220289537 A1 hereinafter Davidson) and Groz (US 20210031364 A1 hereinafter Groz).

Regarding Claim 27, modified reference Hickman teaches the method of claim 21, but is silent on the method further comprising: determining, by the first robot, a second set of candidate actions based on an objective; assigning, by the first robot, a probability to each of the candidate actions in the second set of candidate actions based on an expected contribution of the candidate action to the objective; ranking, by the first robot, the second set of candidate actions based on the respective probabilities; determining, by the first robot, a threshold level of autonomy based at least in part on the ranking; and identifying, by the first robot, the first set of candidate actions from the second set of candidate actions based on the candidate actions in the second set of candidate actions for which the level of autonomy of the first robot is less than the threshold level of autonomy.
However, Davidson teaches determining, by the first robot, a second set of candidate actions based on an objective ([0023] via “In at least one embodiment, the robot agent 102 employs a sequential model of actions to perform the assigned task; that is, the assigned task is transformed by the robot agent 102 into a sequence of predicted actions, each predicted action to be performed in turn after performance of the previous action is complete. Accordingly, at block 404 the action prediction module 214 of the configuration 200 for the robot agent 102 predicts the next action to be performed by the robot agent 102 in furtherance of the task (or, if the robot agent 102 is starting this process from a newly-received task, predicting the first action to be performed).”); assigning, by the first robot, a probability to each of the candidate actions in the second set of candidate actions based on an expected contribution of the candidate action to the objective ([0039] via “As explained above, the decision by a robot agent 102 of whether to proceed with performing an action in an unguided mode or a guided mode is predicated on whether the robot agent 102 is predicted to fail in performing the action without guidance. … “failure” in this context typically means performing the action in a manner that risks injury to humans or to property, … inability to execute the task successfully (such as being unable to place a pallet because another pallet is present in the place location), inability to perform the task with sufficient precision or accuracy, in ability to perform the task with a result that has a sufficiently high key performance indicator (KPI) or other performance-related metric for the task, and so forth. As such, the evaluation of a predicted action is not only whether the robot agent 102 can successfully perform the sequence movements or motions to enact the action itself, but whether the action can be enacted in a way that certain policies are met, …. 
This evaluation, in one embodiment, involves at least two determinations by the failure prediction module 216 of the robot agent 102 (or other component of the system 100): a determination as to the quality of the action (block 502) and a determination as to the probability (or other likelihood representation) that the robot agent 102 will be able to perform the action without violating one or more specified policies (block 504). … The signals considered in determining the probability of failure to perform the action may take any of a variety of forms. To illustrate, some signals may be a binary value, or a value representative of a magnitude of the corresponding confidence or other signal parameter. In other instances, the signal may constitute a measure on distributions or probability functions.”); ranking, by the first robot, the second set of candidate actions based on the respective probabilities ([0044] via “Some or all of these signals, as well as other various signals that may be utilized based on the teachings provided herein, are considered by the failure prediction module 216 to predict whether the robot agent will fail to perform the predicted action at issue (block 506). In at least one embodiment, this prediction is presented as a fail predictor, which may be formatted as a binary predictor (e.g., 0=predicted to succeed, 1=predicted to fail), or as a value within a range of more than two values (e.g., a range of 1 to 10), or using some other format as appropriate. Any of a variety of techniques may be employed to determine the fail predictor from the signals under consideration. 
To illustrate, in some embodiments, a thresholding approach (block 522) may be employed.”); and identifying, by the first robot, the first set of candidate actions from the second set of candidate actions based on the candidate actions in the second set of candidate actions for which the level of autonomy of the first robot is less than the threshold level of autonomy ([0027] via “With a predicted action identified, at block 406 the action failure prediction module 216 uses the learned model 222 to analyze the predicted action … to predict whether the robot agent 102 is likely to fail to adequately perform the action. In this context, “failure” can be specified in various ways, depending on the goals and parameters of the system 100. … In other contexts, “failure” could be considered performing an action, or performing the overall task, over a duration that exceeds a maximum threshold, or performing the action or overall task in a manner that introduces uncertainty in the result (e.g., successfully picking and transporting a pallet, but placing it in a destination location that is outside of a threshold margin of the intended destination location).”), ([0028] via “The signals considered in making this prediction can include, for example, … familiarity of the circumstances surrounding the action to other actions and circumstances encountered previously, unfamiliar or anomalous circumstances, status or capability indicia for the components of the robot agent 102 itself, status or capability indicia for other robot agents 102 or other elements in the operating environment 104, and the like.”), (Note: The Examiner interprets the predicted actions of Davidson being classified as “failure” and non-failure as identifying the first set of candidate actions.). Further, Groz teaches determining, by the first robot, a threshold level of autonomy based at least in part on the ranking ([0083] via “FIG. 
7 is a flow chart showing a method 700 for backup control based continuous training of robots, according to some example embodiments. The method 700 may commence with collecting, by a processor of the robot, sensor data from a plurality of sensors of the robot at operation 705. The sensor data may be related to a task being performed by the robot based on an AI model. The method 700 may further include determining, based on the sensor data and the AI model, that a probability of completing the task is below a threshold at operation 710.”), ([0084] via “The method 700 may continue with sending, in response to the determination that the probability of completing the task is below the threshold, a request for operator assistance to a remote computing device at operation 715. … In response to sending the request, teleoperation data may be received by the processor from the remote computing device at operation 720.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Davidson wherein the method further comprises: determining, by the first robot, a second set of candidate actions based on an objective; assigning, by the first robot, a probability to each of the candidate actions in the second set of candidate actions based on an expected contribution of the candidate action to the objective; ranking, by the first robot, the second set of candidate actions based on the respective probabilities; and identifying, by the first robot, the first set of candidate actions from the second set of candidate actions based on the candidate actions in the second set of candidate actions for which the level of autonomy of the first robot is less than the threshold level of autonomy. 
Doing so prevents the first robot from executing a set of candidate actions that the first robot is not equipped to properly execute, as stated by Davidson ([0029] via “In the event that the robot agent 102 is predicted to not fail (that is, succeed) in adequately performing the predicted action, then the robot agent 102 implements an unguided mode for performing the action. In the unguided mode, the robot agent 102 directly performs the action without seeking guidance from one or more teachers or experts in the form of guidance sources 110.”), ([0030] via “Returning to block 406, in the event that the robot agent 102 is predicted to fail to adequately perform the predicted action, then the robot agent 102 implements a guided mode for performing the action. In the guided mode, the robot agent 102 seeks and implements guidance from one or more of the guidance sources 110 to identify a guided solution for performing the action in a manner that reduces the risk of failure, and then performs the action on the basis of the guided solution.”).

In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Groz wherein the method further comprises: determining, by the first robot, a threshold level of autonomy based at least in part on the ranking. Doing so allows a teleoperator to assist the robot in completing the task when the robot is less likely to complete the task on its own, as stated above by Groz in paragraph [0084].

13. Claim(s) 28 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hickman et al. (US 8639644 B1 hereinafter Hickman) in view of Watts et al. (US 9802317 B1 hereinafter Watts) and Mian (US 20140163730 A1 hereinafter Mian), and further in view of Davidson et al. (US 20220289537 A1 hereinafter Davidson) and Yamada et al. (US 20200101613 A1 hereinafter Yamada).
Regarding Claim 28, modified reference Hickman teaches the method of claim 21, further comprising: receiving, by the first robot, an objective from a tele-operation system communicatively coupled to the fleet of robots or from one of the other robots in the fleet of robots (Col. 26 lines 47-56, where “When an accident happens, a human can "coach" the robot on how much force to use by, for example, manually controlling the robot's hand to grasp a new cup. The robot can capture the grasping force that it used to successfully grasp the cup under the manual control of the human, and then send feedback 411 to the cloud processing system 402 with the modified grasping force. The cloud processing system 402 can then update 412 task and object data in the shared robot knowledge base 403 to improve how the "grasp" task is applied this particular "cup" object.”). Hickman is silent on determining, by the first robot, the first set of candidate actions based on the objective; assigning, by the first robot, a probability to each of the candidate actions in the first set of candidate actions based on an expected contribution of the candidate action to the objective; and ordering, by the first robot, the first set of candidate actions based on the probability assigned to each of the candidate actions. However, Davidson teaches determining, by the first robot, the first set of candidate actions based on the objective ([0023] via “In at least one embodiment, the robot agent 102 employs a sequential model of actions to perform the assigned task; that is, the assigned task is transformed by the robot agent 102 into a sequence of predicted actions, each predicted action to be performed in turn after performance of the previous action is complete. 
Accordingly, at block 404 the action prediction module 214 of the configuration 200 for the robot agent 102 predicts the next action to be performed by the robot agent 102 in furtherance of the task (or, if the robot agent 102 is starting this process from a newly-received task, predicting the first action to be performed).”); and assigning, by the first robot, a probability to each of the candidate actions in the first set of candidate actions based on an expected contribution of the candidate action to the objective ([0039] via “As explained above, the decision by a robot agent 102 of whether to proceed with performing an action in an unguided mode or a guided mode is predicated on whether the robot agent 102 is predicted to fail in performing the action without guidance. … “failure” in this context typically means performing the action in a manner that risks injury to humans or to property, … inability to execute the task successfully (such as being unable to place a pallet because another pallet is present in the place location), inability to perform the task with sufficient precision or accuracy, in ability to perform the task with a result that has a sufficiently high key performance indicator (KPI) or other performance-related metric for the task, and so forth. As such, the evaluation of a predicted action is not only whether the robot agent 102 can successfully perform the sequence movements or motions to enact the action itself, but whether the action can be enacted in a way that certain policies are met, …. This evaluation, in one embodiment, involves at least two determinations by the failure prediction module 216 of the robot agent 102 (or other component of the system 100): a determination as to the quality of the action (block 502) and a determination as to the probability (or other likelihood representation) that the robot agent 102 will be able to perform the action without violating one or more specified policies (block 504). 
… The signals considered in determining the probability of failure to perform the action may take any of a variety of forms. To illustrate, some signals may be a binary value, or a value representative of a magnitude of the corresponding confidence or other signal parameter. In other instances, the signal may constitute a measure on distributions or probability functions.”). Further, Yamada teaches ordering, by the first robot, the first set of candidate actions based on the probability assigned to each of the candidate actions ([0092] via “Before controlling the manipulator 1, the controller 4 sends operation information for executing a simulation to the determination unit 205. If there is a plurality of candidates for operation information, and when the robot 10 actually operates, operation information with the highest probability that the target object 41 is moved to the target area 42 is executed. Alternatively, the estimation unit 213 may send a plurality of candidates for operation information to the determination unit 205. In this case, priority is given to operation information with a high probability of success of a task, whereby the determination unit 205 enables the user to confirm operation information in descending order of probability.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Davidson wherein the method further comprises: determining, by the first robot, the first set of candidate actions based on the objective; and assigning, by the first robot, a probability to each of the candidate actions in the first set of candidate actions based on an expected contribution of the candidate action to the objective. 
Doing so prevents the first robot from executing a set of candidate actions that the first robot is not equipped to properly execute, as stated by Davidson ([0029] via “In the event that the robot agent 102 is predicted to not fail (that is, succeed) in adequately performing the predicted action, then the robot agent 102 implements an unguided mode for performing the action. In the unguided mode, the robot agent 102 directly performs the action without seeking guidance from one or more teachers or experts in the form of guidance sources 110.”), ([0030] via “Returning to block 406, in the event that the robot agent 102 is predicted to fail to adequately perform the predicted action, then the robot agent 102 implements a guided mode for performing the action. In the guided mode, the robot agent 102 seeks and implements guidance from one or more of the guidance sources 110 to identify a guided solution for performing the action in a manner that reduces the risk of failure, and then performs the action on the basis of the guided solution.”).

In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Yamada wherein the method further comprises: ordering, by the first robot, the first set of candidate actions based on the probability assigned to each of the candidate actions. Doing so initiates execution of the task using the operation information most likely to result in a successful execution of the task, prior to using potentially less successful operation information, as stated above by Yamada.

Examiner’s Note

14. The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant.
Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that the Applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP §2123.

Conclusion

15. Any inquiry concerning this communication or earlier communications from the examiner should be directed to BYRON X KASPER whose telephone number is (571)272-3895. The examiner can normally be reached Monday - Friday 8 am - 5 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott, can be reached on (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BYRON XAVIER KASPER/
Examiner, Art Unit 3657

/ADAM R MOTT/
Supervisory Patent Examiner, Art Unit 3657

Prosecution Timeline

Dec 20, 2022
Application Filed
Dec 10, 2024
Non-Final Rejection — §103
Mar 17, 2025
Response Filed
Mar 31, 2025
Non-Final Rejection — §103
Jul 11, 2025
Response Filed
Aug 06, 2025
Final Rejection — §103
Nov 12, 2025
Response after Non-Final Action
Jan 12, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594964
METHOD OF AND SYSTEM FOR GENERATING REFERENCE PATH OF SELF DRIVING CAR (SDC)
2y 5m to grant Granted Apr 07, 2026
Patent 12594137
HARD STOP PROTECTION SYSTEM AND METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12583101
METHOD FOR OPERATING A MODULAR ROBOT, MODULAR ROBOT, COLLISION AVOIDANCE SYSTEM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 24, 2026
Patent 12576529
ROBOT SIMULATION DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12564962
ROBOT REMOTE OPERATION CONTROL DEVICE, ROBOT REMOTE OPERATION CONTROL SYSTEM, ROBOT REMOTE OPERATION CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
70%
Grant Probability
88%
With Interview (+18.4%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 103 resolved cases by this examiner. Grant probability derived from career allow rate.
