Prosecution Insights
Last updated: April 19, 2026
Application No. 18/598,038

SYSTEMS, METHODS, AND CONTROL MODULES FOR CONTROLLING STATES OF ROBOT SYSTEMS

Final Rejection §103
Filed: Mar 07, 2024
Examiner: ABUELHAWA, MOHAMMED YOUSEF
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Sanctuary Cognitive Systems Corporation
OA Round: 4 (Final)
Grant Probability: 81% (Favorable)
OA Rounds: 5-6
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Grants 81% — above average
Career Allow Rate: 81% (54 granted / 67 resolved; +28.6% vs TC avg)
Strong +20% interview lift
Interview Lift: +20.1% (resolved cases with interview)
Typical timeline
Avg Prosecution: 2y 10m (37 currently pending)
Career history
Total Applications: 104 (across all art units)
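
The headline figures above are simple ratios over the examiner's resolved cases. Below is a minimal sketch of the arithmetic: the 54/67 counts come from the panel, while the with/without-interview splits are hypothetical placeholders chosen only to roughly reproduce the reported +20.1% lift (the panel does not give the underlying counts).

```python
# Minimal sketch of the panel arithmetic. The 54/67 counts are from the
# panel above; the with/without-interview splits are HYPOTHETICAL, chosen
# only to roughly reproduce the reported +20.1% lift.

def allowance_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allowance_rate(54, 67)      # 80.6%, displayed as 81%
tc_avg = career - 28.6               # back-derived from "+28.6% vs TC avg"

with_iv = allowance_rate(30, 33)     # hypothetical: 30 of 33 interviewed cases granted
without_iv = allowance_rate(24, 34)  # hypothetical: 24 of 34 non-interviewed cases granted
lift = with_iv - without_iv          # ~ +20.3%, vs the reported +20.1%

print(f"career {career:.1f}% | TC avg est. {tc_avg:.1f}% | interview lift {lift:+.1f}%")
```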

Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)
Deltas are against a Tech Center average estimate • Based on career data from 67 resolved cases
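
The four deltas are mutually consistent: subtracting each delta from its rate gives 40.0% in every row, which suggests the chart compares each rate against a single Tech Center baseline. A short sketch under that assumption (the 40.0% figure is back-derived, not stated in the panel):

```python
# The four "vs TC avg" deltas are mutually consistent: rate - delta = 40.0
# in every row, so the chart appears to use a single Tech Center baseline.
# That 40.0% figure is an assumption back-derived from the displayed deltas.

examiner_rates = {"§101": 6.4, "§103": 49.6, "§102": 22.8, "§112": 16.6}
TC_AVERAGE_ESTIMATE = 40.0  # assumed common baseline (back-derived)

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVERAGE_ESTIMATE
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```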

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed on 12/01/2025, in response to the Non-Final Office Action dated 05/30/2025, has been received and made of record. Claims 1-3, 5-9, 11-17 and 19-20 are pending in the current application. Claims 4, 10 and 18 have been cancelled.

Response to Arguments

Applicant’s arguments filed on 12/01/2025 have been fully considered. In the Arguments/Remarks:

Re: Rejection of the Claims Under 35 U.S.C. 103

Applicant’s arguments regarding rejection of the claims under 35 U.S.C. 103 have been fully considered; however, applicant’s arguments are directed towards language not found within the claims. The language on page 15 of applicant’s remarks recites “a balance is achieved where control of the robot body is still reasonably accurate…without undue burden on processing resources (which would be caused by validating every single state).” This language is not found within the applicant’s claims. Examiner encourages the applicant to include the above-mentioned language in the claims for full consideration. Examiner has augmented the rejection below in view of applicant’s amendments. Applicant’s arguments regarding the limitations on page 14 of the remarks are directed towards the newly amended claim limitations and are addressed in the updated rejection (see below).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-9, 11-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fay (US 2021/0309310 A1) in view of Pramanick (US 2021/0232121 A1).
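
Before the claim-by-claim analysis, it may help to make the disputed concept concrete. The remarks language quoted in the Response to Arguments describes validating fewer than all predicted states to balance control accuracy against processing load. The following Python sketch illustrates that subset-validation idea only; it is not the applicant's implementation, and the stride-based sampling, tolerance, and all names are assumptions:

```python
# Hedged sketch of the claim concept at issue: predict a sequence of robot
# states, then validate only a subset of them (fewer than all) against
# actual sensed states. Nothing here comes from the application's actual
# implementation; the stride-based subset choice and the tolerance are
# illustrative assumptions only.
from typing import Callable, Sequence

def validate_subset(
    predicted: Sequence[float],
    sense_actual: Callable[[int], float],
    stride: int = 4,          # validate every 4th state, not every state
    tolerance: float = 0.05,
) -> bool:
    """Return True if every sampled predicted state matches the actual state.

    Validating only every `stride`-th state trades a small accuracy risk
    for a large reduction in sensing/processing load -- the "balance"
    described in the applicant's remarks.
    """
    for i in range(0, len(predicted), stride):
        if abs(predicted[i] - sense_actual(i)) > tolerance:
            return False  # mismatch: caller would re-run the predictor
    return True
```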
Regarding claim 1, Fay teaches a method for operating a robot system including a robot controller and a robot body, the method comprising: capturing, by at least one environment sensor carried by the robot body, first environment data representing an environment of the robot body at a first time [(see at least paragraphs 40-41) As in 41 “The sensor(s) 112 may provide sensor data to the processor(s) 102 (perhaps by way of data 107) to allow for interaction of the robotic system 100 with its environment, as well as monitoring of the operation of the robotic system 100. The sensor data may be used in evaluation of various factors for activation, movement, and deactivation of mechanical components 110 and electrical components 116 by control system 118. For example, the sensor(s) 112 may capture data corresponding to the terrain of the environment or location of nearby objects, which may assist with environment recognition and navigation. In an example configuration, sensor(s) 112 may include RADAR (e.g., for long-range object detection, distance determination, and/or speed determination), LIDAR (e.g., for short-range object detection, distance determination, and/or speed determination), SONAR (e.g., for underwater object detection, distance determination, and/or speed determination), VICON® (e.g., for motion capture), one or more cameras (e.g., stereoscopic cameras for 3D vision), a global positioning system (GPS) transceiver, and/or other sensors for capturing information of the environment in which the robotic system 100 is operating. The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment.”] capturing, by at least one robot body sensor carried by the robot body, first robot body data representing a configuration of the robot body at the first time; [(see at least paragraph 43) “As an example, the robotic system 100 may use force sensors to measure load on various components of the robotic system 100. In some implementations, the robotic system 100 may include one or more force sensors on an arm or a leg to measure the load on the actuators that move one or more members of the arm or leg. As another example, the robotic system 100 may use one or more position sensors to sense the position of the actuators of the robotic system. For instance, such position sensors may sense states of extension, retraction, or rotation of the actuators on arms or legs.”] accessing, by the robot controller, context data indicating a context for operation of the robot system, the context data including semantic information about a specific task being performed by the robot system at the first time; [(see at least paragraph 27) “A legged robot may include one or more controllers that drive the legged robot's actuators. As described to herein, a “controller” may refer to a control configuration and parameters that, when applied to a control system that operates the robot's actuators, causes the robot to perform a particular action, carry out an operation, or act in accordance with a particular behavior. In some implementations, a controller might operate the robot's actuators to effect a certain gait (e.g., walk, trot, run, bound, gallop, etc.), maintain a certain condition (e.g., maintain balance, height, velocity, etc.), or some combination thereof. 
One or more controllers may be activated at a given time, each of which controls (or contributes to controlling) actuators on the robot to accomplish its particular task.”] Examiner notes that the semantic information is being interpreted as related task information such as task behavior or task progress. Fay teaches determining, by the robot controller, a first state of the robot body within the environment for the first time, based on at least the first environment data and the first robot body data [(see at least paragraph 73) “Similarly, roll control may involve applying horizontal forces against the ground to prevent the robot from tipping over. Footstep location control may involve any combination of footstep planning and near-real time adjustments to footsteps based on the robot's kinematic state. Yaw control may involve trajectory planning, steering control, and/or obstacle avoidance.”] Fay teaches controlling, by the robot controller, the robot body to transition through the sequence of states [(see at least paragraph 106) “As one example, the robot's behavior resulting from a single future step may be simulated, and the robot's position, velocity, balance, and other aspects of the robot's state may be predicted immediately after the robot's foot steps down at that single future step. This process may be repeated for a plurality of future steps at different spatial locations (each of which may correspond to a respective robot leg), such that a plurality of estimated future states is determined. Then, the estimated future states may be evaluated against costs and constraints in order to determine a “score” associated with each future footstep. The resulting scores may be compared against each other and/or a threshold score in order to select a satisfactory score. The robot may then be instructed to step toward the footstep location associated with the selected score.”]; and for a subset of states in the sequence of states, the subset of states consisting of fewer than all of the states in the sequence of states, validating whether an actual state matches a predicted state. [(see at least paragraphs 98) “The robot model(s) 612 may also include “forward” models, which may be used to predict the future state of the robot (e.g., simulating the robot's behavior based on control inputs). Forward models may include similar relationships as described above with respect to the LIP model and the barbell model. Simulating future robot behavior may involve applying control inputs to a model of the robot, estimating the future state after a short period of time (possibly allowing for small angle approximations and linearization), and repeating that estimation for some duration of time into the future. The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. 
A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600”] Fay does not explicitly teach applying, by the robot controller, a context-aware state prediction model to predict, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, a sequence of states for the robot body within the environment for a sequence of times subsequent the first time, wherein each state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states, wherein the context-aware state prediction model includes a foundational model that is trained to learn statistics of temporal evolution of the robot body that depend on the specific task being performed by the robot system. However, Pramanick teaches applying, by the robot controller, a context-aware state prediction model to predict, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, a sequence of states for the robot body within the environment for a sequence of times subsequent the first time, wherein each state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states [(see at least paragraphs 23, 31-35) As in 32 “The Plan generator is configured to ensure a context aware input is generated for a task planner for the identified task. One-to-one mapping may not be possible between a human intended task and the primitive actions supported by the robot, because a high-level task goal may require performing a sequence of sub-tasks. To enable task, a state of a world model (current state of a robot and its environment) is exposed to the robot in terms of grounded fluents, which are logical predicates that may have variables as arguments. A task starts from an initial state of the world model and leads to a different state of the world model, namely a goal state.”], and wherein the context-aware state prediction model includes a foundational model that is trained to learn statistics of temporal evolution of the robot body that depend on the specific task being performed by the robot system [(see at least paragraphs 49-55, 23) As in 49 “To execute a task, a robot needs to perform a sequence of basic actions or tasks supported by its motion and manipulation capabilities. A task plan is a sequence of such actions that satisfies the intended task or goal. A task specified in an instruction is considered to change a hypothetical state of the world (initial state) to an expected state (goal state). The initial and goal conditions of a task are encoded as a conjunction of fluents expressed in first-order logic. The task templates are grounded using the predicted task dependency labels at step 304 to generate a planning problem in a Planning Domain Definition Language (PDDL) format.” As in 50 “During the grounding of the templates, assumed initial conditions for a task are updated by the post conditions of the actions of a previous sequential task. In the case of conditionals, a plan is generated for each conditional-dependent pair, and in run-time, the correct action sequence is chosen from the actual observed outcome of the conditional task. Therefore, the problem of generating a robotic task plan for the complex instruction is reduced to the ordering of the tasks catering to the execution dependencies, followed by planning individually for the goals of the tasks in order while validating the assumed initial states by the action post conditions.”] Examiner notes that it would be obvious to one of ordinary skill that when initial conditions for a task are updated by the post conditions of the actions of a previous sequential task, the statistics of the temporal evolution of the robot body would be learned in order to update the model accordingly to ensure successful execution of future tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fay to incorporate the teachings of Pramanick of applying, by the robot controller, a context-aware state prediction model to predict, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, a sequence of states for the robot body within the environment for a sequence of times subsequent the first time, wherein each state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states in order to ensure a task starts from an initial state of the world model and leads to a different state of the world model, namely a goal state [(Pramanick 32)] and of wherein the context-aware state prediction model includes a foundational model that is trained to learn statistics of temporal evolution of the robot body that depend on the specific task being performed by the robot system in order to update task templates for plan generation and pre-defined response templates for task execution for the robot system. [(Pramanick 33)] Regarding claim 2, In view of the above combination of references, Fay further teaches further comprising, at or after the second time: capturing, by the at least one environment sensor, second environment data representing an environment of the robot body at the second time [(see at least paragraph 41) “The sensor(s) 112 may provide sensor data to the processor(s) 102 (perhaps by way of data 107) to allow for interaction of the robotic system 100 with its environment, as well as monitoring of the operation of the robotic system 100. The sensor data may be used in evaluation of various factors for activation, movement, and deactivation of mechanical components 110 and electrical components 116 by control system 118. For example, the sensor(s) 112 may capture data corresponding to the terrain of the environment or location of nearby objects, which may assist with environment recognition and navigation. In an example configuration, sensor(s) 112 may include RADAR (e.g., for long-range object detection, distance determination, and/or speed determination), LIDAR (e.g., for short-range object detection, distance determination, and/or speed determination), SONAR (e.g., for underwater object detection, distance determination, and/or speed determination), VICON® (e.g., for motion capture), one or more cameras (e.g., stereoscopic cameras for 3D vision), a global positioning system (GPS) transceiver, and/or other sensors for capturing information of the environment in which the robotic system 100 is operating.
The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment.”] Examiner notes that monitoring the environment in real-time is being interpreted as taking environment data at a second time.; capturing, by the at least one robot body sensor, second robot body data representing a configuration of the robot body at the second time [(see at least paragraphs 41-42) “The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment. Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100. The data provided by the sensor(s) 112 may enable the control system 118 to determine errors in operation as well as monitor overall operation of components of the robotic system 100.”]; and determining, by the robot controller, an actual second state of the robot body within the environment for the second time, based on the second environment data and the second robot body data [(see at least paragraphs 41-43) “The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment. Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100. The data provided by the sensor(s) 112 may enable the control system 118 to determine errors in operation as well as monitor overall operation of components of the robotic system 100.”]; and wherein validating whether an actual state matches a predicted state includes determining, by the robot controller, whether the actual second state matches the predicted second state from the sequence of states for the robot body. [(see at least paragraph 98) “The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.”] Regarding claim 3, Modified Fay has all of the elements of claim 2 as discussed above. Fay teaches controlling the robot body to transition through the updated sequence of states. 
[(see at least paragraph 98) “For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.”] Fay does not explicitly teach at or after the second time: if the actual second state is determined as not matching the predicted second state, applying the context-aware state prediction model to update, based on the actual second state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states for the robot body within the environment for a sequence of times subsequent the second time. However, Pramanick teaches at or after the second time: if the actual second state is determined as not matching the predicted second state, applying the context-aware state prediction model to update, based on the actual second state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states for the robot body within the environment for a sequence of times subsequent the second time [(see at least paragraph 52) “In accordance with an embodiment of the present disclosure, the step 306 of generating a CPT with a resolved task sequence comprises modifying the original task sequence w.sub.1: n. The modification of the original task sequence w.sub.1: n ensures that a conditional task is planned before any of its dependent tasks agnostic of a position of the dependent task in the original task sequence w.sub.1: n. In case of multiple conditional tasks in the same instruction, it is assumed that two such conditional tasks indicate the same conditional task, if the two tasks have the same task type. If so, it is ensured that the dependent tasks of the subsequent conditionals are planned after the original conditional task. Repeated tasks (different words meaning the same task may be identified based on the pre-condition template and the post-condition template for the tasks from the Knowledge Base) are masked. If a subsequent conditional task is of a different type, its subsequent tasks that have either dependent positive and dependent negative labels are considered to be actually dependent, i.e to be planned after the conditional perquisite. For tasks having a sequential dependency label, they are ordered as per their corresponding positions in the instruction.”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Fay to further incorporate the teachings of Pramanick of at or after the second time: if the actual second state is determined as not matching the predicted second state, applying the context-aware state prediction model to update, based on the actual second state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states for the robot body within the environment for a sequence of times subsequent the second time in order to update task templates for plan generation and pre-defined response templates for task execution for the robot system. 
[(Pramanick 33)] Regarding claim 5, In view of the above combination of references, Fay further teaches wherein controlling the robot body to transition through the sequence of states comprises, for at least one state transitioned to: capturing, by the at least one environment sensor, respective environment data representing an environment of the robot body at a respective time of the state transitioned to [(see at least paragraphs 41-42) “The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment. Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100.”]; capturing, by the at least one robot body sensor, respective robot body data representing a configuration of the robot body at the respective time of the state transitioned to [(see at least paragraph 42) “Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100.”]; and wherein validating whether an actual state matches a predicted state comprises, for a subset of states in the sequence of states, the subset of states consisting of fewer than all of the states in the sequence of states: determining an actual state of the robot body within the environment for the respective time of the state transitioned to, based on at least the respective environment data and the respective robot body data [(see at least paragraphs 42, 98) “Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100. The data provided by the sensor(s) 112 may enable the control system 118 to determine errors in operation as well as monitor overall operation of components of the robotic system 100.”]; and determining whether the actual state matches a predicted state of the robot body for the respective time of the state transitioned to, as predicted during the prediction of the sequence of states. 
[(see at least paragraphs 98,106) As in 98 “Simulating future robot behavior may involve applying control inputs to a model of the robot, estimating the future state after a short period of time (possibly allowing for small angle approximations and linearization), and repeating that estimation for some duration of time into the future. The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.” As in 106 “Then, the estimated future states may be evaluated against costs and constraints in order to determine a “score” associated with each future footstep. The resulting scores may be compared against each other and/or a threshold score in order to select a satisfactory score. The robot may then be instructed to step toward the footstep location associated with the selected score.”] Examiner notes that the matching is being interpreted as being scored and compared to a selected satisfactory score. Regarding claim 6, In view of the above combination of references, Fay further teaches wherein controlling the robot body to transition through the sequence of states comprises, for the at least one state transitioned to: if the actual state is determined to match a predicted state for the respective time of the state transitioned to: continue controlling the robot system to transition through the sequence of states [(see at least paragraphs 98,74) As in 98 “The robot model(s) 612 may also include “forward” models, which may be used to predict the future state of the robot (e.g., simulating the robot's behavior based on control inputs). Forward models may include similar relationships as described above with respect to the LIP model and the barbell model. Simulating future robot behavior may involve applying control inputs to a model of the robot, estimating the future state after a short period of time (possibly allowing for small angle approximations and linearization), and repeating that estimation for some duration of time into the future. The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.” As in 74 “a control system receives a “template” footstep sequence and may adjust aspects of that template footstep sequence based on the state of the robot. The template footstep sequence may include a combination of force values, timing information, and/or locations for one or more planned future footsteps of a robot. Aspects of the template footstep sequence may be modified by one or more controllers within the control system based on the state of the robot, control commands, and/or environmental conditions. As described herein, the “template” footstep sequence may also be referred to as a footstep “tape” or a “predetermined” footstep sequence.”] Fay teaches and controlling the robot body to transition through the updated sequence of states. 
[(see at least paragraph 74) “a control system receives a “template” footstep sequence and may adjust aspects of that template footstep sequence based on the state of the robot. The template footstep sequence may include a combination of force values, timing information, and/or locations for one or more planned future footsteps of a robot. Aspects of the template footstep sequence may be modified by one or more controllers within the control system based on the state of the robot, control commands, and/or environmental conditions. As described herein, the “template” footstep sequence may also be referred to as a footstep “tape” or a “predetermined” footstep sequence.”] Fay does not explicitly teach and if the actual state is determined to not match a predicted state for the respective time of the state transitioned to: apply the context-aware state prediction model to update, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states of the robot body within the environment for an updated sequence of times subsequent the respective time of the state transitioned to, wherein each updated state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states. However, Pramanick teaches if the actual state is determined to not match a predicted state for the respective time of the state transitioned to: apply the context-aware state prediction model to update, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states of the robot body within the environment for an updated sequence of times subsequent the respective time of the state transitioned to, wherein each updated state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states [(see at least paragraphs 23, 31-34, 60-63) As in 23 “For example, in a simple instruction “pick the pen and bring it to me”, the robot has to first perform a picking task, followed by a bringing task. However, the execution of a task may be dependent upon a condition or the outcome of another task. For example, in the instruction “if the coffee is hot, then bring it to me, otherwise put it on the oven”, both the task of bringing the coffee and the task of putting the coffee on the oven is dependent upon the state of the coffee, i.e., whether it is hot. Moreover, the assumption that the tasks are to be performed in their order of appearance in the instruction, may not hold. For example, in the instruction “Bring me a pen if you find one on the table”, the robot has to find a pen first, before attempting to bring it, although the bringing task appears in the instruction earlier. Understanding such dependencies between tasks becomes even more difficult when the dependency spans across multiple sentences.” As in 32 “The Plan generator is configured to ensure a context aware input is generated for a task planner for the identified task. One-to-one mapping may not be possible between a human intended task and the primitive actions supported by the robot, because a high-level task goal may require performing a sequence of sub-tasks. 
To enable task, a state of a world model (current state of a robot and its environment) is exposed to the robot in terms of grounded fluents, which are logical predicates that may have variables as arguments. A task starts from an initial state of the world model and leads to a different state of the world model, namely a goal state.”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Fay to further incorporate the teachings of Pramanick of if the actual state is determined to not match a predicted state for the respective time of the state transitioned to: apply the context-aware state prediction model to update, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states of the robot body within the environment for an updated sequence of times subsequent the respective time of the state transitioned to, wherein each updated state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states in order to accurately perform a sequence of sub-tasks during a complex task instruction. [(Pramanick 32)] Regarding claim 7, Fay teaches a robot system comprising: a robot body; at least one environment sensor carried by the robot body; at least one robot body sensor carried by the robot body; [(see at least paragraph 40) “The robotic system 100 may include sensor(s) 112 arranged to sense aspects of the robotic system 100. The sensor(s) 112 may include one or more force sensors, torque sensors, velocity sensors, acceleration sensors, position sensors, proximity sensors, motion sensors, location sensors, load sensors, temperature sensors, touch sensors, depth sensors, ultrasonic range sensors, infrared sensors, object sensors, and/or cameras, among other possibilities. Within some examples, the robotic system 100 may be configured to receive sensor data from sensors that are physically separated from the robot (e.g., sensors that are positioned on other robots or located within the environment in which the robot is operating).”] a robot controller which includes at least one processor and at least one non- transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one processor-readable storage medium storing processor-executable instructions which when executed by the at least one processor cause the robot system to [(see at least paragraph 5)]: capture, by the at least one environment sensor, first environment data representing an environment of the robot body at a first time [(see at least paragraphs 40-41) As in 41 “The sensor(s) 112 may provide sensor data to the processor(s) 102 (perhaps by way of data 107) to allow for interaction of the robotic system 100 with its environment, as well as monitoring of the operation of the robotic system 100. The sensor data may be used in evaluation of various factors for activation, movement, and deactivation of mechanical components 110 and electrical components 116 by control system 118. For example, the sensor(s) 112 may capture data corresponding to the terrain of the environment or location of nearby objects, which may assist with environment recognition and navigation. 
In an example configuration, sensor(s) 112 may include RADAR (e.g., for long-range object detection, distance determination, and/or speed determination), LIDAR (e.g., for short-range object detection, distance determination, and/or speed determination), SONAR (e.g., for underwater object detection, distance determination, and/or speed determination), VICON® (e.g., for motion capture), one or more cameras (e.g., stereoscopic cameras for 3D vision), a global positioning system (GPS) transceiver, and/or other sensors for capturing information of the environment in which the robotic system 100 is operating. The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment.”]; capture, by the at least one robot body sensor, first robot body data representing a configuration of the robot body at the first time [(see at least paragraph 43) “As an example, the robotic system 100 may use force sensors to measure load on various components of the robotic system 100. In some implementations, the robotic system 100 may include one or more force sensors on an arm or a leg to measure the load on the actuators that move one or more members of the arm or leg. As another example, the robotic system 100 may use one or more position sensors to sense the position of the actuators of the robotic system. For instance, such position sensors may sense states of extension, retraction, or rotation of the actuators on arms or legs.”]; access context data indicating a context for operation of the robot system, the context data including semantic information about a specific task being performed by the robot system at the first time [(see at least paragraph 27) “A legged robot may include one or more controllers that drive the legged robot's actuators. As described to herein, a “controller” may refer to a control configuration and parameters that, when applied to a control system that operates the robot's actuators, causes the robot to perform a particular action, carry out an operation, or act in accordance with a particular behavior. In some implementations, a controller might operate the robot's actuators to effect a certain gait (e.g., walk, trot, run, bound, gallop, etc.), maintain a certain condition (e.g., maintain balance, height, velocity, etc.), or some combination thereof. One or more controllers may be activated at a given time, each of which controls (or contributes to controlling) actuators on the robot to accomplish its particular task.”] Examiner notes that the semantic information is being interpreted as related task information such as task behavior or task progress. Fay teaches determine a first state of the robot body within the environment for the first time, based on at least the first environment data and the first robot body data; [(see at least paragraph 73) “Similarly, roll control may involve applying horizontal forces against the ground to prevent the robot from tipping over. Footstep location control may involve any combination of footstep planning and near-real time adjustments to footsteps based on the robot's kinematic state. 
Yaw control may involve trajectory planning, steering control, and/or obstacle avoidance.”] Fay teaches control the robot body to transition through the sequence of states [(see at least paragraph 106) “As one example, the robot's behavior resulting from a single future step may be simulated, and the robot's position, velocity, balance, and other aspects of the robot's state may be predicted immediately after the robot's foot steps down at that single future step. This process may be repeated for a plurality of future steps at different spatial locations (each of which may correspond to a respective robot leg), such that a plurality of estimated future states is determined. Then, the estimated future states may be evaluated against costs and constraints in order to determine a “score” associated with each future footstep. The resulting scores may be compared against each other and/or a threshold score in order to select a satisfactory score. The robot may then be instructed to step toward the footstep location associated with the selected score.”]; and for a subset of states in the sequence of states, the subset of states consisting of fewer than all of the states in the sequence of states, validate whether an actual state matches a predicted state. [(see at least paragraphs 98) “The robot model(s) 612 may also include “forward” models, which may be used to predict the future state of the robot (e.g., simulating the robot's behavior based on control inputs). Forward models may include similar relationships as described above with respect to the LIP model and the barbell model. Simulating future robot behavior may involve applying control inputs to a model of the robot, estimating the future state after a short period of time (possibly allowing for small angle approximations and linearization), and repeating that estimation for some duration of time into the future. The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600”] Fay does not explicitly teach apply a context-aware state prediction model to predict, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, a sequence of states for the robot body within the environment for a sequence of times subsequent the first time, wherein each state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states, and wherein the context-aware state prediction model includes a foundational model that is trained to learn statistics of temporal evolution of the robot body that depend on the specific task being performed by the robot system. However, Pramanick teaches apply a context-aware state prediction model to predict, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time [(see at least paragraphs 31-32) As in 32 “The Plan generator is configured to ensure a context aware input is generated for a task planner for the identified task. 
One-to-one mapping may not be possible between a human intended task and the primitive actions supported by the robot, because a high-level task goal may require performing a sequence of sub-tasks. To enable task, a state of a world model (current state of a robot and its environment) is exposed to the robot in terms of grounded fluents, which are logical predicates that may have variables as arguments. A task starts from an initial state of the world model and leads to a different state of the world model, namely a goal state.”], a sequence of states for the robot body within the environment for a sequence of times subsequent the first time, wherein each state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states, wherein the context-aware state prediction model includes a foundational model that is trained to learn statistics of temporal evolution of the robot body that depend on the specific task being performed by the robot system [(see at least paragraphs 49-55, 23, 31-35) As in 49 “To execute a task, a robot needs to perform a sequence of basic actions or tasks supported by its motion and manipulation capabilities. A task plan is a sequence of such actions that satisfies the intended task or goal. A task specified in an instruction is considered to change a hypothetical state of the world (initial state) to an expected state (goal state). The initial and goal conditions of a task are encoded as a conjunction of fluents expressed in first-order logic. The task templates are grounded using the predicted task dependency labels at step 304 to generate a planning problem in a Planning Domain Definition Language (PDDL) format.” As in 50 “During the grounding of the templates, assumed initial conditions for a task are updated by the post conditions of the actions of a previous sequential task. In the case of conditionals, a plan is generated for each conditional-dependent pair, and in run-time, the correct action sequence is chosen from the actual observed outcome of the conditional task. Therefore, the problem of generating a robotic task plan for the complex instruction is reduced to the ordering of the tasks catering to the execution dependencies, followed by planning individually for the goals of the tasks in order while validating the assumed initial states by the action post conditions.”] Examiner notes that it would be obvious to one of ordinary skill that when initial conditions for a task are updated by the post conditions of the actions of a previous sequential task, the statistics of the temporal evolution of the robot body would be learned in order to update the model accordingly to ensure successful execution of future tasks. 
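
The mechanism this note relies on, namely that a task's assumed initial conditions are updated by the postconditions of the previous sequential task (Pramanick, paragraph 50), can be made concrete with a toy state-update loop. The fluents and task definitions below are invented for illustration and appear in neither reference:

```python
# Toy illustration of the mechanism the note describes (cf. Pramanick ¶50):
# when tasks are planned in sequence, the assumed initial conditions of each
# task are the previous task's postconditions applied to the world state.
# This is a PDDL-flavored sketch with invented fluents, not code from either
# reference.

def apply_postconditions(state: set, add: set, delete: set) -> set:
    """Successor world state: remove deleted fluents, add new ones."""
    return (state - delete) | add

world = {"at(robot, table)", "on(pen, table)", "handempty(robot)"}

pick_pen = {"add": {"holding(robot, pen)"},
            "delete": {"on(pen, table)", "handempty(robot)"}}
bring_pen = {"add": {"at(robot, user)"},
             "delete": {"at(robot, table)"}}

# Each task is planned against the state left behind by the previous one.
for task in (pick_pen, bring_pen):
    world = apply_postconditions(world, task["add"], task["delete"])

print(world)
# {'holding(robot, pen)', 'at(robot, user)'} -- goal state for "bring me the pen"
```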
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fay to incorporate the teachings of Pramanick of applying, by the robot controller, a context-aware state prediction model to predict, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, a sequence of states for the robot body within the environment for a sequence of times subsequent the first time, wherein each state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states in order to ensure task starts from an initial state of the world model and leads to a different state of the world model, namely a goal state [(Pramanick 32)] and of wherein the context-aware state prediction model includes a foundational model that is trained to learn statistics of temporal evolution of the robot body that depend on the specific task being performed by the robot system in order to update task templates for plan generation and pre-defined response templates for task execution for the robot system. [(Pramanick 33)] Regarding claim 8, In view of the above combination of references, Fay further teaches wherein the processor- executable instructions further cause the robot system to, at or after a second time: capture, by the at least one environment sensor, second environment data representing an environment of the robot body at the second time [(see at least paragraph 41) “The sensor(s) 112 may provide sensor data to the processor(s) 102 (perhaps by way of data 107) to allow for interaction of the robotic system 100 with its environment, as well as monitoring of the operation of the robotic system 100. The sensor data may be used in evaluation of various factors for activation, movement, and deactivation of mechanical components 110 and electrical components 116 by control system 118. For example, the sensor(s) 112 may capture data corresponding to the terrain of the environment or location of nearby objects, which may assist with environment recognition and navigation. In an example configuration, sensor(s) 112 may include RADAR (e.g., for long-range object detection, distance determination, and/or speed determination), LIDAR (e.g., for short-range object detection, distance determination, and/or speed determination), SONAR (e.g., for underwater object detection, distance determination, and/or speed determination), VICON® (e.g., for motion capture), one or more cameras (e.g., stereoscopic cameras for 3D vision), a global positioning system (GPS) transceiver, and/or other sensors for capturing information of the environment in which the robotic system 100 is operating. The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment.”] Examiner notes that monitoring the environment in real-time is being interpreted as taking environment data at a second time.; capture, by the at least one robot body sensor, second robot body data representing a configuration of the robot body at the second time [(see at least paragraphs 41-42) “The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment. 
Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100. The data provided by the sensor(s) 112 may enable the control system 118 to determine errors in operation as well as monitor overall operation of components of the robotic system 100.”]; and determine an actual second state of the robot body within the environment for the second time, based on the second environment data and the second robot body data [(see at least paragraphs 41-43) “The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment. Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100. The data provided by the sensor(s) 112 may enable the control system 118 to determine errors in operation as well as monitor overall operation of components of the robotic system 100.”]; and wherein the processor-executable instructions which when executed by the at least one processor cause the robot system to validate whether an actual state matches a predicted state, cause the robot system to determine whether the actual second state matches the predicted second state from the sequence of states for the robot body. [(see at least paragraph 98) “The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.”] Regarding claim 9, Modified Fay has all of the elements of claim 8 as discussed above. Fay teaches control the robot body to transition through the updated sequence of states. [(see at least paragraph 98) “For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. 
A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.”] Fay does not explicitly teach wherein the processor-executable instructions further cause the robot system to: at or after the second time: if the actual second state is determined as not matching the predicted second state, apply the context-aware state prediction model to update, based on the actual second state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the predicted third state of the robot body within the environment for the third time subsequent the second time. However, Pramanick teaches wherein the processor-executable instructions further cause the robot system to: at or after the second time: if the actual second state is determined as not matching the predicted second state, apply the context-aware state prediction model to update, based on the actual second state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states for the robot body within the environment for a sequence of times subsequent the second time [(see at least paragraph 52) “In accordance with an embodiment of the present disclosure, the step 306 of generating a CPT with a resolved task sequence comprises modifying the original task sequence w.sub.1: n. The modification of the original task sequence w.sub.1: n ensures that a conditional task is planned before any of its dependent tasks agnostic of a position of the dependent task in the original task sequence w.sub.1: n. In case of multiple conditional tasks in the same instruction, it is assumed that two such conditional tasks indicate the same conditional task, if the two tasks have the same task type. If so, it is ensured that the dependent tasks of the subsequent conditionals are planned after the original conditional task. Repeated tasks (different words meaning the same task may be identified based on the pre-condition template and the post-condition template for the tasks from the Knowledge Base) are masked. If a subsequent conditional task is of a different type, its subsequent tasks that have either dependent positive and dependent negative labels are considered to be actually dependent, i.e to be planned after the conditional perquisite. For tasks having a sequential dependency label, they are ordered as per their corresponding positions in the instruction.”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Fay to further incorporate the teachings of Pramanick of at or after the second time: if the actual second state is determined as not matching the predicted second state, applying the context-aware state prediction model to update, based on the actual second state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states for the robot body within the environment for a sequence of times subsequent the second time in order to update task templates for plan generation and pre-defined response templates for task execution for the robot system. 
[(Pramanick 33)] Regarding claim 11, In view of the above combination of references, Fay further teaches wherein the processor-executable instructions which cause the robot system to control the robot body to transition through the sequence of states cause the robot system to, for at least one state transitioned to: capture, by the at least one environment sensor, respective environment data representing an environment of the robot body at a respective time of the state transitioned to [(see at least paragraphs 41-42) “The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment. Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100.”]; capture, by the at least one robot body sensor, respective robot body data representing a configuration of the robot body at the respective time of the state transitioned to [(see at least paragraph 42) “Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100.”]; and wherein the processor-executable instructions which when executed by the at least one processor cause the robot system to, for a subset of states in the sequence of states, the subset of states consisting of fewer than all of the states in the sequence of states, validate whether an actual state matches a predicted state cause the robot system to: determine an actual state of the robot body within the environment for the respective time of the state transitioned to, based on at least the respective environment data and the respective robot body data [(see at least paragraph 42, 98) “Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100. The data provided by the sensor(s) 112 may enable the control system 118 to determine errors in operation as well as monitor overall operation of components of the robotic system 100.”]; and determine whether the actual state matches a predicted state of the robot body for the respective time of the state transitioned to, as predicted during the prediction of the sequence of states. 
[(see at least paragraphs 98,106) As in 98 “Simulating future robot behavior may involve applying control inputs to a model of the robot, estimating the future state after a short period of time (possibly allowing for small angle approximations and linearization), and repeating that estimation for some duration of time into the future. The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.” As in 106 “Then, the estimated future states may be evaluated against costs and constraints in order to determine a “score” associated with each future footstep. The resulting scores may be compared against each other and/or a threshold score in order to select a satisfactory score. The robot may then be instructed to step toward the footstep location associated with the selected score.”] Examiner notes that the matching is being interpreted as being scored and compared to a selected satisfactory score. Regarding claim 12, In view of the above combination of references, Fay further teaches wherein the processor-executable instructions which cause the robot system to control the robot body to transition through the sequence of states cause the robot system to, for the at least one state transitioned to: if the actual state is determined to match a predicted state for the respective time of the state transitioned to: continue to control the robot system to transition through the sequence of states [(see at least paragraphs 98,74) As in 98 “The robot model(s) 612 may also include “forward” models, which may be used to predict the future state of the robot (e.g., simulating the robot's behavior based on control inputs). Forward models may include similar relationships as described above with respect to the LIP model and the barbell model. Simulating future robot behavior may involve applying control inputs to a model of the robot, estimating the future state after a short period of time (possibly allowing for small angle approximations and linearization), and repeating that estimation for some duration of time into the future. The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.” As in 74 “a control system receives a “template” footstep sequence and may adjust aspects of that template footstep sequence based on the state of the robot. The template footstep sequence may include a combination of force values, timing information, and/or locations for one or more planned future footsteps of a robot. Aspects of the template footstep sequence may be modified by one or more controllers within the control system based on the state of the robot, control commands, and/or environmental conditions. As described herein, the “template” footstep sequence may also be referred to as a footstep “tape” or a “predetermined” footstep sequence.”] Fay teaches control the robot body to transition through the updated sequence of states. 
[(see at least paragraph 74) “a control system receives a “template” footstep sequence and may adjust aspects of that template footstep sequence based on the state of the robot. The template footstep sequence may include a combination of force values, timing information, and/or locations for one or more planned future footsteps of a robot. Aspects of the template footstep sequence may be modified by one or more controllers within the control system based on the state of the robot, control commands, and/or environmental conditions. As described herein, the “template” footstep sequence may also be referred to as a footstep “tape” or a “predetermined” footstep sequence.”] Fay does not explicitly teach if the actual state is determined to not match a predicted state for the respective time of the state transitioned to: apply the context-aware state prediction model to update, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states of the robot body within the environment for an updated sequence of times subsequent the respective time of the state transitioned to, wherein each updated state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states. However, Pramanick teaches if the actual state is determined to not match a predicted state for the respective time of the state transitioned to: apply the context-aware state prediction model to update, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states of the robot body within the environment for an updated sequence of times subsequent the respective time of the state transitioned to, wherein each updated state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states [(see at least paragraphs 23, 31-34, 60-63) As in 23 “For example, in a simple instruction “pick the pen and bring it to me”, the robot has to first perform a picking task, followed by a bringing task. However, the execution of a task may be dependent upon a condition or the outcome of another task. For example, in the instruction “if the coffee is hot, then bring it to me, otherwise put it on the oven”, both the task of bringing the coffee and the task of putting the coffee on the oven is dependent upon the state of the coffee, i.e., whether it is hot. Moreover, the assumption that the tasks are to be performed in their order of appearance in the instruction, may not hold. For example, in the instruction “Bring me a pen if you find one on the table”, the robot has to find a pen first, before attempting to bring it, although the bringing task appears in the instruction earlier. Understanding such dependencies between tasks becomes even more difficult when the dependency spans across multiple sentences.” As in 32 “The Plan generator is configured to ensure a context aware input is generated for a task planner for the identified task. One-to-one mapping may not be possible between a human intended task and the primitive actions supported by the robot, because a high-level task goal may require performing a sequence of sub-tasks. 
To enable task, a state of a world model (current state of a robot and its environment) is exposed to the robot in terms of grounded fluents, which are logical predicates that may have variables as arguments. A task starts from an initial state of the world model and leads to a different state of the world model, namely a goal state.”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Fay to further incorporate the teachings of Pramanick of if the actual state is determined to not match a predicted state for the respective time of the state transitioned to: apply the context-aware state prediction model to update, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states of the robot body within the environment for an updated sequence of times subsequent the respective time of the state transitioned to, wherein each updated state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states in order to accurately perform a sequence of sub-tasks during a complex task instruction. [(Pramanick 32)] Regarding claim 13, In view of the above combination of references, Fay further teaches wherein the at least one environment sensor includes one or more environment sensors selected from a group of sensors consisting of: an image sensor operable to capture image data; an audio sensor operable to capture audio data; and a tactile sensor operable to capture tactile data. [(see at least paragraph 40) “The sensor(s) 112 may include one or more force sensors, torque sensors, velocity sensors, acceleration sensors, position sensors, proximity sensors, motion sensors, location sensors, load sensors, temperature sensors, touch sensors, depth sensors, ultrasonic range sensors, infrared sensors, object sensors, and/or cameras, among other possibilities. Within some examples, the robotic system 100 may be configured to receive sensor data from sensors that are physically separated from the robot (e.g., sensors that are positioned on other robots or located within the environment in which the robot is operating).”] Regarding claim 14, In view of the above combination of references, Fay further teaches wherein the at least one robot body sensor includes one or more robot body sensors selected from a group of sensors consisting of: a haptic sensor which captures haptic data; an actuator sensor which captures actuator data indicating a state of a corresponding actuator; a battery sensor which captures battery data indicating a state of a battery; an inertial sensor which captures inertial data; a proprioceptive sensor which captures proprioceptive data indicating a position, movement, or force applied for a corresponding actuatable member of the robot body; and a position encoder which captures position data about at least one joint or appendage of the robot body. [(see at least paragraph 43) “As an example, the robotic system 100 may use force sensors to measure load on various components of the robotic system 100. In some implementations, the robotic system 100 may include one or more force sensors on an arm or a leg to measure the load on the actuators that move one or more members of the arm or leg. 
As another example, the robotic system 100 may use one or more position sensors to sense the position of the actuators of the robotic system. For instance, such position sensors may sense states of extension, retraction, or rotation of the actuators on arms or legs.”] Regarding claim 15, Fay teaches a robot control module comprising at least one non-transitory processor-readable storage medium storing processor-executable instructions or data that, when executed by at least one processor of a processor-based system, cause the processor-based system to [(see at least paragraph 5) “the present application describes a non-transitory computer-readable medium having instructions stored thereon that, upon execution by at least one processor, causes a quadruped robot to perform a set of operations. The operations include obtaining a model of the quadruped robot that represents the quadruped robot as a first point mass rigidly coupled with a second point mass along a longitudinal axis.”]: capture, by at least one environment sensor carried by a robot body of the processor-based system, first environment data representing an environment of the robot body at a first time [(see at least paragraphs 40-41) As in 41 “The sensor(s) 112 may provide sensor data to the processor(s) 102 (perhaps by way of data 107) to allow for interaction of the robotic system 100 with its environment, as well as monitoring of the operation of the robotic system 100. The sensor data may be used in evaluation of various factors for activation, movement, and deactivation of mechanical components 110 and electrical components 116 by control system 118. For example, the sensor(s) 112 may capture data corresponding to the terrain of the environment or location of nearby objects, which may assist with environment recognition and navigation. In an example configuration, sensor(s) 112 may include RADAR (e.g., for long-range object detection, distance determination, and/or speed determination), LIDAR (e.g., for short-range object detection, distance determination, and/or speed determination), SONAR (e.g., for underwater object detection, distance determination, and/or speed determination), VICON® (e.g., for motion capture), one or more cameras (e.g., stereoscopic cameras for 3D vision), a global positioning system (GPS) transceiver, and/or other sensors for capturing information of the environment in which the robotic system 100 is operating. The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment.”]; capture, by at least one robot body sensor carried by the robot body, first robot body data representing a configuration of the robot body at the first time [(see at least paragraph 43) “As an example, the robotic system 100 may use force sensors to measure load on various components of the robotic system 100. In some implementations, the robotic system 100 may include one or more force sensors on an arm or a leg to measure the load on the actuators that move one or more members of the arm or leg. As another example, the robotic system 100 may use one or more position sensors to sense the position of the actuators of the robotic system. For instance, such position sensors may sense states of extension, retraction, or rotation of the actuators on arms or legs.”]; access context data indicating a context for operation of the processor-based system, the context data including semantic information about a specific task being performed by the processor-based system at the first time [(see at least paragraph 27) “A legged robot may include one or more controllers that drive the legged robot's actuators. As described to herein, a “controller” may refer to a control configuration and parameters that, when applied to a control system that operates the robot's actuators, causes the robot to perform a particular action, carry out an operation, or act in accordance with a particular behavior. In some implementations, a controller might operate the robot's actuators to effect a certain gait (e.g., walk, trot, run, bound, gallop, etc.), maintain a certain condition (e.g., maintain balance, height, velocity, etc.), or some combination thereof. One or more controllers may be activated at a given time, each of which controls (or contributes to controlling) actuators on the robot to accomplish its particular task.”] Examiner notes that the semantic information is being interpreted as related task information such as task behavior or task progress.
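The capture-and-context steps recited for claim 15 amount to fusing an environment snapshot, a body snapshot, and a semantic task label into one timestamped state. A minimal sketch of those data shapes follows; all field names are invented for illustration and come from neither the claims nor the references.

```python
# Illustrative data shapes for the capture-and-context steps of claim 15.
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class TaskContext:
    task: str        # semantic label for the specific task, e.g. "pick_and_place"
    progress: float  # task progress, per the examiner's reading noted above

@dataclass
class RobotState:
    time: float
    environment: Dict[str, Any]  # from environment sensors (cameras, LIDAR, ...)
    body: Dict[str, Any]         # from body sensors (positions, loads, ...)

def determine_first_state(env_data: Dict[str, Any],
                          body_data: Dict[str, Any],
                          t: float) -> RobotState:
    """Fuse one environment snapshot and one body snapshot into a single
    timestamped state estimate for the first time t."""
    return RobotState(time=t, environment=env_data, body=body_data)
```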
Fay teaches determine a first state of the robot body within the environment for the first time, based on at least the first environment data and the first robot body data; [(see at least paragraph 73) “Similarly, roll control may involve applying horizontal forces against the ground to prevent the robot from tipping over. Footstep location control may involve any combination of footstep planning and near-real time adjustments to footsteps based on the robot's kinematic state. Yaw control may involve trajectory planning, steering control, and/or obstacle avoidance.”] Fay teaches control the robot body to transition through the sequence of states [(see at least paragraph 106) “As one example, the robot's behavior resulting from a single future step may be simulated, and the robot's position, velocity, balance, and other aspects of the robot's state may be predicted immediately after the robot's foot steps down at that single future step. This process may be repeated for a plurality of future steps at different spatial locations (each of which may correspond to a respective robot leg), such that a plurality of estimated future states is determined. Then, the estimated future states may be evaluated against costs and constraints in order to determine a “score” associated with each future footstep. The resulting scores may be compared against each other and/or a threshold score in order to select a satisfactory score. The robot may then be instructed to step toward the footstep location associated with the selected score.”]; and for a subset of states in the sequence of states, the subset of states consisting of fewer than all of the states in the sequence of states, validate whether an actual state matches a predicted state. [(see at least paragraphs 98) “The robot model(s) 612 may also include “forward” models, which may be used to predict the future state of the robot (e.g., simulating the robot's behavior based on control inputs). Forward models may include similar relationships as described above with respect to the LIP model and the barbell model.
Simulating future robot behavior may involve applying control inputs to a model of the robot, estimating the future state after a short period of time (possibly allowing for small angle approximations and linearization), and repeating that estimation for some duration of time into the future. The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600”] Fay does not explicitly teach apply a context-aware state prediction model to predict, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, a sequence of states for the robot body within the environment for a sequence of times subsequent the first time, wherein each state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states, and wherein the context-aware state prediction model includes a foundational model that is trained to learn statistics of temporal evolution of the robot body that depend on the specific task being performed by the robot system. However, Pramanick teaches apply a context-aware state prediction model to predict, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, a sequence of states for the robot body within the environment for a sequence of times subsequent the first time, wherein each state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states [(see at least paragraphs 23, 31-35) As in 32 “The Plan generator is configured to ensure a context aware input is generated for a task planner for the identified task. One-to-one mapping may not be possible between a human intended task and the primitive actions supported by the robot, because a high-level task goal may require performing a sequence of sub-tasks. To enable task, a state of a world model (current state of a robot and its environment) is exposed to the robot in terms of grounded fluents, which are logical predicates that may have variables as arguments. A task starts from an initial state of the world model and leads to a different state of the world model, namely a goal state.”], and wherein the context-aware state prediction model includes a foundational model that is trained to learn statistics of temporal evolution of the robot body that depend on the specific task being performed by the robot system [(see at least paragraph 49-55, 23) As in 49 “To execute a task, a robot needs to perform a sequence of basic actions or tasks supported by its motion and manipulation capabilities. A task plan is a sequence of such actions that satisfies the intended task or goal. A task specified in an instruction is considered to change a hypothetical state of the world (initial state) to an expected state (goal state). The initial and goal conditions of a task are encoded as a conjunction of fluents expressed in first-order logic. 
The task templates are grounded using the predicted task dependency labels at step 304 to generate a planning problem in a Planning Domain Definition Language (PDDL) format.” As in 50 “During the grounding of the templates, assumed initial conditions for a task are updated by the post conditions of the actions of a previous sequential task. In the case of conditionals, a plan is generated for each conditional-dependent pair, and in run-time, the correct action sequence is chosen from the actual observed outcome of the conditional task. Therefore, the problem of generating a robotic task plan for the complex instruction is reduced to the ordering of the tasks catering to the execution dependencies, followed by planning individually for the goals of the tasks in order while validating the assumed initial states by the action post conditions.”] Examiner notes that it would be obvious to one of ordinary skill that when initial conditions for a task are updated by the post conditions of the actions of a previous sequential task, the statistics of the temporal evolution of the robot body would be learned in order to update the model accordingly to ensure successful execution of future tasks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Fay to incorporate the teachings of Pramanick of applying, by the robot controller, a context-aware state prediction model to predict, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, a sequence of states for the robot body within the environment for a sequence of times subsequent the first time, wherein each state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states in order to ensure task starts from an initial state of the world model and leads to a different state of the world model, namely a goal state [(Pramanick 32)] and of wherein the context-aware state prediction model includes a foundational model that is trained to learn statistics of temporal evolution of the robot body that depend on the specific task being performed by the robot system in order to update task templates for plan generation and pre-defined response templates for task execution for the robot system. [(Pramanick 33)] Regarding claim 16, In view of the above combination of references, Fay further teaches wherein the processor-executable instructions further cause the processor-based system to, at or after a second time: capture, by the at least one environment sensor, second environment data representing an environment of the robot body at the second time [(see at least paragraph 41) “The sensor(s) 112 may provide sensor data to the processor(s) 102 (perhaps by way of data 107) to allow for interaction of the robotic system 100 with its environment, as well as monitoring of the operation of the robotic system 100. The sensor data may be used in evaluation of various factors for activation, movement, and deactivation of mechanical components 110 and electrical components 116 by control system 118. For example, the sensor(s) 112 may capture data corresponding to the terrain of the environment or location of nearby objects, which may assist with environment recognition and navigation. 
In an example configuration, sensor(s) 112 may include RADAR (e.g., for long-range object detection, distance determination, and/or speed determination), LIDAR (e.g., for short-range object detection, distance determination, and/or speed determination), SONAR (e.g., for underwater object detection, distance determination, and/or speed determination), VICON® (e.g., for motion capture), one or more cameras (e.g., stereoscopic cameras for 3D vision), a global positioning system (GPS) transceiver, and/or other sensors for capturing information of the environment in which the robotic system 100 is operating. The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment.”] Examiner notes that monitoring the environment in real-time is being interpreted as taking environment data at a second time.; capture, by the at least one robot body sensor, second robot body data representing a configuration of the robot body at the second time [(see at least paragraphs 41-42) “The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment. Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100. The data provided by the sensor(s) 112 may enable the control system 118 to determine errors in operation as well as monitor overall operation of components of the robotic system 100.”]; and determine an actual second state of the robot body within the environment for the second time, based on the second environment data and the second robot body data [(see at least paragraphs 41-43) “The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment. Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100. The data provided by the sensor(s) 112 may enable the control system 118 to determine errors in operation as well as monitor overall operation of components of the robotic system 100.”]; and wherein the processor-executable instructions which when executed by the at least one processor cause the processor-based system to validate whether an actual state matches a predicted state, cause the processor-based system to determine whether the actual second state matches the predicted second state from the sequence of states for the robot body. 
[(see at least paragraph 98) “The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.”] Regarding claim 17, Modified Fay has all of the elements of claim 16 as discussed above. Fay teaches control the robot body to transition through the updated sequence of states [(see at least paragraph 98) “For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.”] Fay does not explicitly teach wherein the processor-executable instructions further cause the processor-based system to: at or after the second time: if the actual second state is determined as not matching the predicted second state, apply the context-aware state prediction model to update, based on the actual second state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states for the robot body within the environment for a sequence of times subsequent the second time. However, Pramanick teaches wherein the processor-executable instructions further cause the processor-based system to: at or after the second time: if the actual second state is determined as not matching the predicted second state, apply the context-aware state prediction model to update, based on the actual second state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states for the robot body within the environment for a sequence of times subsequent the second time [(see at least paragraph 52) “In accordance with an embodiment of the present disclosure, the step 306 of generating a CPT with a resolved task sequence comprises modifying the original task sequence w.sub.1: n. The modification of the original task sequence w.sub.1: n ensures that a conditional task is planned before any of its dependent tasks agnostic of a position of the dependent task in the original task sequence w.sub.1: n. In case of multiple conditional tasks in the same instruction, it is assumed that two such conditional tasks indicate the same conditional task, if the two tasks have the same task type. If so, it is ensured that the dependent tasks of the subsequent conditionals are planned after the original conditional task. Repeated tasks (different words meaning the same task may be identified based on the pre-condition template and the post-condition template for the tasks from the Knowledge Base) are masked. If a subsequent conditional task is of a different type, its subsequent tasks that have either dependent positive and dependent negative labels are considered to be actually dependent, i.e to be planned after the conditional perquisite.
For tasks having a sequential dependency label, they are ordered as per their corresponding positions in the instruction.”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Fay to further incorporate the teachings of Pramanick of the processor-executable instructions further cause the processor-based system to: at or after the second time: if the actual second state is determined as not matching the predicted second state, apply the context-aware state prediction model to update, based on the actual second state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states for the robot body within the environment for a sequence of times subsequent the second time in order to update task templates for plan generation and pre-defined response templates for task execution for the robot system. [(Pramanick 33)] Regarding claim 19, In view of the above combination of references, Fay further teaches wherein the processor-executable instructions which cause the processor-based system to control the robot body to transition through the sequence of states cause the processor-based system to, for at least one state transitioned to: capture, by the at least one environment sensor, respective environment data representing an environment of the robot body at a respective time of the state transitioned to [(see at least paragraphs 41-42) “The sensor(s) 112 may monitor the environment in real time, and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other aspects of the environment. Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100.”]; and capture, by the at least one robot body sensor, respective robot body data representing a configuration of the robot body at the respective time of the state transitioned to [(see at least paragraph 42) “Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. 
The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100.”]; and wherein the processor-executable instructions which when executed by the at least one processor cause the processor-based system to, for a subset of states in the sequence of states, the subset of states consisting of fewer than all of the states in the sequence of states, validate whether an actual state matches a predicted state, cause the processor-based system to: determine an actual state of the robot body within the environment for the respective time of the state transitioned to, based on at least the respective environment data and the respective robot body data [(see at least paragraph 42, 98) “Further, the robotic system 100 may include sensor(s) 112 configured to receive information indicative of the state of the robotic system 100, including sensor(s) 112 that may monitor the state of the various components of the robotic system 100. The sensor(s) 112 may measure activity of systems of the robotic system 100 and receive information based on the operation of the various features of the robotic system 100, such the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic system 100. The data provided by the sensor(s) 112 may enable the control system 118 to determine errors in operation as well as monitor overall operation of components of the robotic system 100.”]; and determine whether the actual state matches a predicted state of the robot body for the respective time of the state transitioned to, as predicted during the prediction of the sequence of states. [(see at least paragraphs 98,106) As in 98 “Simulating future robot behavior may involve applying control inputs to a model of the robot, estimating the future state after a short period of time (possibly allowing for small angle approximations and linearization), and repeating that estimation for some duration of time into the future. The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.” As in 106 “Then, the estimated future states may be evaluated against costs and constraints in order to determine a “score” associated with each future footstep. The resulting scores may be compared against each other and/or a threshold score in order to select a satisfactory score. The robot may then be instructed to step toward the footstep location associated with the selected score.”] Examiner notes that the matching is being interpreted as being scored and compared to a selected satisfactory score.
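The score-and-compare reading noted above lends itself to a short sketch: compute an error score between the predicted and measured states, call it a match when the score clears a satisfactory threshold, and run that check only for a subset of states. Everything below is hypothetical and drawn from neither reference.

```python
# Hypothetical score-and-compare matching, following the examiner's note
# that "matching" is read as scoring against a satisfactory threshold.

def score(predicted, actual):
    """Sum-of-squared-error between predicted and measured state vectors;
    lower is better. A stand-in for the 'score' of Fay's paragraphs 98/106."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual))

def state_matches(predicted, actual, threshold=0.05):
    """Treat the actual state as matching the predicted state when the
    score clears the threshold."""
    return score(predicted, actual) <= threshold

def validation_indices(n_states, stride=5):
    """Validate only every stride-th state (a subset consisting of fewer
    than all states), trading accuracy against processing cost."""
    return list(range(stride - 1, n_states, stride))
```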
Regarding claim 20, In view of the above combination of references, Fay further teaches wherein the processor-executable instructions which cause the processor-based system to control the robot body to transition through the sequence of states cause the processor-based system to, for the at least one state transitioned to: if the actual state is determined to match a predicted state for the respective time of the state transitioned to: continue to control the processor-based system to transition through the sequence of states; [(see at least paragraphs 98,74) As in 98 “The robot model(s) 612 may also include “forward” models, which may be used to predict the future state of the robot (e.g., simulating the robot's behavior based on control inputs). Forward models may include similar relationships as described above with respect to the LIP model and the barbell model. Simulating future robot behavior may involve applying control inputs to a model of the robot, estimating the future state after a short period of time (possibly allowing for small angle approximations and linearization), and repeating that estimation for some duration of time into the future. The predicted future state of the robot may serve as a basis for adjusting planned robotic control. For instance, the predicted future state may be “scored,” allowing multiple predicted future states resulting from different control inputs to be quantitatively compared. A control input that produces a satisfactory predicted future state may then be selected and carried out by the robotic device 600.” As in 74 “a control system receives a “template” footstep sequence and may adjust aspects of that template footstep sequence based on the state of the robot. The template footstep sequence may include a combination of force values, timing information, and/or locations for one or more planned future footsteps of a robot. Aspects of the template footstep sequence may be modified by one or more controllers within the control system based on the state of the robot, control commands, and/or environmental conditions. As described herein, the “template” footstep sequence may also be referred to as a footstep “tape” or a “predetermined” footstep sequence.”] Fay teaches control the robot body to transition through the updated sequence of states. [(see at least paragraph 74) “a control system receives a “template” footstep sequence and may adjust aspects of that template footstep sequence based on the state of the robot. The template footstep sequence may include a combination of force values, timing information, and/or locations for one or more planned future footsteps of a robot. Aspects of the template footstep sequence may be modified by one or more controllers within the control system based on the state of the robot, control commands, and/or environmental conditions. 
As described herein, the “template” footstep sequence may also be referred to as a footstep “tape” or a “predetermined” footstep sequence.”] Fay does not explicitly teach if the actual state is determined to not match a predicted state for the respective time of the state transitioned to: apply the context-aware state prediction model to update, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states of the robot body within the environment for an updated sequence of times subsequent the respective time of the state transitioned to, wherein each updated state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states. However, Pramanick teaches if the actual state is determined to not match a predicted state for the respective time of the state transitioned to: apply the context-aware state prediction model to update, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states of the robot body within the environment for an updated sequence of times subsequent the respective time of the state transitioned to, wherein each updated state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states [(see at least paragraphs 23, 31-34, 60-63) As in 23 “For example, in a simple instruction “pick the pen and bring it to me”, the robot has to first perform a picking task, followed by a bringing task. However, the execution of a task may be dependent upon a condition or the outcome of another task. For example, in the instruction “if the coffee is hot, then bring it to me, otherwise put it on the oven”, both the task of bringing the coffee and the task of putting the coffee on the oven is dependent upon the state of the coffee, i.e., whether it is hot. Moreover, the assumption that the tasks are to be performed in their order of appearance in the instruction, may not hold. For example, in the instruction “Bring me a pen if you find one on the table”, the robot has to find a pen first, before attempting to bring it, although the bringing task appears in the instruction earlier. Understanding such dependencies between tasks becomes even more difficult when the dependency spans across multiple sentences.” As in 32 “The Plan generator is configured to ensure a context aware input is generated for a task planner for the identified task. One-to-one mapping may not be possible between a human intended task and the primitive actions supported by the robot, because a high-level task goal may require performing a sequence of sub-tasks. To enable task, a state of a world model (current state of a robot and its environment) is exposed to the robot in terms of grounded fluents, which are logical predicates that may have variables as arguments. A task starts from an initial state of the world model and leads to a different state of the world model, namely a goal state.”] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Fay to further incorporate the teachings of Pramanick of if the actual state is determined to not match a predicted state for the respective time of the state transitioned to: apply the context-aware state prediction model to update, based on the first state of the robot body and the semantic information about the specific task being performed by the robot system at the first time, the sequence of states of the robot body within the environment for an updated sequence of times subsequent the respective time of the state transitioned to, wherein each updated state in the sequence of states is predicted based at least in part on an immediately prior state in the sequence of states in order to accurately perform a sequence of sub-tasks during a complex task instruction. [(Pramanick 32)]
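Pramanick's task-sequence resolution, as quoted from paragraph 52, can be sketched as a single reordering pass that plans each conditional task before its dependents, wherever the dependents appear in the instruction. This simplified sketch omits the masking of repeated tasks and the multi-conditional handling described in that paragraph, and its field names (name, depends_on) are invented for illustration.

```python
def resolve_task_sequence(tasks):
    """Reorder tasks so every conditional task precedes its dependents,
    regardless of their order in the original instruction. Dependents of a
    conditional not present in the list are dropped (a simplification)."""
    ordered = []
    for task in tasks:
        if task.get("depends_on"):
            continue  # dependents are emitted right after their conditional
        ordered.append(task)
        ordered.extend(t for t in tasks if t.get("depends_on") == task["name"])
    return ordered

# Example from Pramanick's paragraph 23: for "Bring me a pen if you find
# one on the table", the find task must be planned before the bring task,
# although the bringing task appears in the instruction earlier.
tasks = [
    {"name": "bring_pen", "depends_on": "find_pen"},
    {"name": "find_pen", "depends_on": None},
]
assert [t["name"] for t in resolve_task_sequence(tasks)] == ["find_pen", "bring_pen"]
```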
The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested of the Applicant, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP §2123.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED YOUSEF ABUELHAWA whose telephone number is (571)272-3219. The examiner can normally be reached Monday-Friday 8:30-5:00 with flex. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wade Miles, can be reached at 571-270-7777.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMED YOUSEF ABUELHAWA/
Examiner, Art Unit 3656

/WADE MILES/
Supervisory Patent Examiner, Art Unit 3656

Prosecution Timeline

Mar 07, 2024: Application Filed
May 31, 2024: Non-Final Rejection — §103
Sep 05, 2024: Response Filed
Sep 20, 2024: Final Rejection — §103
Nov 25, 2024: Response after Non-Final Action
Jan 25, 2025: Request for Continued Examination
Jan 27, 2025: Response after Non-Final Action
May 28, 2025: Non-Final Rejection — §103
Dec 01, 2025: Response Filed
Feb 26, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598706
Method of inserting an electronic components in through-hole technology, THT, into a printed circuit board, PCB, by an industrial robot
2y 5m to grant • Granted Apr 07, 2026
Patent 12558786
RESTRICTING MOVEMENT OF A MOBILE ROBOT
2y 5m to grant • Granted Feb 24, 2026
Patent 12552031
WORK MANAGEMENT SYSTEM
2y 5m to grant • Granted Feb 17, 2026
Patent 12533813
ROBOT, SYSTEM COMPRISING ROBOT AND USER DEVICE AND CONTROLLING METHOD THEREOF
2y 5m to grant • Granted Jan 27, 2026
Patent 12472641
GENERATING REFERENCES FOR ROBOT-CARRIED OBJECTS AND RELATED TECHNOLOGY
2y 5m to grant • Granted Nov 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+20.1%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 67 resolved cases by this examiner. Grant probability derived from career allow rate.
