Prosecution Insights
Last updated: April 19, 2026
Application No. 18/638,551

TECHNIQUES FOR AUTOMATED PROGRAMMING OF ROBOT TASKS USING LANGUAGE MODELS

Status: Final Rejection §103
Filed: Apr 17, 2024
Examiner: EL SAYAH, MOHAMAD O
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Autodesk, Inc.
OA Round: 2 (Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 82%

Examiner Intelligence

Grants 76% — above average

Career Allow Rate: 76% (166 granted / 218 resolved; +24.1% vs TC avg)
Interview Lift: +5.4% (moderate, ~+5% lift; based on resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 41 applications currently pending
Career History: 259 total applications across all art units

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 218 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

The amendment filed on 11/28/2025 has been entered; claims 1-20 remain pending in the application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over You ("Robot-Enabled Construction Assembly with Automated Sequence Planning based on ChatGPT: RoboGPT," from IDS) in view of Xia (US20250018562).

Regarding claim 1, You teaches a computer-implemented method for generating program code to control a robot, the method comprising: receiving user input specifying a task to be performed by the robot (Fig. 1, Section III, Part A, disclosing receiving an instruction of a task from a user); processing the user input via a first machine learning model to generate a plurality of subtasks for performing the task (Fig. 1, Section III, Part A, disclosing ChatGPT using natural-language machine learning to generate sequential solution commands to perform the task); and for each subtask included in the plurality of subtasks, processing the subtask via a second machine learning model to generate program code for controlling the robot to perform the subtask (Fig. 1, Section III, Part D, disclosing the decoder of the ChatGPT LLM generating the program code for controlling the robot to perform the subtask). While You does not disclose a second machine learning model, it would have been obvious to have made the machine learning of You separable, which enables each model to run on a separate computer, thus reducing the load and the resources used on a single system.

You does not teach replacing one or more pieces of text included in the plurality of subtasks with one or more non-reversible aliases to generate a second plurality of subtasks. Xia teaches replacing one or more pieces of text included in the plurality of subtasks with one or more non-reversible aliases to generate a second plurality of subtasks ([0005] disclosing the inner monologue to correct steps: if a key does not fit, the robot has to pick another, versus an initial plan of unlocking the door with a fitting key; [0045]-[0060] disclosing replanning the next step of the robot based on the success or failure of a step, i.e., replacing one or more pieces of text included in the plurality of subtasks not yet accomplished by the robot to avoid a propagating error and to fix mistakes that have been detected; see also [0079]-[0080] disclosing that the robot should repeat the task that was unsuccessful and then take the soda to the person). The combination with the teaching of Xia, to replace pieces of text included in the plurality of subtasks to generate a second set, is obvious and yields predictable results in order to correct the actions of the robot when there is an error in the subtasks or when the sequence is unsuccessful, thus preventing a propagating error and failure of the full task. Further, combining the generated second subtasks with the teaching of You, to generate the code based on the LLM as taught in Section III, Part D, and Figure 1, is an obvious combination yielding predictable results in order to generate the code that the robot can perform.

Regarding claim 2, You as modified by Xia teaches the computer-implemented method of claim 1, further comprising: performing one or more operations to simulate the robot performing each subtask included in the plurality of subtasks based on the program code for controlling the robot to perform the subtask; and in response to one or more errors when simulating the robot to perform a first subtask included in the plurality of subtasks, processing the first subtask and the one or more errors via the second machine learning model to generate additional program code for controlling the robot to perform the first subtask. Xia teaches performing one or more operations to simulate the robot performing each subtask included in the second plurality of subtasks based on the program code for controlling the robot to perform the subtask ([0004] disclosing the sequential actions and taking corrective actions; [0026]-[0043] disclosing that the subtasks created by a natural-language program are simulated to determine success or failure; see [0040] disclosing the determination of whether the low-level skill has succeeded, wherein the skill is the action performed by the robot, which is interpreted as being based on the program code for controlling the robot to perform the subtask; the processes of [0026]-[0060] are iterative, wherein the robot keeps tracking progress while completing the second set of subtasks and corrects based on identified errors); and in response to one or more errors when simulating the robot to perform a first subtask included in the second plurality of subtasks, processing the first subtask and the one or more errors via the second machine learning model to generate additional program code for controlling the robot to perform the first subtask ([0004] disclosing the LLM with an inner monologue for the robot, similar to a human, that can determine corrective action in case of an error; [0037]-[0043] disclosing the inner dialogue as textual feedback, which applies also to simulation; [0050] disclosing leveraging the inner monologue to identify a different policy for the robot to recover from mistakes; [0074]-[0080] disclosing generating a different grasping task based on the failure of the first grasp, i.e., generating additional program code for controlling the robot to perform the first task of grasping).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of You to incorporate the teaching of Xia of, in response to one or more errors when simulating the robot to perform a first subtask included in the plurality of subtasks, processing the first subtask and the one or more errors via the second machine learning model to generate additional program code for controlling the robot to perform the first subtask, in order to replan and recover from policy mistakes as taught by Xia [0050].

Regarding claim 3, You as modified by Xia teaches the computer-implemented method of claim 1, wherein the user input is a natural language input, and each of the first machine learning model and the second machine learning model is a language model (You, Section III, Part A, at least disclosing ChatGPT utilizing an LLM).

Regarding claim 4, You as modified by Xia teaches the computer-implemented method of claim 1, further comprising, for each subtask included in the second plurality of subtasks, selecting one or more examples of program code associated with the subtask from a plurality of examples of program code, wherein processing the subtask via the second machine learning model comprises inputting the subtask and the one or more examples of program code associated with the subtask into the second machine learning model (Xia teaches generating the second set of tasks; You, Section III, Part D, at least discloses that the subtasks are matched with program code to run on the robot from a plurality of subtasks and a plurality of program codes). The combination of the second subtasks with the code-generation method of You is obvious in order to control the robot based on a language the robot understands, thus enabling control of various robots, and of different actions of the same robot, instantaneously by converting the language into robot code.
Regarding claim 5, You as modified by Xia teaches the computer-implemented method of claim 1, wherein processing the user input via the first machine learning model comprises inputting, into the first machine learning model, at least one of a role of the first machine learning model to generate the plurality of subtasks, one or more rules for output of the first machine learning model, one or more parameters associated with the robot, or one or more parameters associated with the task (You, Section III, Part A, disclosing at least a user inputting the specific descriptions for the task, i.e., parameters associated with the task).

Regarding claim 6, You as modified by Xia teaches the computer-implemented method of claim 1, wherein processing the subtask via the second machine learning model comprises inputting, into the second machine learning model, at least one of a role of the second machine learning model to generate program code, one or more rules for output of the second machine learning model, one or more parameters associated with the robot, one or more parameters associated with the task, one or more definitions associated with a programming language associated with the program code, or one or more examples of program code (You, Section III, Part D, disclosing at least one or more examples of program code input into the machine learning model to determine a match with the subtask).

Regarding claim 7, You as modified by Xia further teaches the computer-implemented method of claim 1, wherein processing the subtask via the second machine learning model further comprises inputting, into the second machine learning model, at least one of one or more previous outputs of the second machine learning model or one or more errors from one or more simulations of the robot performing the subtask. Xia teaches this limitation ([0004] disclosing the LLM with an inner monologue for the robot, similar to a human, that can determine corrective action in case of an error; [0037]-[0043] disclosing the inner dialogue as textual feedback, which applies also to simulation; [0050] disclosing the LLM leveraging the inner monologue to identify a different policy for the robot to recover from mistakes, i.e., the errors are input into the machine learning model). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of You to incorporate this teaching of Xia in order to replan and recover from policy mistakes as taught by Xia [0050].

Regarding claim 8, You as modified by Xia teaches the computer-implemented method of claim 1, further comprising performing one or more operations to verify that the program code for controlling the robot to perform the subtask is program code (You, Section III, Part B, and Section IV, disclosing testing the program subtasks, thus determining whether the program code is successful based on the results).

Regarding claim 9, You as modified by Xia teaches the computer-implemented method of claim 1, wherein the first machine learning model is the second machine learning model (You, Section III, Part A, disclosing that the machine learning is a natural-language model and thus is one model).

Regarding claim 10, You as modified by Xia teaches the computer-implemented method of claim 1, further comprising causing the robot to move based on the program code for each subtask (You, Sections III A-D and IV, disclosing control of the robot based on the code to perform the tasks).

Claim 12 is rejected for similar reasons as claim 2; see the above rejection.

Regarding claim 13, You as modified by Xia teaches the one or more non-transitory computer-readable media of claim 12, wherein the one or more errors include at least one of a run-time exception, a syntax error, a format error, a collision, or a failure to achieve a final state associated with the task. Xia further teaches that the errors include a failure to achieve a final state associated with the task ([0004] disclosing the LLM with an inner monologue for the robot, similar to a human, that can determine corrective action in case of an error; [0037]-[0043] disclosing the inner dialogue as textual feedback, which applies also to simulation; [0050] disclosing the LLM leveraging the inner monologue to identify a different policy for the robot to recover from mistakes, i.e., the errors are input into the machine learning model). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of You to incorporate the teaching of Xia of an error in reaching a final state in order to replan and recover from policy mistakes as taught by Xia [0050], thus enabling the robot to achieve a goal state.
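Editor's illustration of the simulate-and-regenerate behavior the rejection maps onto claims 2, 7, and 13: simulate each subtask's code, and on error feed the subtask, the previous output, and the error back into the second model. This is a sketch under assumptions; `code_model`, `simulate`, and the retry cap are hypothetical stand-ins, not components disclosed by the application or the cited references.

```python
from typing import Callable, Optional

def code_with_repair(
    subtask: str,
    code_model: Callable[[str], str],          # second ML model: prompt -> code
    simulate: Callable[[str], Optional[str]],  # returns an error string, or None on success
    max_attempts: int = 3,                     # retry cap is an assumption
) -> str:
    # Generate code, simulate it, and on error re-prompt the second model
    # with the subtask plus the previous output and the error message.
    prompt = subtask
    code = code_model(prompt)
    for _ in range(max_attempts - 1):
        error = simulate(code)
        if error is None:
            return code  # simulation succeeded; keep this program code
        # Feed previous output and error back into the model (claim 7).
        prompt = f"{subtask}\nprevious code: {code}\nerror: {error}"
        code = code_model(prompt)
    return code  # best effort after exhausting attempts

# Toy stand-ins: the "model" switches strategy once an error is reported.
def toy_model(prompt: str) -> str:
    return "move_safe()" if "error" in prompt else "move_fast()"

def toy_sim(code: str) -> Optional[str]:
    return "collision" if code == "move_fast()" else None

result = code_with_repair("grasp the can", toy_model, toy_sim)
```

The error categories recited in claim 13 (run-time exception, syntax error, collision, failure to reach the final state) would all surface through the `simulate` return value in this sketch.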
Regarding claim 18, You as modified by Xia teaches the one or more non-transitory computer-readable media of claim 11, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to perform the step of selecting the one or more examples of program code based on a behavior label associated with the subtask (You, Section III, Part D, disclosing that the names and strings of the subtask steps, i.e., "labels," are converted into actions).

Claims 11, 14, 15, 16, 17, and 19 are rejected for similar reasons as claims 1, 3, 4, 5, 6, and 10, respectively; see the above rejections. Claim 20 is rejected for similar reasons as claim 1; see the above rejection.

Response to Arguments

Applicant's arguments filed on 11/28/2025 have been fully considered, but they are not persuasive. With respect to applicant's argument that the prior art references, including You, do not teach the amended subject matter, the examiner respectfully disagrees. While You does not explicitly teach the replacing of the one or more pieces of text, Xia is cited to teach the amended subject matter of claim 1. Xia teaches replanning portions of the tasks after the failure of a previous step, thus replacing at least one piece of text in the program of the robot to avoid future failure of the goal task.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The prior art cited in the PTO-892 and not mentioned above discloses related devices and methods:
US11931894, disclosing template reusable primitives that are sequenced by an LLM to determine a sequence to perform a task.
US20240131712, disclosing an LLM for subtasks and sending a query to the user to update the task.
US20240288870, disclosing natural language to generate a sequence of actions for a task.
US20240351218, disclosing an LLM for a specific robot task, simulated for safety and errors.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMAD O EL SAYAH, whose telephone number is (571) 270-7734. The examiner can normally be reached M-Th, 6:30-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramon Mercado, can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMAD O EL SAYAH/
Examiner, Art Unit 3658B

Prosecution Timeline

Apr 17, 2024 — Application Filed
Aug 27, 2025 — Non-Final Rejection (§103)
Nov 28, 2025 — Response Filed
Feb 10, 2026 — Final Rejection (§103)
Apr 08, 2026 — Applicant Interview (Telephonic)
Apr 08, 2026 — Examiner Interview Summary
Apr 10, 2026 — Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600372 — OPTIMIZATION OF VEHICLE PERFORMANCE TO SUPPORT VEHICLE CONTROL (granted Apr 14, 2026; 2y 5m to grant)
Patent 12576838 — PROCESS AND APPARATUS FOR CONTROLLING THE FORWARD MOVEMENT OF A MOTOR VEHICLE AS A FUNCTION OF ROUTE PARAMETERS IN A DRIVING MODE WITH A SINGLE PEDAL (granted Mar 17, 2026; 2y 5m to grant)
Patent 12565239 — AUTONOMOUS DRIVING PREDICTIVE DEFENSIVE DRIVING SYSTEM THROUGH INTERACTION BASED ON FORWARD VEHICLE DRIVING AND SITUATION JUDGEMENT INFORMATION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12554260 — Iterative Feedback Motion Planning (granted Feb 17, 2026; 2y 5m to grant)
Patent 12552364 — VEHICLE TURNING CONTROL DEVICE (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76% (82% with interview, +5.4%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 218 resolved cases by this examiner. Grant probability derived from career allow rate.
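The headline projections are simple arithmetic on the career counts quoted above; a sketch of that arithmetic (the rounding conventions are an assumption):

```python
# Career counts from the examiner stats above.
granted, resolved = 166, 218

allow_rate = 100 * granted / resolved  # career allow rate, in percent (~76.1%)
interview_lift = 5.4                   # percentage points, per the stats above
with_interview = allow_rate + interview_lift

print(round(allow_rate))       # → 76, the "Grant Probability" figure
print(round(with_interview))   # → 82, the "With Interview" figure
```

The "+24.1% vs TC avg" figure likewise implies a Tech Center average allow rate of roughly 52%.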
