Prosecution Insights
Last updated: April 18, 2026
Application No. 18/516,324

ROBOT TEACHING METHOD AND APPARATUS FOR HIGH-LEVEL WORK

Final Rejection §103
Filed: Nov 21, 2023
Examiner: VISCARRA, RICARDO I
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 2 (Final)

Grant Probability: 62% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 9m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 62% (21 granted / 34 resolved; +9.8% vs TC avg)
Interview Lift: +27.9% (strong; resolved cases with vs. without interview)
Avg Prosecution: 3y 9m (typical timeline)
Currently Pending: 23
Total Applications: 57 (across all art units)

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 16.4% (-23.6% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 34 resolved cases
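The figures above are simple ratios, so they can be sanity-checked directly. The sketch below reproduces the headline career allow rate and backs out the implied Tech Center averages as rate minus the reported delta; that derivation is an assumption, since the dashboard shows the TC average only as a chart line rather than a number.

```python
# Sanity-check sketch of the examiner statistics shown above.
# Inputs are the counts and percentages from the dashboard; the TC
# averages are back-computed (rate - delta), which is an assumption.

granted, resolved = 21, 34
career_allow_rate = granted / resolved  # displayed as 62%

# Per-statute overcome rates and their reported deltas vs. TC 3600
rates = {"101": 0.130, "103": 0.619, "102": 0.164, "112": 0.062}
deltas = {"101": -0.270, "103": 0.219, "102": -0.236, "112": -0.338}
tc_avg = {s: round(rates[s] - deltas[s], 3) for s in rates}

print(f"Career allow rate: {career_allow_rate:.1%}")
for s in rates:
    print(f"§{s}: {rates[s]:.1%} ({deltas[s]:+.1%} vs implied TC avg {tc_avg[s]:.1%})")
```

Running this shows the career allow rate rounding to the displayed 62%, and that every statute delta implies the same Tech Center baseline, consistent with the single "Black line" average the chart legend described.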

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement(s) (IDS) submitted on 09/12/2025 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS(s) has/have been considered by the examiner.

Response to Arguments

Applicant’s arguments with respect to claim(s) 7 and 19 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 7-8, 10-12, and 19-23 is/are rejected under 35 U.S.C.
103 as being unpatentable over Han (US 20200230817 A1) in view of Gaydarov (US 20210060771 A1), and further in view of Kolluri et al. (US 20210362330 A1, hereinafter Kolluri). Regarding claim 7, Han teaches: A robot teaching method comprising: displaying unit tasks operable by a robot on a display together with descriptive information corresponding to each of the unit tasks (at least as in paragraph 0115, “Referring to FIG. 8, the task builder 500 may divide the screen of the display 800 into two screens, and one screen (hereinafter referred to as a task screen 510) may be used to display a task to be generated, and the other one screen (hereinafter referred to as a skill screen 520) may be used to display the user skill stored in the user DB 700”, and further wherein the screen displays various user skill descriptions “such as ‘Pick palette, gripper,’ ‘Door open, CNC,’ ‘Place workpiece, CNC,’ ‘Door close, CNC,’ ‘Pick workpiece, CNC,’ and ‘Place palette, gripper’”); generating a task to be performed using the robot by combining unit tasks selected by a user among the displayed unit tasks (at least as in paragraph 0115, “The user may generate the task by bringing the skill from the skill screen and disposing and connecting it on the task screen in a time series manner. 
As an example, the user may bring the user skills such as ‘Pick palette, gripper,’ ‘Door open, CNC,’ ‘Place workpiece, CNC,’ ‘Door close, CNC,’ ‘Pick workpiece, CNC,’ and ‘Place palette, gripper’ from the skill screen to dispose the task screen in a progress order to complete the task”), the generating of the task comprising: identifying (at least as in paragraph 0117, “the task builder 500 may receive the user skill and the task template generated and recommended by the work cell engine, and generate the task by using one task template selected by the user among the received task templates, or adding, deleting, or switching the user skill based on one task template selected by the user among the received task templates”; at least as in paragraph 0120, recommending a plurality of task templates); and receiving user teaching information (at least as in paragraph 0115, “At this time, the coordinates information or the posture information necessary for the user skill to operate may be previously set by the work cell engine as described above, or may be set by the user by using the direct teaching or the indirect teaching in the current operation” at least as in paragraph 0121-0122, “Hereinafter, modification may be made to constitute the desired task based on the task template shown in the task screen… For example, when the position of the target workpiece of the Pick skill is changed, the parameter of the work cell item may be changed by using the work cell manager 300”; at least as in paragraph 0072, “the user may set the parameter relevant to the work cell item selected according to the robot work environment (operation S320)”; at least as in paragraph 0075, “FIGS. 
4A to 4H are diagrams showing the work cell manager 300 for displaying a screen on the display 800 so that the user according to an exemplary embodiment of the present disclosure selects the work cell item, and inputs the parameter items relevant to the selected work cell item”; at least as in paragraph 0078, “After selecting the work cell item, the work cell manager 300 may display the relevant parameter items stored in the work cell DB of the master DB 600 on the screen of the display 800 together with the selected work cell item so that the user may input and set the information on the selected work cell item (operation S320)”; at least as in paragraph 0057, “The work cell item and the relevant parameter item may be in plural, and at least any one among the category of the work cell item, the type of the work cell item, the model of the work cell item, name, weight, center of gravity, TCP, volume, reference point, turning point, approach point, way point, and operation point of the work cell item, and the posture information, speed, acceleration, connection type, connection port, and wait time of respective points may be included as the parameter item”; see also paragraphs 0075-0092 and Figs. 
4A-4H); and deriving a final teaching result of the robot so that the robot performs the generated task (at least as in paragraph 0093, “when the work cell item to be used by the work cell manager 300 is selected and the parameter setting for the selected work cell item is completed, the work cell engine 400 may search the master skill relevant to the selected work cell item from the master DB 600, and generate the user skill by applying at least one parameter among the parameters of the selected work cell item to the searched master skill”; at least as in paragraph 0099, “at least one parameter among the parameters of the selected work cell item or combination of the work cell items stored in the work cell DB of the user DB 700 may be applied to the searched skill (operation S530)”; at least as in paragraph 0128, “When the generation of the task is completed, the operation of the task may be verified through simulation, and if it is verified, it may be executed by the robot”).

Han does not explicitly teach “an omitted unit task … in response to the identifying of the omitted unit task, displaying a type of the omitted unit task and an order of the omitted unit task among the unit tasks selected by the user; for each of the provided unit tasks… by correcting the received user teaching information according to a preset standard.”

However, Gaydarov, in the same field of endeavor of robot systems generating a schedule or plan for a task, specifically teaches: an omitted unit task (at least as in paragraph 0065, “during transition generation the system searches the graph for gaps between sequenced actions that lack transitions.
Thus, the system can identify the portion 400a as a portion of the graph having two sequenced actions but lacking a transition between them”)… in response to the identifying of the omitted unit task, displaying a type of the omitted unit task and an order of the omitted unit task among the unit tasks selected by the user (at least as in paragraph 0066, “the system generated four alternative transitions for transitioning between the action node 420 and the action node 430. Each alternative transition is represented by its own action node, e.g., the transition nodes 450, 460, 470, and 480”; at least as in paragraph 0085, “the system can present a user interface that allows a user to manually specify a next transformer to be applied in the graph or some other manipulation of the graph. As part of this user interface, the system can also present a graphical simulation of the robots executing a particular candidate schedule. For example, if a particular transition between actions seems too awkward or otherwise not ideal, a human can manually select a different transition”; at least as in paragraph 0045, “the planner 120 can provide a candidate process definition graph 123 to a user interface device 140. The user interface device 140 can then present a user interface that allows a user to input a user transformer selection 125, which directs the planner 120 to perform the next iteration using a transformer selection 127 specified by the user”)… However, Kolluri also teaches skill based robotic programming and learning. Specifically, Kolluri is directed at a skill template distribution system distributing skill templates, which allow for a robot to be programmed to perform a robotic task using a customized control policy learned using demonstration data. 
Kolluri specifically teaches “for each of the unit tasks included in the generated task (at least as in paragraph 0039, “local demonstration data is data gathered while a user is controlling a robot to demonstrate how the robot can perform a particular task by causing the robot to perform physical movements”; at least as in paragraph 0062, “a user merely needs to guide the robot in performing the subtasks that are indicated by the skill template as requiring local demonstration data. The robot will automatically capture the local demonstration data, which the training system can use to refine a base control policy associated with the connector insertion subtask”)… by correcting the received user teaching information according to a preset standard (at least as in paragraph 0041, “The training system 120 is a computer system that can use machine learning techniques to generate a customized control policy 125 from the local demonstration data 115”; at least as in paragraph 0049, “The training system 120 can thus refine a base control policy using the local demonstration data 115 in order to generate the customized control policy 125 for the particular robot that was used to generate the demonstration data. The customized control policy 125 adjusts the base control policy to account for characteristics of the particular robot as well as local variables for the task”; at least as in paragraph 0062, “When training of the customized control policy is complete, the robot merely needs to download the final trained customized control policy in order to be equipped to perform the subtask”; see also Fig. 5; at least as in paragraph 0077, “During training, a training engine 240 generates parameter corrections 255 by using a representation of a locally demonstrated action 275 as well as a proposed command 245 generated by the tuned control policy 210.
The training engine can then use the parameter corrections 255 to refine the tuned control policy 210 so that the command generated by the tuned control policy 210 in future iterations will more closely match the locally demonstrated action 275”).”

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Han to include Gaydarov’s teaching of a robot system identifying gaps for transitions and Kolluri’s teaching of obtaining local demonstration data for each required subtask and tuning the base control policy using the demonstration data and corrective parameters, since Gaydarov teaches wherein the robot system simplifies the difficulty of robotics planning, decreases time spent on programming, and improves the success rate of programming, and Kolluri teaches wherein the local demonstration data and corrective parameters vastly improve robotic learning and teaching methods by significantly reducing the amount of required training and simplifying robot teaching.

Regarding claim 8, in view of the above combination of Han, Gaydarov, and Kolluri, Han further teaches: The robot teaching method of claim 7, wherein the generating of the task comprises arranging the unit tasks selected by the user according to a selection order and generating the task to be performed using the robot (at least as in paragraph 0141, “Then, the user may use it as it is by selecting one of the provided templates, or select one of the provided task templates, and generate a new task by further adding the user skill to the selected task template, or deleting, switching, or changing the user skill included in the selected task template”; at least as in paragraph 0113, “The task builder 500 may generate a task to be performed by the robot based on the above-described user skill.
That is, the task builder 500 may arrange one or more user skills in a time-series manner through an interface with the user”; at least as in paragraph 0115 & Fig. 8, wherein the task builder displays various user skills and a short description of each user skill, and further wherein “The user may generate the task by bringing the skill from the skill screen and disposing and connecting it on the task screen in a time series manner”; at least as in paragraph 0117, “the task builder 500 may receive the user skill and the task template generated and recommended by the work cell engine, and generate the task by using one task template selected by the user among the received task templates, or adding, deleting, or switching the user skill based on one task template selected by the user among the received task templates”). Regarding claim 10, in view of the above combination of Han, Gaydarov, and Kolluri, Han further teaches: The robot teaching method of claim 7, wherein the receiving of the user teaching information comprises receiving the user teaching information through a handheld terminal that allows a user to intuitively control movement of the robot (see 0072, 0075, at least as in paragraph 0056, “The display 800 may be an LCD monitor connected with a computer, a teaching pendant screen, or the screen of a mobile device such as a smartphone, and the input apparatus 900 may be a keyboard, a mouse, a touch screen, etc. Further, like the screen of the touch screen, the display 800 and the input apparatus 900 may be configured together in a single apparatus”; at least as in paragraph 0139, “This may be informed by various methods including an indirect teaching method using a teaching pendant or a direct teaching method of informing coordinates by directly moving the robot, etc.”). 
Regarding claim 11, in view of the above combination of Han, Gaydarov, and Kolluri, Han further teaches: The robot teaching method of claim 7, wherein the received user teaching information comprises at least one information of speed, acceleration, a contact force, or a trajectory at which the robot moves (at least as in paragraph 0072, “the user may set the parameter relevant to the work cell item selected according to the robot work environment (operation S320)”; at least as in paragraph 0075, “FIGS. 4A to 4H are diagrams showing the work cell manager 300 for displaying a screen on the display 800 so that the user according to an exemplary embodiment of the present disclosure selects the work cell item, and inputs the parameter items relevant to the selected work cell item”; at least as in paragraph 0078, “After selecting the work cell item, the work cell manager 300 may display the relevant parameter items stored in the work cell DB of the master DB 600 on the screen of the display 800 together with the selected work cell item so that the user may input and set the information on the selected work cell item (operation S320)”; at least as in paragraph 0057, “The work cell item and the relevant parameter item may be in plural, and at least any one among the category of the work cell item, the type of the work cell item, the model of the work cell item, name, weight, center of gravity, TCP, volume, reference point, turning point, approach point, way point, and operation point of the work cell item, and the posture information, speed, acceleration, connection type, connection port, and wait time of respective points may be included as the parameter item”; see also paragraphs 0075-0092 and Figs. 4A-4H). 
Regarding claim 12, in view of the above combination of Han, Gaydarov, and Kolluri, Han further teaches: The robot teaching method of claim 7, further comprising: executing the derived final teaching result of the robot, wherein the executing of the derived final teaching result comprises executing the final teaching result of the robot through a virtual simulation or a real robot and displaying an execution result on the display (at least as in paragraph 0128, “When the generation of the task is completed, the operation of the task may be verified through simulation, and if it is verified, it may be executed by the robot”). Regarding claim 19, Han teaches: A robot teaching apparatus (at least as in paragraph 0054 & Fig. 1, “a robot programming apparatus”) comprising: at least one processor (at least as in paragraph 0147, “the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontroller, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof”); and a memory configured to load or store a program executed by the processor (at least as in paragraph 0146, “The software codes may be stored in memory units and executed by processors. The memory unit may also be implemented within the processor and implemented outside the processor, and in this case, the memory unit may be communicatively coupled to the processor by various means as known”), wherein the program comprises instructions that cause the processor to: display unit tasks operable by a robot on a display together with descriptive information corresponding to each of the unit tasks (at least as in paragraph 0115, “Referring to FIG. 
8, the task builder 500 may divide the screen of the display 800 into two screens, and one screen (hereinafter referred to as a task screen 510) may be used to display a task to be generated, and the other one screen (hereinafter referred to as a skill screen 520) may be used to display the user skill stored in the user DB 700”, and further wherein the screen displays various user skill descriptions “such as ‘Pick palette, gripper,’ ‘Door open, CNC,’ ‘Place workpiece, CNC,’ ‘Door close, CNC,’ ‘Pick workpiece, CNC,’ and ‘Place palette, gripper’”); generate a task to be performed using the robot by combining unit tasks selected by a user among the displayed unit tasks (at least as in paragraph 0115, “The user may generate the task by bringing the skill from the skill screen and disposing and connecting it on the task screen in a time series manner. As an example, the user may bring the user skills such as ‘Pick palette, gripper,’ ‘Door open, CNC,’ ‘Place workpiece, CNC,’ ‘Door close, CNC,’ ‘Pick workpiece, CNC,’ and ‘Place palette, gripper’ from the skill screen to dispose the task screen in a progress order to complete the task”), the generating of the task comprising: identifying (at least as in paragraph 0117, “the task builder 500 may receive the user skill and the task template generated and recommended by the work cell engine, and generate the task by using one task template selected by the user among the received task templates, or adding, deleting, or switching the user skill based on one task template selected by the user among the received task templates”; at least as in paragraph 0120, recommending a plurality of task templates); and receive user teaching information (at least as in paragraph 0115, “At this time, the coordinates information or the posture information necessary for the user skill to operate may be previously set by the work cell engine as described above, or may be set by the user by using the direct teaching or the indirect teaching in the 
current operation” at least as in paragraph 0121-0122, “Hereinafter, modification may be made to constitute the desired task based on the task template shown in the task screen… For example, when the position of the target workpiece of the Pick skill is changed, the parameter of the work cell item may be changed by using the work cell manager 300”; at least as in paragraph 0072, “the user may set the parameter relevant to the work cell item selected according to the robot work environment (operation S320)”; at least as in paragraph 0075, “FIGS. 4A to 4H are diagrams showing the work cell manager 300 for displaying a screen on the display 800 so that the user according to an exemplary embodiment of the present disclosure selects the work cell item, and inputs the parameter items relevant to the selected work cell item”; at least as in paragraph 0078, “After selecting the work cell item, the work cell manager 300 may display the relevant parameter items stored in the work cell DB of the master DB 600 on the screen of the display 800 together with the selected work cell item so that the user may input and set the information on the selected work cell item (operation S320)”; at least as in paragraph 0057, “The work cell item and the relevant parameter item may be in plural, and at least any one among the category of the work cell item, the type of the work cell item, the model of the work cell item, name, weight, center of gravity, TCP, volume, reference point, turning point, approach point, way point, and operation point of the work cell item, and the posture information, speed, acceleration, connection type, connection port, and wait time of respective points may be included as the parameter item”; see also paragraphs 0075-0092 and Figs. 
4A-4H); and derive a final teaching result of the robot so that the robot performs the generated task (at least as in paragraph 0093, “when the work cell item to be used by the work cell manager 300 is selected and the parameter setting for the selected work cell item is completed, the work cell engine 400 may search the master skill relevant to the selected work cell item from the master DB 600, and generate the user skill by applying at least one parameter among the parameters of the selected work cell item to the searched master skill”; at least as in paragraph 0099, “at least one parameter among the parameters of the selected work cell item or combination of the work cell items stored in the work cell DB of the user DB 700 may be applied to the searched skill (operation S530)”; at least as in paragraph 0128, “When the generation of the task is completed, the operation of the task may be verified through simulation, and if it is verified, it may be executed by the robot”).

Han does not explicitly teach “an omitted unit task … in response to the identifying of the omitted unit task, displaying a type of the omitted unit task and an order of the omitted unit task among the unit tasks selected by the user; for each of the provided unit tasks… by correcting the received user teaching information according to a preset standard.”

However, Gaydarov, in the same field of endeavor of robot systems generating a schedule or plan for a task, specifically teaches: an omitted unit task (at least as in paragraph 0065, “during transition generation the system searches the graph for gaps between sequenced actions that lack transitions.
Thus, the system can identify the portion 400a as a portion of the graph having two sequenced actions but lacking a transition between them”)… in response to the identifying of the omitted unit task, displaying a type of the omitted unit task and an order of the omitted unit task among the unit tasks selected by the user (at least as in paragraph 0066, “the system generated four alternative transitions for transitioning between the action node 420 and the action node 430. Each alternative transition is represented by its own action node, e.g., the transition nodes 450, 460, 470, and 480”; at least as in paragraph 0085, “the system can present a user interface that allows a user to manually specify a next transformer to be applied in the graph or some other manipulation of the graph. As part of this user interface, the system can also present a graphical simulation of the robots executing a particular candidate schedule. For example, if a particular transition between actions seems too awkward or otherwise not ideal, a human can manually select a different transition”; at least as in paragraph 0045, “the planner 120 can provide a candidate process definition graph 123 to a user interface device 140. The user interface device 140 can then present a user interface that allows a user to input a user transformer selection 125, which directs the planner 120 to perform the next iteration using a transformer selection 127 specified by the user”)… However, Kolluri also teaches skill based robotic programming and learning. Specifically, Kolluri is directed at a skill template distribution system distributing skill templates, which allow for a robot to be programmed to perform a robotic task using a customized control policy learned using demonstration data. 
Kolluri specifically teaches “for each of the unit tasks included in the generated task (at least as in paragraph 0039, “local demonstration data is data gathered while a user is controlling a robot to demonstrate how the robot can perform a particular task by causing the robot to perform physical movements”; at least as in paragraph 0062, “a user merely needs to guide the robot in performing the subtasks that are indicated by the skill template as requiring local demonstration data. The robot will automatically capture the local demonstration data, which the training system can use to refine a base control policy associated with the connector insertion subtask”)… by correcting the received user teaching information according to a preset standard (at least as in paragraph 0041, “The training system 120 is a computer system that can use machine learning techniques to generate a customized control policy 125 from the local demonstration data 115”; at least as in paragraph 0049, “The training system 120 can thus refine a base control policy using the local demonstration data 115 in order to generate the customized control policy 125 for the particular robot that was used to generate the demonstration data. The customized control policy 125 adjusts the base control policy to account for characteristics of the particular robot as well as local variables for the task”; at least as in paragraph 0062, “When training of the customized control policy is complete, the robot merely needs to download the final trained customized control policy in order to be equipped to perform the subtask”; see also Fig. 5; at least as in paragraph 0077, “During training, a training engine 240 generates parameter corrections 255 by using a representation of a locally demonstrated action 275 as well as a proposed command 245 generated by the tuned control policy 210.
The training engine can then use the parameter corrections 255 to refine the tuned control policy 210 so that the command generated by the tuned control policy 210 in future iterations will more closely match the locally demonstrated action 275”).”

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Han to include Gaydarov’s teaching of a robot system identifying gaps for transitions and Kolluri’s teaching of obtaining local demonstration data for each required subtask and tuning the base control policy using the demonstration data and corrective parameters, since Gaydarov teaches wherein the robot system simplifies the difficulty of robotics planning, decreases time spent on programming, and improves the success rate of programming, and Kolluri teaches wherein the local demonstration data and corrective parameters vastly improve robotic learning and teaching methods by significantly reducing the amount of required training and simplifying robot teaching.

Regarding claim 20, in view of the above combination of Han, Gaydarov, and Kolluri, Han further teaches: The robot teaching apparatus of claim 19, wherein the generating of the task comprises arranging the unit tasks selected by the user according to a selection order and generating the task to be performed using the robot (at least as in paragraph 0141, “Then, the user may use it as it is by selecting one of the provided templates, or select one of the provided task templates, and generate a new task by further adding the user skill to the selected task template, or deleting, switching, or changing the user skill included in the selected task template”; at least as in paragraph 0113, “The task builder 500 may generate a task to be performed by the robot based on the above-described user skill.
That is, the task builder 500 may arrange one or more user skills in a time-series manner through an interface with the user”; at least as in paragraph 0115 & Fig. 8, wherein the task builder displays various user skills and a short description of each user skill, and further wherein “The user may generate the task by bringing the skill from the skill screen and disposing and connecting it on the task screen in a time series manner”; at least as in paragraph 0117, “the task builder 500 may receive the user skill and the task template generated and recommended by the work cell engine, and generate the task by using one task template selected by the user among the received task templates, or adding, deleting, or switching the user skill based on one task template selected by the user among the received task templates”). Regarding claim 21, in view of the above combination of Han, Gaydarov, and Kolluri, Han further teaches: The robot teaching apparatus of claim 19, wherein the receiving of the user teaching information comprises receiving the user teaching information through a handheld terminal that allows a user to intuitively control movement of the robot (see 0072, 0075, at least as in paragraph 0056, “The display 800 may be an LCD monitor connected with a computer, a teaching pendant screen, or the screen of a mobile device such as a smartphone, and the input apparatus 900 may be a keyboard, a mouse, a touch screen, etc. Further, like the screen of the touch screen, the display 800 and the input apparatus 900 may be configured together in a single apparatus”; at least as in paragraph 0139, “This may be informed by various methods including an indirect teaching method using a teaching pendant or a direct teaching method of informing coordinates by directly moving the robot, etc.”). 
Regarding claim 22, in view of the above combination of Han, Gaydarov, and Kolluri, Han further teaches: The robot teaching apparatus of claim 19, wherein the received user teaching information comprises at least one information of speed, acceleration, a contact force, or a trajectory at which the robot moves (at least as in paragraph 0072, “the user may set the parameter relevant to the work cell item selected according to the robot work environment (operation S320)”; at least as in paragraph 0075, “FIGS. 4A to 4H are diagrams showing the work cell manager 300 for displaying a screen on the display 800 so that the user according to an exemplary embodiment of the present disclosure selects the work cell item, and inputs the parameter items relevant to the selected work cell item”; at least as in paragraph 0078, “After selecting the work cell item, the work cell manager 300 may display the relevant parameter items stored in the work cell DB of the master DB 600 on the screen of the display 800 together with the selected work cell item so that the user may input and set the information on the selected work cell item (operation S320)”; at least as in paragraph 0057, “The work cell item and the relevant parameter item may be in plural, and at least any one among the category of the work cell item, the type of the work cell item, the model of the work cell item, name, weight, center of gravity, TCP, volume, reference point, turning point, approach point, way point, and operation point of the work cell item, and the posture information, speed, acceleration, connection type, connection port, and wait time of respective points may be included as the parameter item”; see also paragraphs 0075-0092 and Figs. 4A-4H). 
Regarding claim 23, in view of the above combination of Han, Gaydarov, and Kolluri, Han further teaches: The robot teaching apparatus of claim 19, wherein the program comprises instructions that cause the processor to execute the derived final teaching result of the robot, and the executing of the derived final teaching result comprises executing the final teaching result of the robot through a virtual simulation or a real robot and displaying an execution result on the display (at least as in paragraph 0128, “When the generation of the task is completed, the operation of the task may be verified through simulation, and if it is verified, it may be executed by the robot”).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICARDO ICHIKAWA VISCARRA whose telephone number is (571)270-0154. The examiner can normally be reached M-F 9-12 & 2-4 PST. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott, can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RICARDO I VISCARRA/
Examiner, Art Unit 3657

/ADAM R MOTT/
Supervisory Patent Examiner, Art Unit 3657
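The mechanism quoted from the Kolluri reference in the rejection above — parameter corrections refining a tuned control policy so that future commands more closely match a locally demonstrated action — can be sketched as a simple corrective-update loop. This is a hypothetical illustration only, assuming a linear policy; the function and variable names are invented and do not come from Han, Gaydarov, Kolluri, or the application itself.

```python
import numpy as np

def refine_policy(params, observations, demonstrated_actions, lr=0.1, epochs=200):
    """Refine a linear control policy (command = params @ observation) using
    gradient-style parameter corrections derived from local demonstrations."""
    params = params.astype(float).copy()
    for _ in range(epochs):
        for obs, demo in zip(observations, demonstrated_actions):
            command = params @ obs                        # command from the tuned policy
            correction = np.outer(demo - command, obs)    # parameter correction toward the demo
            params += lr * correction                     # apply the correction
    return params

# Toy usage: a zero-initialized policy learns to reproduce demonstrated actions.
rng = np.random.default_rng(0)
observations = rng.normal(size=(5, 3))                    # 5 local observations
target = rng.normal(size=(2, 3))                          # hidden "demonstrator" mapping
demos = observations @ target.T                           # locally demonstrated actions
tuned = refine_policy(np.zeros((2, 3)), observations, demos)
err_before = np.linalg.norm(observations @ np.zeros((2, 3)).T - demos)
err_after = np.linalg.norm(observations @ tuned.T - demos)
print(err_after < err_before)
```

After refinement, commands generated for the same observations match the demonstrated actions far more closely, which is the behavior the quoted passage describes.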

Prosecution Timeline

Nov 21, 2023: Application Filed
Sep 05, 2025: Non-Final Rejection (§103)
Dec 10, 2025: Response Filed
Mar 21, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12558719: BINDING DEVICE, BINDING SYSTEM, METHOD FOR CONTROLLING BINDING DEVICE, AND COMPUTER READABLE STORAGE MEDIUM STORING PROGRAM (granted Feb 24, 2026; 2y 5m to grant)
Patent 12545356: MICROMOBILITY ELECTRIC VEHICLE WITH WALK-ASSIST MODE (granted Feb 10, 2026; 2y 5m to grant)
Patent 12528400: MOBILE FULFILLMENT CONTAINER APPARATUS, SYSTEMS, AND RELATED METHODS (granted Jan 20, 2026; 2y 5m to grant)
Patent 12502781: ROBOT OFFSET SIMULATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM (granted Dec 23, 2025; 2y 5m to grant)
Patent 12487602: IMPROVED NAVIGATION FOR A ROBOTIC WORK TOOL (granted Dec 02, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 90% (+27.9%)
Median Time to Grant: 3y 9m
PTA Risk: Moderate
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
