Prosecution Insights
Last updated: April 19, 2026
Application No. 18/017,755

CONTROL DEVICE, ROBOT CONTROL DEVICE, AND CONTROL METHOD

Final Rejection §103
Filed: Jan 24, 2023
Examiner: OSTROW, ALAN LINDSAY
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fanuc Corporation
OA Round: 4 (Final)
Grant Probability: 74% (Favorable)
OA Rounds: 5-6
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (26 granted / 35 resolved; +22.3% vs TC avg; above average)
Interview Lift: +37.7% (allow rate with vs. without an interview, among resolved cases)
Avg Prosecution: 2y 7m (30 applications currently pending)
Total Applications: 65 (across all art units)
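The figures in this panel are simple ratios over the examiner's resolved cases. A minimal sketch of how such metrics might be derived (the case records below are hypothetical stand-ins; the page's underlying data source is not shown):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Fraction of resolved cases that issued as patents."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate difference between cases with and without an examiner interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Hypothetical records matching the headline figure:
# 26 grants out of 35 resolved cases gives a ~74% career allow rate.
cases = [ResolvedCase(granted=i < 26, had_interview=i % 2 == 0) for i in range(35)]
print(f"Career allow rate: {allow_rate(cases):.0%}")
```

The "+37.7% interview lift" shown above would come from `interview_lift` applied to the examiner's real case history, which this sketch does not have.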

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 57.7% (+17.7% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 35 resolved cases.
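Each per-statute delta above is a simple difference between this examiner's rate and the Tech Center baseline; all four displayed deltas happen to be consistent with a flat 40% baseline, though the page does not state the baseline explicitly. A minimal sketch (the 0.400 baseline is inferred from the displayed numbers, not given):

```python
# Examiner's per-statute rates as shown above; the TC baseline of 40% is
# inferred from the displayed deltas (e.g. 14.0% - 40.0% = -26.0%).
examiner = {"101": 0.140, "103": 0.577, "102": 0.158, "112": 0.104}
TC_BASELINE = 0.400

for statute, rate in examiner.items():
    delta = rate - TC_BASELINE
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```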

Office Action

§103
DETAILED ACTION

Status of Claims

Claims 1, 3, and 6-7 are currently pending and have been examined in this application. This Final Rejection is in response to the amendment submitted on 11/10/2025. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/21/2025 was filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Arguments and Amendment

Applicant's arguments regarding the incorrect numbering of claim 3 as claim 2 in the previous Office action have been fully considered and are persuasive. Several instances of this numbering error occurred in the previous Office action, and the examiner has made the appropriate corrections in this Office action. The examiner also emphasizes that the limitations of claim 3 were properly addressed in the previous Office action by the applied art [Baier (US 20200171657 A1)] and that the error did not extend beyond the labeling of the claim number. Please refer to the rejection of claim 3 below.

Applicant's arguments, filed on 6/17/2025, with respect to the rejection of claims 1, 3, and 6-7 under 35 USC 103 have been fully considered, but they are moot in view of the new grounds of rejection provided below, which were necessitated by Applicant's amendments to the claims, which changed the scope of the claims. The examiner notes that Applicant's arguments are directed toward the newly amended claim limitation(s), which are addressed by the newly found prior art, as indicated below.
Additional Examiner Reply to Arguments with Regard to the 35 USC 103 Rejection:

Applicant's Remarks: "(C) In Inagaki, at least one of the phase and gain of the learning correction amount is adjusted, but a parameter is not adjusted."

Examiner Reply: Applicant has provided three arguments regarding the previous 35 USC 103 rejection, sequentially labeled by Applicant as points (A) to (C). Points (A) and (B) have been addressed by the newly found art, which was necessitated by the amended claim language. However, regarding point (C), Applicant remarks that a "… parameter is not adjusted." in Inagaki. The examiner notes that Inagaki was not cited to address the limitation of parameter adjustment. The examiner has used the primary reference Hasegawa (US 2018/0225113) to address any limitations that involve the implementation of parameters; Hasegawa was subsequently combined with Inagaki to address other limitations as follows. Inagaki was cited to address the limitations related to detecting and correcting unwanted vibrations and was therefore not used to address parameters. Please see the 35 USC 103 rejection below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Hasegawa (US 20180225113 A1) as modified by Inagaki (US 20200171654 A1) in view of Ho (US 5587899 A).

Claim 1: Hasegawa teaches the following limitations:

A robot controller comprising: a controller for creating a compensation amount for controlling a motion of a robot; (Hasegawa - [0082] … A torque acting on a robot may be calculated on the basis of the acting force fL and a distance from a tool contact point (a contact point between an end effector and a workpiece) to the force sensor P, and is specified as an fL torque component (not illustrated). The control unit 43 performs gravity compensation on the acting force fL. The gravity compensation is a process of removing the gravity component from the acting force fL. …)

the controller comprising: a learning control unit that has a parameter for use for learning control for creating the compensation amount; (Hasegawa - [0235] In a case where learning was performed in the past, the learned parameter θ is used as an initial value. In a case where a similar target was learned in the past, the parameter θ in the learning may be used as an initial value. The past learning may be performed by a user by using the robot 3, and may be performed by a manufacturer of the robot 3 before the robot 3 are sold. In this case, there may be a configuration in which a manufacturer prepares a plurality of initial value sets according to types of target objects or work, and a user selects an initial value during learning. In a case where an initial value of the parameter θ is determined, the initial value is stored in the learning information 44e as the current value of the parameter θ.)

a parameter storage unit that stores the parameter set before shipment; and (Hasegawa - [0057] The detection unit 42 performs the template matching process by referring to parameters.
In other words, various parameters 44a are stored in the storage unit 44, and the parameters 44a include parameters related to detection in the detection unit 42. FIG. 3 is a diagram illustrating examples of the parameters 44a. In the examples illustrated in FIG. 3, the parameters 44a include optical parameters, operation parameters, and force control parameters.)

a parameter adjustment unit that, at a time of production by the robot, adjusts the parameter stored in the parameter storage unit and sets the adjusted parameter in the learning control unit, (Hasegawa - [0144] If the behavior is selected, the learning portion 41b changes the parameters 44a corresponding to the behavior. For example, in the example illustrated in FIG. 7, in a case where the behavior a1 increasing the x coordinate of the imaging unit by a predetermined value is selected, the learning portion 41b increases the x coordinate by the predetermined value at a position of the imaging unit indicated by the imaging unit parameter of the optical parameters. In a case where the parameters 44a are changed, the control unit 43 controls the robots 1 and 2 by referring to the parameters 44a.; [0217] In the example illustrated in FIG. 11, a reward is evaluated on the basis of whether work is good or bad performed by the robot 3. In other words, the learning portion 41b changes the force control parameters corresponding to the behavior a, and then operates the robot 3 according to the force control parameters, and the robot 3 performs work of picking up a target object detected by the detection unit 42. The learning portion 41b observes whether the work is good or bad so as to evaluate whether the work is good or bad. The learning portion 41b determines a reward for the behavior a, and the states s and s′ on the basis of whether the work is good or bad.)
wherein the parameter adjustment unit adjusts the parameter (Hasegawa - [0144] If the behavior is selected, the learning portion 41b changes the parameters 44a corresponding to the behavior. For example, in the example illustrated in FIG. 7, in a case where the behavior a1 increasing the x coordinate of the imaging unit by a predetermined value is selected, the learning portion 41b increases the x coordinate by the predetermined value at a position of the imaging unit indicated by the imaging unit parameter of the optical parameters. In a case where the parameters 44a are changed, the control unit 43 controls the robots 1 and 2 by referring to the parameters 44a.)

the learning control unit outputs the compensation amount created based on the adjusted parameters to the motion control unit. (Hasegawa - [0144] If the behavior is selected, the learning portion 41b changes the parameters 44a corresponding to the behavior. For example, in the example illustrated in FIG. 7, in a case where the behavior a1 increasing the x coordinate of the imaging unit by a predetermined value is selected, the learning portion 41b increases the x coordinate by the predetermined value at a position of the imaging unit indicated by the imaging unit parameter of the optical parameters. In a case where the parameters 44a are changed, the control unit 43 controls the robots 1 and 2 by referring to the parameters 44a.)

Hasegawa does not explicitly teach the following limitations; however, Inagaki teaches:

a motion control unit that adds the compensation amount inputted from the controller to a feedback loop, to control the motion of the robot; (Inagaki - [0036] …the amount of vibration in each control cycle in a case that the operation program is executed and the amount of learning correction for canceling the amount of vibration are calculated and then recorded in a memory in the learning control unit 11.
The learning correction amount recorded in the memory is used to correct the operation command in the robot control unit 12 in a case that the same operation program is subsequently executed. By repeatedly executing this series of processes, the vibration of the robot mechanism unit 103 continues to be reduced.) a frequency generation unit that generates a signal of which frequency changes; and (Inagaki - [0005] … a learning correction amount updating unit, in a case that as a result of comparison by the comparison unit, there exists a frequency component for which the power spectrum at the time of the current learning is greater than the power spectrum at the time immediately preceding learning, configured to adjust at least one of a phase and a gain of the learning correction amount used for correcting the operation command at the time of the current learning such that the power spectrum at the time of the current learning becomes less than the power spectrum at the time of the immediately preceding learning and to set the adjusted learning correction amount as a new learning correction amount used for correcting an operation command at the time of the next learning.) 
frequency response characteristic of the robot based on the signal and an output signal from a sensor attached to a position detection part of the robot; (Inagaki - [0005] According to an aspect of the present disclosure, a robot system having a robot mechanism unit including a sensor configured to acquire vibration data of a control target portion and a robot control device configured to control an operation of the robot mechanism unit according to an operation program, includes a learning control unit configured to perform learning for calculating a learning correction amount for bringing a position of the control target portion detected by the sensor toward a target position during the operation of the robot mechanism unit, … ) the output signal being a signal when the signal generated by the frequency generation unit is input to the motion control unit, (Inagaki - [0005] … a learning correction amount updating unit, in a case that as a result of comparison by the comparison unit, there exists a frequency component for which the power spectrum at the time of the current learning is greater than the power spectrum at the time immediately preceding learning, configured to adjust at least one of a phase and a gain of the learning correction amount used for correcting the operation command at the time of the current learning such that the power spectrum at the time of the current learning becomes less than the power spectrum at the time of the immediately preceding learning and to set the adjusted learning correction amount as a new learning correction amount used for correcting an operation command at the time of the next learning.) 
Hasegawa in combination with Inagaki does not explicitly teach the following limitations; however, Ho teaches:

a frequency characteristic measurement unit that measures a frequency response characteristic (Ho - [Column 7, Line 62 to Column 8, Line 1] During a tuning sequence, process gain and phase lag calculation module 42 measures the gain and phase lag of the output signal y(t) with respect to the input signal u'(t) at least one, and possibly several, observation frequencies. To ensure an accurate measurement, both signals must contain frequency components centered around the frequencies at which the measurements will be made….)

based on a reciprocal of the frequency response characteristic of the robot, and (Ho - [Column 6, Lines 16-19] … Accordingly, the measured value of the ultimate gain KU would be the reciprocal of the process gain, which is defined as the distance from the origin to the frequency ω2 on function G2(iω).)

wherein the frequency response characteristic is an input/output gain and a phase lag. (Ho - [Column 3, Lines 21-29] The apparatus of the present invention comprises a process gain and phase lag calculation module, an ultimate gain (KU) and ultimate period (TU) calculation module, and a tuning sequence control module. The tuning sequence control module initiates and controls a tuning sequence, selects at least one observation frequency, determines controller parameters based on the ultimate gain and the ultimate period, and provides the controller parameters to the controller.)
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Hasegawa (which teaches a method of storing and changing operational robot parameters) to include a method of detecting and correcting unwanted vibrations as taught in Inagaki and to further measure and adjust the operational frequency response parameters in order to dampen unwanted vibrations as taught in Ho. Having the ability to measure vibrations that occur during robot motion and use parameters to generate a feedback-based frequency response, to suppress unwanted vibrations, increases the accuracy and efficiency of the robot's motion.

Claim 6: Hasegawa teaches the following limitations:

The robot controller according to claim 1, wherein the sensor is one selected from an acceleration sensor, a gyro sensor, an inertia sensor, a force sensor, a laser tracker, a camera, and a motion capture. (Hasegawa - [0010] The position information is calculated on the basis of at least one of an output from an inertial sensor provided in the robot and a position detection unit disposed outside the robot. According to the inertial sensor, it is possible for the robot to calculate position information on the basis of the generally used sensor. The detection unit disposed outside the robot can calculate position information without being influenced by an operation of the robot.)

Claim 7: Hasegawa teaches the following limitations:

A control method for a robot controller, the robot controller comprising: a controller for creating a compensation amount for controlling a motion of a robot; (Hasegawa - [0082] … A torque acting on a robot may be calculated on the basis of the acting force fL and a distance from a tool contact point (a contact point between an end effector and a workpiece) to the force sensor P, and is specified as an fL torque component (not illustrated).
The control unit 43 performs gravity compensation on the acting force fL. The gravity compensation is a process of removing the gravity component from the acting force fL. …)

the control method comprising: reading a parameter for use for learning control for creating the compensation amount before shipment from a parameter storage unit; (Hasegawa - [0235] In a case where learning was performed in the past, the learned parameter θ is used as an initial value. In a case where a similar target was learned in the past, the parameter θ in the learning may be used as an initial value. The past learning may be performed by a user by using the robot 3, and may be performed by a manufacturer of the robot 3 before the robot 3 are sold. In this case, there may be a configuration in which a manufacturer prepares a plurality of initial value sets according to types of target objects or work, and a user selects an initial value during learning. In a case where an initial value of the parameter θ is determined, the initial value is stored in the learning information 44e as the current value of the parameter θ.)

adjusting the parameter stored in the parameter storage unit at a time of production by the robot, (Hasegawa - [0144] If the behavior is selected, the learning portion 41b changes the parameters 44a corresponding to the behavior. For example, in the example illustrated in FIG. 7, in a case where the behavior a1 increasing the x coordinate of the imaging unit by a predetermined value is selected, the learning portion 41b increases the x coordinate by the predetermined value at a position of the imaging unit indicated by the imaging unit parameter of the optical parameters. In a case where the parameters 44a are changed, the control unit 43 controls the robots 1 and 2 by referring to the parameters 44a.)

outputting the compensation amount created based on the adjusted parameters to the motion control unit.
(Hasegawa - [0144] If the behavior is selected, the learning portion 41b changes the parameters 44a corresponding to the behavior. For example, in the example illustrated in FIG. 7, in a case where the behavior a1 increasing the x coordinate of the imaging unit by a predetermined value is selected, the learning portion 41b increases the x coordinate by the predetermined value at a position of the imaging unit indicated by the imaging unit parameter of the optical parameters. In a case where the parameters 44a are changed, the control unit 43 controls the robots 1 and 2 by referring to the parameters 44a.)

Hasegawa does not explicitly teach the following limitations; however, Inagaki teaches:

a motion control unit that adds the compensation amount inputted from the controller to a feedback loop, to control the motion of the robot; (Inagaki - [0036] …the amount of vibration in each control cycle in a case that the operation program is executed and the amount of learning correction for canceling the amount of vibration are calculated and then recorded in a memory in the learning control unit 11. The learning correction amount recorded in the memory is used to correct the operation command in the robot control unit 12 in a case that the same operation program is subsequently executed. By repeatedly executing this series of processes, the vibration of the robot mechanism unit 103 continues to be reduced.)
a frequency generation unit that generates a signal of which frequency changes; and (Inagaki - [0005] … a learning correction amount updating unit, in a case that as a result of comparison by the comparison unit, there exists a frequency component for which the power spectrum at the time of the current learning is greater than the power spectrum at the time immediately preceding learning, configured to adjust at least one of a phase and a gain of the learning correction amount used for correcting the operation command at the time of the current learning such that the power spectrum at the time of the current learning becomes less than the power spectrum at the time of the immediately preceding learning and to set the adjusted learning correction amount as a new learning correction amount used for correcting an operation command at the time of the next learning.) frequency response characteristic of the robot based on the signal and an output signal from a sensor attached to a position detection part of the robot; (Inagaki - [0005] According to an aspect of the present disclosure, a robot system having a robot mechanism unit including a sensor configured to acquire vibration data of a control target portion and a robot control device configured to control an operation of the robot mechanism unit according to an operation program, includes a learning control unit configured to perform learning for calculating a learning correction amount for bringing a position of the control target portion detected by the sensor toward a target position during the operation of the robot mechanism unit, … ) the output signal being a signal when the signal generated by the frequency generation unit is input to the motion control unit (Inagaki - [0005] … a learning correction amount updating unit, in a case that as a result of comparison by the comparison unit, there exists a frequency component for which the power spectrum at the time of the current learning is greater than the power 
spectrum at the time immediately preceding learning, configured to adjust at least one of a phase and a gain of the learning correction amount used for correcting the operation command at the time of the current learning such that the power spectrum at the time of the current learning becomes less than the power spectrum at the time of the immediately preceding learning and to set the adjusted learning correction amount as a new learning correction amount used for correcting an operation command at the time of the next learning.)

Hasegawa in combination with Inagaki does not explicitly teach the following limitations; however, Ho teaches:

a frequency characteristic measurement unit that measures a frequency response characteristic (Ho - [Column 7, Line 62 to Column 8, Line 1] During a tuning sequence, process gain and phase lag calculation module 42 measures the gain and phase lag of the output signal y(t) with respect to the input signal u'(t) at least one, and possibly several, observation frequencies. To ensure an accurate measurement, both signals must contain frequency components centered around the frequencies at which the measurements will be made….)

based on a reciprocal of the frequency response characteristic of the robot; and (Ho - [Column 6, Lines 16-19] … Accordingly, the measured value of the ultimate gain KU would be the reciprocal of the process gain, which is defined as the distance from the origin to the frequency ω2 on function G2(iω).)

wherein the frequency response characteristic is an input/output gain and a phase lag. (Ho - [Column 3, Lines 21-29] The apparatus of the present invention comprises a process gain and phase lag calculation module, an ultimate gain (KU) and ultimate period (TU) calculation module, and a tuning sequence control module.
The tuning sequence control module initiates and controls a tuning sequence, selects at least one observation frequency, determines controller parameters based on the ultimate gain and the ultimate period, and provides the controller parameters to the controller.)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Hasegawa (which teaches a method of storing and changing operational robot parameters) to include a method of detecting and correcting unwanted vibrations as taught in Inagaki and to further measure and adjust the operational frequency response parameters in order to dampen unwanted vibrations as taught in Ho. Having the ability to measure vibrations that occur during robot motion and use parameters to generate a feedback-based frequency response, to suppress unwanted vibrations, increases the accuracy and efficiency of the robot's motion.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Hasegawa (US 20180225113 A1) as modified by Inagaki (US 20200171654 A1) and Ho (US 5587899 A) in view of Baier (US 20200171657 A1).

Claim 3: Hasegawa in combination with Inagaki and Ho does not explicitly teach the following limitations; however, Baier teaches:

The robot controller according to claim 1, wherein the parameter adjustment unit adjusts the parameter using a genetic algorithm. (Baier - [0059] The cost function C considers only known elements of the tensor (e.g., the sparsity of the tensor is exploited). Equation (8) may be minimized using gradient descent. As experimentally found, the stochastic optimization algorithm Adam works best for this task. Adam dynamically optimizes the learning rate individually for each parameter. The sampling of stochastic mini-batches for each update has also been shown as advantageous for speeding up training.
To avoid overfitting, the training may be stopped when the performance on a validation set does not further improve.)

Examiner Note: the stochastic optimization algorithm corresponds to the claimed genetic algorithm.

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Hasegawa in combination with Inagaki and Ho to include a method of stochastic optimization as taught in Baier. Employing stochastic optimization ensures that the system uses the most relevant and most fit data when setting the parameters used for suppressing vibrations.

Conclusion

The prior art made of record and not relied upon that is considered pertinent to applicant's disclosure or directed to the state of the art is listed on the enclosed PTO-892. The following is a brief description of relevant prior art that was cited but not applied:

Zou (US 20200101608 A1) describes a method and system for determining a motion path of a mechanical arm. The mechanical arm's motion paths are further optimized by using evolutionary algorithm techniques.

Ogata (US 20190217468 A1) describes controlling robot motion and suppressing vibrations through the use of initial parameters and a learning unit.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN LINDSAY OSTROW whose telephone number is (703)756-1854. The examiner can normally be reached M-F 8 - 5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached on (571) 270 5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/ALAN LINDSAY OSTROW/
Examiner, Art Unit 3657

/ADAM R MOTT/
Supervisory Patent Examiner, Art Unit 3657

Prosecution Timeline

Jan 24, 2023
Application Filed
Dec 20, 2024
Non-Final Rejection — §103
Mar 17, 2025
Response Filed
Apr 09, 2025
Final Rejection — §103
Jul 17, 2025
Request for Continued Examination
Jul 22, 2025
Response after Non-Final Action
Aug 02, 2025
Non-Final Rejection — §103
Nov 10, 2025
Response Filed
Dec 04, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583119
TRANSFER SYSTEM AND TRANSFER METHOD
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12576525
ROBOT SYSTEM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12569989
ESTIMATION DEVICE, ESTIMATION METHOD, ESTIMATION PROGRAM, AND ROBOT SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12539611
ROBOT CONTROL APPARATUS, ROBOT CONTROL SYSTEM, AND ROBOT CONTROL METHOD
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12491627
INFORMATION PROCESSING APPARATUS AND COOKING SYSTEM
Granted Dec 09, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 74%
With Interview: 99% (+37.7%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 35 resolved cases by this examiner. Grant probability derived from career allow rate.
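The "With Interview" projection appears to combine the base grant probability with the interview lift and cap the result just below certainty. One plausible reading of the displayed numbers (this additive-with-cap model is an assumption; the page does not document its formula):

```python
def with_interview_probability(base: float, lift: float, cap: float = 0.99) -> float:
    """Add the interview lift to the base grant probability, capping at 99%."""
    return min(base + lift, cap)

# A 74% base plus a 37.7-point interview lift would exceed 100%,
# so the projection caps at 99%.
print(f"{with_interview_probability(0.74, 0.377):.0%}")
```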
