Prosecution Insights
Last updated: April 19, 2026
Application No. 18/716,930

CONTROL DEVICE

Status: Final Rejection (§103)
Filed: Jun 06, 2024
Examiner: NELESKI, ELIZABETH ROSE
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fanuc Corporation
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 73% (69 granted / 94 resolved), above average (+21.4% vs TC avg)
Interview Lift: +17.8%, a strong lift comparing resolved cases with an interview against those without
Average Prosecution: 3y 2m; 24 applications currently pending
Career History: 118 total applications across all art units
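
How these career figures are derived is straightforward arithmetic: the allow rate is granted cases over resolved cases, and the interview lift compares the allow rate of resolved cases that had an examiner interview against those that did not. Below is a minimal sketch of that calculation using hypothetical per-case records; the totals are chosen to match the 69 granted / 94 resolved shown above, but the interview split is invented.

```python
# Minimal sketch (hypothetical data): career allow rate and interview lift
# derived from per-case outcome records.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # True if the application issued as a patent
    had_interview: bool  # True if an examiner interview was held

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that were granted (0.0 if no cases)."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow rate with an interview minus allow rate without one."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Hypothetical example: 94 resolved cases, 69 granted overall.
cases = (
    [ResolvedCase(True, True)] * 30 + [ResolvedCase(False, True)] * 5
    + [ResolvedCase(True, False)] * 39 + [ResolvedCase(False, False)] * 20
)
print(f"career allow rate: {allow_rate(cases):.0%}")   # 69/94 ≈ 73%
print(f"interview lift:    {interview_lift(cases):+.1%}")
```

Run against the actual per-case records, the same two functions would reproduce the dashboard figures, assuming the tool defines the metrics this way.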

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 60.3% (+20.3% vs TC avg)
§102: 24.5% (-15.5% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Baseline is a Tech Center average estimate. Based on career data from 94 resolved cases.
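
The statute mix compares how often this examiner's rejections rely on each statute against a Tech Center baseline. A minimal sketch of that arithmetic follows, under the assumption that the percentages are shares of rejection grounds, with entirely hypothetical counts and a hypothetical TC-average baseline.

```python
# Minimal sketch (hypothetical data): per-statute rejection shares and their
# deltas against a Tech Center baseline. Treating the dashboard percentages
# as "share of rejection grounds by statute" is an assumption.
from collections import Counter

# Hypothetical tally of rejection grounds across this examiner's office actions.
rejections = Counter({"101": 4, "102": 21, "103": 51, "112": 6})

# Hypothetical Tech Center average share for each statute.
tc_avg = {"101": 0.40, "102": 0.40, "103": 0.40, "112": 0.40}

total = sum(rejections.values())
for statute in ("101", "102", "103", "112"):
    share = rejections[statute] / total
    delta = share - tc_avg[statute]
    print(f"§{statute}: {share:.1%} ({delta:+.1%} vs TC avg)")
```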

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Status of Claims

The amendment filed 12/08/2025 has been entered. Claims 1-13 have been amended. Claims 1-13 are now pending. Applicant’s amendments to the claim language have overcome the 35 U.S.C. 112(f) interpretation as set forth in the previously mailed office action.

Joint Inventors

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Response to Arguments

Applicant’s arguments with respect to the 35 USC 102 rejections against claims 1-3 set forth in the Non-Final Office Action mailed 09/11/2025 have been fully considered but are moot because amendments to the claim language have necessitated new grounds of rejection set forth below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Gatland et al. (US 20200064471 A1), hereinafter Gatland in view of Nadler (US 20190325230 A1), hereinafter Nadler.
Regarding claim 1, Gatland discloses: A controller device comprising: a processor configured to: acquire first information corresponding to a detection result of detecting a relative positional relationship between a robot and a target object by a visual sensor (See at least [0105]: “ More generally, such techniques may be used to provide easier selection of an object or position within any type of volume data provided by volume data source, for example, including selecting particular aircraft within a 3D plot of aircraft in an airspace generated by an air traffic control system (e.g., using AIS, radar, beacons, and/or other ranging sensor system data), selecting particular charted objects within a 3D world chart or 3D astronomical chart (e.g., generated by ranging systems and/or astronomical observation systems), selecting particular plotted objects or positions within a 3D depiction or scatterplot of volume data (e.g., including complex data), and selecting particular objects and/or positions within a 3D medical scan (e.g., detected organs, tumors, and/or other structure within a CT scan, MRI volume, and/or other 3D medical scan data).”) output, based on the first information, first coordinate system data as data representing a coordinate system based on which the robot performs motion (See at least [0063]: “ In various embodiments, a logic device of system 100 (e.g., of orientation sensor 140 and/or other elements of system 100) may be adapted to determine parameters (e.g., using signals from various devices of system 100) for transforming a coordinate frame of sonar system 110 and/or other sensors of system 100 to/from a coordinate frame of mobile structure 101, at-rest and/or in-motion, and/or other coordinate frames, as described herein. One or more logic devices of system 100 may be adapted to use such parameters to transform a coordinate frame of sonar system 110 and/or other sensors of system 100 to/from a coordinate frame of orientation sensor 140 and/or mobile structure 101, for example.”) generate an instruction to the robot based on a first coordinate system represented by the first coordinate system data (see at least [0054]: “Propulsion system 170 may be implemented as a propeller, turbine, or other thrust-based propulsion system, a mechanical wheeled and/or tracked propulsion system, a sail-based propulsion system, and/or other types of propulsion systems that can be used to provide motive force to mobile structure 101. In some embodiments, propulsion system 170 may be non-articulated, for example, such that the direction of motive force and/or thrust generated by propulsion system 170 is fixed relative to a coordinate frame of mobile structure 101. 
Non-limiting examples of non-articulated propulsion systems include, for example, an inboard motor for a watercraft with a fixed thrust vector, for example, or a fixed aircraft propeller or turbine.”) Gatland does not explicitly disclose, but Nadler, in an analogous field of endeavor teaches: wherein the first coordinate system data is generated such that an origin of the coordinate system corresponds to a detected position of the target object (see at least [0104]: “Furthermore, the second module 108 is configured (namely, is functional when in operation) to generate a virtual representation of the object associated with the acquired sounds related with the presence of one or more of predetermined keyword and content, with the calculated point of origin being the geospatial location for the object.”) It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation for success, to combine the invention of Gatland with the method of placing the origin at the location of the object as taught by Nadler. This is because as stated by Nadler [0005], the invention is directed to providing “accurate, reliable and graphically intuitive representations for tracking and visualizing objects.” Regarding claim 2, the combination of Gatland and Nadler teaches: The controller according to claim 1. Gatland further discloses wherein the processor is further configured to acquire the first information indicating a detection result selected by a user input among one or more of detection results stored in a predetermined storage destination (see at least Fig. 6A which shows a user interface, and a user selecting a detection result from among a plurality of detection results.) Regarding claim 3, the combination of Gatland and Nadler teaches: The controller according to claim 1. Gatland further discloses wherein the processor is further configured to provide, as a function implemented on an icon representing a function constituting a control program of a robot, a function of acquiring the first information and outputting, based on the first information, the first coordinate system data (see at least [0086]: “Portion 330 may include imagery representing bottom feature 207, fish 208, and submerged object 209, similar to objects illustrated in FIG. 2. For example, as shown in FIG. 3, portion 330 may include a number of contour lines 332 rendered by a controller (e.g., controller 221 of FIG. 2) to distinguish depths, relative distances, various characteristics of bathymetric data, and/or other characteristics of underwater features. Alternatively, or in addition, portion 330 may include icons and/or other types of graphical indicators configured to illustrate a position and/or distance to fish 208 or submerged object 209, and/or to distinguish between the two (e.g., based on fish detection processing performed on acoustic returns from fish 208 and/or submerged object 209).”) Regarding claim 4, the combination of Gatland and Nadler teaches: The controller according to claim 1. 
Gatland further discloses wherein the first information indicates a detected position of the target object on a second coordinate system having a specific relationship with a world coordinate system, and the processor is further configured to output the first coordinate system data, based on second coordinate system data representing the second coordinate system, and the detected position of the target object on the second coordinate system (see at least [0031]: “In some embodiments, directional measurements may initially be referenced to a coordinate frame of a particular sensor (e.g., a sonar transducer assembly or other module of sonar system 110, and/or user interface 120) and be transformed (e.g., using parameters for one or more coordinate frame transformations) to be referenced to an absolute coordinate frame and/or a coordinate frame of mobile structure 101. In various embodiments, an absolute coordinate frame may be defined and/or correspond to a coordinate frame with one or more undefined axes, such as a horizontal plane local to mobile structure 101 and referenced to a local gravitational vector but with an unreferenced and/or undefined yaw reference (e.g., no reference to Magnetic North).”) Regarding claim 5, Gatland discloses: A controller comprising: a processor configured to generate an instruction to a robot by using a first coordinate system as a coordinate system based on which the robot performs motion (see at least [0054]: “Propulsion system 170 may be implemented as a propeller, turbine, or other thrust-based propulsion system, a mechanical wheeled and/or tracked propulsion system, a sail-based propulsion system, and/or other types of propulsion systems that can be used to provide motive force to mobile structure 101. In some embodiments, propulsion system 170 may be non-articulated, for example, such that the direction of motive force and/or thrust generated by propulsion system 170 is fixed relative to a coordinate frame of mobile structure 101. 
Non-limiting examples of non-articulated propulsion systems include, for example, an inboard motor for a watercraft with a fixed thrust vector, for example, or a fixed aircraft propeller or turbine.”) acquire first information corresponding to a detection result of detecting a relative positional relationship between the robot and a target object by a visual sensor (See at least [0105]: “ More generally, such techniques may be used to provide easier selection of an object or position within any type of volume data provided by volume data source, for example, including selecting particular aircraft within a 3D plot of aircraft in an airspace generated by an air traffic control system (e.g., using AIS, radar, beacons, and/or other ranging sensor system data), selecting particular charted objects within a 3D world chart or 3D astronomical chart (e.g., generated by ranging systems and/or astronomical observation systems), selecting particular plotted objects or positions within a 3D depiction or scatterplot of volume data (e.g., including complex data), and selecting particular objects and/or positions within a 3D medical scan (e.g., detected organs, tumors, and/or other structure within a CT scan, MRI volume, and/or other 3D medical scan data).”) Gatland does not explicitly disclose, but Nadler, in an analogous field of endeavor teaches: shift the first coordinate system, based on the first information (see at least [0104]: “Furthermore, the second module 108 is configured (namely, is functional when in operation) to generate a virtual representation of the object associated with the acquired sounds related with the presence of one or more of predetermined keyword and content, with the calculated point of origin being the geospatial location for the object.”) It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation for success, to combine the invention of Gatland with the method of placing the origin at the location of the object as taught by Nadler. This is because as stated by Nadler [0005], the invention is directed to providing “accurate, reliable and graphically intuitive representations for tracking and visualizing objects.” Regarding claim 6, the combination of Gatland and Nadler teaches: The controller according to claim 5. Gatland discloses: wherein the processor is further configured to acquire the first information indicating a detection result selected by a user input among one or more of detection results stored in a predetermined storage destination (see at least Fig. 6A which shows a user interface, and a user selecting a detection result from among a plurality of detection results.) Regarding claim 7, the combination of Gatland and Nadler teaches: The controller according to claim 5. Wherein the processor is further configured to provide, as a function implemented on an icon representing a function constituting a control program of a robot, a function of acquiring the first information and shifting the first coordinate system based on the first information (see at least (see at least [0086]: “Portion 330 may include imagery representing bottom feature 207, fish 208, and submerged object 209, similar to objects illustrated in FIG. 2. For example, as shown in FIG. 3, portion 330 may include a number of contour lines 332 rendered by a controller (e.g., controller 221 of FIG. 
2) to distinguish depths, relative distances, various characteristics of bathymetric data, and/or other characteristics of underwater features. Alternatively, or in addition, portion 330 may include icons and/or other types of graphical indicators configured to illustrate a position and/or distance to fish 208 or submerged object 209, and/or to distinguish between the two (e.g., based on fish detection processing performed on acoustic returns from fish 208 and/or submerged object 209).”) Regarding claim 8, the combination of Gatland and Nadler teaches: The controller according to claim 5. Gatland further discloses wherein the first information indicates a correction amount of a position of the target object on a second coordinate system having a specific relationship with a world coordinate system, and the processor is further configured to obtain shifted first coordinate system data as a result of shifting the first coordinate system, based on second coordinate system data representing the second coordinate system, the correction amount, and first coordinate system data representing the first coordinate system (see at least [0031]: “In some embodiments, directional measurements may initially be referenced to a coordinate frame of a particular sensor (e.g., a sonar transducer assembly or other module of sonar system 110, and/or user interface 120) and be transformed (e.g., using parameters for one or more coordinate frame transformations) to be referenced to an absolute coordinate frame and/or a coordinate frame of mobile structure 101. In various embodiments, an absolute coordinate frame may be defined and/or correspond to a coordinate frame with one or more undefined axes, such as a horizontal plane local to mobile structure 101 and referenced to a local gravitational vector but with an unreferenced and/or undefined yaw reference (e.g., no reference to Magnetic North).”) Regarding claim 9, the combination of Gatland and Nadler teaches: The controller according to claim 5, further comprising: a storage in which positional information about a predetermined position of the target object is stored (see at least [0040]: “ In embodiments where sonar system 110 is implemented with an orientation and/or position sensor, sonar system 110 may be configured to store such location/position information along with other sensor information (acoustic returns, temperature measurements, text descriptions, water depth, altitude, mobile structure speed, and/or other sensor and/or control information) available to system 100. In some embodiments, controller 130 may be configured to generate a look up table so that a user can select desired configurations of sonar.”) wherein the processor is further configured to: acquire second information indicating the predetermined position of the target object by touching up the predetermined position of the target object by the robot, and generate the instruction, based on the positional information stored in the storage and the second information (see at least [0038]: “In various embodiments, sonar system 110 may be implemented with optional orientation and/or position sensors (e.g., similar to orientation sensor 140, gyroscope/accelerometer 144, and/or GPS 146) that may be incorporated within the transducer assembly housing to provide three dimensional orientations and/or positions of the transducer assembly and/or transducer(s) for use when processing or post processing sonar data for display. 
The sensor information can be used to correct for movement of the transducer assembly between ensonifications to provide improved alignment of corresponding acoustic returns/samples, for example, and/or to generate imagery based on the measured orientations and/or positions of the transducer assembly. In other embodiments, an external orientation and/or position sensor can be used alone or in combination with an integrated sensor or sensors.”) Regarding claim 10, the combination of Gatland and Nadler teaches: The controller according to claim 9. Gatland further discloses wherein the processor is further configured to generate the instruction by obtaining a movement amount to a specific position of the robot on a coordinate system defined by the positional information stored in the storage, and applying the obtained movement amount as a movement amount in the first coordinate system (see at least [0046]: “ In some embodiments, user interface 120 may be adapted to accept user input including a user-defined target heading, route, and/or orientation for a transducer module, for example, and to generate control signals for steering sensor/actuator 150 and/or propulsion system 170 to cause mobile structure 101 to move according to the target heading, route, and/or orientation. In further embodiments, user interface 120 may be adapted to accept user input including a user-defined target attitude for an actuated device (e.g., sonar system 110) coupled to mobile structure 101, for example, and to generate control signals for adjusting an orientation of the actuated device according to the target attitude.”) Regarding claim 11, the combination of Gatland and Nadler teaches: The controller according to claim 9. Gatland further discloses wherein the processor is further configured to cause a tool center point (TCP) of the robot to touch up the predetermined position of the target object (see at least [0038]: “In various embodiments, sonar system 110 may be implemented with optional orientation and/or position sensors (e.g., similar to orientation sensor 140, gyroscope/accelerometer 144, and/or GPS 146) that may be incorporated within the transducer assembly housing to provide three dimensional orientations and/or positions of the transducer assembly and/or transducer(s) for use when processing or post processing sonar data for display. The sensor information can be used to correct for movement of the transducer assembly between ensonifications to provide improved alignment of corresponding acoustic returns/samples, for example, and/or to generate imagery based on the measured orientations and/or positions of the transducer assembly. In other embodiments, an external orientation and/or position sensor can be used alone or in combination with an integrated sensor or sensors.”) Regarding claim 12, the combination of Gatland and Nadler teaches: The control device according to claim 9. 
Gatland further discloses wherein the processor is further configured to obtain the first information as a relative movement amount between the target object and the robot by measuring the predetermined position of the target object by the visual sensor mounted on the robot before and after a relative positional relationship between the target object and the robot changes (see at least [0038]: “In various embodiments, sonar system 110 may be implemented with optional orientation and/or position sensors (e.g., similar to orientation sensor 140, gyroscope/accelerometer 144, and/or GPS 146) that may be incorporated within the transducer assembly housing to provide three dimensional orientations and/or positions of the transducer assembly and/or transducer(s) for use when processing or post processing sonar data for display. The sensor information can be used to correct for movement of the transducer assembly between ensonifications to provide improved alignment of corresponding acoustic returns/samples, for example, and/or to generate imagery based on the measured orientations and/or positions of the transducer assembly. In other embodiments, an external orientation and/or position sensor can be used alone or in combination with an integrated sensor or sensors.”) Regarding claim 13, the combination of Gatland and Nadler teaches: The controller according to claim 12. Gatland further discloses wherein the processor is further configured to shift the first coordinate system, based on the relative movement amount (see at least [0038]: “In various embodiments, sonar system 110 may be implemented with optional orientation and/or position sensors (e.g., similar to orientation sensor 140, gyroscope/accelerometer 144, and/or GPS 146) that may be incorporated within the transducer assembly housing to provide three dimensional orientations and/or positions of the transducer assembly and/or transducer(s) for use when processing or post processing sonar data for display. The sensor information can be used to correct for movement of the transducer assembly between ensonifications to provide improved alignment of corresponding acoustic returns/samples, for example, and/or to generate imagery based on the measured orientations and/or positions of the transducer assembly. In other embodiments, an external orientation and/or position sensor can be used alone or in combination with an integrated sensor or sensors.”) Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hashimoto et al. (US 11878423 B2), which discloses: “A robot system includes a robot body, a memory, an operation controlling module, a manipulator, and a limit range setting module configured to set a limit range of the corrective manipulation by the manipulator. The operation controlling module executes a given limiting processing when a corrective manipulation is performed beyond the limit range from an operational position based on automatic operation information. 
The limit range setting module calculates a positional deviation between the operational position based on the automatic operation information before the correction and an operational position based on the corrected operation information, and when the positional deviation is at or below a first threshold, narrows the limit range in the next corrective manipulation by the manipulator.” Zhang (US 20230331485 A1), which discloses: “Disclosed are a method for locating a warehousing robot, a method for constructing a map, a robot and a storage medium. In a specific embodiment, a semantic map of a warehouse environment is constructed in advance, and the semantic map comprises a plurality of objects existing in the warehouse environment and semantic information of the objects. In the localization process, a warehousing robot uses its own image sensor to acquire an image or video data of a surrounding environment (11), identifies target objects in the image or video data and semantic information of the target objects (12) to obtain the relative position relationship between each target object and the warehousing robot (13), and then determines the location of the warehousing robot in the semantic map based on the relative position relationship and the semantic information of each target object (14). The method for constructing a map is based on visual semantic localization. Because the method directly detects specific targets, the detection speed is fast, semantic information is rich, and the method is not easily influenced by other interference factors. The method gets rid of the dependence on signs in the warehouse environment and has high localization flexibility.” Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELIZABETH NELESKI whose telephone number is (571)272-6064. The examiner can normally be reached 10 - 6. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THOMAS WORDEN can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /E.R.N./Examiner, Art Unit 3658 /THOMAS E WORDEN/Supervisory Patent Examiner, Art Unit 3658
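
For context on the claim language being rejected, the sketch below illustrates the general technique at issue: a visual sensor detects a target object's position relative to the robot, a coordinate system is defined with its origin at the detected position, and a motion instruction expressed in that object-centered frame is converted to the robot's base frame. This is a generic illustration built on homogeneous transforms with hypothetical numbers, not the applicant's implementation or code from the cited references.

```python
# Generic illustration (not the application's implementation): define a
# coordinate system whose origin sits at a visually detected target position,
# then express a robot motion instruction in that frame.
import numpy as np

def make_frame(rotation: np.ndarray, origin: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and an origin."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = origin
    return T

# "First information": detected pose of the target object in the camera frame
# (hypothetical values standing in for a visual-sensor detection result).
obj_in_cam = make_frame(np.eye(3), np.array([0.10, -0.05, 0.40]))

# Hand-eye calibration: camera frame expressed in the robot base frame
# (also hypothetical).
cam_in_base = make_frame(np.eye(3), np.array([0.50, 0.00, 0.80]))

# "First coordinate system data": an object-centered frame in base coordinates,
# i.e. a frame whose origin corresponds to the detected target position.
obj_in_base = cam_in_base @ obj_in_cam

# A motion instruction authored in the object frame (e.g. approach a point
# 5 cm along the object's z-axis), converted to base coordinates for the robot.
approach_in_obj = np.array([0.0, 0.0, 0.05, 1.0])     # homogeneous point
approach_in_base = obj_in_base @ approach_in_obj
print("commanded point in base frame:", approach_in_base[:3])
```

In a real cell the rotations would come from the detection result and the calibration rather than identity matrices; the point is only that the instruction is written relative to the object frame, whose origin tracks the detected target.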

Prosecution Timeline

Jun 06, 2024: Application Filed
Sep 06, 2025: Non-Final Rejection — §103
Dec 08, 2025: Response Filed
Mar 20, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600044
GUIDE DOG ROBOT FOR THE VISUALLY IMPAIRED PERSONS AND CONTROL METHOD THEREOF
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12560222
METHOD FOR PERFORMING ROTATIONAL SPEED SYNCHRONISATION
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12545410
POSITION-SENSITIVE CONTROLLER FOR AIRCRAFT SEATING
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12515346
ROBOT AND CONTROL METHOD THEREFOR
Granted Jan 06, 2026 (2y 5m to grant)
Patent 12491629
TRAINING ARTIFICIAL NETWORKS FOR ROBOTIC PICKING
Granted Dec 09, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 91% (+17.8%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 94 resolved cases by this examiner. Grant probability derived from career allow rate.
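
The with-interview projection appears to be simple arithmetic on the career figures: the baseline grant probability equals the career allow rate, and the interview figure adds the +17.8% lift, capped at 100%. A minimal sketch of that calculation, assuming those are the only inputs:

```python
# Minimal sketch: interview-adjusted grant probability, assuming it is just
# the career allow rate plus the interview lift, capped at 100%.
def projected_grant_probability(allow_rate: float, interview_lift: float = 0.0) -> float:
    return min(allow_rate + interview_lift, 1.0)

baseline = 69 / 94                     # career allow rate ≈ 0.734 → 73%
with_interview = projected_grant_probability(baseline, interview_lift=0.178)
print(f"baseline:       {baseline:.0%}")        # 73%
print(f"with interview: {with_interview:.0%}")  # ≈ 91%
```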
