Prosecution Insights
Last updated: April 19, 2026
Application No. 18/712,340

DEVICE, INDUSTRIAL MACHINE AND METHOD FOR VERIFYING OPERATION OF INDUSTRIAL MACHINE

Final Rejection — §102, §103
Filed: May 22, 2024
Examiner: NELESKI, ELIZABETH ROSE
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fanuc Corporation
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 73% (69 granted / 94 resolved; above average, +21.4% vs TC avg)
Interview Lift: +17.8% (allow rate among resolved cases with an interview vs. without)
Avg Prosecution: 3y 2m typical; 24 applications currently pending
Total Applications: 118 (career history, across all art units)
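The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of that arithmetic (the `ResolvedCase` record and its field names are hypothetical illustrations, not this product's actual schema):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # application ended in a grant
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases):
    """Career allow rate: grants divided by resolved cases."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Percentage-point gap in allow rate between interviewed and non-interviewed cases."""
    with_i = [c for c in cases if c.had_interview]
    without_i = [c for c in cases if not c.had_interview]
    return allow_rate(with_i) - allow_rate(without_i)

# 69 grants out of 94 resolved cases, as reported above.
cases = [ResolvedCase(granted=i < 69, had_interview=False) for i in range(94)]
print(f"{allow_rate(cases):.0%}")  # 73%
```

The displayed "+17.8% Interview Lift" is the same subtraction applied to the subset of resolved cases that had an interview versus those that did not.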

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 60.3% (+20.3% vs TC avg)
§102: 24.5% (-15.5% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 94 resolved cases.
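The per-statute figures reduce to rejection-share arithmetic: each statute's share of this examiner's rejections, minus the Tech Center average estimate (the four deltas above imply a TC average of roughly 40% for each statute shown). A hedged sketch, with made-up tallies chosen only to reproduce the displayed shares:

```python
from collections import Counter

def statute_mix(statutes):
    """Share of rejections citing each statute, as fractions of the total."""
    counts = Counter(statutes)
    total = sum(counts.values())
    return {s: n / total for s, n in counts.items()}

def vs_tc_average(examiner_mix, tc_mix):
    """Percentage-point delta against the Tech Center average estimate."""
    return {s: examiner_mix.get(s, 0.0) - tc_mix.get(s, 0.0) for s in tc_mix}

# Hypothetical tallies (not real counts) that mirror the percentages above.
examiner = statute_mix(
    ["§103"] * 603 + ["§102"] * 245 + ["§112"] * 71 + ["§101"] * 47 + ["other"] * 34
)
tc_avg = {"§101": 0.40, "§102": 0.40, "§103": 0.40, "§112": 0.40}
delta = vs_tc_average(examiner, tc_avg)
print(f"§103: {delta['§103']:+.1%} vs TC avg")
```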

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

Applicant’s arguments with respect to the 35 USC 102 and 35 USC 103 rejections set forth in the previously mailed office action have been considered but are moot because the amendments to the claim language have necessitated new grounds of rejection set forth below.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3 and 9-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gonzalez et al. (US 20200324409 A1), hereinafter Gonzalez.

Regarding claim 1, Gonzalez discloses: A device configured to verify an operation of an industrial machine that controls the operation based on detection data of a sensor (see at least [0022]: “Turning now to FIG. 1, an enhanced environmental analysis and robotic end effector control process 100 is illustrated.
A robot may include sensor array 102, map and semantics generator 104, robot modeler 106, mission planner 108, end effector controller 128 and robotic end effector 132 (e.g., multi-fingered robot end-effectors).”) the device comprising a processor configured to: acquire the detection data detected by the sensor when executing an operation program including a plurality of instructions for causing the industrial machine to perform a plurality of the operations respectively; and associate the executed instruction and the detection data used for control of the operation performed by the executed instruction with each other (see at least [0123]: “Example 1 includes a computing system comprising one or more sensors to generate sensor data, the sensor data to include image data, a processor coupled to the one or more sensors, and a memory including a set of executable program instructions, which when executed by the processor, cause the computing system to generate a semantic labelled image based on image data from the sensor data, wherein the semantic labelled image is to identify a shape of an object and a semantic label of the object, associate a first set of actions with the object and generate a plan based on an intersection of the first set of actions and a second set of actions to satisfy a command from a user through actuation of one or more end effectors, wherein the second set of actions are to be associated with the command.”)

Regarding claim 2, Gonzalez discloses: The device of claim 1, wherein the processor is further configured to acquire the detection data as history data in which the detection data detected when executing the operation program is stored in chronological order (see at least [0080]: “The mission planner 390 may capture and unfold high-level directives from sensor data provided by the sensor array 386 (e.g., “clean the kitchen”).
The mission planner 390 may decompose the directive into a fine granular sequence of physical atomic-actions or tasks (e.g., primary task, secondary task, target object part assertion, affordance list, etc.) to accomplish the high level directive. The tasks may be stored in the task information 392.”)

Regarding claim 3, Gonzalez discloses: The device of claim 1, wherein the industrial machine includes: the sensor configured to detect a workpiece by imaging the workpiece (see at least [0047]: “Imaging and/or range sensors 302 may provide sensor data 336 to the scene semantic spatial context generator 304. The sensor data may include imaging data (e.g., RGB-D data) and/or range data. Imaging sensors of the imaging and/or range sensors 302 may be devices contained within a composed sensor (e.g., RGB-D camera or camera module). For example, the imaging and/or range sensors 302 may provide three data streams capturing information regarding a content in a field-of-view and the time-varying 6D pose of one or more objects.”) and a robot configured to carry out a predetermined work on the workpiece by performing the operation (see at least [0039]: “The above process 100 may empower autonomous service robots to perform real-world physical-interaction tasks generating and capturing value in semi-structured environments.”) wherein the operation program includes: a first operation program including the instruction for causing the sensor to perform the operation of detecting the detection data by imaging the workpiece and a second operation program including the instruction for causing the robot to perform the operation for the predetermined work, based on the detection data detected by executing the first operation program wherein the processor is further configured to associate the instruction included in the second operation program and the detection data used for control of the operation for the predetermined work performed by the instruction included in the second operation program with each other (see at least [0123]: “Example 1 includes a computing system comprising one or more sensors to generate sensor data, the sensor data to include image data, a processor coupled to the one or more sensors, and a memory including a set of executable program instructions, which when executed by the processor, cause the computing system to generate a semantic labelled image based on image data from the sensor data, wherein the semantic labelled image is to identify a shape of an object and a semantic label of the object, associate a first set of actions with the object and generate a plan based on an intersection of the first set of actions and a second set of actions to satisfy a command from a user through actuation of one or more end effectors, wherein the second set of actions are to be associated with the command.”)

Regarding claim 9, Gonzalez discloses: An industrial machine comprising the device of claim 1 (see at least [0039]: “The above process 100 may empower autonomous service robots to perform real-world physical-interaction tasks generating and capturing value in semi-structured environments.”)

Regarding claim 10, Gonzalez discloses: A method of verifying an operation of an industrial machine that controls the operation based on detection data of a sensor, the method comprising: acquiring, by a processor, the detection data detected by the sensor when executing an operation program including a plurality of instructions for causing the industrial machine to perform a plurality of the operations respectively; and associating, by the processor, the executed instruction and the detection data used for control of the operation performed by the executed instruction with each other (see at least [0123]: “Example 1 includes a computing system comprising one or more sensors to generate sensor data, the sensor data to include image data, a processor coupled to the one or more sensors, and a memory including a set of executable program instructions, which when executed by the processor, cause the computing system to generate a semantic labelled image based on image data from the sensor data, wherein the semantic labelled image is to identify a shape of an object and a semantic label of the object, associate a first set of actions with the object and generate a plan based on an intersection of the first set of actions and a second set of actions to satisfy a command from a user through actuation of one or more end effectors, wherein the second set of actions are to be associated with the command.”)

Regarding claim 11, Gonzalez discloses: The device of claim 1, wherein the processor is further configured to: acquire specifying information for specifying the detection data (see at least [0130]: “Example 8 includes the apparatus of Example 7, wherein the logic coupled to the one or more substrates is to apply a first label to a first portion of the object, and apply a second label to a second portion of the object, wherein the second label is to be different from the first label.”) and store the specifying information in a data storage location designated by a register code included in the executed instruction to associate the executed instruction and the detection data used for the control of the operation performed by the executed instruction with each other via the register code and the specifying information (see at least [0109]: “FIG. 17 also illustrates a memory 270 coupled to the processor core 200. The memory 270 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. The memory 270 may include one or more code 213 instruction(s) to be executed by the processor core 200, wherein the code 213 may implement the process 100 (FIG. 1), the method 800 (FIG. 2), the process 300 (FIGS. 3A-3B), the process 350 (FIG. 4), the process 380 (FIG. 5), the method 400 (FIG. 6), the method 420 (FIG. 7) and the method 440 (FIG. 8) already discussed. The processor core 200 follows a program sequence of instructions indicated by the code 213. Each instruction may enter a front end portion 210 and be processed by one or more decoders 220. The decoder 220 may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals which reflect the original code instruction. The illustrated front end portion 210 also includes register renaming logic 225 and scheduling logic 230, which generally allocate resources and queue the operation corresponding to the convert instruction for execution.”)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Gonzalez in view of Kang et al. (US 20180345557 A1), hereinafter Kang.
Regarding claim 4, Gonzalez discloses the device of claim 1. Gonzalez does not explicitly disclose but Kang, in an analogous field of endeavor, teaches: wherein the operation program includes: a first instruction for calculating a correction amount for correcting the operation of the industrial machine, using the detection data and a second instruction for correcting the operation of the industrial machine in accordance with the correction amount calculated by executing the first instruction, wherein the association generating unit is configured to associate the first instruction or the second instruction and the detection data with each other via the correction amount (see at least [0052]: “If error data is detected from the operation data of the take-out robots 110 and the information data on the injection molding received from the main sever 220 on the production site, further, the control pendent 300 analyzes the error data, corrects, adds or deletes the operation data of the take-out robots 110 and the information data on the injection molding through the connection to the main server 220, and automatically recovers the trouble of the injection molding machine 100, on the basis of the previously stored remote control access program stored in the memory unit.”)

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation for success, to combine the invention of Gonzalez with the operation correction as taught by Kang.
This is because, as stated by Kang [0006] of traditional industrial techniques: “… the conventional systems do not integrally control the processes, perfectly, and accordingly, they cannot immediately recognize and handle the malfunctions of the take-out robot and unexpected accidents happened by a worker's mistake, thereby undesirably reducing productivity, causing difficulties in production management and inconveniences in control, and remarkably lowering an industrial safety level.”

Regarding claim 5, the combination of Gonzalez and Kang teaches the device of claim 4. Gonzalez does not explicitly disclose but Kang, in an analogous field of endeavor, teaches: wherein the first instruction or the second instruction includes a register code representing a data storage location of the calculated correction amount, wherein the association generating unit is configured to: acquire information for specifying the detection data used for calculation of the correction amount, and associate the first instruction or the second instruction and the detection data with each other via the register code, the correction amount, and the information (see at least [0052]: “If error data is detected from the operation data of the take-out robots 110 and the information data on the injection molding received from the main sever 220 on the production site, further, the control pendent 300 analyzes the error data, corrects, adds or deletes the operation data of the take-out robots 110 and the information data on the injection molding through the connection to the main server 220, and automatically recovers the trouble of the injection molding machine 100, on the basis of the previously stored remote control access program stored in the memory unit.”)

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation for success, to combine the invention of Gonzalez with the operation correction as taught by Kang. This is because, as stated by Kang [0006] of traditional industrial techniques: “… the conventional systems do not integrally control the processes, perfectly, and accordingly, they cannot immediately recognize and handle the malfunctions of the take-out robot and unexpected accidents happened by a worker's mistake, thereby undesirably reducing productivity, causing difficulties in production management and inconveniences in control, and remarkably lowering an industrial safety level.”

Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Gonzalez in view of Einecke (US 20150128547 A1), hereinafter Einecke.

Regarding claim 6, Gonzalez discloses the device of claim 1. Gonzalez does not explicitly disclose, but Einecke, in an analogous field of endeavor, teaches: an input receiving unit configured to receive an input for selecting one of the plurality of instructions, and a data output unit configured to output the detection data associated with the one of the plurality of instructions by the association generating unit, in response to the input for selecting the one of the plurality of instructions, received by the input receiving unit (see at least [0059]: “According to a further aspect of the invention a software program product in the form of e.g. a mobile application is proposed. The software program product is adapted to be installed on a remote smart device for operating an autonomous robot, such as an autonomous lawn mower. The software program product is adapted to control a communication interface for receiving an input image, a display for displaying the input image, and input means for allowing a user to input or select a remote control instruction.
Therein the software program product is adapted to control the communication interface so as to transmit the remote control instruction inputted or selected via the input means.”)

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation for success, to combine the invention of Gonzalez with the input collection as taught by Einecke. This is because, as discussed within the background of Einecke: “In view of the above-mentioned disadvantages of known autonomous mowers, the present invention is intended to improve the state of the art. A particular aim of the present invention is to provide images to a remote smart device of the user, like a smart phone, a tablet or any other user device. The lawn mower can stream camera images to the remote smart device to improve remote control and particularly to help the user to understand the situation of the mower.” One of ordinary skill in the robotics and industrial systems art would understand that the discussed improvements to household robotics could easily be applied to industrial robotics as well.

Regarding claim 7, the combination of Gonzalez and Einecke teaches the device of claim 6. Gonzalez further discloses wherein the device further comprises: an image generating unit configured to generate setting image data for setting an operation parameter of the industrial machine, wherein the data output unit is configured to output image data of the detection data to the image generating unit, wherein the image generating unit is configured to generate the setting image data in which the image data of the detection data is displayed (see at least [0054]: “The free and occupied map generator 316 may generate sparse dual-space map that may capture and split the occupied and unfilled (free) spaces.
This mapping may allow for: i) registering diverse 3D images while exploring various interaction (e.g., grasping) scenarios for a kinematic end effector, ii) determine possible collision-free manipulator 6D poses in the environment and iii) serve as an effective scaffolding data structure to store multiresolution local surface descriptors such as volumetric (e.g., with respect to voxels) semantic labels and other attributes.”)

Regarding claim 8, the combination of Gonzalez and Einecke teaches the device of claim 7. Gonzalez does not explicitly disclose, but Einecke, in an analogous field of endeavor, teaches: wherein the input receiving unit further receives an input for changing the operation parameter through the setting image data, wherein the device further includes a parameter setting unit configured to change the preset operation parameter in response to the input for changing the operation parameter (see at least [0058]: “The remote control instruction may correspond, as mentioned above, to an instruction regarding the moving means of the autonomous robot, and preferably regarding the driving (move forward or backward) and steering (turn left or right) means. The remote control instruction may also relate to the working means and e.g. the tool, e.g. activate or deactivate the working means or modify a working parameter of the tool. The remote control instruction may also relate to a parameter of the camera, such as the exposure, aperture, shutter, hue, zoom or focus, or to control the orientation of the camera.”)

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation for success, to combine the invention of Gonzalez with the input collection as taught by Einecke.
This is because, as discussed within the background of Einecke: “In view of the above-mentioned disadvantages of known autonomous mowers, the present invention is intended to improve the state of the art. A particular aim of the present invention is to provide images to a remote smart device of the user, like a smart phone, a tablet or any other user device. The lawn mower can stream camera images to the remote smart device to improve remote control and particularly to help the user to understand the situation of the mower.” One of ordinary skill in the robotics and industrial systems art would understand that the discussed improvements to household robotics could easily be applied to industrial robotics as well.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).

Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELIZABETH NELESKI whose telephone number is (571) 272-6064. The examiner can normally be reached 10-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THOMAS WORDEN, can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/E.R.N./
Examiner, Art Unit 3658
/JASON HOLLOWAY/
Primary Examiner, Art Unit 3658

Prosecution Timeline

May 22, 2024
Application Filed
Sep 05, 2025
Non-Final Rejection — §102, §103
Oct 16, 2025
Interview Requested
Oct 30, 2025
Examiner Interview Summary
Oct 30, 2025
Applicant Interview (Telephonic)
Dec 02, 2025
Response Filed
Mar 06, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600044
GUIDE DOG ROBOT FOR THE VISUALLY IMPAIRED PERSONS AND CONTROL METHOD THEREOF
2y 5m to grant Granted Apr 14, 2026
Patent 12560222
METHOD FOR PERFORMING ROTATIONAL SPEED SYNCHRONISATION
2y 5m to grant Granted Feb 24, 2026
Patent 12545410
POSITION-SENSITIVE CONTROLLER FOR AIRCRAFT SEATING
2y 5m to grant Granted Feb 10, 2026
Patent 12515346
ROBOT AND CONTROL METHOD THEREFOR
2y 5m to grant Granted Jan 06, 2026
Patent 12491629
TRAINING ARTIFICIAL NETWORKS FOR ROBOTIC PICKING
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 91% (+17.8%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 94 resolved cases by this examiner. Grant probability derived from career allow rate.
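The "With Interview" projection is consistent with adding the interview lift (in percentage points) to the career allow rate and capping at 100%. A sketch of that arithmetic, assuming this simple additive model (the function name is illustrative; the product's actual projection may weight more factors):

```python
def projected_grant_probability(base_rate, interview_lift, with_interview):
    """Base allow rate plus interview lift (percentage points), capped at 1.0."""
    return min(base_rate + interview_lift, 1.0) if with_interview else base_rate

# 69/94 career allow rate plus the +17.8% interview lift reported above.
base = 69 / 94
p = projected_grant_probability(base, 0.178, with_interview=True)
print(f"{p:.0%}")  # 91%
```

Without an interview the projection stays at the 73% career rate; the cap only matters for examiners whose base rate plus lift would exceed 100%.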
