Prosecution Insights
Last updated: April 19, 2026
Application No. 18/252,189

DEVICE FOR ADJUSTING PARAMETER, ROBOT SYSTEM, METHOD, AND COMPUTER PROGRAM

Non-Final OA: §102, §103
Filed: May 09, 2023
Examiner: OSTROW, ALAN LINDSAY
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fanuc Corporation
OA Round: 3 (Non-Final)
Grant Probability: 74% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (26 granted / 35 resolved), +22.3% vs TC avg (above average)
Interview Lift: +37.7% higher allowance rate for resolved cases with an interview (strong)
Typical Timeline: 2y 7m avg prosecution; 30 applications currently pending
Career History: 65 total applications across all art units
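
The headline figures above are simple ratios over the examiner's resolved cases. Below is a minimal sketch of how they could be reproduced; the interview split and the Tech Center baseline are illustrative assumptions, not values reported by the tool, and only the 26 granted / 35 resolved counts come from the card above.

```python
# Minimal sketch: reproducing the Examiner Intelligence ratios from resolved-case counts.
# The interview split (18 vs 17) and the Tech Center average (0.52) are assumptions for
# illustration; only 26 granted / 35 resolved is taken from the report.

granted, resolved = 26, 35
career_allow_rate = granted / resolved            # 26 / 35 ≈ 0.743 -> shown as "74%"

tc_average = 0.52                                 # assumed TC baseline
delta_vs_tc = career_allow_rate - tc_average      # ≈ +0.223 -> "+22.3% vs TC avg"

# Hypothetical split of the 35 resolved cases by whether an interview was held.
granted_with_iv, resolved_with_iv = 16, 18        # assumed
granted_no_iv, resolved_no_iv = 10, 17            # assumed
interview_lift = granted_with_iv / resolved_with_iv - granted_no_iv / resolved_no_iv

print(f"career allow rate: {career_allow_rate:.1%}")   # 74.3%
print(f"vs TC average:     {delta_vs_tc:+.1%}")        # +22.3%
print(f"interview lift:    {interview_lift:+.1%}")     # ≈ +30.1% with these assumed counts
```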

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 57.7% (+17.7% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 35 resolved cases

Office Action

§102, §103
DETAILED ACTION

Status of Claims
Claims 1 and 4-14 are currently pending and have been examined in this application. This Non-final communication is the first action on the merits. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Arguments and Amendments
Applicant’s arguments, filed on 11/17/2025, with respect to the rejection of Claims 1, 4-6 and 9-12 under 35 USC 102 have been fully considered but they are moot in view of the new grounds of rejection provided below, which were necessitated by Applicant’s amendments to the claims, which changed the scope of the claims. Examiner notes that Applicant’s arguments are directed towards the newly amended claim limitation(s), which are addressed as indicated below. Applicant’s arguments, filed on 11/17/2025, with respect to the rejection of Claims 7 and 8 under 35 USC 103 have been fully considered and are not persuasive. Applicant’s arguments regarding claims 7 and 8 are directed towards their dependence on independent claim 1 and are therefore addressed by the examiner’s response to arguments regarding claim 1 provided in this section.

Additional Examiner Reply to Arguments with Regard to the previous 35 USC 102 Rejection:

Applicant’s Remarks: “The transformation parameter(s) of Watanabe is not for collating a workpiece model obtained by modeling the workpiece with the workpiece feature in the image data and is not applied to an algorithm for collating.” “Based on what is discussed in paragraph [0052] of Watanabe, one of ordinary skill in the art would, at best and if at all, consider the "fitting" of Watanabe as "collating." “ “Watanabe is silent about an "algorithm" for achieving the "fitting." “ “… the transformation parameter(s) of Watanabe do not include a "displacement amount” …”

Examiner Reply: Applicant has provided three central arguments regarding the previous 35 USC 102 and 35 USC 103 rejections, which are addressed in the responses below (labeled as a-c).

(a) Applicant asserts that the “fitting” of Watanabe does not correspond to the “collating” of the instant application. However, as can be gleaned from Applicant’s specification, the BRI definition of “collating” is the positioning of the workpiece model so that it coincides with an image of the workpiece feature to obtain a matching position. This definition very clearly corresponds to the “fitting” of Watanabe, as can be gleaned from at least Watanabe figures 3-4 as well as paragraphs [0057] – [0106].

(b) Applicant asserts that Watanabe is silent regarding an algorithm to perform “fitting”. However, the BRI definition of an algorithm is a series of steps or procedures. Applicant refers to an algorithm in the specification in 9 locations, but offers no other definition that would dispute the plain meaning of the word “algorithm”. Therefore, it can be gleaned from at least Watanabe figures 3-4 and paragraphs [0057]-[0106] that a computer-driven series of steps is performed in order to “fit” (i.e. “fitting”) the workpiece model to the workpiece feature.

(c) Regarding newly added claims 13 and 14, Applicant asserts that Watanabe does not teach a “displacement amount” regarding the positional difference between the workpiece model and the workpiece feature. However, Watanabe refers to transformations of 6 values representing differences of position (x, y, z) and orientation (Rx, Ry, and Rz). These transformations refer directly to the difference in linear and angular displacement amounts or relative positions of both the workpiece model and the workpiece feature. (See at least Watanabe paragraph [0121].)

Therefore, with regard to Applicant’s arguments above, the Examiner finds that Applicant’s arguments are not persuasive regarding the prior 35 USC 102 and 35 USC 103 rejections. In addition to the response above, the rejection has been updated to address new limitations in claims 1 and 10, which change the scope of independent claims 1 and 10. (Please see the 35 USC 102 and 35 USC 103 rejections below.)

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 4-6, and 9-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Watanabe (US 20170106540 A1).

Claim 1: Watanabe teaches the following limitations:

A device, comprising a processor configured to: obtain, as a detection position, a position of a workpiece in image data in which a workpiece feature of the workpiece imaged by a vision sensor is displayed, (Watanabe - [0049] The approximate position and orientation recognizing unit 12 acquires a three-dimensional shape model of a target object from the three-dimensional shape model holding unit 10. Also, the approximate position and orientation recognizing unit 12 acquires images (grayscale and range images) from the image pickup device 14. In the present embodiment, the images contain a pile of target objects. The approximate position and orientation recognizing unit 12 detects an individual object in the pile of objects appearing in a three-dimensional model image, and calculates an approximate position and orientation of the object with respect to the image pickup device 14 to recognize the approximate position and orientation of the individual object. A three-dimensional coordinate system (reference coordinate system) serving as a reference for position and orientation measurement is defined for the image pickup device 14.) by applying a parameter for collating a workpiece model obtained by modeling the workpiece with the workpiece feature in the image data to an algorithm for collating, and collating the workpiece model with the workpiece feature in accordance with the algorithm; (Watanabe - [0043] … Then, a transformation parameter between the two positions and orientations is registered. A method for accurately calculating a position and orientation will be described, which uses the registered transformation parameter to reduce erroneous recognition. … ; [0057] FIG.
3 is a flowchart illustrating a procedure for calculating a position and orientation of a target object according to the present embodiment.; Step S301 ; [0058] In step S301, the transformation parameter calculating unit 11 and the position and orientation calculating unit 13 acquire a three-dimensional shape model of a target object from the three-dimensional shape model holding unit 10. ; [0129] … The information processing apparatus 1501 is capable of storing a program that describes the processing procedure described above, and measuring the position and orientation of the target component 1505. …) generate the image data displaying the workpiece model together with the workpiece feature; (Watanabe - [0063] One of the two displayed models is defined as a reference model, and the other model is defined as an operating model. The position or orientation of one of the two displayed models may be displaced from the other model. This is because if the two displayed models completely overlap each other in the initial display, it is difficult to see that there are two models. Then, an observation image obtained by the virtual camera is rendered and displayed on the display screen of the display device 15.; [see also Figures 6A-6E]) receive first input data for displacing a position of the workpiece model displayed in the image data; (Watanabe - [0065] By allowing the user of the information processing apparatus 100 to manipulate the operating model displayed on the display screen of the display device 15, the operating model is superimposed on the reference model in a position and orientation different from that of the reference model in such a manner that the operating model is similar in appearance to the reference model. …) acquire, as a matching position, a position of the workpiece model in the image data when the position of the workpiece model displayed in the image data is displaced in response to receiving the first input data to arrange the workpiece model to coincide with the workpiece feature in the image data; (Watanabe - [0121] When button 6 is pressed, a position and orientation display window displays parameters (X, Y, Z, Rx, Ry, Rz) representing each of M positions and orientations after model fitting (M=6 in FIG. 8). Main window 2 displays an image of a pile of target objects measured by the image pickup device 14. On the basis of a position and orientation (indicated by a thick line in FIG. 8) selected from the M positions and orientations displayed in the position and orientation display window, a three-dimensional shape model is superimposed on the image displayed in main window 2 (as indicated by a broken line in FIG. 8).) adjust the parameter applied to the algorithm so as to obtain the detection position as a position corresponding to the matching position, on the basis of data representing a difference between the detection position and the matching position; and (Watanabe - [0122] The user checks each of three-dimensional shape models displayed in main window 2 by switching among the M positions and orientations. When the user finds out a correct position and orientation and an incorrect position and orientation for the same individual object in the M positions and orientations, the user checks position and orientation selection checkboxes associated with the correct and incorrect positions and orientations to select them. … ; [0123] Button 7 is enabled when two positions and orientations are selected with position and orientation selection checkboxes. 
As in the case of pressing button 3 in the first embodiment, pressing button 7 calculates a transformation parameter between the two selected positions and orientations and adds it to a transformation parameter list. Upon completion of the addition to the list, all the position and orientation selection checkboxes are reset (unchecked). ; [see also Figure 3 and paragraphs [0057] –[0106]] ) acquire the detection position from image data of a workpiece imaged by the vision sensor, in accordance with the algorithm to which the adjusted parameter is applied, to control a robot to execute a work on the workpiece. (Watanabe - [0042] In a first embodiment, the present invention is applied to accurately determine a position and orientation of a target object (component) in a pile and grasp the target object with a robot hand on the basis of the determined position and orientation. ; [0131] The robot system of the present embodiment recognizes a position and orientation of the target component 1505 in accordance with the processing procedure described in the first or second embodiment. Then, the robot system transforms the result from the camera coordinate system to the robot coordinate system. On the basis of the resulting position and orientation of the target component 1505 in the robot coordinate system, the robot system causes the robot controller 1503 to move the robot arm 1504 to the position and orientation in which the target component 1505 can be grasped. ; [see also Figure 3 and paragraphs [0057] –[0106]]) Examiner Note: Procedure corresponds to algorithm Fitting corresponds to collating Figure 3 describes a computer-based and pre-programmed procedure (i.e. algorithm) for fitting (i.e. collating) a workpiece model using parameters in conjunction with a detailed procedure that is explained in specification paragraphs [0057] – [0106] of Watanabe. Claim 4: Watanabe teaches the following limitations: The device according to claim 1, wherein the processor is further configured to: display the workpiece model at the detection position ; display the workpiece model at a randomly-determined position in the image data; or display the workpiece model at a position which is determined in accordance with a predetermined rule in the image data. (Watanabe - [0049] The approximate position and orientation recognizing unit 12 acquires a three-dimensional shape model of a target object from the three-dimensional shape model holding unit 10. Also, the approximate position and orientation recognizing unit 12 acquires images (grayscale and range images) from the image pickup device 14. In the present embodiment, the images contain a pile of target objects. The approximate position and orientation recognizing unit 12 detects an individual object in the pile of objects appearing in a three-dimensional model image, and calculates an approximate position and orientation of the object with respect to the image pickup device 14 to recognize the approximate position and orientation of the individual object. …) Claim 5: Watanabe teaches the following limitations: The device according to claim 1, wherein the processor is further configured to: receive second input data for deleting the workpiece model from the image data, or adding a second workpiece model to the image data and in accordance with the second input data, delete the displayed workpiece model from the image data, or additionally display the second workpiece model in the image data. 
(Watanabe - [0063] One of the two displayed models is defined as a reference model, and the other model is defined as an operating model. The position or orientation of one of the two displayed models may be displaced from the other model. This is because if the two displayed models completely overlap each other in the initial display, it is difficult to see that there are two models. Then, an observation image obtained by the virtual camera is rendered and displayed on the display screen of the display device 15. ; [0124] Upon completion of the addition to the transformation parameter list, as in the first embodiment, two three-dimensional shape models having positions and orientations used in calculating a transformation parameter are displayed in an overlapping manner as a thumbnail (registered orientation thumbnail 2) in registered orientation thumbnail display window 2. The thumbnail is associated with the transformation parameter registered in the list. Thus, pressing the “X” button in the upper right corner of the thumbnail allows the user to delete the registered transformation parameter from the list) Claim 6: Watanabe teaches the following limitations: The device according to claim 1, wherein the processor is further configured to: in accordance with a predetermined condition, delete the displayed workpiece model from the image data, or additionally display a second workpiece model in the image data. (Watanabe - [0063] One of the two displayed models is defined as a reference model, and the other model is defined as an operating model. The position or orientation of one of the two displayed models may be displaced from the other model. This is because if the two displayed models completely overlap each other in the initial display, it is difficult to see that there are two models. Then, an observation image obtained by the virtual camera is rendered and displayed on the display screen of the display device 15. ; [0124] Upon completion of the addition to the transformation parameter list, as in the first embodiment, two three-dimensional shape models having positions and orientations used in calculating a transformation parameter are displayed in an overlapping manner as a thumbnail (registered orientation thumbnail 2) in registered orientation thumbnail display window 2. The thumbnail is associated with the transformation parameter registered in the list. Thus, pressing the “X” button in the upper right corner of the thumbnail allows the user to delete the registered transformation parameter from the list) Claim 9: Watanabe teaches the following limitations: A robot system, comprising: a vision sensor configured to image a workpiece; a robot configured to execute a work on the workpiece; and (Watanabe - [0042] In a first embodiment, the present invention is applied to accurately determine a position and orientation of a target object (component) in a pile and grasp the target object with a robot hand on the basis of the determined position and orientation. ; [0091] In step S304, the approximate position and orientation recognizing unit 12 detects an individual object in a pile of target objects appearing in the captured image, calculates six parameters representing an approximate position and orientation of the detected target object in the sensor coordinate system, and records them.) 
the device according to claim 1, wherein the processor is configured to: acquire, as a detection position, a position of the workpiece in the image data imaged by the vision sensor, using the adjusted parameter; (Watanabe - [0049] The approximate position and orientation recognizing unit 12 acquires a three-dimensional shape model of a target object from the three-dimensional shape model holding unit 10. Also, the approximate position and orientation recognizing unit 12 acquires images (grayscale and range images) from the image pickup device 14. In the present embodiment, the images contain a pile of target objects. The approximate position and orientation recognizing unit 12 detects an individual object in the pile of objects appearing in a three-dimensional model image, and calculates an approximate position and orientation of the object with respect to the image pickup device 14 to recognize the approximate position and orientation of the individual object. A three-dimensional coordinate system (reference coordinate system) serving as a reference for position and orientation measurement is defined for the image pickup device 14.) acquire position data of the workpiece in a control coordinate system for controlling the robot, on the basis of the detection position acquired using the adjusted parameter; and generate an operation command for operating the robot on the basis of the position data. (Watanabe - [0131] The robot system of the present embodiment recognizes a position and orientation of the target component 1505 in accordance with the processing procedure described in the first or second embodiment. Then, the robot system transforms the result from the camera coordinate system to the robot coordinate system. On the basis of the resulting position and orientation of the target component 1505 in the robot coordinate system, the robot system causes the robot controller 1503 to move the robot arm 1504 to the position and orientation in which the target component 1505 can be grasped.) Claim 10: Watanabe teaches the following limitations: A method, comprising, by a processor: obtaining, as a detection position, a position of a workpiece in image data in which a workpiece feature of the workpiece imaged by a vision sensor is displayed, (Watanabe - [0049] The approximate position and orientation recognizing unit 12 acquires a three-dimensional shape model of a target object from the three-dimensional shape model holding unit 10. Also, the approximate position and orientation recognizing unit 12 acquires images (grayscale and range images) from the image pickup device 14. In the present embodiment, the images contain a pile of target objects. The approximate position and orientation recognizing unit 12 detects an individual object in the pile of objects appearing in a three-dimensional model image, and calculates an approximate position and orientation of the object with respect to the image pickup device 14 to recognize the approximate position and orientation of the individual object. A three-dimensional coordinate system (reference coordinate system) serving as a reference for position and orientation measurement is defined for the image pickup device 14.) 
by applying a parameter for collating a workpiece model obtained by modeling the workpiece with the workpiece feature in the image data to an algorithm for collating, and collating the workpiece model with the workpiece feature in accordance with the algorithm; (Watanabe - [0043] … Then, a transformation parameter between the two positions and orientations is registered. A method for accurately calculating a position and orientation will be described, which uses the registered transformation parameter to reduce erroneous recognition. … ; [0057] FIG. 3 is a flowchart illustrating a procedure for calculating a position and orientation of a target object according to the present embodiment.; Step S301 ; [0058] In step S301, the transformation parameter calculating unit 11 and the position and orientation calculating unit 13 acquire a three-dimensional shape model of a target object from the three-dimensional shape model holding unit 10. ; [0129] … The information processing apparatus 1501 is capable of storing a program that describes the processing procedure described above, and measuring the position and orientation of the target component 1505. …) generating the image data displaying the workpiece model together with the workpiece feature; (Watanabe - [0063] One of the two displayed models is defined as a reference model, and the other model is defined as an operating model. The position or orientation of one of the two displayed models may be displaced from the other model. This is because if the two displayed models completely overlap each other in the initial display, it is difficult to see that there are two models. Then, an observation image obtained by the virtual camera is rendered and displayed on the display screen of the display device 15.; [see also Figures 6A-6E] ) receiving first input data for displacing a position of the workpiece model displayed in the image data; (Watanabe - [0065] By allowing the user of the information processing apparatus 100 to manipulate the operating model displayed on the display screen of the display device 15, the operating model is superimposed on the reference model in a position and orientation different from that of the reference model in such a manner that the operating model is similar in appearance to the reference model. …) acquiring, as a matching position, a position of the workpiece model in the image data when the position of the workpiece model displayed in the image data is displaced in response to receiving the first input data to arrange the workpiece model to coincide with the workpiece feature in the image data; (Watanabe - [0121] When button 6 is pressed, a position and orientation display window displays parameters (X, Y, Z, Rx, Ry, Rz) representing each of M positions and orientations after model fitting (M=6 in FIG. 8). Main window 2 displays an image of a pile of target objects measured by the image pickup device 14. On the basis of a position and orientation (indicated by a thick line in FIG. 8) selected from the M positions and orientations displayed in the position and orientation display window, a three-dimensional shape model is superimposed on the image displayed in main window 2 (as indicated by a broken line in FIG. 8).) 
adjusting the parameter applied to the algorithm so as to obtain the detection position as a position corresponding to the matching position, on the basis of data representing a difference between the detection position and the matching position; and (Watanabe - [0122] The user checks each of three-dimensional shape models displayed in main window 2 by switching among the M positions and orientations. When the user finds out a correct position and orientation and an incorrect position and orientation for the same individual object in the M positions and orientations, the user checks position and orientation selection checkboxes associated with the correct and incorrect positions and orientations to select them. … ; [0123] Button 7 is enabled when two positions and orientations are selected with position and orientation selection checkboxes. As in the case of pressing button 3 in the first embodiment, pressing button 7 calculates a transformation parameter between the two selected positions and orientations and adds it to a transformation parameter list. Upon completion of the addition to the list, all the position and orientation selection checkboxes are reset (unchecked). ; [see also Figure 3 and paragraphs [0057] –[0106]] ) acquiring the detection position from image data of a workpiece imaged by the vision sensor, using in accordance with the algorithm to which the adjusted parameter is applied, to control a robot to execute a work on the workpiece. (Watanabe - [0042] In a first embodiment, the present invention is applied to accurately determine a position and orientation of a target object (component) in a pile and grasp the target object with a robot hand on the basis of the determined position and orientation. ; [0131] The robot system of the present embodiment recognizes a position and orientation of the target component 1505 in accordance with the processing procedure described in the first or second embodiment. Then, the robot system transforms the result from the camera coordinate system to the robot coordinate system. On the basis of the resulting position and orientation of the target component 1505 in the robot coordinate system, the robot system causes the robot controller 1503 to move the robot arm 1504 to the position and orientation in which the target component 1505 can be grasped. ; [see also Figure 3 and paragraphs [0057] –[0106]]) Examiner Note: Procedure corresponds to algorithm Fitting corresponds to collating Figure 3 describes a computer-based and pre-programmed procedure (i.e. algorithm) for fitting (i.e. collating) a workpiece model using parameters in conjunction with a detailed procedure that is explained in specification paragraphs [0057] – [0106] of Watanabe. Claim 11: Watanabe teaches the following limitations: A non-transitory computer-readable storage medium storing a computer program causing the processor to execute the method according to claim 10. (Watanabe - [0041] FIG. 10 illustrates a hardware configuration of an information processing apparatus 100 according to an embodiment. Referring to FIG. 10, a central processing unit (CPU) 1010 controls an overall operation of each device connected via a bus 1000. The CPU 1010 reads and executes processing steps and programs stored in a read-only memory (ROM) 1020. An operating system (OS), each processing program according to the embodiment, device drivers, and the like are stored in the ROM 1020. They are temporarily stored in a random-access memory (RAM) 1030 and appropriately executed by the CPU 1010. 
…)

Claim 12: Watanabe teaches the following limitations:

The device according to claim 1, wherein the processor is configured to, in response to an operator manually operating an input device, receive the first input data from the input device for displacing the position of the workpiece model displayed in the image data. (Watanabe - [0048] The transformation parameter calculating unit 11 displays a three-dimensional shape model of a target object in a virtual three-dimensional space and registers, through a user's manipulation, a relationship (transformation parameter) between two different positions and orientations confusable with each other. In the present embodiment, the transformation parameter calculating unit 11 sends a three-dimensional shape model held by the three-dimensional shape model holding unit 10 to the display device 15, and renders two three-dimensional shape models of the target object in the GUI of the display device 15. The transformation parameter calculating unit 11 receives a user's manipulation from the operating device 16 and places the two three-dimensional shape models in the confusable positions and orientations in the GUI of the display device 15.)

Claim 13: Watanabe teaches the following limitations:

The device according to claim 1, wherein the parameter includes at least one of: a displacement amount by which a position of the workpiece model is to be virtually changed in collating the workpiece model with the workpiece feature in accordance with the algorithm; a size of a window which defines a detection range for feature points of the workpiece model and the workpiece feature to be collated with each other; data which identifies the feature points; or image roughness upon collating the workpiece model with the workpiece feature in accordance with the algorithm. (Watanabe - [0051] …The position and orientation calculating unit 13 also acquires an approximate position and orientation from the approximate position and orientation recognizing unit 12. The position and orientation calculating unit 13 also acquires a transformation parameter held by the transformation parameter calculating unit 11. The position and orientation calculating unit 13 also acquires measurement information (grayscale and range images) from the image pickup device 14. From the acquired information, the position and orientation calculating unit 13 calculates the position and orientation of the target object.; [0104] … the transformation parameter between the two positions and orientations is registered and held. In position and orientation calculation, a position and orientation confusable with a fitting result is generated on the basis of the registered transformation parameter, and another fitting is performed using the generated position and orientation as an initial value. Then, of the two fitting results, a position and orientation with a higher evaluation value is adopted.)

Examiner Note: The transformations of the 6 values representing position (x, y, z) and orientation (Rx, Ry, and Rz) represent both linear and angular displacement amounts with respect to the relative positions of both the workpiece model and the workpiece feature. Fitting corresponds to collating. Examiner notes that Applicant provides the claim limitation, “parameter includes at least one of:”.
Based on the currently provided claim language, and further given the broadest reasonable interpretation of said claim language, this limitation appears to claim multiple potential parameters, and as such only one of the listed parameters needs to be addressed by the prior art in order to satisfy addressing this limitation. The examiner has selected “a displacement amount” as the limitation to be addressed in claim 13. Claim 14: Watanabe teaches the following limitations: The device according to claim 1, wherein the parameter includes a displacement amount by which a position of the workpiece model is to be virtually changed in collating the workpiece model with the workpiece feature in accordance with the algorithm, and the processor is configured to obtain, as a detection position, a position of the workpiece in the image data, by changing the position of the workpiece model by the displacement amount in the image data to collate the workpiece model with the workpiece feature. (Watanabe - [0051] …The position and orientation calculating unit 13 also acquires an approximate position and orientation from the approximate position and orientation recognizing unit 12. The position and orientation calculating unit 13 also acquires a transformation parameter held by the transformation parameter calculating unit 11. The position and orientation calculating unit 13 also acquires measurement information (grayscale and range images) from the image pickup device 14. From the acquired information, the position and orientation calculating unit 13 calculates the position and orientation of the target object.; {0104] … the transformation parameter between the two positions and orientations is registered and held. In position and orientation calculation, a position and orientation confusable with a fitting result is generated on the basis of the registered transformation parameter, and another fitting is performed using the generated position and orientation as an initial value. Then, of the two fitting results, a position and orientation with a higher evaluation value is adopted. ) Examiner Note: The transformations of the 6 values representing position (x,y,z) and orientation (Rx, Ry, and Rz) represent both linear and angular displacement amounts with respect to the relative positions of both the reference models (virtual workpiece) and the operating models (workpiece). Fitting corresponds to collating Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 7 is rejected under 35 U.S.C. 
103 as being unpatentable over Watanabe (US 20170106540 A1) as modified by Fujieda (US 20100232682 A1).

Claim 7: Watanabe does not explicitly teach the following limitations; however, Fujieda teaches:

The device according to claim 1, wherein the processor is further configured to adjust the parameter by repeatedly executing a series of operations of: determining a change amount of the parameter which allows the difference to be reduced, on the basis of the data representing the difference; (Fujieda – [0019] … In the second step, a numerical range of the parameter, which is set while numerical data in which an amount of difference with sample data falls within the predetermined value in each parameter, is specified by performing the step A and the step B in a plurality of cycles, …) updating the parameter by changing the parameter by the determined change amount; and acquiring data representing a difference between the detection position obtained using the updated parameter and the matching position. (Fujieda – [0106] When the three-dimensional recognition processing is ended, the recognition result closest to the sample data is selected, an amount of difference with the sample data (absolute value or square value of difference with sample data) is determined in each of the coordinate data and the angle data, and each difference amount is compared with a predetermined threshold (ST206). …)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Watanabe to provide a method for updating the measured parameters and reducing the difference between the target feature position and the model position in a cyclic manner as taught in Fujieda. Having the ability to update the measured position parameters of the model and target feature in a cyclic manner ensures that the system is consistently monitoring and reducing the difference between the model and the target, thus improving the accuracy of robot tasks involving the target feature.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Watanabe (US 20170106540 A1) as modified by Ohnuki (US 20170236262 A1).

Claim 8: Watanabe does not explicitly teach the following limitations; however, Ohnuki teaches:

The device according to claim 1, wherein the workpiece feature is acquired by virtually imaging the workpiece model with a vision sensor model being a model of the vision sensor. (Ohnuki - [0073] The visual sensor simulator 150 is a module for simulating the processing performed by the visual sensor 220, and performs image measurement of an input image including at least a part of a workpiece as a subject of the image in a manner associated with the imaging area predefined on the transporting path (conveyor) in the three-dimensional virtual space. … Typically, the image measurement performed by the visual sensor simulator 150 includes searching the input image for the part corresponding to one or more pieces of predetermined reference information.)

Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Watanabe to provide a method for simulating the imaging of the workpiece model as taught in Ohnuki. Having the ability to run a simulation of the workpiece imaging and targeting function without operating the robot in a production setting allows the operator to make adjustments to parameters in a safer and more economical environment.
Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure or directed to the state of the art is listed on the enclosed PTO-892. The following is a brief description for relevant prior art that was cited but not applied: Miao (US 20220228851A1) describes a technique for setting and values of parameters and specifying conditions for obtaining 3D measurement data represented by 3D coordinates indicating points on a surface of the measurement object. The system includes a three-dimensional sensor mountable on a robot, a parameter setter, a drive controller, a sensor controller, a registration processor, a storage, an input unit, and an output unit. Harel (US 20190143523 A1) describes a control system which is configured to determine a location of a workpiece in the workspace based on first sensor data from the first sensor and a three-dimensional (3D) model corresponding to the workpiece. The control system is configured to map a set of 2D coordinates from a second 2D image from the second sensor to a set of 3D coordinates based on the location, and to generate one or more control signals for the at least one robotic manipulator based on the set of 3D coordinates. Suzuki (US 20130238124 A1) describes a method for obtaining the positions and orientations of one or more target objects from the result of measuring a set of target objects by using a first sensor. A robot including a grip unit is controlled to grip one target object as a gripping target object among the target objects. Whether the grip unit has succeeded in gripping the gripping target object is determined from the result of measurement performed by a second sensor. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN LINDSAY OSTROW whose telephone number is (703)756-1854. The examiner can normally be reached M-F 8 - 5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached on (571) 270 5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALAN LINDSAY OSTROW/ Examiner, Art Unit 3657 /ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657
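
Two technical readings carry most of the weight in this action: the examiner maps Watanabe's six transformation values (X, Y, Z, Rx, Ry, Rz) onto the claimed linear and angular "displacement amount", and the claim 7 rejection turns on a cycle that determines a change amount, applies it, and re-measures the difference between the detection position and the matching position. The sketch below only makes those two readings concrete in schematic form; the poses, gain, and tolerance are invented for illustration, the loop adjusts the pose directly as a stand-in for the claimed parameter, and none of it is Watanabe's fitting procedure or the applicant's algorithm.

```python
# Schematic illustration only: a six-value pose difference ("displacement amount")
# and a claim-7-style adjust/re-measure cycle. Poses, gain, and tolerance are
# hypothetical; this is not Watanabe's fitting or the claimed algorithm itself.

Pose = tuple  # (X, Y, Z, Rx, Ry, Rz): three linear and three angular values

def displacement(detected: Pose, matched: Pose) -> Pose:
    """Per-axis difference between the detection position and the matching
    position -- the reading of the transformation parameter as linear and
    angular displacement amounts."""
    return tuple(d - m for d, m in zip(detected, matched))

def adjust(detected: Pose, matched: Pose, gain: float = 0.5,
           tol: float = 1e-3, max_cycles: int = 50) -> Pose:
    """Repeat: determine a change amount that reduces the difference, apply it,
    then re-measure -- the cyclic update the claim 7 rejection relies on."""
    current = list(detected)
    for _ in range(max_cycles):
        diff = displacement(tuple(current), matched)
        if max(abs(v) for v in diff) < tol:
            break
        current = [c - gain * d for c, d in zip(current, diff)]
    return tuple(current)

detected = (105.0, 48.0, 22.0, 0.10, -0.05, 1.55)  # hypothetical detection position
matched = (100.0, 50.0, 20.0, 0.00, 0.00, 1.57)    # hypothetical matching position
print(displacement(detected, matched))  # approx. (5.0, -2.0, 2.0, 0.1, -0.05, -0.02)
print(adjust(detected, matched))        # converges toward the matching position
```

Whether such a generic adjust/re-measure cycle reads on the claimed parameter adjustment is, of course, the disputed point; the sketch is only meant to make the examiner's mapping easier to follow.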

Prosecution Timeline

May 09, 2023
Application Filed
Apr 03, 2025
Non-Final Rejection — §102, §103
Jun 12, 2025
Interview Requested
Jul 02, 2025
Applicant Interview (Telephonic)
Jul 02, 2025
Examiner Interview Summary
Jul 17, 2025
Response Filed
Aug 19, 2025
Final Rejection — §102, §103
Oct 15, 2025
Interview Requested
Nov 17, 2025
Request for Continued Examination
Nov 23, 2025
Response after Non-Final Action
Dec 15, 2025
Non-Final Rejection — §102, §103
Feb 20, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583119
TRANSFER SYSTEM AND TRANSFER METHOD
2y 5m to grant (Granted Mar 24, 2026)
Patent 12576525
ROBOT SYSTEM
2y 5m to grant (Granted Mar 17, 2026)
Patent 12569989
ESTIMATION DEVICE, ESTIMATION METHOD, ESTIMATION PROGRAM, AND ROBOT SYSTEM
2y 5m to grant (Granted Mar 10, 2026)
Patent 12539611
ROBOT CONTROL APPARATUS, ROBOT CONTROL SYSTEM, AND ROBOT CONTROL METHOD
2y 5m to grant (Granted Feb 03, 2026)
Patent 12491627
INFORMATION PROCESSING APPARATUS AND COOKING SYSTEM
2y 5m to grant (Granted Dec 09, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 99% (+37.7%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 35 resolved cases by this examiner. Grant probability derived from career allow rate.
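
The note above states that grant probability is derived from the career allow rate. One derivation consistent with the displayed numbers is sketched below; the additive interview adjustment and the 99% cap are assumptions about how the tool might combine these figures, not documented behavior.

```python
# Plausible derivation of the projection figures from career data. The additive
# interview lift and the 0.99 cap are assumptions; the report does not state its formula.

granted, resolved = 26, 35
grant_probability = granted / resolved                          # ≈ 0.743 -> "74%"
interview_lift = 0.377                                          # "+37.7%" from the examiner card
with_interview = min(grant_probability + interview_lift, 0.99)  # capped (assumed) -> "99%"

print(f"{grant_probability:.0%} baseline, {with_interview:.0%} with interview")
```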
