DETAILED ACTION
Status of Claims
Claims 1-4, 6-17, and 20-21 are currently pending and have been examined in this application. This Final Rejection is in response to the amendment submitted on 8/1/2025.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Arguments and Amendments
Applicant’s arguments, filed on 8/1/2025, with respect to the rejection of Claims 1-21 under 35 U.S.C. 103 have been fully considered, but they are moot in view of the new grounds of rejection set forth below, which were necessitated by Applicant’s amendments to the claims, which changed the scope of the claims. Examiner notes that, to the extent Applicant’s arguments are directed to the newly amended claim limitation(s), those limitations are addressed by the newly cited prior art, as indicated below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4, 6-7, 9-13, 17, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Ishige (US 20170151671 A1) as modified by Crawford (US 20180296283 A1) and Shimodaira (US 20180250823 A1).
Claim 1:
Ishige teaches the following limitations:
A control apparatus for controlling a robot arm apparatus that holds a holdable object, the control apparatus comprising: a camera that captures an image including at least a part of a work object and a tip of the holdable object; and (Ishige – [0099] After this, the robot arm 10 is driven and the hand 102 is moved onto the fixed camera 2. Next, by capturing the target by the fixed camera 2, based on the image captured by the fixed camera 2, the control device 5 determines whether or not the target is accurately gripped by the hand 102.)
a processing circuit that controls the robot arm apparatus that holds the holdable object, (Ishige - [0011] A control device according to an aspect of the invention is a control device which is capable of controlling each of a robot including a robot arm, and an imaging portion …) wherein the processing circuit sets a position of a target object included in the work object; (Ishige - [0065] A robot system 100 illustrated in FIG. 1 is, for example, a device which is used in work of gripping, transporting, and assembling a target, such as an electronic component and an electronic device. ; [0090] The driving control portion 51 can control the driving of each driving portion 130 across the driving of each of the arms 11 to 16 of the robot 1, and can drive and stop each of the arms 11 to 16 independently. For example, in order to move the hand 102 to a target position, the driving control portion 51 derives a target value of the motor of each driving portion 130 provided in each of the arms 11 to 16.)
wherein the processing circuit outputs a first control signal to the robot arm apparatus based on the position of the target object and the position of the tip of the holdable object, the first control signal causing the tip of the holdable object to move to the position of the target object. (Ishige - [0074] The hand 102 is attached to the tip end surface of the sixth arm 16, and the center axis of the hand 102 matches the center axis A6 of the sixth arm 16. Here, the center of the tip end surface of the hand 102 is referred to as a tool center point (TCP). … ; [0099] First, by the control of the control device 5, the robot arm 10 is driven and the target is gripped by the hand 102. After this, the robot arm 10 is driven and the hand 102 is moved onto the fixed camera 2. Next, by capturing the target by the fixed camera 2, based on the image captured by the fixed camera 2, the control device 5 determines whether or not the target is accurately gripped by the hand 102. …)
Examiner Note:
Hand 102 corresponds to Holdable Object
Ishige combined with Shimodaira does not explicitly teach the following limitations; however, Crawford teaches:
a first marker being fixed at a predetermined position of the holdable object different from the tip of the holdable object; and
(Crawford – [0091] … FIG. 13B depicts a close-up view of the end-effector 112 with guide tube 114 and a plurality of tracking markers 118 rigidly affixed to the end-effector 112. In this embodiment, the plurality of tracking markers 118 are attached to the guide tube 112. FIG. 13C depicts an instrument 608 (in this case, a probe 608A) with a plurality of tracking markers 804 rigidly affixed to the instrument 608. As described elsewhere herein, the instrument 608 could include any suitable surgical instrument, such as, but not limited to, guide wire, cannula, a retractor, a drill, a reamer, a screw driver, an insertion tool, a removal tool, or the like.; [0097] When tracking any tool, such as a guide tube 914 connected to the end effector 912 of a robot system 100, 300, 600, the tracking array's primary purpose is to update the position of the end effector 912 in the camera coordinate system. …)
wherein the processing circuit detects the first marker from the captured image;
(Crawford - [0108] When using an external 3D tracking system 100, 300, 600 to track a full rigid body array of three or more markers attached to a robot's end effector 112 (for example, as depicted in FIGS. 13A and 13B), it is possible to directly track or to calculate the 3D position of every section of the robot 102 in the coordinate system of the cameras 200, 326. …) wherein the processing circuit calculates a position of the tip of the holdable object based on the first marker; and
(Crawford - [0097] When tracking any tool, such as a guide tube 914 connected to the end effector 912 of a robot system 100, 300, 600, the tracking array's primary purpose is to update the position of the end effector 912 in the camera coordinate system. …)
Ishige combined with Crawford does not explicitly teach the following limitations; however, Shimodaira teaches:
wherein the processing circuit detects feature points of the work object from the captured image; (Shimodaira - [0286] Next, a search model of a registered face is generated in a state in which a height image corresponding to each face of a search model target workpiece is registered as mentioned above. Herein, a feature point required for a three-dimensional search is extracted for each registered face. Herein, a description will be made of an example in which two types of feature points such as a feature point (a feature point on a contour) representing a contour of a shape and a feature point (a feature point on a surface) representing a surface shape are used as the feature point. …) wherein the processing circuit calculates a position of the target object based on the feature points of the work object; (Shimodaira – [0292] An evaluation index of a three-dimensional search result may be set. For example, in the example illustrated in FIG. 13C or 13D, a three-dimensional search result is scored on the basis of to what degree corresponding feature points are present in an input image… For example, workpieces are set to be preferentially gripped in an order of higher scores. … ; [0534] …The operation field 142 includes the target workpiece selection field 211 for selecting a target workpiece, the detection search model display field 212, and the “grip check” button 213 for displaying grip position candidates for the selected workpiece in a list form. … )
Examiner Note:
Workpiece corresponds to Target Object or Work Object
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ishige to include a method of tracking the end effector with a visual marker as taught in Crawford, and to further include a method of identifying the feature points of tools and targets as taught in Shimodaira. Having the ability to track the position of the end effector, and also to locate key feature points and store them as feature point maps (such as contours and surfaces) of tools and workpieces, allows the robot to consistently identify the correct work object and position the tip of a tool at the desired location.
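Examiner Note: For illustration only, the following minimal Python sketch (not taken from any cited reference; all names and values are hypothetical) shows one way the combined teaching could be realized: a marker rigidly fixed at a known offset from the tool tip yields the tip position in the camera frame once the marker pose is detected.

    import numpy as np

    def tip_position_from_marker(R_cam_marker, t_cam_marker, tip_offset_marker):
        # R_cam_marker: 3x3 rotation of the marker frame in the camera frame.
        # t_cam_marker: marker origin in the camera frame (metres).
        # tip_offset_marker: fixed vector from the marker origin to the tool
        #   tip, measured once in the marker frame (the marker is rigidly
        #   fixed at a predetermined position different from the tip).
        return R_cam_marker @ tip_offset_marker + t_cam_marker

    # Hypothetical example: marker rotated 90 degrees about Z, 0.5 m in
    # front of the camera, tip 10 cm along the marker's X axis.
    R = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = np.array([0.0, 0.0, 0.5])
    print(tip_position_from_marker(R, t, np.array([0.10, 0.0, 0.0])))  # -> [0. 0.1 0.5]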
Claim 2:
Ishige combined with Crawford does not explicitly teach the following limitations; however, Shimodaira teaches:
The control apparatus as claimed in claim 1, wherein the processing circuit calculates a position of the target object in a coordinate system of the camera (Shimodaira – [0230] Calibration which will be described later may be performed on the basis of an image captured by the sensor unit 2 such that an actual position coordinate (a coordinate of a movement position of the end effector EET) of the workpiece WK can be linked to a position coordinate on an image displayed on the display unit 3.) based on the feature points of the work object (Shimodaira – [0286] Next, a search model of a registered face is generated in a state in which a height image corresponding to each face of a search model target workpiece is registered as mentioned above. Herein, a feature point required for a three-dimensional search is extracted for each registered face. Herein, a description will be made of an example in which two types of feature points such as a feature point (a feature point on a contour) representing a contour of a shape and a feature point (a feature point on a surface) representing a surface shape are used as the feature point. FIG. 12A illustrates a state in which two types of feature points are extracted with respect to a search model SM of a height image (corresponding to FIG. 9A) viewed from the X axis direction.)
wherein the processing circuit converts the position of the target object and the position of the tip of the holdable object in the coordinate system of the camera, into positions in a coordinate system of the robot arm apparatus.
(Shimodaira – [0307] … when a workpiece is picked by a real end effector, a vision coordinate which is a coordinate of a three-dimensional space (vision space) in which the workpiece is imaged by the sensor unit is required to be converted into a robot coordinate used for the robot controller 6 to actually drive the robot. ; [0313] The calibration portion 8w is a member for acquiring calibration information for converting a position and an attitude calculated in a coordinate system of a vision space which is a virtual three-dimensional space displayed on the display unit into a position and an attitude in a coordinate system of a robot space in which the robot controller operates an end effector.)
Ishige combined with Shimodaira does not explicitly teach the following limitations; however, Crawford teaches:
wherein the processing circuit calculates a position of the tip of the holdable object in the coordinate system of the camera based on the first marker; and
(Crawford – [0091] … FIG. 13B depicts a close-up view of the end-effector 112 with guide tube 114 and a plurality of tracking markers 118 rigidly affixed to the end-effector 112. In this embodiment, the plurality of tracking markers 118 are attached to the guide tube 112. FIG. 13C depicts an instrument 608 (in this case, a probe 608A) with a plurality of tracking markers 804 rigidly affixed to the instrument 608. As described elsewhere herein, the instrument 608 could include any suitable surgical instrument, such as, but not limited to, guide wire, cannula, a retractor, a drill, a reamer, a screw driver, an insertion tool, a removal tool, or the like.; [0097] When tracking any tool, such as a guide tube 914 connected to the end effector 912 of a robot system 100, 300, 600, the tracking array's primary purpose is to update the position of the end effector 912 in the camera coordinate system. …)
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ishige to include a method of tracking the end effector with a visual marker as taught in Crawford, and to further include a method of identifying the feature points of tools and targets as taught in Shimodaira. Having the ability to track the position of the end effector, and also to locate key feature points and store them as feature point maps (such as contours and surfaces) of tools and workpieces, allows the robot to consistently identify the correct work object and position the tip of a tool at the desired location.
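Examiner Note: As a purely illustrative sketch (hypothetical names and values; not taken from any cited reference), the conversion from the camera coordinate system to the robot coordinate system recited above can be expressed as a single calibrated 4x4 homogeneous transform applied to each position:

    import numpy as np

    def to_robot_frame(T_robot_cam, p_cam):
        # Map a 3D point from the camera coordinate system into the robot
        # coordinate system with a 4x4 homogeneous transform obtained from
        # calibration (e.g. hand-eye calibration).
        p_h = np.append(p_cam, 1.0)        # homogeneous coordinates
        return (T_robot_cam @ p_h)[:3]

    # Assumed calibration result: camera 1 m above the robot base, axes aligned.
    T = np.eye(4)
    T[2, 3] = 1.0
    target_robot = to_robot_frame(T, np.array([0.2, 0.0, 0.4]))
    tip_robot    = to_robot_frame(T, np.array([0.2, 0.1, 0.4]))
    print(target_robot, tip_robot)         # both points lifted by 1 m along Z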
Claim 3:
Ishige combined with Crawford does not explicitly teach the following limitations; however, Shimodaira teaches:
The control apparatus as claimed in claim 2, wherein the processing circuit further calculates a direction of the target object in the coordinate system of the camera (Shimodaira – [0230] Calibration which will be described later may be performed on the basis of an image captured by the sensor unit 2 such that an actual position coordinate (a coordinate of a movement position of the end effector EET) of the workpiece WK can be linked to a position coordinate on an image displayed on the display unit 3.)
based on the feature points of the work object, (Shimodaira – [0286] Next, a search model of a registered face is generated in a state in which a height image corresponding to each face of a search model target workpiece is registered as mentioned above. Herein, a feature point required for a three-dimensional search is extracted for each registered face. Herein, a description will be made of an example in which two types of feature points such as a feature point (a feature point on a contour) representing a contour of a shape and a feature point (a feature point on a surface) representing a surface shape are used as the feature point. FIG. 12A illustrates a state in which two types of feature points are extracted with respect to a search model SM of a height image (corresponding to FIG. 9A) viewed from the X axis direction.) wherein the processing circuit further calculates a direction of the holdable object in the coordinate system of the camera based on the captured image, and (Shimodaira – [0230] Calibration which will be described later may be performed on the basis of an image captured by the sensor unit 2 such that an actual position coordinate (a coordinate of a movement position of the end effector EET) of the workpiece WK can be linked to a position coordinate on an image displayed on the display unit 3.) wherein the processing circuit converts the direction of the target object and the direction of the holdable object in the coordinate system of the camera, into directions in the coordinate system of the robot arm apparatus, and (Shimodaira – [0307] … when a workpiece is picked by a real end effector, a vision coordinate which is a coordinate of a three-dimensional space (vision space) in which the workpiece is imaged by the sensor unit is required to be converted into a robot coordinate used for the robot controller 6 to actually drive the robot. ; [0313] The calibration portion 8w is a member for acquiring calibration information for converting a position and an attitude calculated in a coordinate system of a vision space which is a virtual three-dimensional space displayed on the display unit into a position and an attitude in a coordinate system of a robot space in which the robot controller operates an end effector.) the first control signal further includes angle information based on the converted direction of the target object and the converted direction of the holdable object. (Shimodaira – [0328] In contrast, in the method according to the present embodiment, rotational axes of RX, RY, and RZ are displayed with actual rotational axes as references. Unlike the axes being displayed in a three-dimensional space using the Z-Y-X system Euler's angle of the related art (FIGS. 18 to 21), a real rotational axis which is corrected by taking into consideration a state after rotation about another rotational axis is displayed as a correction rotational axis.)
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ishige and Crawford to include a method of identifying the feature points of tools and targets, mapping these features to a camera coordinate system, and then transforming the camera coordinates of the features into a set of robot coordinates and rotation angles as taught in Shimodaira. Having the ability to locate key features (such as contours, surfaces, and vertices) of tools and workpieces and map their coordinates directly to the current robot arm positions provides a means of more accurately positioning a robot tool tip in relation to the target workpiece.
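Examiner Note: For illustration only (hypothetical values; not taken from any cited reference), directions transform with the rotation block of the same calibrated transform, and angle information of the kind recited for the first control signal can be computed from the converted directions:

    import numpy as np

    def direction_to_robot_frame(T_robot_cam, d_cam):
        # Directions transform with the rotation block only (no translation).
        d = T_robot_cam[:3, :3] @ d_cam
        return d / np.linalg.norm(d)

    def angle_between(d1, d2):
        # Angle in radians between two unit direction vectors.
        return np.arccos(np.clip(np.dot(d1, d2), -1.0, 1.0))

    T = np.eye(4)                          # assumed calibration transform
    tool_axis   = direction_to_robot_frame(T, np.array([0.0, 0.0, 1.0]))
    target_norm = direction_to_robot_frame(T, np.array([0.0, 0.1, 1.0]))
    # Angle information of the kind the first control signal could carry.
    print(np.degrees(angle_between(tool_axis, target_norm)))  # about 5.7 degrees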
Claim 4:
Ishige in combination with Shimodaira does not explicitly teach the following limitations; however, Crawford teaches:
The control apparatus as claimed in claim 2, wherein the first marker has a pattern formed such that a position of the first marker in the coordinate system of the camera can be calculated, and wherein the processing circuit calculates the position of the tip of the holdable object based on the pattern of the first marker.
(Crawford – [0091] … FIG. 13B depicts a close-up view of the end-effector 112 with guide tube 114 and a plurality of tracking markers 118 rigidly affixed to the end-effector 112. In this embodiment, the plurality of tracking markers 118 are attached to the guide tube 112. FIG. 13C depicts an instrument 608 (in this case, a probe 608A) with a plurality of tracking markers 804 rigidly affixed to the instrument 608. As described elsewhere herein, the instrument 608 could include any suitable surgical instrument, such as, but not limited to, guide wire, cannula, a retractor, a drill, a reamer, a screw driver, an insertion tool, a removal tool, or the like.; [0097] When tracking any tool, such as a guide tube 914 connected to the end effector 912 of a robot system 100, 300, 600, the tracking array's primary purpose is to update the position of the end effector 912 in the camera coordinate system. … ; [0108] When using an external 3D tracking system 100, 300, 600 to track a full rigid body array of three or more markers attached to a robot's end effector 112 (for example, as depicted in FIGS. 13A and 13B), it is possible to directly track or to calculate the 3D position of every section of the robot 102 in the coordinate system of the cameras 200, 326. …)
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ishige and Shimodaira to include a method of using visual markers, cameras, and a camera coordinate system to identify the position of a tool tip in the work environment by placing a visual marker directly on the moveable robot end effector as taught in Crawford. Having the ability to effectively track the robot tool tip in the camera coordinate system ensures the accuracy of tool positioning in relation to the workpiece and the work environment.
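Examiner Note: As an illustrative sketch only, a marker pattern of known geometry permits pose recovery from a single image via 2D-3D correspondences, for example with OpenCV's solvePnP; the corner coordinates, pixel detections, intrinsics, and tip offset below are hypothetical.

    import numpy as np
    import cv2  # OpenCV: solvePnP recovers pose from 2D-3D correspondences

    # Corners of a 4 cm square marker, expressed in the marker's own frame.
    obj_pts = np.array([[-0.02,  0.02, 0.0],
                        [ 0.02,  0.02, 0.0],
                        [ 0.02, -0.02, 0.0],
                        [-0.02, -0.02, 0.0]])
    # Hypothetical detected corner pixels and pinhole intrinsics.
    img_pts = np.array([[300., 220.], [340., 221.], [339., 261.], [299., 260.]])
    K = np.array([[600.,   0., 320.],
                  [  0., 600., 240.],
                  [  0.,   0.,   1.]])

    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)             # marker pose in the camera frame
    # Fixed, pre-measured offset from the marker to the tool tip (10 cm).
    tip_cam = R @ np.array([0.10, 0.0, 0.0]) + tvec.ravel()
    print(ok, tip_cam)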
Claim 6:
Ishige combined with Crawford does not explicitly teach the following limitations; however, Shimodaira teaches:
The control apparatus as claimed in claim 1, further comprising a memory that stores a feature point map in advance, (Shimodaira – [0276] In bin picking, first, each workpiece is required to be extracted from a plurality of workpiece groups loaded in bulk in order to determine a workpiece which can be gripped. Here, a shape of a search target workpiece is registered as a workpiece model in advance with respect to shapes of a workpiece group having height information obtained by the sensor unit, and a three-dimensional search is performed by using the workpiece model such that a position and an attitude of each workpiece are detected.) the feature point map including three-dimensional coordinates of a plurality of feature points included in the work object, and (Shimodaira – [0287] The feature points SCP on the surface are extracted from a surface of a workpiece model at a predetermined interval. On the other hand, regarding the feature points OCP on the contour, for example, an edge of a location or the like of which a height changes is extracted, and a location further having undergone a thinning process is extracted as a feature point at a predetermined interval. As mentioned above, each feature point indicates a three-dimensional shape of a face. ; [0288] … three-dimensional shapes are acquired by imaging a workpiece group in which a plurality of workpieces are loaded in bulk as illustrated in FIG. 13A or 13B as an input image. … ) two-dimensional coordinates of the plurality of feature points in a plurality of captured images obtained by capturing the work object from a plurality of different positions, wherein the processing circuit calculates the position of the target object with reference to the feature point map. (Shimodaira – [0367] … a two-dimensional plane image onto which a workpiece is projected onto a plane with an X-Y designation unit is displayed, and a grip position is designated on the plane image. An X-Y designation screen 230 as an aspect of the X-Y designation unit is illustrated in FIG. 35B. On the X-Y designation screen 230 illustrated in FIG. 35A, three-dimensional CAD data of a workpiece is displayed in a plan view in the image display field 141. In this state, a user designates a part of a workpiece model WM11F desired to be gripped by an end effector model and displayed in the plan view. …)
Examiner Note:
Workpiece Model corresponds to Feature Point Map
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ishige and Crawford to include a method of identifying the feature points of tools and targets as taught in Shimodaira. Having the ability to locate key feature points and store them as feature point maps (such as contours and surfaces) of tools and workpieces allows the robot to consistently identify the correct work object and position the tip of a tool at the desired location.
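Examiner Note: For illustration only (hypothetical structure; not taken from any cited reference), a feature point map of the kind recited — three-dimensional coordinates plus two-dimensional coordinates of the same points in multiple captured images — can be stored as follows:

    import numpy as np
    from dataclasses import dataclass, field

    @dataclass
    class FeaturePointMap:
        # points3d: (N, 3) three-dimensional coordinates of N feature
        #   points of the work object, stored in memory in advance.
        # observations: image identifier -> (N, 2) two-dimensional pixel
        #   coordinates of the same points, one entry per capture position.
        points3d: np.ndarray
        observations: dict = field(default_factory=dict)

    def target_position(fmap, matched_ids):
        # Deliberately simple stand-in for a 3D search: the target position
        # is taken as the centroid of the matched map points.
        return fmap.points3d[matched_ids].mean(axis=0)

    fmap = FeaturePointMap(points3d=np.array([[0.0, 0.0, 0.0],
                                              [0.1, 0.0, 0.0],
                                              [0.1, 0.1, 0.0]]))
    fmap.observations["view_0"] = np.array([[320., 240.], [380., 240.], [380., 300.]])
    print(target_position(fmap, [0, 1, 2]))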
Claim 7:
Ishige combined with Crawford does not explicitly teach the following limitations; however, Shimodaira teaches:
The control apparatus as claimed in claim 1, wherein the camera further obtains distances from the camera to points captured by the camera, and wherein the processing circuit: generates a feature point map based on the captured image and the distances, the feature point map including three-dimensional coordinates of a plurality of feature points included in the work object, and (Shimodaira – [0287] The feature points SCP on the surface are extracted from a surface of a workpiece model at a predetermined interval. On the other hand, regarding the feature points OCP on the contour, for example, an edge of a location or the like of which a height changes is extracted, and a location further having undergone a thinning process is extracted as a feature point at a predetermined interval. As mentioned above, each feature point indicates a three-dimensional shape of a face. ; [0288] … three-dimensional shapes are acquired by imaging a workpiece group in which a plurality of workpieces are loaded in bulk as illustrated in FIG. 13A or 13B as an input image. … ) two-dimensional coordinates of the plurality of feature points in a plurality of captured images obtained by capturing the work object from a plurality of different positions, and calculates the position of the target object with reference to the feature point map. (Shimodaira – [0367] - a two-dimensional plane image onto which a workpiece is projected onto a plane with an X-Y designation unit is displayed, and a grip position is designated on the plane image. An X-Y designation screen 230 as an aspect of the X-Y designation unit is illustrated in FIG. 35B. On the X-Y designation screen 230 illustrated in FIG. 35A, three-dimensional CAD data of a workpiece is displayed in a plan view in the image display field 141. In this state, a user designates a part of a workpiece model WM11F desired to be gripped by an end effector model and displayed in the plan view. …)
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ishige and Crawford to include a method of identifying the feature points of tools and targets as taught in Shimodaira. Having the ability to locate key feature points and store them as feature point maps (such as contours and surfaces) of tools and workpieces allows the robot to consistently identify the correct work object and position the tip of a tool at the desired location.
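Examiner Note: As a purely illustrative sketch (hypothetical intrinsics; not taken from any cited reference), generating such a map from a captured image and per-point camera distances reduces to pinhole back-projection; the sensed distance is treated here as depth along the optical axis:

    import numpy as np

    def backproject(u, v, depth, fx, fy, cx, cy):
        # Pinhole back-projection of a pixel (u, v) with a measured
        # camera-to-point distance, treated here as depth along the
        # optical axis, into a 3D camera-frame coordinate.
        return np.array([(u - cx) * depth / fx,
                         (v - cy) * depth / fy,
                         depth])

    # Hypothetical detected 2D feature points and their sensed depths.
    pixels = np.array([[320., 240.], [400., 260.]])
    depths = np.array([0.50, 0.52])
    points3d = np.stack([backproject(u, v, d, 600., 600., 320., 240.)
                         for (u, v), d in zip(pixels, depths)])
    print(points3d)  # each row becomes an entry of the generated feature point map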
Claim 9:
Ishige combined with Crawford does not explicitly teach the following limitations; however, Shimodaira teaches:
The control apparatus as claimed in claim 7, further comprising a memory that stores the feature point map generated by the processing circuit.
(Shimodaira – [0526] In step S8205, a relative position among a plurality of registered search models is registered as relationship information. Herein, the search model registration portion 8g stores relationship information of the remaining height images and the respective faces such as top, bottom, left, right, front and rear faces. ; [0628] The robot setting apparatus, the robot setting method, the robot setting program, the computer readable recording medium, and the apparatus storing the program according to the present invention can be appropriately used to verify a bin picking operation of a robot.)
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ishige and Crawford to include a method of storing the feature point maps of tools and targets as taught in Shimodaira. Having the ability to store key feature point maps allows the robot system to identify known features and to consistently identify the correct work object and position the tip of a tool at the desired location.
Claim 10:
Ishige teaches the following limitations:
The control apparatus as claimed in claim 7, wherein the processing circuit recognizes and sets the position of the target object in the work object using image processing.
(Ishige - [0101] In the work, in order to allow the robot 1 to accurately perform the work with respect to the target based on the image captured by the fixed camera 2, it is necessary to perform processing of acquiring the correction parameter for converting the coordinate (the position and the posture in the image coordinate system) on the image of the fixed camera 2 into the coordinate in the robot coordinate system, that is, the calibration of the fixed camera 2. …)
Claim 11:
Ishige teaches the following limitations:
The control apparatus as claimed in claim 7, wherein the processing circuit sets the position of the target object in the work object based on a first user input obtained through a first input apparatus.
(Ishige - [0095] The display equipment 41 includes a monitor 411 which is configured of a display panel, such as a liquid crystal display panel. The worker can confirm the image captured by the fixed camera 2 and the mobile camera 3 and the work or the like by the robot 1, via the monitor 411.)
Claim 12:
Ishige teaches the following limitations:
The control apparatus as claimed in claim 1, wherein the camera is fixed to the robot arm apparatus such that the camera can capture the tip of the holdable object when the robot arm apparatus holds the holdable object.
(Ishige - [0085] The mobile camera 3 is attached to the sixth arm 16 so as to be capable of capturing the tip end side of the robot arm 10 rather than the sixth arm 16. In addition, in the embodiment, on the design, the mobile camera 3 is attached to the sixth arm 16 so that the optical axis OA3 (optical axis of the lens 32) is substantially parallel to the center axis A6 of the sixth arm 16. In addition, since the mobile camera 3 is attached to the sixth arm 16, it is possible to change the posture thereof together with the sixth arm 16 by driving the robot arm 10.)
Claim 13:
Ishige teaches the following limitations:
The control apparatus as claimed in claim 1, wherein the control apparatus selectively obtains a captured image from a plurality of cameras, (Ishige - [0095] The display equipment 41 includes a monitor 411 which is configured of a display panel, such as a liquid crystal display panel. The worker can confirm the image captured by the fixed camera 2 and the mobile camera 3 and the work or the like by the robot 1, via the monitor 411.) the captured image including the at least part of the work object and the tip of the holdable object. (Ishige – [0099] After this, the robot arm 10 is driven and the hand 102 is moved onto the fixed camera 2. Next, by capturing the target by the fixed camera 2, based on the image captured by the fixed camera 2, the control device 5 determines whether or not the target is accurately gripped by the hand 102.)
Claim 17:
Ishige teaches the following limitations:
A control apparatus for controlling a robot arm apparatus that holds a holdable object, the control apparatus comprising: a camera that captures an image including at least a part of a work object and a tip of the holdable object, and
(Ishige – [0099] After this, the robot arm 10 is driven and the hand 102 is moved onto the fixed camera 2. Next, by capturing the target by the fixed camera 2, based on the image captured by the fixed camera 2, the control device 5 determines whether or not the target is accurately gripped by the hand 102.)
a processing circuit that controls the robot arm apparatus that holds the holdable object, (Ishige - [0011] A control device according to an aspect of the invention is a control device which is capable of controlling each of a robot including a robot arm, and an imaging portion …) wherein the processing circuit sets a position of a target object included in the work object; (Ishige - [0065] A robot system 100 illustrated in FIG. 1 is, for example, a device which is used in work of gripping, transporting, and assembling a target, such as an electronic component and an electronic device. ; [0090] The driving control portion 51 can control the driving of each driving portion 130 across the driving of each of the arms 11 to 16 of the robot 1, and can drive and stop each of the arms 11 to 16 independently. For example, in order to move the hand 102 to a target position, the driving control portion 51 derives a target value of the motor of each driving portion 130 provided in each of the arms 11 to 16.)
wherein the processing circuit outputs a control signal to the robot arm apparatus based on a user input obtained through an input apparatus, the control signal causing the tip of the holdable object to move to the position of the target object.
(Ishige - [0096] The operation equipment 42 is an input device which is configured of a keyboard, and outputs an operation signal which corresponds to the operation by the worker to the control device 5. Therefore, the worker can instruct various types of processing or the like to the control device 5 by operating the operation equipment 42. ; [0098] In the robot system 100 having the configuration, for example, it is possible to perform the following work. ; [0099] First, by the control of the control device 5, the robot arm 10 is driven and the target is gripped by the hand 102. After this, the robot arm 10 is driven and the hand 102 is moved onto the fixed camera 2. Next, by capturing the target by the fixed camera 2, based on the image captured by the fixed camera 2, the control device 5 determines whether or not the target is accurately gripped by the hand 102. When accurately gripping the target, the hand 102 is moved on the work stand 61 by driving the robot arm 10. In addition, based on the image captured by the mobile camera 3, the target gripped by the hand 102 is assembled to the target which is disposed on the work stand 61 in advance.)
Ishige combined with Shimodaira does not explicitly teach the following limitations; however, Crawford teaches:
a first marker being fixed at a predetermined position of the holdable object different from the tip of the holdable object; and
(Crawford – [0091] … FIG. 13B depicts a close-up view of the end-effector 112 with guide tube 114 and a plurality of tracking markers 118 rigidly affixed to the end-effector 112. In this embodiment, the plurality of tracking markers 118 are attached to the guide tube 112. FIG. 13C depicts an instrument 608 (in this case, a probe 608A) with a plurality of tracking markers 804 rigidly affixed to the instrument 608. As described elsewhere herein, the instrument 608 could include any suitable surgical instrument, such as, but not limited to, guide wire, cannula, a retractor, a drill, a reamer, a screw driver, an insertion tool, a removal tool, or the like.; [0097] When tracking any tool, such as a guide tube 914 connected to the end effector 912 of a robot system 100, 300, 600, the tracking array's primary purpose is to update the position of the end effector 912 in the camera coordinate system. …)
wherein the processing circuit detects the first marker from the captured image;
(Crawford - [0108] When using an external 3D tracking system 100, 300, 600 to track a full rigid body array of three or more markers attached to a robot's end effector 112 (for example, as depicted in FIGS. 13A and 13B), it is possible to directly track or to calculate the 3D position of every section of the robot 102 in the coordinate system of the cameras 200, 326. …) wherein the processing circuit calculates a position of the tip of the holdable object based on the first marker; (Crawford - [0097] When tracking any tool, such as a guide tube 914 connected to the end effector 912 of a robot system 100, 300, 600, the tracking array's primary purpose is to update the position of the end effector 912 in the camera coordinate system. …)
Ishige combined with Crawford does not explicitly teach the following limitations; however, Shimodaira teaches:
wherein the processing circuit detects feature points of the work object from the captured image; (Shimodaira - [0286] Next, a search model of a registered face is generated in a state in which a height image corresponding to each face of a search model target workpiece is registered as mentioned above. Herein, a feature point required for a three-dimensional search is extracted for each registered face. Herein, a description will be made of an example in which two types of feature points such as a feature point (a feature point on a contour) representing a contour of a shape and a feature point (a feature point on a surface) representing a surface shape are used as the feature point. …) wherein the processing circuit calculates a position of the target object based on the feature points of the work object; (Shimodaira – [0292] An evaluation index of a three-dimensional search result may be set. For example, in the example illustrated in FIG. 13C or 13D, a three-dimensional search result is scored on the basis of to what degree corresponding feature points are present in an input image… For example, workpieces are set to be preferentially gripped in an order of higher scores. … ; [0534] …The operation field 142 includes the target workpiece selection field 211 for selecting a target workpiece, the detection search model display field 212, and the “grip check” button 213 for displaying grip position candidates for the selected workpiece in a list form. … )
wherein the processing circuit generates a radar chart representing a distance of the tip of the holdable object from the target object, and (Shimodaira – [0008] … an image display region in which the end effector model and the workpiece model are displayed on a virtual three-dimensional space; a grip reference point setting unit that defines a grip reference point corresponding to a position at which the workpiece model is gripped for the end effector model; a grip direction setting unit that defines a grip direction in which the end effector model grips the workpiece model; … a relative position setting unit that sets a relative position between the end effector model and the workpiece model such that the grip direction defined in the grip direction setting unit is orthogonal to a workpiece plane representing an attitude of the workpiece model displayed in the image display region, and the grip reference point is located at the grip position along the grip direction. …) outputs the radar chart and the captured image to a display apparatus such that the radar chart overlaps the captured image; and (Shimodaira – [0016] According to the robot setting apparatus related to a ninth aspect, in addition to any one of the configuration, a grip reference point and a grip direction passing through the grip reference point may be displayed to overlap the end effector model in the image display region. With this configuration, a movement direction for making an end effector model come close to a workpiece model can be presented to a user in a better understanding manner, and thus it is possible to provide an environment in which grip position adjustment work is facilitated to the user.)
Examiner Note:
The cited reference Shimodaira describes a display which relates the position of the robot tool to the position of the target object, which is equivalent to the purpose of the radar chart as described in the instant application.
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ishige to include a method of tracking the end effector with a visual marker as taught in Crawford, and to further include a method of identifying the feature points of tools and targets, and of displaying for the user the current distance from the robot tool tip to the target workpiece, as taught in Shimodaira. Having the ability to identify key features of tools and workpieces and then displaying the distance from the tool to the current target provides accurate measurement feedback for the user during the process of approaching the target.
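Examiner Note: For illustration only (hypothetical values; not taken from any cited reference), a radar chart representing the tip-to-target distance can be rendered so that it overlaps the captured image, e.g. with matplotlib inset polar axes:

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical per-axis distances (metres) from the tool tip to the target.
    offset = np.array([0.03, 0.01, 0.05])                 # dx, dy, dz
    labels = ["dx", "dy", "dz"]
    angles = np.linspace(0, 2 * np.pi, len(offset), endpoint=False)
    theta  = np.concatenate([angles, [angles[0]]])        # close the polygon
    values = np.concatenate([np.abs(offset), [abs(offset[0])]])

    image = np.zeros((480, 640, 3), dtype=np.uint8)       # stand-in captured image
    fig, ax = plt.subplots()
    ax.imshow(image)
    ax.axis("off")

    # Inset polar axes so the radar chart overlaps the captured image.
    radar = fig.add_axes([0.62, 0.60, 0.32, 0.32], projection="polar")
    radar.plot(theta, values)
    radar.fill(theta, values, alpha=0.3)
    radar.set_xticks(angles)
    radar.set_xticklabels(labels)
    plt.show()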
Claim 20:
Ishige teaches the following limitations:
A robot arm system comprising: a robot arm apparatus that holds a holdable object; at least one camera that captures an image including at least a part of a work object and a tip of the holdable object; and (Ishige – [0099] After this, the robot arm 10 is driven and the hand 102 is moved onto the fixed camera 2. Next, by capturing the target by the fixed camera 2, based on the image captured by the fixed camera 2, the control device 5 determines whether or not the target is accurately gripped by the hand 102.)
a control apparatus comprising a processing circuit that controls the robot arm apparatus that holds the holdable object, (Ishige - [0011] A control device according to an aspect of the invention is a control device which is capable of controlling each of a robot including a robot arm, and an imaging portion …) wherein the processing circuit sets a position of a target object included in the work object; (Ishige - [0065] A robot system 100 illustrated in FIG. 1 is, for example, a device which is used in work of gripping, transporting, and assembling a target, such as an electronic component and an electronic device. ; [0090] The driving control portion 51 can control the driving of each driving portion 130 across the driving of each of the arms 11 to 16 of the robot 1, and can drive and stop each of the arms 11 to 16 independently. For example, in order to move the hand 102 to a target position, the driving control portion 51 derives a target value of the motor of each driving portion 130 provided in each of the arms 11 to 16.)
wherein the processing circuit outputs a first control signal to the robot arm apparatus based on the position of the target object and the position of the tip of the holdable object, the first control signal causing the tip of the holdable object to move to the position of the target object.
(Ishige - [0074] The hand 102 is attached to the tip end surface of the sixth arm 16, and the center axis of the hand 102 matches the center axis A6 of the sixth arm 16. Here, the center of the tip end surface of the hand 102 is referred to as a tool center point (TCP). … ; [0099] First, by the control of the control device 5, the robot arm 10 is driven and the target is gripped by the hand 102. After this, the robot arm 10 is driven and the hand 102 is moved onto the fixed camera 2. Next, by capturing the target by the fixed camera 2, based on the image captured by the fixed camera 2, the control device 5 determines whether or not the target is accurately gripped by the hand 102. …)
Ishige combined with Shimodaira does not explicitly teach the following limitations; however, Crawford teaches:
a first marker being fixed at a predetermined position of the holdable object different from the tip of the holdable object; and (Crawford – [0091] … FIG. 13B depicts a close-up view of the end-effector 112 with guide tube 114 and a plurality of tracking markers 118 rigidly affixed to the end-effector 112. In this embodiment, the plurality of tracking markers 118 are attached to the guide tube 112. FIG. 13C depicts an instrument 608 (in this case, a probe 608A) with a plurality of tracking markers 804 rigidly affixed to the instrument 608. As described elsewhere herein, the instrument 608 could include any suitable surgical instrument, such as, but not limited to, guide wire, cannula, a retractor, a drill, a reamer, a screw driver, an insertion tool, a removal tool, or the like.; [0097] When tracking any tool, such as a guide tube 914 connected to the end effector 912 of a robot system 100, 300, 600, the tracking array's primary purpose is to update the position of the end effector 912 in the camera coordinate system. …)
wherein the processing circuit detects the first marker from the captured image;
(Crawford - [0108] When using an external 3D tracking system 100, 300, 600 to track a full rigid body array of three or more markers attached to a robot's end effector 112 (for example, as depicted in FIGS. 13A and 13B), it is possible to directly track or to calculate the 3D position of every section of the robot 102 in the coordinate system of the cameras 200, 326. …) wherein the processing circuit calculates a position of the tip of the holdable object based on the first marker; and (Crawford - [0097] When tracking any tool, such as a guide tube 914 connected to the end effector 912 of a robot system 100, 300, 600, the tracking array's primary purpose is to update the position of the end effector 912 in the camera coordinate system. …)
Ishige combined with Crawford does not explicitly teach the following limitations; however, Shimodaira teaches:
wherein the processing circuit detects feature points of the work object from the captured image; (Shimodaira - [0286] Next, a search model of a registered face is generated in a state in which a height image corresponding to each face of a search model target workpiece is registered as mentioned above. Herein, a feature point required for a three-dimensional search is extracted for each registered face. Herein, a description will be made of an example in which two types of feature points such as a feature point (a feature point on a contour) representing a contour of a shape and a feature point (a feature point on a surface) representing a surface shape are used as the feature point. …) wherein the processing circuit calculates a position of the target object based on the feature points of the work object; (Shimodaira – [0292] An evaluation index of a three-dimensional search result may be set. For example, in the example illustrated in FIG. 13C or 13D, a three-dimensional search result is scored on the basis of to what degree corresponding feature points are present in an input image… For example, workpieces are set to be preferentially gripped in an order of higher scores. … ; [0534] …The operation field 142 includes the target workpiece selection field 211 for selecting a target workpiece, the detection search model display field 212, and the “grip check” button 213 for displaying grip position candidates for the selected workpiece in a list form. … )
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ishige to include a method of tracking the end effector with a visual marker as taught in Crawford, and to further include a method of identifying the feature points of tools and targets as taught in Shimodaira. Having the ability to track the position of the end effector, and also to locate key feature points and store them as feature point maps (such as contours and surfaces) of tools and workpieces, allows the robot to consistently identify the correct work object and position the tip of a tool at the desired location.
Claim 21:
Ishige teaches the following limitations:
A control method for controlling a robot arm apparatus holding a holdable object, the control method including the steps of: (Ishige – [0099] After this, the robot arm 10 is driven and the hand 102 is moved onto the fixed camera 2. Next, by capturing the target by the fixed camera 2, based on the image captured by the fixed camera 2, the control device 5 determines whether or not the target is accurately gripped by the hand 102.) setting a position of a target object included in a work object; (Ishige - [0065] A robot system 100 illustrated in FIG. 1 is, for example, a device which is used in work of gripping, transporting, and assembling a target, such as an electronic component and an electronic device. ; [0090] The driving control portion 51 can control the driving of each driving portion 130 across the driving of each of the arms 11 to 16 of the robot 1, and can drive and stop each of the arms 11 to 16 independently. For example, in order to move the hand 102 to a target position, the driving control portion 51 derives a target value of the motor of each driving portion 130 provided in each of the arms 11 to 16.)
the captured image including at least a part of the work object, a tip of the holdable object, and
(Ishige – [0099] After this, the robot arm 10 is driven and the hand 102 is moved onto the fixed camera 2. Next, by capturing the target by the fixed camera 2, based on the image captured by the fixed camera 2, the control device 5 determines whether or not the target is accurately gripped by the hand 102.)
outputting a control signal to the robot arm apparatus based on the position of the target object and the position of the tip of the holdable object, the control signal causing the tip of the holdable object to move to the position of the target object.
(Ishige - [0074] The hand 102 is attached to the tip end surface of the sixth arm 16, and the center axis of the hand 102 matches the center axis A6 of the sixth arm 16. Here, the center of the tip end surface of the hand 102 is referred to as a tool center point (TCP). … ; [0099] First, by the control of the control device 5, the robot arm 10 is driven and the target is gripped by the hand 102. After this, the robot arm 10 is driven and the hand 102 is moved onto the fixed camera 2. Next, by capturing the target by the fixed camera 2, based on the image captured by the fixed camera 2, the control device 5 determines whether or not the target is accurately gripped by the hand 102. …)
Ishige combined with Shimodaira does not explicitly teach the following limitations; however, Crawford teaches:
a first marker being fixed at a predetermined position of the holdable object different from the tip of the holdable object;
(Crawford – [0091] … FIG. 13B depicts a close-up view of the end-effector 112 with guide tube 114 and a plurality of tracking markers 118 rigidly affixed to the end-effector 112. In this embodiment, the plurality of tracking markers 118 are attached to the guide tube 112. FIG. 13C depicts an instrument 608 (in this case, a probe 608A) with a plurality of tracking markers 804 rigidly affixed to the instrument 608. As described elsewhere herein, the instrument 608 could include any suitable surgical instrument, such as, but not limited to, guide wire, cannula, a retractor, a drill, a reamer, a screw driver, an insertion tool, a removal tool, or the like.; [0097] When tracking any tool, such as a guide tube 914 connected to the end effector 912 of a robot system 100, 300, 600, the tracking array's primary purpose is to update the position of the end effector 912 in the camera coordinate system. …)
detecting the first marker from the captured image; (Crawford - [0108] When using an external 3D tracking system 100, 300, 600 to track a full rigid body array of three or more markers attached to a robot's end effector 112 (for example, as depicted in FIGS. 13A and 13B), it is possible to directly track or to calculate the 3D position of every section of the robot 102 in the coordinate system of the cameras 200, 326. …)