DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 09/18/2024 and 02/04/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: “ROBOT DEVICE, ROBOT DEVICE CONTROLLING METHOD, AND RECORDING MEDIUM FOR INSERTION OF A PIN USING TWO MODES OF MOVEMENT”.
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Objections
Claims 14 and 18 are objected to because of the following informalities:
In claim 14, “slidably supports” should read “slidably support”.
In claim 18, “the article” should read “an article” as no article has previously been recited.
Appropriate correction is required.
Claim Interpretation
Claim 17 recites “A robot device controlling method comprising: a first step… and a second step…” with limitations similar to the first and second modes recited in claim 1. The specification recites “Therefore, in the movement control of the pin W according to the present embodiment, the first mode (first step) and the second mode (second step) can be executed as follows… In the movement control of the pin W according to the present embodiment, it is determined whether to execute, only the first mode, only the second mode, or both the first mode and the second mode” in paragraph [0079]. In light of the specification, the first step and second step recited in claim 17 are interpreted as corresponding to the first and second modes, respectively, described in the specification. Claim 17 is also interpreted such that only one of the first and second steps must be performed and that the other step may be optionally performed.
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“an imaging unit configured to capture an image” in claim 1. In paragraph [0003], the specification discloses that an imaging unit performs this function. In paragraph [0032], the specification discloses “a camera 120 serving as an imaging unit”. A camera has been interpreted as the corresponding structure performing the claimed function.
“a moving part configured to move an object with respect to the base part” in claim 1 and “a moving part configured to move the object with respect to the base part” in claim 17. In paragraph [0003], the specification discloses that the moving part performs this function. In paragraphs [0040]-[0043] and Fig. 3, the specification discloses that the moving part comprises a first finger 141, a second finger 142, sliding carriages, linear rails, motors, pulleys, belts, ball screw shafts, and ball nuts to move a pin W relative to the base part. These components have been interpreted as the corresponding structure performing the claimed function.
“a control unit configured to control the imaging unit, the moving part, and the robot” in claim 1. In paragraph [0032], the specification discloses “a control device 200 that controls a camera 120 serving as an imaging unit, the robot arm 10A, and the robot hand 100 (a moving part 130 to be described below)”. In paragraph [0035], the specification discloses “The control device 200 is constituted by a computer” and “the program for controlling each unit can be installed in the ROM 202” of the computer. In paragraphs [0042]-[0043], the specification discloses that the CPU 201 of the computer controls the position of the fingers in the moving part by issuing a command for driving the motors. In paragraph [0072], the specification discloses “the CPU 201 performs a visual servo operation by operating the robot arm 10A via the servo control unit 210 based on the angle command value.” A programmed computer has been interpreted as the corresponding structure performing the claimed control functions via the above commands.
“the control unit is configured to execute a first mode in which the position of the object is moved by controlling the robot to move the position of the end effector, and a second mode in which the position of the object is moved by controlling the moving part” in claim 1. The steps of Fig. 10, implemented on a processor [0060], have been interpreted as the corresponding structure for executing the first mode (S27 and S28) and second mode (S29 and S30).
“the control unit is configured to calculate a first movement amount and a second movement amount based on the image captured by the imaging unit and the target image” in claim 2 and “the control unit is configured to calculate a first movement amount and a second movement amount based on the image captured by the imaging unit, the target image, and a target position known with respect to the target image” in claim 3. In paragraph [0076], the specification discloses “the target image 902 is an image including a target feature amount 912 as a target position of the pin W.” In paragraph [0077], the specification discloses “in a composite image 903 obtained by combining the current image 901 and the target image 902, a control amount 920 is calculated as a target movement amount that is a difference between the target feature amount 912 and the current feature amount 911.” In paragraph [0081], the specification discloses “as illustrated in FIG. 12A, the CPU 201 divides the control amount 920 into a first control amount 920x and a second control amount 920y as viewed based on the certain coordinate reference.” The step of calculating the first and second movement amounts from the control amount based on coordinate axes has been interpreted as the corresponding structure performing the claimed function.
“the control unit is configured to move the position of the object in the first mode in a case where it is determined that the second mode is not executable based on the first movement amount” in claim 6. See the ‘No’ branch from step S26 leading to visual servo in step S27 in Fig. 10. In paragraph [0072], the specification discloses “the CPU 201 performs a visual servo operation by operating the robot arm 10A via the servo control unit 210 based on the angle command value.”
“the control unit is configured to execute visual servo based on the image captured by the imaging unit and the target image in a case where the position of the object is moved in the first mode” in claim 7. See visual servo in step S27 in Fig. 10. In paragraph [0072], the specification discloses “the CPU 201 performs a visual servo operation by operating the robot arm 10A via the servo control unit 210 based on the angle command value.” In paragraphs [0062-0072], the steps for calculating the angle command value based on the current image and target image via Formulas 1, 2, and 3 are disclosed.
“the control unit is configured to execute the visual servo until a difference between the image captured by the imaging unit and the target image becomes smaller than or equal to a certain value” in claim 8. See visual servo in step S27, which is looped until the difference between the images is less than or equal to a threshold in step S28 in Fig. 10. In paragraph [0072], the specification discloses “the CPU 201 performs a visual servo operation by operating the robot arm 10A via the servo control unit 210 based on the angle command value.” In paragraphs [0062-0072], the steps for calculating the angle command value based on the current image and target image via Formulas 1, 2, and 3 are disclosed.
“the control unit is configured to calculate a coordinate system of the end effector” in claim 9. In paragraph [0057], the specification discloses “the CPU 201 can calculate a coordinate system T0 of the robot device 10, a coordinate system Te of the robot hand 100 (end effector), and a coordinate system Tc of the camera 120.” In paragraph [0058], the specification discloses “In the present embodiment, since the camera 120 is fixed to the robot hand 100, the coordinate system Tc and the coordinate system Te coincide with each other.” Additionally, the Jacobians of the current image and robot position, which are related to the coordinate system Te, are calculated. However, the specific steps to calculate a coordinate system of the end effector (or of the current camera position Tc) are not disclosed. Therefore, while the specification discloses that the claimed function is performed by a processor, the specification does not disclose the necessary steps or algorithm to perform the claimed function.
“the moving part is configured to move the object in at least one coordinate direction in the coordinate system of the end effector” in claim 9. In paragraph [0079], the specification discloses “the moving part 130 of the robot hand 100 is controlled to linearly move the position of the pin W (object) using the first guide part 170 and the second guide part 180 described above.” Moving linearly is moving in at least one coordinate direction in the coordinate system of the end effector; see Figs. 11A-12C. In paragraphs [0040]-[0043] and Fig. 3, the specification discloses that the moving part comprises a first finger 141, a second finger 142, sliding carriages, linear rails, motors, pulleys, belts, ball screw shafts, and ball nuts to move a pin W relative to the base part. These components have been interpreted as the corresponding structure performing the claimed function.
“the moving part is configured to linearly move the object with respect to the base part” in claim 10. In paragraph [0079], the specification discloses “the moving part 130 of the robot hand 100 is controlled to linearly move the position of the pin W (object) using the first guide part 170 and the second guide part 180 described above.” In paragraphs [0040]-[0043] and Fig. 3, the specification discloses that the moving part comprises a first finger 141, a second finger 142, sliding carriages, linear rails, motors, pulleys, belts, ball screw shafts, and ball nuts to move a pin W relative to the base part. These components have been interpreted as the corresponding structure performing the claimed function.
“the moving part is configured to move the object on a plane with respect to the base part” in claim 11. In paragraph [0094], the specification discloses “as illustrated in FIG. 13A, the robot hand 100 according to the second embodiment has a finger base 1143 attached so as to be movable by the moving part (not illustrated) in the X direction which is a first direction. A first finger 1141 and a second finger 1142 are attached to the finger base 1143 so as to be movable by the moving part (not illustrated) in the Y direction. Therefore, in the robot hand 100 according to the second embodiment, the pin W gripped by the first finger 1141 and the second finger 1142 can be moved on the plane including the X direction and the Y direction, that is, the pin W is movable on the plane in the second mode.” A movable finger base, along with the components recited in paragraphs [0041]-[0043] (fingers, sliding carriages, linear rails, motors, pulleys, belts, ball screw shafts, and ball nuts), have been interpreted as the corresponding structure performing the claimed function.
“a gripping part configured to grip and support a workpiece” in claim 12. In paragraph [0038], the specification discloses “a finger part 140 serving as a gripping part”. In paragraph [0040], the specification discloses “the finger part 140 includes a first finger 141 and a second finger 142, and is configured to be able to grip the pin W, which is a workpiece”. In paragraph [0039], the specification discloses “A fingertip portion 141b of the first finger 141 and a fingertip portion 142b of the second finger are configured to be able to abut on the pin W from respective directions to support the pin W by sandwiching and gripping pin W therebetween.” Two fingers have been interpreted as the corresponding structure performing the claimed function.
“the moving part is configured to move the gripping part” in claim 12. In paragraph [0038], the specification discloses “a finger part 140 serving as a gripping part”. In paragraphs [0040]-[0043] and Fig. 3, the specification discloses that the moving part comprises a first finger 141, a second finger 142, sliding carriages, linear rails, motors, pulleys, belts, ball screw shafts, and ball nuts to move a pin W relative to the base part. These components have been interpreted as the corresponding structure performing the claimed function.
“a sliding part configured to slidably supports the first finger and the second finger” in claim 14. In paragraph [0041], the specification discloses “the first guide part 170 serving as a sliding part includes a first linear motion guide 171, and a movable part 172 and a movable part 173 slidably supported thereby. Also, the second guide part 180 serving as a sliding part includes a second linear motion guide 181, and a movable part 182 and a movable part 183 slidably supported thereby.” In Fig. 3, the movable parts 172, 173, 182, 183 appear to be carriages. Sliding carriages and linear rails have been interpreted as the corresponding structure performing the claimed function.
“a first driving part configured to drive the first finger to slide on the sliding part” in claim 14. In paragraph [0042], the specification discloses “the first driving part 150 roughly includes a motor 151, a driving pulley 152, a belt 153, a driven pulley 154, a ball screw shaft 155, a ball nut 156, and an angle detection sensor 159.” A motor, a driving pulley, a belt, a driven pulley, a ball screw shaft, and a ball nut have been interpreted as the corresponding structure performing the claimed function.
“a second driving part configured to drive the second finger to slide on the sliding part” in claim 14. In paragraph [0042], the specification discloses “the second driving part 160 roughly includes a motor 161, a driving pulley 162, a belt 163, a driven pulley 164, a ball screw shaft 165, a ball nut 166, and an angle detection sensor 169.” A motor, a driving pulley, a belt, a driven pulley, a ball screw shaft, and a ball nut have been interpreted as the corresponding structure performing the claimed function.
“the control unit is configured to drive the first finger earlier than the second finger in a case where the workpiece is moved from the one side to the other side, and drive the second finger earlier than the first finger in a case where the workpiece is moved from the other side to the one side” in claim 15. In paragraphs [0042]-[0043], the specification discloses that the CPU 201 of the computer controls the position of the fingers in the moving part by issuing a command for driving the motors. In paragraph [0048], the specification discloses “as illustrated in FIG. 6C, when the pin W is moved in the direction indicated by the arrow X from one side toward the other side by the finger part 140 moved by the moving part 130, the first finger 141 abutting on one side of the pin W is driven earlier than the second finger 142”. The processor programmed to perform the function via a command for driving the motor to move one finger before driving the motor to move the other finger has been interpreted as the corresponding structure performing the claimed function.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For example, the limitation “a processor configured to control the imaging unit, the moving part, and the robot” would not invoke 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 9-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claim 9, claim limitation “the control unit is configured to calculate a coordinate system of the end effector” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The specification discloses that the claimed function is performed by a processor in [0057]. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification merely recites the function and does not identify specific steps or an algorithm sufficient to perform the function, as described above in the Claim Interpretation section. Therefore, the claim lacks an adequate written description as required by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, because an indefinite, unbounded functional limitation would cover all ways of performing a function and indicates that the inventor has not provided sufficient disclosure to show possession of the invention. See MPEP 2163.03 and 2181.
Claims 10-15 are rejected due to their dependence upon rejected claim 9.
Claim 18 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claim contains subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
Claim 18 recites “An article manufacturing method comprising manufacturing the article using the robot device according to claim 1.” However, no steps of the article-manufacturing method or properties of the article are recited in dependent claim 18. Therefore, claim 18 covers all possible methods of manufacturing any object with the robot device of claim 1.
Specifically, the In re Wands factors are analyzed as follows:
(A) The breadth of the claims: As stated above, the scope of claim 18 covers all possible methods of manufacturing any possible article with the robot device of claim 1.
(B) The nature of the invention: The invention is directed to a robot arm capable of holding and placing a pin on a workpiece using a visual servo method and/or a sliding-finger feature operated according to a movement amount calculated from the difference between a target image and a current image taken by a camera mounted on the hand of the robot arm.
(C) The state of the prior art: The state of the prior art shows that ‘peg-in-hole’ tasks (similar to placing a pin on a workpiece) and the visual servo method are well known.
(D) The level of one of ordinary skill: The skill level of the PHOSITA is fairly high.
(E) The level of predictability in the art: The level of predictability is fairly high.
(F) The amount of direction provided by the inventor: Although a method (see Fig. 10, for example) of using the robot device to move a pin in two dimensions is given, the specification provides little or no guidance about the properties of the article—no materials or dimensions are disclosed—or how to make and use a method commensurate with the scope of the claim.
(G) The existence of working examples: The mechanical elements are not novel, and the specification and drawings demonstrate a usage of the robot device of claim 1.
(H) The quantity of experimentation needed to make or use the invention based on the content of the disclosure: With little guidance from the specification, it would require an undue amount of experimentation to make and use a method of manufacture commensurate with the scope of claim 18.
The broad scope of claim 18 is not enabled by the specification; accordingly, claim 18 is rejected under 35 U.S.C. 112(a).
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 9, claim limitation “the control unit is configured to calculate a coordinate system of the end effector” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. While a processor is disclosed to perform the claimed function, no software structure corresponding to “the control unit is configured to calculate a coordinate system of the end effector” is disclosed. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Claims 10-15 are rejected due to their dependence upon rejected claim 9.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 7-8, 16-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Harris et al. (US 20120190981 A1; hereafter “Harris”) in view of Aiiso (JP 2014151377 A; hereafter “Aiiso”). Citations of publications not in the English language refer to the paragraph numbers of the English translations.
Regarding claim 1, Harris discloses
A robot device comprising: an imaging unit configured to capture an image (See NIR camera 61 of system 8 in Figs. 3A and 3B. The NIR camera 61 is used to take images of a target (in this case, of a patient's arm) [0050]. See also [0048-0051], [0101], [0124], and [0143].);
an end effector including a base part and a moving part configured to move an object with respect to the base part (See Figs. 1 and 2A: end effector: insertion module 215; moving part: needle tool 3; base part: rest of insertion module 215, including the main sensor assembly 2. In needle tool 3, “A stepper motor 43 moves the butterfly needle gripper body 46 along [linear] guide rails 45 to insert the butterfly needle 41” relative to the base part [0067]; see Figs. 11-12. See also [0068-0071] and Figs. 13-15C.);
a robot configured to support the end effector in such a manner that the robot can move a position of the end effector (Robot arm 1 supports the insertion module 215; see Fig. 1. See “Once the butterfly needle 41 is appended to the needle tool 3 [comprised in the end effector], …the robot arm 1 moves the butterfly needle 41 to that position for insertion” [0040].); and
a control unit configured to control the imaging unit, the moving part, and the robot (Control unit: master computer 90 and robot controller 1a. See “The system 8 includes primary sensors 91… to collect sensor data 92… which is sent to a master computer 90 and analyzed. Based on this sensor data 92 and the mode of operation, the master computer 90 sends commands to several primary actuators 93. In an embodiment, the primary actuators 93 include… needle tool 3… as well as the robot arm 1” to control the moving part and the robot [0083]. See “the plurality of functions and methods executable by the master computer 90 includes a get video function to instruct the NIR camera 61 to capture a video of a patient's arm 7 in real-time” [0124]. See also [0041] and [0149].),
wherein in a case where a movement control is executed to move a position of the object based on the image captured by the imaging unit and a target image, the control unit is configured to execute… controlling the robot to move the position of the end effector, and… the position of the object is moved by controlling the moving part (An “original input image is binarized through a series of processing steps and then fed into a trained vein classifier,” and an insertion target is chosen automatically or manually, establishing a ‘target’ image [0143]. Firstly, “the robot arm 1 can be commanded to move in the direction of the target site using visual feedback” of the ‘current’ image from NIR camera 61 in the looped steps 143-145, as shown in Fig. 24 [0144]. Secondly, “step 148 comprises… inserting, using the at least one actuator 93, the medical device tool 212 [e.g., butterfly needle 41] into the patient's vessel at the target insertion site” [0162] via the stepper motor 43 in needle tool 3 (moving part) [0067]. The arm 11 comprises the end effector and the moving part. In both modes, the position of the butterfly needle 41 held in needle tool 3 is moved. See also [0083], [0120], and [0130] for the control unit, [0107] and [0118] for the target image, and [0131-0165] for the method of Fig. 24.).
However, Harris does not explicitly teach “wherein in a case where a movement control is executed to move a position of the object based on the image captured by the imaging unit and a target image, the control unit is configured to execute a first mode in which the position of the object is moved by controlling the robot to move the position of the end effector, and a second mode in which the position of the object is moved by controlling the moving part.”
Aiiso, in the same field of endeavor (robot control for peg-in-hole type tasks), teaches
wherein in a case where a movement control is executed to move a position of the object based on the image captured by the imaging unit and a target image, the control unit is configured to execute a first mode in which the position of the object is moved by controlling the robot to move the position of the end effector, and a second mode in which the position of the object is moved by controlling the moving part (See “Based on the information output from the reference work recognition unit 202, the target position calculation unit 203, and the work recognition unit 204, the control unit 205 controls the arm 11, the hand 14, etc. using different control methods depending on whether the opening A2 [in the target image] and the trajectory point B1 [in the current image] can be recognized or not. Furthermore, the control unit 205 switches between using visual servoing [step S112 in Fig. 5; see translated figure below] and using impedance control [step S114 in Fig. 5] based on the result of the determination. Then, the control unit 205 moves the workpiece B [object] held by the hand 14” [0078]. See “the control unit 205 controls the arm 11 by visual servoing” [0095] and “The control unit 205 controls the movable part [arm 11; see [0046]] by impedance control” [0107]. The arm 11 comprises the end effector and the moving part. See also Fig. 3.).
Harris discloses a vision‑guided robotic intravenous insertion system in which an imaging unit (NIR camera 61) captures an input image and a target image, a control unit commands the robot arm to move the end effector toward the target site using visual feedback in a visual‑servo loop, and subsequently actuates a moving part of the end effector (e.g., stepper motor 43 in needle tool 3) to insert the butterfly needle 41 into the patient’s vessel. Aiiso teaches a control unit that determines, based on image recognition results, whether specified positions are recognized and switches control modes accordingly, using visual‑servo control when positions are recognized and impedance (force‑based) control when recognition fails (i.e., “the control unit 205 switches between using visual servoing and using impedance control based on the determination.” [0078]). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Aiiso’s known mode‑selection technique to Harris’s known vision‑guided robotic insertion device to improve robustness in cases where visual recognition is degraded, yielding the predictable result that, when images reliably identify the target site, the robot arm is moved under visual‑servo control and, when recognition fails, the moving part of the end effector is actuated under impedance control to complete or adjust the procedure safely.
Regarding claim 2, Harris/Aiiso discloses the limitations of claim 1 as addressed above, and Harris additionally discloses
wherein the control unit is configured to calculate a first movement amount and a second movement amount based on the image captured by the imaging unit and the target image (Second amount: see “a displacement vector is calculated from the current position to the target position, and the robot arm 1 is controlled to move along that direction” [0108]. The target position/target insertion site is selected from a saved target image [0107] and “continuously updated as the [current] image insertion site is tracked by the vein tracker 110f” [0108]. The vision system 110 obtains images from NIR camera 61 [0098]. First amount: in step 147 of Fig. 24, the system calculates “an insertion depth for the medical device tool 212 to be inserted into the patient's vessel along the optimal insertion path” [0158]. The depth may be found from the current and target images (see [0101], [0111], [0150], [0157-0158]) or from ultrasound [0157-0158]. Furthermore, the first amount is based on the current image and the target image because the first amount is calculated after the medical device tool 212 (butterfly needle 41) is confirmed to be in the correct position for insertion; see [0152] and [0159]. The above processes are performed by the master computer 90 [0084]. See also [0118] and [0149-0151].),
the first movement amount being a movement amount of the position of the object in a first direction in which the position of the object is movable by the moving part (See “To insert the butterfly needle 41, the stepper motor 43 pushes the needle gripper assembly 200 forward, thus inserting the butterfly needle 41 into a target vein” [0068]. The insertion depth is thus in a first direction in which the butterfly needle 41 is movable by the needle tool 3.), and
the second movement amount being a movement amount of the position of the object in a second direction different from the first direction (The second movement amount is the amount calculated when performing visual servo to move the robot arm 1 [0108]. The robot movement occurs in six degrees of freedom (position and orientation) [0118]. In contrast, the first movement occurs in only one dimension with fixed orientation. Therefore, the second direction is usually different from the first direction. See also [0150-0151].), and
in a case where it is determined to execute the second mode based on the first movement amount, the control unit is configured to move the position of the object by the first movement amount in the second mode, and move the position of the object by the second movement amount in the first mode (See “The motion control decision engine 111 may also consider information from the ultrasound device 64 when initiating or controlling the insertion procedure,” including the depth of a vein (first movement amount) [0111]. In step 148 of Fig. 24, the butterfly needle 41 is inserted “to a depth equivalent to the depth determined during step 147” [0162]. Again, these processes are performed by the master computer 90 [0084]. In a first mode, “the robot arm 1 can be commanded to move in the direction of the target site using visual feedback” of the ‘current’ image from NIR camera 61 in the looped steps 143-145, as shown in Fig. 24 [0144]. If the patient moves between steps 147 and 148, the master computer 190 generates an updated insertion path (second movement amount) and commands the robot arm to reposition the butterfly needle 41 [0159-0160]. See also [0107-0110].).
In Aiiso, see also a virtual displacement V for visual servo control and a virtual displacement I for impedance control in [0096-0100] based on a target position calculated from a target image [0066] and the current image [0094].
Regarding claim 3, Harris/Aiiso discloses the limitations of claim 1 as addressed above, and Harris additionally discloses
wherein the control unit is configured to calculate a first movement amount and a second movement amount based on the image captured by the imaging unit, the target image, and a target position known with respect to the target image (Second amount: see “a displacement vector is calculated from the current position to the target position, and the robot arm 1 is controlled to move along that direction” [0108]. The target position/target insertion site is selected from a saved target image [0107] and “continuously updated as the [current] image insertion site is tracked by the vein tracker 110f” [0108]. The vision system 110 obtains images from NIR camera 61 [0098]. First amount: in step 147 of Fig. 24, the system calculates “an insertion depth for the medical device tool 212 to be inserted into the patient's vessel along the optimal insertion path” [0158]. The depth may be found from the current and target images (see [0101], [0111], [0150], [0157-0158]) or from ultrasound [0157-0158]. Furthermore, the first amount is based on the current image and the target image because the first amount is calculated after the medical device tool 212 (butterfly needle 41) is confirmed to be in the correct position for insertion; see [0152] and [0159]. The above processes are performed by the master computer 90 [0084]. See also [0118] and [0149-0151].),
the first movement amount being a movement amount of the position of the object in a first direction in which the position of the object is movable by the moving part (See “To insert the butterfly needle 41, the stepper motor 43 pushes the needle gripper assembly 200 forward, thus inserting the butterfly needle 41 into a target vein” [0068]. The insertion depth is thus in a first direction in which the butterfly needle 41 is movable by the needle tool 3.), and
the second movement amount being a movement amount of the position of the object in a second direction different from the first direction (The second movement amount is the amount calculated when performing visual servo to move the robot arm 1 [0108]. The robot movement occurs in six degrees of freedom (position and orientation) [0118]. In contrast, the first movement occurs in only one dimension with fixed orientation. Therefore, the second direction is usually different from the first direction. See also [0150-0151].), and
in a case where it is determined to execute the second mode based on the first movement amount, the control unit is configured to move the position of the object by the first movement amount in the second mode, and move the position of the object by the second movement amount in the first mode (See “The motion control decision engine 111 may also consider information from the ultrasound device 64 when initiating or controlling the insertion procedure,” including the depth of a vein (first movement amount) [0111]. In step 148 of Fig. 24, the butterfly needle 41 is inserted “to a depth equivalent to the depth determined during step 147” [0162]. Again, these processes are performed by the master computer 90 [0084]. In a first mode, “the robot arm 1 can be commanded to move in the direction of the target site using visual feedback” of the ‘current’ image from NIR camera 61 in the looped steps 143-145, as shown in Fig. 24 [0144]. If the patient moves between steps 147 and 148, the master computer 190 generates an updated insertion path (second movement amount) and commands the robot arm to reposition the butterfly needle 41 [0159-0160]. See also [0107-0110].).
In Aiiso, see also a virtual displacement V for visual servo control and a virtual displacement I for impedance control in [0096-0100] based on a target position calculated from a target image [0066] and the current image [0094].
Regarding claim 4, Harris/Aiiso discloses the limitations of claim 2 as addressed above, and Harris additionally discloses
wherein the control unit is configured to determine to execute the second mode in a case where the first movement amount is larger than the second movement amount (Because the first amount is calculated after the medical device tool 212 (butterfly needle 41) is confirmed to be in the correct position for insertion (i.e., after visual servo movement; see [0152] and [0159]), the second movement amount is zero when the first amount is calculated. If the patient moves between steps 147 and 148, the master computer 190 generates an updated insertion path (second movement amount) and commands the robot arm to reposition the butterfly needle 41 [0159-0160]. Therefore, when the motion control decision engine 111 (running on master computer 90) determines to insert butterfly needle 41 to the calculated depth in step 148, the first movement amount is larger than the second movement amount; see [0108-0111], [0115-0116], and [0162].).
Regarding claim 5, Harris/Aiiso discloses the limitations of claim 2 as addressed above, and Harris additionally discloses
wherein the control unit is configured to determine to execute the second mode in a case where the first movement amount is larger than a preset specified value (The motion control decision engine 111 (running on master computer 90) determines to insert butterfly needle 41 to the calculated depth (first movement amount) in step 148; see [0108-0111], [0115-0116], and [0162]. This occurs when the depth is larger than 0, as verified in step 146 [0154-0157].).
Regarding claim 7, Harris/Aiiso discloses the limitations of claim 1 as addressed above, and Harris additionally discloses
wherein the control unit is configured to execute visual servo based on the image captured by the imaging unit and the target image in a case where the position of the object is moved in the first mode (See “the robot arm 1 can be commanded to move in the direction of the target site using visual feedback” [0144] by the master computer 90 [0083]. This visual feedback is visual servo in the first mode: see “the robot arm 1 is caused to move above the insertion site with visual serving [sic] techniques, based on position feedback. That is, a displacement vector is calculated from the current position to the target position, and the robot arm 1 is controlled to move along that direction… the path for the robot arm 1 is continuously updated as the image insertion site is tracked by the vein tracker 110f” [0108] and “the sensors are mounted onto the robot arm, and visual servoing techniques are used to position the needle accurately” [0204]. The current and target positions are calculated from the current and target images, respectively; see [0107], [0118], and [0142-0144].).
In Aiiso, see also a first mode corresponding to the visual servo method in [0096-0098] and the benefits of using visual servoing in [0113].
Regarding claim 8, Harris/Aiiso discloses the limitations of claim 7 as addressed above, and Harris additionally discloses
wherein the control unit is configured to execute the visual servo until a difference between the image captured by the imaging unit and the target image becomes smaller than or equal to a certain value (See “Steps 143 through 145 may be carried out repeatedly in a loop until the system has determined it has reached the target insertion site” [0153], where step 143 includes real-time tracking of the target with NIR camera 61 [0142] and comparing to a saved target image ([0107] and [0143]). Step 145 includes moving the robot arm 1 to the target insertion site [0151]. See “the robot arm 1 is caused to move above the insertion site with visual serving [sic] techniques, based on position feedback. That is, a displacement vector is calculated from the current position to the target position, and the robot arm 1 is controlled to move along that direction. The distance between the insertion site and the location of the laser rangefinder 60 [is] broken up into a path, which the robot arm 1 follows until the robot arm 1 reaches a position suitable to initiate insertion” [0108]. Since “the path for the robot arm 1 is continuously updated as the image insertion site is tracked by the vein tracker 110f, and the updated insertion path is sent to the robot controller 1a subsequent to its calculation to guide the robot arm 1” [0108], the master computer 90 and robot controller 1a (control unit) move the robot arm 1 via visual servo until the tracked image insertion site matches the saved target image, that is, until the difference between the current image and the target image is equal to zero. See also [0147], [0204], and Fig. 24.).
In Aiiso, see also checking if the current position is the target position in [0109-0110].
Regarding claim 16, Harris/Aiiso discloses the limitations of claim 1 as addressed above, and Harris additionally discloses
wherein the imaging unit is supported by the base part (See Figs. 3A and 3B: NIR camera 61 is a part of the main sensor assembly 2, which is in the base part of insertion module 215. See also [0039] and Figs. 1 and 2A.).
In Aiiso, see also Figs. 10 and 11 where imaging units are mounted on the end of the arm 11 [0140-0142].
Regarding claim 17, Harris discloses
A robot device controlling method comprising: a first step in which in a case where a movement control is executed to move a position of an object based on an image captured by an imaging unit and a target image, a control unit moves the position of the object by controlling a robot configured to movably support an end effector to move the position of the end effector (An “original input image is binarized through a series of processing steps and then fed into a trained vein classifier,” and an insertion target is chosen automatically or manually, establishing a ‘target’ image [0143]. The robot arm 1 of system 8 is controlled by master computer 90 and robot controller 1a: “Based on this sensor data 92 and the mode of operation, the master computer 90 sends commands to several primary actuators 93. In an embodiment, the primary actuators 93 include… needle tool 3… as well as the robot arm 1” to control the moving part and the robot [0083]. In a first mode, “the robot arm 1 can be commanded to move in the direction of the target site using visual feedback” of the ‘current’ image captured by NIR camera 61 in the looped steps 143-145, as shown in Fig. 24 [0144]. Through such movement, the position of the butterfly needle 41 (object) held in needle tool 3 of insertion module 215 (end effector) is moved [0040]. Robot arm 1 supports the insertion module 215; see Fig. 1. See NIR camera 61 in Figs. 3A and 3B; see also [0048-0051], [0101], [0124], and [0143]. See also [0041], [0107], [0118], [0120], and [0130-0165].),
the end effector including a base part and a moving part configured to move the object with respect to the base part (See Figs. 1 and 2A: end effector: insertion module 215; moving part: needle tool 3; base part: rest of insertion module 215, including the main sensor assembly 2. In needle tool 3, “A stepper motor 43 moves the butterfly needle gripper body 46 along [linear] guide rails 45 to insert the butterfly needle 41” relative to the base part [0067]; see Figs. 11-12. See also [0039], [0068-0071], and Figs. 13-15C.); and
a second step in which in a case where the movement control is executed, the control unit moves the position of the object by controlling the moving part (In a second mode, “step 148 comprises… inserting, using the at least one actuator 93, the medical device tool 212 [e.g., butterfly needle 41] into the patient's vessel at the target insertion site” [0162] via the stepper motor 43 in needle tool 3 (moving part) [0067]. The robot arm 1 of system 8 is controlled by master computer 90 and robot controller 1a: “Based on this sensor data 92 and the mode of operation, the master computer 90 sends commands to several primary actuators 93. In an embodiment, the primary actuators 93 include… needle tool 3… as well as the robot arm 1” to control the moving part and the robot [0083]. See also [0041], [0120], and [0130-0165].).
However, Harris does not explicitly teach that the first step and the second step are alternative modes of moving the object.
Aiiso, in the same field of endeavor (robot control for peg-in-hole type tasks), teaches switching between movement modes (See “Based on the information output from the reference work recognition unit 202, the target position calculation unit 203, and the work recognition unit 204, the control unit 205 controls the arm 11, the hand 14, etc. using different control methods depending on whether the opening A2 [in the target image] and the trajectory point B1 [in the current image] can be recognized or not. Furthermore, the control unit 205 switches between using visual servoing [step S112 in Fig. 5; see translated figure below] and using impedance control [step S114 in Fig. 5] based on the result of the determination. Then, the control unit 205 moves the workpiece B [object] held by the hand 14” [0078]. See “the control unit 205 controls the arm 11 by visual servoing” [0095] and “The control unit 205 controls the movable part [arm 11; see [0046]] by impedance control” [0107]. The arm 11 comprises the end effector and the moving part. See also Fig. 3.).
Harris discloses a vision‑guided robotic intravenous insertion method in which an imaging unit (NIR camera 61) captures an input image and a target image, a control unit commands the robot arm to move the end effector toward the target site using visual feedback in a visual‑servo loop, and subsequently actuates a moving part of the end effector (e.g., stepper motor 43 in needle tool 3) to insert the butterfly needle 41 into the patient’s vessel. Aiiso teaches a control unit that determines, based on image recognition results, whether specified positions are recognized and switches control modes accordingly, using visual‑servo control when positions are recognized and impedance (force‑based) control when recognition fails (i.e., “the control unit 205 switches between using visual servoing and using impedance control based on the determination.” [0078]). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Aiiso’s known mode‑selection technique to Harris’s known vision‑guided robotic insertion method to improve robustness in cases where visual recognition is degraded, yielding the predictable result that, when images reliably identify the target site, the robot arm is moved under visual‑servo control and, when recognition fails, the moving part of the end effector is actuated under impedance control to complete or adjust the procedure safely.
Regarding claim 18, Harris/Aiiso discloses the limitations of claim 1 as addressed above, and Aiiso further discloses
manufacturing the article using the robot device according to claim 1 (Aiiso teaches inserting a workpiece B (for example, a pin) into a reference workpiece A (workpiece with a hole); see Figs. 3A and 3B. The method to do this is disclosed in [0055] and [0090-0110]. See also the translated Fig. 5 below.).
[Image: media_image1.png, 767 × 628 pixels, greyscale]
Figure A: translated Fig. 5 of Aiiso
Regarding claim 19, Harris/Aiiso discloses the limitations of claim 17 as addressed above, and Harris additionally discloses
A non-transitory computer-readable recording medium storing a program for causing a computer to execute the robot device controlling method according to claim 17 (See “CPU 226 first loads computer-executable process steps from storage, e.g., memory 222, storage medium/media 223, removable media drive, and/or other storage device. CPU 226 can then execute the stored process steps in order to execute the loaded computer-executable process steps” [0172]. See “Persistent storage medium/media 223 can further include program modules and data files used to implement one or more embodiments of the present disclosure” [0173]. See also [0095], [0115], [0133], [0171-0175], and Fig. 28.).
In Aiiso, see also [0086-0087].
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Harris in view of Aiiso, and further in view of Nixon et al. (US 20090171371 A1; hereafter “Nixon”).
Regarding claim 6, Harris/Aiiso discloses the limitations of claim 2 as addressed above, and Harris additionally discloses
wherein the control unit is configured to move the position of the object in the first mode in a case where it is determined that the second mode is not executable… (See “the motion control decision engine can decide to abort the insertion if, for example, the patient moves excessively” [0109]. If the patient moves between steps 147 and 148, the master computer 190 generates an updated insertion path (corresponding to the second movement amount) and commands the robot arm to reposition the butterfly needle 41 (first mode) [0159-0160]. See also [0107-0108], [0111], and [0115-0116].).
However, Harris/Aiiso does not explicitly teach “wherein the control unit is configured to move the position of the object in the first mode in a case where it is determined that the second mode is not executable based on the first movement amount.”
Nixon, in the same field of endeavor (robot control with visual feedback), teaches
wherein the control unit is configured to move the position of the object in [a] first mode in a case where it is determined that [a] second mode is not executable based on [a desired position] (See “A simulated slave processor 308 receives desired slave tool frame position and velocity commands… and limits the desired slave tool frame position, orientation, and velocities to assigned Cartesian Limits… [to make] sure that the generated slave joint positions and velocities do not exceed the actual slave joint's range of motion and maximum velocities (i.e., joint limits)” [0051]. In this case, a second unlimited mode is determined to be inexecutable based on a desired first movement amount exceeding a joint limit, so the control system running on processor 102 [0047] instead executes a first limited mode of controlling the slave joint. See also [0050], [0054-0055], and [0059].).
Harris discloses the first movement amount as the calculated target insertion depth [0158], which corresponds to a desired tool position of the butterfly needle 41 [0162]. In a case where the first movement amount is greater than the distance the butterfly needle 41 can be pushed by stepper motor 43 (i.e., would exceed a joint limit; see limit switches in [0069-0070] of Harris), Nixon teaches limiting that joint and using a different mode of movement [0051]. Harris’s other mode of movement is the first mode, in which “a displacement vector is calculated from the current position to the target position, and the robot arm 1 is controlled to move along that direction” [0108].
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the vision‑guided robotic insertion device of Harris/Aiiso to perform a different movement mode when another mode is not executable based on the desired movement amount as taught by Nixon. One of ordinary skill in the art would have been motivated to make this modification “to enforce correct and intuitive operation of the tool 138 by keeping it within its dexterous workspace” (Nixon, [0051]).
Claims 9-10 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Harris in view of Aiiso, and further in view of Ishihara et al. (US 20190143525 A1; hereafter “Ishihara”).
Regarding claim 9, Harris/Aiiso discloses the limitations of claim 1 as addressed above, and Harris additionally discloses
the moving part is configured to move the object in at least one coordinate direction in the coordinate system of the end effector (As stated above, in needle tool 3, “A stepper motor 43 moves the butterfly needle gripper body 46 along guide rails 45 to insert the butterfly needle 41” relative to the base part [0067]; see also Figs. 11-12. A coordinate system of the end effector may be defined or aligned such that the butterfly needle 41 is inserted in at least one coordinate direction.).
Harris also discloses tracking the target insertion site in three-dimensional coordinates [0084].
However, Harris/Aiiso does not explicitly teach “wherein the control unit is configured to calculate a coordinate system of the end effector.”
Ishihara, in the same field of endeavor (robot control using visual servo), teaches
wherein the control unit is configured to calculate a coordinate system of the end effector (See “The coordinate conversion unit 312c1 performs processing of converting a relative attitude and a relative position in a first coordinate system (x_m, y_m, z_m) of the first camera 221 into a relative attitude and a relative position in a second coordinate system (x_h, y_h, z_h) of the second camera 222” [0044]. The second coordinate system corresponds to the second camera 222 on the end effector of the actuation system 1 (see Fig. 1), and the transformation between the first coordinate system and the second coordinate system may change [0046]. Therefore, the coordinate conversion unit 312c1, which is in the first actuation control unit 312, calculates a coordinate system of the end effector. Details of the conversion are explained in [0040-0056]. See also [0061] and Figs. 5 and 6.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the vision‑guided robotic insertion device of Harris/Aiiso to perform coordinate conversion as taught by Ishihara. One of ordinary skill in the art would have been motivated to make this modification because “control according to visual servo can be executed in the second coordinate system based on a virtual value in the second coordinate system obtained by coordinate conversion from the taken image of the target object 401 in the first coordinate system. Therefore… a situation such as loss of control due to absence of a taken image is likely to be avoided” (Ishihara, [0061]).
Regarding claim 10, Harris/Aiiso/Ishihara discloses the limitations of claim 9 as addressed above, and Harris additionally discloses
wherein the moving part is configured to linearly move the object with respect to the base part (As stated above, in needle tool 3, “A stepper motor 43 moves the butterfly needle gripper body 46 along [linear] guide rails 45 to insert the butterfly needle 41” relative to the base part [0067]; see also Figs. 11-15C.).
Regarding claim 12, Harris/Aiiso/Ishihara discloses the limitations of claim 10 as addressed above, and Harris additionally discloses
wherein the end effector includes a gripping part configured to grip and support a workpiece, the object is the workpiece, and the moving part is configured to move the gripping part (Object/workpiece: butterfly needle 41; gripping part: needle gripper assembly 200, including butterfly needle gripper body 46 and gripper fingers 42, which grip and support butterfly needle 41. In needle tool 3, “A stepper motor 43 moves the butterfly needle gripper body 46 along guide rails 45 to insert the butterfly needle 41” relative to the base part [0067]; see also Figs. 11-15C.).
Regarding claim 13, Harris/Aiiso/Ishihara discloses the limitations of claim 12 as addressed above, and Harris additionally discloses
wherein the gripping part includes a first finger configured to abut on one side of the workpiece and a second finger configured to abut on another side of the workpiece, with the workpiece being gripped by the first finger and the second finger in a sandwiched manner therebetween (See Fig. 11 where butterfly needle 41 is gripped between two gripper fingers 42 of the needle gripper assembly 200. See also Figs. 15A-15C.).
Claims 11 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Harris in view of Aiiso and Ishihara, and further in view of Miike et al. (JP 2016120545 A; hereafter “Miike”).
Regarding claim 11, Harris/Aiiso/Ishihara discloses the limitations of claim 9 as addressed above. However, Harris/Aiiso/Ishihara does not explicitly teach “wherein the moving part is configured to move the object on a plane with respect to the base part.”
Miike, in the same field of endeavor (robotic gripping devices), teaches
wherein the moving part is configured to move the object on a plane with respect to the base part (See “the first to fourth fingers 11 to 14 are movable in the X and Y directions [i.e., on a plane] from the inside to the outside of the imaging area A1” [0023]. These fingers 11-14 support scissors H (object), as in Fig. 4. In Fig. 3(b), the holding unit 6 is part of the base part; the fingers 11-14 move relative to the holding unit 6. See also [0022] and Figs. 2(a) and 2(b).).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the vision‑guided robotic insertion device of Harris/Aiiso/Ishihara with the multi-directional finger movement of Miike. One of ordinary skill in the art would have been motivated to make this modification for the benefit of “eliminat[ing] unnecessary movement of the handling device and allow[ing] for faster operation” (Miike, [0010]).
Regarding claim 14, Harris/Aiiso/Ishihara discloses the limitations of claim 13 as addressed above, and Harris additionally discloses
wherein the moving part includes a sliding part configured to slidably supports the first finger and the second finger (See Fig. 11 where the two gripper fingers 42 are slidably supported by butterfly needle gripper body 46 and linear guide rails 45 of needle tool 3. See also [0067], [0071], and Figs. 12 and 15B.),
a first driving part configured to drive the first finger to slide on the sliding part (See “A stepper motor 43 moves the butterfly needle gripper body 46 along guide rails 45 to insert the butterfly needle 41” [0067]. The gripper fingers 42 connect the butterfly needle 41 to the butterfly needle gripper body 46, so the stepper motor 43 drives the first finger to slide. See also [0067], [0071], and Figs. 11, 12, and 15B.).
However, Harris/Aiiso/Ishihara does not explicitly teach “a second driving part configured to drive the second finger to slide on the sliding part.”
Miike, in the same field of endeavor (robotic gripping devices), teaches
wherein the moving part includes a sliding part configured to slidably supports the first finger and the second finger, a first driving part configured to drive the first finger to slide on the sliding part, and a second driving part configured to drive the second finger to slide on the sliding part (See “The drive unit 9 moves the first finger 11 to the fourth finger 14 along the X and Y directions shown in FIG. 1, and is composed of an X-direction first drive unit that moves both the first finger 11 and the second finger 12 in the X direction, an X-direction second drive unit that moves both the third finger 13 and the fourth finger 14 in the X direction, Y-direction first and second drive units that move the first finger 11 and the second finger 12 in the Y direction, respectively” [0022]. Each of the X- and Y-direction drive units “are configured by combining a plurality of linear motors, each of which has a stator extending linearly and a mover provided slidably relative to the stator” [0023]. In Fig. 3(b), “The movable element 17 is provided with one stator 19 extending along the Y direction and two movable elements 20 and 21 that are slidably arranged relative to the stator 19, with the stator 19 and movable element 20 constituting a first Y direction drive unit and the stator 19 and movable element 21 constituting a second Y direction drive unit” [0023]. “The first to fourth fingers 11 to 14 are held by the movable elements 20, 21, 23, and 24, respectively, and the first to fourth fingers 11 to 14 are movable in the X and Y directions” by sliding the movable elements along the corresponding stators [0023].).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the vision‑guided robotic insertion device of Harris/Aiiso/Ishihara with another finger drive unit as taught by Miike. One of ordinary skill in the art would have been motivated to make this modification for the benefit of “eliminat[ing] unnecessary movement of the handling device and allow[ing] for faster operation” by allowing multi-directional finger movement (Miike, [0010]).
Allowable Subject Matter
Claim 15 would be allowable if rewritten to overcome the rejections under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), 1st paragraph, and 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the combination of Harris/Aiiso/Ishihara/Miike discloses the limitations of claim 13 as addressed above, including sliding the first and second fingers on the sliding part. However, Harris/Aiiso/Ishihara/Miike does not explicitly teach “wherein if the first finger and the second finger are slid on the sliding part, the control unit is configured to drive the first finger earlier than the second finger in a case where the workpiece is moved from the one side to the other side, and drive the second finger earlier than the first finger in a case where the workpiece is moved from the other side to the one side.”
Wang et al. (“Design and grip force control of dual-motor drive electric gripper with parallel fingers”, 2016; hereafter “Wang”) teaches a two-fingered dual-motor drive gripper (see Fig. 1), where an FPGA controller uses position control for a master finger (either finger can be the master finger) and force control for a slave finger to grip (pg. 698, section V). Therefore, when sliding the fingers, the master finger is driven earlier than the slave finger.
However, Wang does not teach changing which finger is the master finger based on the direction of movement. Thus, Wang does not teach “the control unit is configured to drive the first finger earlier than the second finger in a case where the workpiece is moved from the one side to the other side, and drive the second finger earlier than the first finger in a case where the workpiece is moved from the other side to the one side.”
Examiner has not found prior art, either alone or in a reasonable combination, that teaches all of the limitations of claim 15. Accordingly, claim 15 contains allowable subject matter. The closest prior art found in Examiner’s search is Harris/Aiiso/Ishihara/Miike and Wang as detailed above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Moya Ly whose telephone number is (571)272-5832. The examiner can normally be reached Monday-Friday 10:00 am-6:00 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOYA LY/Examiner, Art Unit 3658
/Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658