DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/03/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings are objected to because of the following:
Page 8, Para [0028]: the W-axis around the X-axis, the P-axis around the Y-axis, and the R-axis around the Z-axis are not shown in the drawings;
Page 15, Para [0055]: face 99 is not shown in the drawings;
Page 24, Para [0088]: the W-axis, P-axis, and R-axis are not shown in the drawings; and
Page 26, Paras [0093] and [0096]: the W-axis direction, P-axis direction, and R-axis direction are not shown in the drawings.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
Page 1, Para [0004], Line 2: “movement” should read “moving”.
Page 10, Para [0036], Line 7: “63” should read “65”.
Page 30, Para [0113], Line 5: “based a flight time” should read “based on a flight time”.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “position information generating unit” in claims 1 and 5, “face estimating unit” in claims 1 and 5, “correction amount setting unit” in claims 1, 5, and 6, and “synthesis unit” in claim 6.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 3, 4, 5, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Shinozaki (JP-H09178418-A) in view of Takizawa (JP-H08132373-A), and further in view of Nilsson et al. (A. Nilsson and P. Holmberg, "Combining a stable 2-D vision camera and an ultrasonic range detector for 3-D position estimation," in IEEE Transactions on Instrumentation and Measurement, vol. 43, no. 2, pp. 272-276, April 1994) (Hereinafter Nilsson).
Regarding Claim 1, Shinozaki teaches a robot device comprising:
a three-dimensional sensor configured to detect a position of a surface of a workpiece (See at least Para [0004] “… there is provided a position detection sensor 3 for detecting a three-dimensional position of a reference plane relative to the reference point …”);
a robot configured to change a relative position between the workpiece and the three-dimensional sensor (See at least Para [0007] “… and even if the three-dimensional position detection device 1 moves together with the arm robot, the recognition is performed sequentially…”, para [0006] “… As a result, since the reference plane used as a reference for grasping the workpiece is accurately and sequentially recognized, the calibration can be performed accurately and quickly, and the workpiece can be moved while the three-dimensional position detection device 1 moves together with the arm robot. Can be recognized. As a result, the arm robot accurately performs the work on the workpiece…”);
a position information generating unit configured to generate three-dimensional position information regarding the surface of the workpiece based on an output of the three-dimensional sensor (See at least Para [0004] “… there is provided a position detection sensor 3 for detecting a three-dimensional position of a reference plane relative to the reference point, and a position detection sensor…”);
a face estimating unit configured to estimate face information related to a face including the surface of the workpiece based on the three-dimensional position information (See at least Para [0004] “… there is provided a position detection sensor 3 for detecting a three-dimensional position of a reference plane relative to the reference point, and a position detection sensor…”); and
a correction amount setting unit configured to set a correction amount for driving the robot (See at least Para [0004] “… A three-dimensional position detection device is configured by including a control device that corrects and detects the position of the reference plane based on the correction distance and information detected by the position detection sensor 3…”), wherein
the robot is configured to change the relative position between the workpiece and the three-dimensional sensor from a first relative position to a second relative position different from the first relative position (See at least para [0006] “… As a result, since the reference plane used as a reference for grasping the workpiece is accurately and sequentially recognized, the calibration can be performed accurately and quickly, and the workpiece can be moved while the three-dimensional position detection device 1 moves together with the arm robot. Can be recognized. As a result, the arm robot accurately performs the work on the workpiece…”), and …
However, Shinozaki does not explicitly disclose …
the correction amount setting unit is configured to set the correction amount for driving the robot at the second relative position based on the face information so that a first face and a second face match each other, the first face being detected at the first relative position and including the surface of the workpiece, the second face being detected at the second relative position and including the surface of the workpiece.
Takizawa teaches …
the correction amount setting unit is configured to set the correction amount for driving the robot at the second relative position based on the face information … , the first face being detected at the first relative position and including the surface of the workpiece, the second face being detected at the second relative position and including the surface of the workpiece (See at least Page 4 Para 7 “Therefore, in FIG. 2, in order to correct the displacement and perform the assembling operation of the component 6, the three-dimensional measurement performed under the projection of the slit light and the analysis of the image by the normal photographing are combined. The position and posture of the hole 54 of the mechanism section 5 are obtained, and the operation of the robot is corrected based on the position and posture.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the device of Shinozaki with the teachings of Takizawa and include the feature of the correction amount setting unit being configured to set the correction amount for driving the robot at the second relative position based on the face information … , the first face being detected at the first relative position and including the surface of the workpiece, the second face being detected at the second relative position and including the surface of the workpiece, thereby providing a precise calculation for accurate robot arm movement for conveying a workpiece (See at least Page 1 Para 6 “The method (1) is a method of mounting a sensor so that a sensor mounting position (more precisely, an origin position of a sensor coordinate system) with respect to a robot hand is known, and a relation between the sensor coordinate system and the robot coordinate system is known.”).
Nilsson teaches … so that a first face and a second face match each other (See at least Page 273 Col 1 “Equations (1) and (2) represent a homogeneous linear system in variables c_ij, each object point [x_i, y_i, z_i] and matching image point [u_i, v_i] providing two linear equations of the system. To solve for the values of c_ij the system must be made nonhomogeneous. This is achieved by setting a nonzero variable within the matrix C equal to one. Since the term c_23 has a scaling effect for 3-D homogeneous coordinate points and the calibration is done in a 2-D plane, it is a suitable choice.”) …
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the device of Shinozaki with the teachings of Nilsson and include the feature of the correction amount setting unit being configured to set the correction amount for driving the robot at the second relative position based on the face information so that a first face and a second face match each other, the first face being detected at the first relative position and including the surface of the workpiece, the second face being detected at the second relative position and including the surface of the workpiece, thereby providing a precise calculation for accurate robot arm movement for conveying a workpiece (See at least Page 276 Col 1 “V. CONCLUSIONS - … The calibration accuracy of the proposed system configuration is better than ±1.4 mm or ±1 image pixel in the work space 700 x 700 x 650 mm… Accuracy of the position and the orientation estimates of identified objects in world coordinates is within ±1.4 mm and ±1.0° respectively…”).
Regarding Claim 3, modified Shinozaki teaches all the elements of claim 1. Shinozaki further teaches the robot device of claim 1, wherein the three-dimensional sensor is attached to the robot (See at least Fig. 4, item 3, which is a three-dimensional position sensor attached to the robot, Para [0005] “… In FIG. 1, reference numeral 1 denotes a three-dimensional position detection device fixed to an arm robot (not shown) that holds and transfers a workpiece (not shown) on the reference plane S. The three-dimensional position detection device 1 is supported by a frame (not shown) so as to face the reference surface S above the reference surface S. A rectangular plate-like bracket 2 extending in the horizontal direction is provided at the end of the three-dimensional position detection device 1. A three-dimensional position sensor 3 is fixed to the front end side of the lower surface of the bracket 2. …”), the workpiece is arranged such that a position and orientation of the workpiece are fixed (See at least Fig. 5, which shows that the position and orientation of the workpiece are fixed), and the robot is configured to change the relative position between the workpiece and the three-dimensional sensor from the first relative position to the second relative position by moving the three-dimensional sensor from a first position to a second position (See at least Para [0006] “… As a result, since the reference plane used as a reference for grasping the workpiece is accurately and sequentially recognized, the calibration can be performed accurately and quickly, and the workpiece can be moved while the three-dimensional position detection device 1 moves together with the arm robot. Can be recognized. As a result, the arm robot accurately performs the work on the workpiece.”).
Regarding Claim 4, modified Shinozaki teaches all the elements of claim 1. Shinozaki further teaches the robot device of claim 1, comprising a work tool attached to the robot and configured to grasp the workpiece (See at least Para [0003] “… Accordingly, it is an object of the present invention to provide a three-dimensional position detection device that detects the position of a robot or a detection object with high accuracy, and a transfer robot that accurately reaches the position of the workpiece and grips the workpiece reliably…”, Para [0004] “…the position detection sensor 11 is provided in the vicinity of the gripping part that grips the workpiece on the reference plane S, and the workpiece detected by the position detection sensor 11 for the gripping part to grip the workpiece is appropriately selected…”), wherein
a position and orientation of the three-dimensional sensor are fixed by a fixing member (See at least Para [0005] “… In FIG. 1, reference numeral 1 denotes a three-dimensional position detection device fixed to an arm robot (not shown) that holds and transfers a workpiece (not shown) on the reference plane S. The three-dimensional position detection device 1 is supported by a frame (not shown) so as to face the reference surface S above the reference surface S. A rectangular plate-like bracket 2 extending in the horizontal direction is provided at the end of the three-dimensional position detection device 1. A three-dimensional position sensor 3 is fixed to the front end side of the lower surface of the bracket 2. …”), and the robot is configured to change the relative position between the workpiece and the three-dimensional sensor from the first relative position to the second relative position by moving the workpiece from a first position to a second position (See at least Para [0009] “…Therefore, when the arm robot 12 cannot be gripped by the hand due to the posture of the workpiece or the like, the workpiece can be displaced to a posture capable of being gripped by tilting or turning as necessary…”).
Regarding Claim 5, modified Shinozaki teaches all the elements of claim 1. Shinozaki further teaches the robot device of claim 1, wherein the robot is configured to change the relative position between the workpiece and the three-dimensional sensor to three or more mutually different relative positions, the position information generating unit is configured to generate the three-dimensional position information regarding the surface of the workpiece at each of the relative positions (See at least Para [0004] “… In the third aspect of the invention, the position detection sensor 3 detects the three-dimensional position of the reference surface S at a plurality of different detection positions, and takes in the position information to the control device…”), the face estimating unit is configured to estimate the face information at each of the relative positions (See at least Para [0004] “… there is provided a position detection sensor 3 for detecting a three-dimensional position of a reference plane relative to the reference point, and a position detection sensor…”), and …
However, Shinozaki does not explicitly disclose …
the correction amount setting unit is configured to set the correction amount for driving the robot at at least one of the relative positions so that faces that are detected at a plurality of relative positions and include the surface of the workpiece match each other within a predetermined determination range.
Nilsson teaches …
the correction amount setting unit is configured to set the correction amount for driving the robot at at least one of the relative positions so that faces that are detected at a plurality of relative positions and include the surface of the workpiece match each other within a predetermined determination range (See at least Page 273 Col 1 “Equations (1) and (2) represent a homogeneous linear system in variables c_ij, each object point [x_i, y_i, z_i] and matching image point [u_i, v_i] providing two linear equations of the system. To solve for the values of c_ij the system must be made nonhomogeneous. This is achieved by setting a nonzero variable within the matrix C equal to one. Since the term c_23 has a scaling effect for 3-D homogeneous coordinate points and the calibration is done in a 2-D plane, it is a suitable choice.”, Page 274 Col 2 “Fig. 4. Projection correction due to depth information from an ultrasonic range sensor. r_u is the uncorrected image point of the object A, and r_c is the corrected image point of the object due to depth information from the range sensor. z_u is the object's height above the work table measured with the range sensor, and H is the height of the camera above the work table.”, Page 274 Col 2 “in the image plane is given, using projection corrections due to depth information from the ultrasonic sensor, see Fig. 4.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the device of Shinozaki with the teachings of Nilsson and include the feature of the correction amount setting unit being configured to set the correction amount for driving the robot at at least one of the relative positions so that faces that are detected at a plurality of relative positions and include the surface of the workpiece match each other within a predetermined determination range, thereby providing a precise calculation for accurate robot arm movement for conveying a workpiece (See at least Page 276 Col 1 “V. CONCLUSIONS - … The calibration accuracy of the proposed system configuration is better than ±1.4 mm or ±1 image pixel in the work space 700 x 700 x 650 mm… Accuracy of the position and the orientation estimates of identified objects in world coordinates is within ±1.4 mm and ±1.0° respectively…”).
Regarding Claim 7, Shinozaki teaches a control method of a robot device, the control method comprising:
arranging, by a robot, a relative position between a workpiece and a three-dimensional sensor at a first relative position (See at least, Para [0007] “… and even if the three-dimensional position detection device 1 moves together with the arm robot, the recognition is performed sequentially…”, para [0006] “… As a result, since the reference plane used as a reference for grasping the workpiece is accurately and sequentially recognized, the calibration can be performed accurately and quickly, and the workpiece can be moved while the three-dimensional position detection device 1 moves together with the arm robot. Can be recognized. As a result, the arm robot accurately performs the work on the workpiece…”);
generating, by a position information generating unit, three-dimensional position information regarding a surface of the workpiece at the first relative position based on an output of the three-dimensional sensor (See at least Para [0004] “… there is provided a position detection sensor 3 for detecting a three-dimensional position of a reference plane relative to the reference point …”);
arranging, by the robot, the relative position between the workpiece and the three-dimensional sensor at a second relative position different from the first relative position (See at least para [0006] “… As a result, since the reference plane used as a reference for grasping the workpiece is accurately and sequentially recognized, the calibration can be performed accurately and quickly, and the workpiece can be moved while the three-dimensional position detection device 1 moves together with the arm robot. Can be recognized. As a result, the arm robot accurately performs the work on the workpiece…”);
…
estimating, by a face estimating unit, face information related to a face including the surface of the workpiece based on the three-dimensional position information at each of the relative positions (See at least Para [0004] “… there is provided a position detection sensor 3 for detecting a three-dimensional position of a reference plane relative to the reference point, and a position detection sensor…”); and
However, Shinozaki does not explicitly disclose …
creating, by the position information generating unit, three-dimensional position information regarding the surface of the workpiece at the second relative position based on an output of the three-dimensional sensor; …
setting, by a correction amount setting unit, a correction amount for driving the robot at the second relative position based on the face information so that a first face and a second face match each other, the first face being detected at the first relative position and including the surface of the workpiece, the second face being detected at the second relative position and including the surface of the workpiece.
Takizawa teaches …
creating, by the position information generating unit, three-dimensional position information regarding the surface of the workpiece at the second relative position based on an output of the three-dimensional sensor (See at least Page 2 Para 8 “Preferably, at least three measurement positions are such that movement between the positions can be executed by movement along the positive or negative direction of the coordinate axes of the robot coordinate system. For example, a vector from the first measurement position to the second measurement position is selected in the + direction of the X axis of the robot coordinate system, and a vector from the second measurement position to the third measurement position is selected in the + direction of the Y axis of the robot coordinate system. The movement between these measurement positions may be executed by moving between the positions taught to the robot in advance by the regeneration operation.”, Page 2 Para 9 “At each measurement position, three-dimensional position measurement is performed on the same object, and sensor output data expressing the result on a sensor coordinate system is obtained. The same linear conversion relationship exists between the sensor output data and the data expressing each measurement position on the robot coordinate system (the robot position data at each measurement position can be used). A matrix representing this linear transformation is a coordinate transformation matrix.”); …
setting, by a correction amount setting unit, a correction amount for driving the robot at the second relative position based on the face information … , the first face being detected at the first relative position and including the surface of the workpiece, the second face being detected at the second relative position and including the surface of the workpiece (See at least Page 4 Para 7 “Therefore, in FIG. 2, in order to correct the displacement and perform the assembling operation of the component 6, the three-dimensional measurement performed under the projection of the slit light and the analysis of the image by the normal photographing are combined. The position and posture of the hole 54 of the mechanism section 5 are obtained, and the operation of the robot is corrected based on the position and posture.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the device of Shinozaki with the teachings of Takizawa and include the feature of the correction amount setting unit being configured to set the correction amount for driving the robot at the second relative position based on the face information so that a first face and a second face match each other, the first face being detected at the first relative position and including the surface of the workpiece, the second face being detected at the second relative position and including the surface of the workpiece, thereby providing a precise calculation for accurate robot arm movement for conveying a workpiece (See at least Page 1 Para 6 “The method (1) is a method of mounting a sensor so that a sensor mounting position (more precisely, an origin position of a sensor coordinate system) with respect to a robot hand is known, and a relation between the sensor coordinate system and the robot coordinate system is known.”).
Nilsson teaches … so that a first face and a second face match each other (See at least Page 273 Col 1 “Equations (1) and (2) represent a homogeneous linear system in variables c_ij, each object point [x_i, y_i, z_i] and matching image point [u_i, v_i] providing two linear equations of the system. To solve for the values of c_ij the system must be made nonhomogeneous. This is achieved by setting a nonzero variable within the matrix C equal to one. Since the term c_23 has a scaling effect for 3-D homogeneous coordinate points and the calibration is done in a 2-D plane, it is a suitable choice.”) …
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the device of Shinozaki with the teachings of Nilsson and include the feature of the correction amount setting unit being configured to set the correction amount for driving the robot at the second relative position based on the face information so that a first face and a second face match each other, the first face being detected at the first relative position and including the surface of the workpiece, the second face being detected at the second relative position and including the surface of the workpiece, thereby providing a precise calculation for accurate robot arm movement for conveying a workpiece (See at least Page 276 Col 1 “V. CONCLUSIONS - … The calibration accuracy of the proposed system configuration is better than ±1.4 mm or ±1 image pixel in the work space 700 x 700 x 650 mm… Accuracy of the position and the orientation estimates of identified objects in world coordinates is within ±1.4 mm and ±1.0° respectively…”).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Shinozaki (JP-H09178418-A) in view of Takizawa (JP-H08132373-A) and Nilsson et al. (A. Nilsson and P. Holmberg, "Combining a stable 2-D vision camera and an ultrasonic range detector for 3-D position estimation," in IEEE Transactions on Instrumentation and Measurement, vol. 43, no. 2, pp. 272-276, April 1994) (Hereinafter Nilsson), and further in view of Tonogai et al. (US 20210039257 A1) (Hereinafter Tonogai).
Regarding Claim 2, modified Shinozaki teaches all the elements of claim 1.
However, Shinozaki does not explicitly disclose the robot device of claim 1, wherein the robot is configured to change a relative orientation between the workpiece and the three-dimensional sensor from a first relative orientation to a second relative orientation.
Tonogai teaches the robot device of claim 1, wherein the robot is configured to change a relative orientation between the workpiece and the three-dimensional sensor from a first relative orientation to a second relative orientation (See at least Para [0008] “[1] An example of a workpiece picking device according to the present disclosure is configured to take out stacked workpieces. The workpiece picking device includes a sensor that measures three-dimensional positions of the workpieces; a hand that grasps the workpieces; a robot that moves the hand to and from a grasping position; and a control device that controls the sensor, the hand, and the robot. Besides, the control device has: a position orientation calculation part that calculates, on the basis of measurement results of the three-dimensional positions and using a predetermined calculation parameter, positions and orientations of the workpieces, and calculates the workpiece number in which positions and orientations are detected; a grasping orientation calculation part that calculates, on the basis of the calculation results of the positions and orientations and using a predetermined calculation parameter, a grasping orientation of the hand when the hand grasps the workpieces; a route calculation part that calculates, using a predetermined calculation parameter, a route through which the hand moves to the grasping orientation; a sensor control part that controls the operation of the sensor on the basis of measurement parameters when measuring the three-dimensional positions; a hand control part that controls the operation of the hand on the basis of the grasping orientation; a robot control part that controls the operation of the robot on the basis of the route; a situation determination part that determines situations of the workpieces on the basis of the measurement results of the three-dimensional positions and the calculation results of the workpiece number; and a parameter modification part that modifies a parameter including at least one of a measurement parameter when measuring the three-dimensional positions, a calculation parameter of the positions and orientations, a calculation parameter of the grasping orientation, and a calculation parameter of the route, when the determination results of situations of the workpieces satisfy a predetermined condition.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the device of Shinozaki with the teachings of Tonogai and include the feature of the robot being configured to change a relative orientation between the workpiece and the three-dimensional sensor from a first relative orientation to a second relative orientation, thereby providing improved robot gripping accuracy (See at least Para [0068] “… Accordingly, even when the number of picking times increases, it is possible to reliably detect the workpiece and realize an accurate picking operation.”).
Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Shinozaki (JP-H09178418-A) in view of Takizawa (JPH08132373A), Nilsson et al. (A. Nilsson and P. Holmberg, "Combining a stable 2-D vision camera and an ultrasonic range detector for 3-D position estimation," in IEEE Transactions on Instrumentation and Measurement, vol. 43, no. 2, pp. 272-276, April 1994) (Hereinafter Nilsson), and further in view of Kurahashi et al. (KR20150063921A) (Hereinafter Kurahashi).
Regarding Claim 6, modified Shinozaki teaches all the elements of claim 1.
Although Shinozaki teaches detecting the three-dimensional position of a reference surface at a plurality of different detection positions (See at least Para [0004] “… the position detection sensor 3 detects the three-dimensional position of the reference surface S at a plurality of different detection positions …”), Shinozaki does not explicitly disclose the robot device of claim 1, comprising a synthesis unit configured to synthesize a plurality of pieces of the three-dimensional position information regarding the surface of the workpiece acquired at a plurality of relative positions, wherein the synthesis unit is configured to synthesize the three-dimensional position information generated at the first relative position and the three-dimensional position information generated at the second relative position corrected based on the correction amount set by the correction amount setting unit.
Kurahashi teaches the robot device of claim 1, comprising a synthesis unit configured to synthesize a plurality of pieces of the three-dimensional position information regarding the surface of the workpiece acquired at a plurality of relative positions, wherein the synthesis unit is configured to synthesize the three-dimensional position information generated at the first relative position and the three-dimensional position information generated at the second relative position corrected based on the correction amount set by the correction amount setting unit (See at least Page 8 Para 3 “… The synthesizing means 1031 generally obtains a plurality of synthesis distance information. Synthesis here is, for example, combining a plurality of pieces of first distance information into one…”, Page 36 Para 4 “For example, since the correction information for adjusting the rotation center of the work to the center of the work can be obtained, accurate correction information can be obtained.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the device of Shinozaki with the teachings of Kurahashi and include the feature of a synthesis unit configured to synthesize a plurality of pieces of the three-dimensional position information regarding the surface of the workpiece acquired at a plurality of relative positions, wherein the synthesis unit is configured to synthesize the three-dimensional position information generated at the first relative position and the three-dimensional position information generated at the second relative position corrected based on the correction amount set by the correction amount setting unit, thereby providing calculation accuracy for precise robot gripping (See at least Page 36 Para 4 “For example, since the correction information for adjusting the rotation center of the work to the center of the work can be obtained, accurate correction information can be obtained.”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Ban et al. (US 20040080758 A1) teaches determining the three-dimensional position and posture of a surface measuring point.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHEDA HOQUE whose telephone number is (571)270-5310. The examiner can normally be reached Monday-Friday 8:00 am- 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHAHEDA HOQUE/Examiner, Art Unit 3658
/Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658