DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 11/12/2025 have been fully considered but they are not persuasive. In light of applicant’s arguments, and applicant’s amendments changing the scope of the claims, a new rejection of the independent claims is made in view of Islam, Sivaneth, and Suzuki9005 (each cited in the previous office action).
Applicant argues on page 7 that the newly amended claims overcome the previous rejection under 35 U.S.C. 101 because they reflect an improvement in the functioning of a computer. However, as will be outlined in the rejections below, the currently amended claims appear to merely apply the claimed subject matter to a computer.
Applicant argues on pages 8-10 that Islam does not describe calibration of the robot based on the measured position of the robot in the first state and second state. However, as will be shown in the rejections to follow, Islam’s calibration verification captures a reference image which is used to compare positioning errors at a later time. In other words, a pose in a first state is captured in the reference image, and a pose in a second state is later compared to the pose of the first state to determine the error. Sivaneth details the steps of calibration using error matrices (mechanism error parameters) once it is determined that calibration is required.
Applicant argues on page 10 that Islam does not describe calculating the position of the robot in the 3D reference coordinate system. However, Suzuki9005, as well as Sivaneth, shows the applicability of the above to a 3D coordinate system. The dimensions of the coordinate system are application specific, and choosing a 3D or 2D coordinate system does not change the overall functionality of the proposed combination.
Applicant argues on page 11 that Islam does not disclose the parameter includes a parameter relating to an error of a component of the robot. However, Islam alludes to calibration being required due to issues with robot components in at least [0131] (cited and reproduced below). Additionally, Sivaneth specifically states that the error matrices are determined in order to account for errors in the actuation of the joints and end effector of the robotic system ([col6, ln 15-29], cited and reproduced below). These matrices are derived by comparing actual position against the desired position of the robot.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 and 14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claim 1 is directed to a calibration apparatus. Therefore, claim 1 is within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. Claim 1 recites:
A calibration apparatus configured to calibrate a mechanism error parameter for adjusting control of a robot based on an operation program,
the calibration apparatus comprising:
a plurality of members acting as reference points and installed at a region where the robot is installed;
a three-dimensional measuring device configured to measure positions of the reference points; and a processor configured to:
set a three-dimensional reference coordinate system in a three-dimensional space, based on the positions of the reference points measured by the three-dimensional measuring device,
acquire a position of the robot in the three-dimensional reference coordinate system and calculate a mechanism error parameter based on the position of the robot in the three-dimensional reference coordinate system,
wherein a first state is a state in which the robot having a first mechanism error parameter set therein is driven according to a command value of the operation program,
a second state is a state in which the robot is driven according to the same command value of the operation program after the first state,
the mechanical error parameter includes a parameter relating to an error of a component in the robot, and
the processor is configured to calculate a second mechanism error parameter different from the first mechanism error parameter so that a position of the robot in the three-dimensional reference coordinate system in the second state matches a position of the robot in the three-dimensional reference coordinate system in the first state, when the robot is driven by a command value of an operation program.
The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, “set a three-dimensional reference coordinate system in the three-dimensional space based on the positions of the reference points …” encompasses a human arbitrarily setting a coordinate system based on measured data points. The limitation of “calculate a mechanism error parameter based on the position of the robot …” similarly encompasses one performing calculations based on measured position data. The limitations of “calculate the mechanism error parameter based on the position of the robot…” and “calculate a second mechanism error parameter… so that a position of the robot … in the second state matches a position of the robot … in the first state …” encompass a person evaluating an acquired position and calculating a deviation or correction amount between a commanded and a desired position, and calculating a new parameter based on the results of this deviation or correction amount. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
A calibration apparatus configured to calibrate a mechanism error parameter for adjusting control of a robot based on an operation program,
the calibration apparatus comprising:
a plurality of members acting as reference points and installed at a region where the robot is installed;
a three-dimensional measuring device configured to measure positions of the reference points; and a processor configured to:
set a three-dimensional reference coordinate system in a three-dimensional space, based on the positions of the reference points measured by the three-dimensional measuring device,
acquire a position of the robot in the three-dimensional reference coordinate system and calculate a mechanism error parameter based on the position of the robot in the three-dimensional reference coordinate system,
wherein a first state is a state in which the robot having a first mechanism error parameter set therein is driven according to a command value of the operation program,
a second state is a state in which the robot is driven according to the same command value of the operation program after the first state,
the mechanical error parameter includes a parameter relating to an error of a component in the robot, and
the processor is configured to calculate a second mechanism error parameter different from the first mechanism error parameter so that a position of the robot in the three-dimensional reference coordinate system in the second state matches a position of the robot in the three-dimensional reference coordinate system in the first state, when the robot is driven by a command value of an operation program.
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations of “a plurality of members acting as reference points… a three-dimensional measuring device configured to measure positions of the reference points…” and “a processor configured to… acquire a position of the robot…,” the examiner submits that these limitations are insignificant extra-solution activities that merely use a computer (processor, 3D measuring device) to perform the process. In particular, the acquiring step is recited at a high level of generality (i.e., as a general means of gathering position data for use in the setting and calculating steps) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Both the processor and 3D measuring device are recited at a high level of generality and are operating in their ordinary capacities, and do not use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. These additional limitations are no more than mere instructions to apply the exception using a computer. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually.
For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the Revised Guidance, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “processor” and “three-dimensional measuring device” amount to nothing more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Hence, the claim is not patent eligible.
Dependent claims 2-12 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-12 are not patent eligible under the same rationale as provided for in the rejection of independent claim 1.
Claim 2 further defines the reference coordinate system, and does not add any additional meaningful limitations.
Claims 3-4 include additional abstract ideas (e.g. calculate a transformation matrix, calculate a theoretical position) and additional data gathering (e.g. acquire a plurality of positions), and do not add any additional meaningful limitations.
Claim 5 includes additional abstract ideas (evaluate accuracy, determine a need for calibrating) and is therefore not patent eligible.
Claim 6 includes additional abstract ideas (determine accuracy…, determine calibration is needed) and is therefore not patent eligible.
Claim 7 includes additional abstract ideas (determine calibration is needed) and is therefore not patent eligible.
Claim 8 includes additional abstract ideas (determine calibration is needed) and is therefore not patent eligible.
Claims 9-12 further define the first and second states, and do not add any additional meaningful limitations.
Independent claim 14 recites subject matter included in claim 1 and dependent claim 5, and is therefore not patent eligible.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-12 and 14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the mechanical error parameter" in line 20. There is insufficient antecedent basis for this limitation in the claim. The examiner proceeds on the assumption that “the mechanical error parameter” corresponds to “a mechanism error parameter” introduced in line 14.
Claims 2-12 depend from claim 1 and are therefore rejected under the same rationale.
Claim 14 recites substantially similar limitations to claim 1 and is rejected for the same reason.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Islam (US-20200306977-A1) in view of Suzuki9005 (US 20140229005 A1) and Sivaneth (US 12030184 B2).
Claim 1
Islam teaches
A processor configured to …
acquire a position of the robot in the … reference coordinate system
(Islam - [0004] … The control circuit of the robot control system may be configured to perform the calibration verification by: a) performing a first calibration operation (e.g., a first camera calibration) to determine calibration information (e.g., camera calibration information), b) outputting a first movement command … to cause the robot arm to move the verification symbol, during or after the first calibration operation, to a location within the camera field of view, the location being a reference location of one or more reference locations for verification of the first calibration operation … d) determining a reference image coordinate for the verification symbol, the reference image coordinate being a coordinate at which the verification symbol appears in the reference image,)
and calculate a mechanism error parameter based on the position of the robot in the three-dimensional reference coordinate system,
(Islam - [0005] … j) determining a deviation parameter value based on an amount of deviation between the reference image coordinate and the verification image coordinate, the reference image coordinate and the verification image coordinate both associated with the reference location, wherein the deviation parameter value is indicative of a change since the first calibration …)
Islam teaches the aspect of periodically checking calibration of the robot by moving the end effector to a known position (i.e. moving the robot according to the same command value in a first state and a second state), and thus suggests the limitations of
wherein a first state is a state in which the robot having a first mechanism error parameter set therein is driven according to a command value of the operation program,
a second state is a state in which the robot is driven according to the same command value of the operation program after the first state,
(Islam - [0004] … The control circuit of the robot control system may be configured to perform the calibration verification by: a) performing a first calibration operation (e.g., a first camera calibration) to determine calibration information (e.g., camera calibration information), b) outputting a first movement command … to cause the robot arm to move the verification symbol, during or after the first calibration operation, to a location within the camera field of view, the location being a reference location of one or more reference locations for verification of the first calibration operation … d) determining a reference image coordinate for the verification symbol, the reference image coordinate being a coordinate at which the verification symbol appears in the reference image,)
EXAMINER NOTE: Robot (in a first state) is moved to the reference location (a known position) to capture a reference image to be used later for comparison in determining positioning errors.
(Islam - [0111] For example, if the reference image (e.g., 1120) is stored along with one or more parameter values of a movement command used to generate the pose associated with the reference image, or more specifically the pose of the robot arm appearing in the reference image, the one or more parameter values and the model may be used to estimate a pose for the robot arm (e.g., 1153) when the robot arm is moved according to the movement command. ... The estimated pose may be used to estimate a location of a verification symbol (e.g., 1130A) on the robot arm (e.g., 1153), and the estimated location may be used to estimate where the verification symbol is likely to appear in the reference image (e.g., 1120).)
EXAMINER NOTE: Parameter values correspond to command values. The robot moves according to the parameter values during calibration.
(Islam - [0005] In an embodiment, the control circuit is configured to perform the calibration verification further by: f… g) outputting a third movement command to the communication interface, wherein the communication interface is configured to communicate the third movement command to the robot to cause the robot arm to move the verification symbol to at least the reference location during the idle period, h) receiving via the communication interface an additional image of the verification symbol from the camera, which is configured to capture the additional image of the verification symbol at least at the reference location during the idle period, the additional image being a verification image for the verification, i) determining a verification image coordinate used for the verification, the verification image coordinate being a coordinate at which the verification symbol appears in the verification image, j) determining a deviation parameter value based on an amount of deviation between the reference image coordinate and the verification image coordinate, the reference image coordinate and the verification image coordinate both associated with the reference location, wherein the deviation parameter value is indicative of a change since the first calibration operation…)
EXAMINER NOTE: The robot (in a second state) returns to the reference location (driven according to the same command value) to determine the amount of positioning error with the current calibration information.
Islam also indicates that calibration may change due to errors of robot components, which suggests
the mechanical error parameter includes a parameter relating to an error of a component in the robot,
(Islam - [0131] … In some cases, the change in the environment of the camera or to the robot operation system may be a change in a relationship between arm portions of the robot arm (e.g., between the links 1154A-1154E of the robot arm 1153), or in the arm portions themselves. For example, one of the arm portions (e.g., link 1154D or the robot hand 1155) may become bent or otherwise deformed or damaged … The calibration verification discussed above may provide a quick and efficient technique for detecting the change to a camera (e.g., 1170) and/or a change to an environment of the camera or to the robot operation system 1100. … )
While Islam outlines the conditions for checking calibration, and the outcomes of the calibration procedure, Islam does not provide minute details as to how the calibration is performed once it is determined that calibration is required. However, Sivaneth teaches one known way to perform robot calibration, and teaches
… calculate a mechanism error parameter based on the position of the robot in the three-dimensional reference coordinate system,
the processor is configured to calculate a second mechanism error parameter different from the first mechanism error parameter so that a position of the robot in the three-dimensional reference coordinate system in the second state matches a position of the robot in the three-dimensional reference coordinate system in the first state, when the robot is driven by a command value of an operation program.
(Sivaneth - [col 5, ln 61 thru col 6, ln 3] To reduce positioning/pose errors of a working robot in its entire working space in real time, the concept of an error matrix can be introduced. The error matrix can indicate the difference between a controller-desired pose (i.e., the pose programmed by the robotic controller) of the robot end-effector and the actual pose of the end-effector (which can be captured by the cameras and converted from the camera space to the robot-base space using the transformation matrix) and can vary as the position of the end-effector changes in the working space.
[col 9, ln 13-31] … after the initial training at operation 418, the controller can generate a test desired pose (operation 420) and move the TCP of the gripper according to the test desired pose (operation 422). The 3D machine-vision module measures the pose of the gripper TCP in the camera space (operation 424). The measured pose in the camera space can be converted to the robot-base space using the transformation matrix to obtain a test instructed pose (operation 426). The neural network can predict/infer an error matrix corresponding to the test instructed pose (operation 428). In addition, the system can compute an error matrix using the test desired pose and the test instructed pose (operation 430). The predicted error matrix and the computed error matrix can be compared to determine whether the difference is smaller than a predetermined threshold (operation 432). If so, the training is completed. If not, additional training samples are to be collected by going back to operation 404. The threshold can vary depending on the positioning accuracy needed for the robotic operation.)
EXAMINER NOTE: The error matrix (mechanism error parameter) is updated so that the desired and instructed poses match. The robot is moved to a test pose (similar to Islam's system), a new error matrix is computed, and the error is compared. This process is repeated until the error is small enough (i.e., the positions in the first state and the second state match), and the new error matrix (second mechanism error parameter) is used for operations.
Sivaneth also teaches
the mechanical error parameter includes a parameter relating to an error of a component in the robot, and
(Sivaneth - [col 6, ln 15-29] In one example, the robotic controller may send a command to move the end-effector to desired TCP pose H.sub.td. However, due to errors (e.g., errors in the actuation of the joints and end-effector) in the robotic system, when the controller instructs the robotic arm to achieve this pose, the resulting pose is often different from H.sub.td. The actual pose of the end-effector measured by the 3D machine-vision module and transformed from the camera space to the robot-base space can be instructed pose H.sub.ti. Hence, given an instructed pose (i.e., a pose known to the camera), if error matrix E({right arrow over (r)}) is known, one can compute the desired pose that can be used by the controller to instruct the robotic arm to move the end-effector to the instructed pose, thus achieving the eye (camera)-to-hand (robotic controller) coordination.)
Islam's robot coordinate system is said to be arbitrarily placed relative to the robot:
(Islam - [0053] In an embodiment, the camera calibration information determined from the camera calibration describes a relationship between the camera 270 and the robot 250, or more specifically a relationship between the camera 270 and a world point 294 that is stationary relative to the base 252 of the robot 250. The world point 294 may represent a world or other environment in which the robot 250 is located, and may be any imaginary point that is stationary relative to the base 252.)
Islam may not teach a plurality of members acting as reference points, but Suzuki9005 teaches
a plurality of members acting as reference points and installed at a region where the robot is installed;
(Suzuki9005 - [0068] In step S4, as illustrated in FIG. 3, the user installs a calibration plate (reference member) 10 in a common field which is commonly included in the measurement area of the camera 3 when a plurality of positions and orientations set in each position and orientation group is taken by the robot body 2. The calibration plate 10 is, for example, a planar plate having a plurality of (at least three) circular patterns 10a and 10b of a predetermined size displayed thereon. The camera 3 can measure positions and orientations of the calibration plate 10 having six degrees of freedom.
[0069] Only one of the plurality of circular patterns of the calibration plate 10 is a hollow circular pattern 10a, which is placed at a corner, and others are solid circular patterns 10b. The existence of the hollow circular pattern 10a provides a totally asymmetrical arrangement about a point, and enables unique identification of the position and orientation of the calibration plate 10 having six degrees of freedom. A calibration plate coordinate system P is set to the calibration plate 10 with reference to the calibration plate 10.)
EXAMINER NOTE: The camera detects the features (members) of the calibration plate and sets the coordinate system P accordingly.
a three-dimensional measuring device configured to measure positions of the reference points; and
(Suzuki9005 - [0114] The robot systems 1 according to the first exemplary embodiment and the second exemplary embodiment is described to use the monocular camera 3 as a visual sensor. However, the visual sensor is not limited thereto and may be, for example, a stereo camera or a laser scanner)
a processor configured to:
set a three-dimensional reference coordinate system in a three-dimensional space, based on the positions of the reference points measured by the three-dimensional measuring device,
(Suzuki9005 - [0069] Only one of the plurality of circular patterns of the calibration plate 10 is a hollow circular pattern 10a, which is placed at a corner, and others are solid circular patterns 10b. The existence of the hollow circular pattern 10a provides a totally asymmetrical arrangement about a point, and enables unique identification of the position and orientation of the calibration plate 10 having six degrees of freedom. A calibration plate coordinate system P is set to the calibration plate 10 with reference to the calibration plate 10.)
EXAMINER NOTE: The camera detects the features (members) of the calibration plate and sets the coordinate system P accordingly.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Islam’s calibration with Suzuki9005’s suggestion to utilize reference members in order to provide a well-known means for specifying the arbitrary world point described by Islam, and to incorporate Sivaneth's error matrix calculations in order to provide a known way of performing calibrations shown to reduce errors (Sivaneth - [col 5, ln 61 thru col 6, ln 3], cited above).
Claim 2
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 1 as outlined above. Islam further teaches
wherein the three-dimensional reference coordinate system is an immovable coordinate system defined so as not to depend on an operation of the robot and an installation state of the robot.
(Islam - [0053] In an embodiment, the camera calibration information determined from the camera calibration describes a relationship between the camera 270 and the robot 250, or more specifically a relationship between the camera 270 and a world point 294 that is stationary relative to the base 252 of the robot 250. The world point 294 may represent a world or other environment in which the robot 250 is located, and may be any imaginary point that is stationary relative to the base 252.)
EXAMINER NOTE: The calibration is carried out relative to a world point stationary relative to the robot. The world point acts as an origin of a reference coordinate system.
Claim 3
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 1 as outlined above. Islam alone may not explicitly teach the following limitations in combination. However, Sivaneth teaches
wherein the processor is configured to calculate a transformation matrix by which one coordinate value among a coordinate value of a base coordinate system set for the robot and a coordinate value of the three-dimensional reference coordinate system is converted into the other coordinate value,
(Sivaneth - [col 2, ln 28-32] In a variation on this embodiment, the system can further include a coordinate-transformation module configured to transform a pose determined by the machine-vision module from a camera-centered coordinate system to a robot-centered coordinate system.)
the command value being specified by the coordinate value of the base coordinate system,
(Sivaneth - [col 7, ln 64-67] (28) The controller of the robotic arm can generate a number of predetermined poses in the robot-base space (operation 308) and sequentially move the end-effector to those poses (operation 310).)
the processor is configured to acquire, in the first state, a plurality of positions of the robot in the three-dimensional reference coordinate system when the robot is driven based on a plurality of command values including the command value,
(Sivaneth - [col 8, ln 1-3] At each pose, the 3D machine-vision system can capture images of the calibration target and determine the pose of the calibration target in the camera space (operation 312).)
EXAMINER NOTE: The robot pose in the camera space (reference coordinates) is captured for each predetermined pose. See also passages cited with reference to claim 1.
calculate the transformation matrix, in the first state, based on the plurality of command values and the plurality of positions of the robot in the three-dimensional reference coordinate system,
(Sivaneth - [col 8, ln 3-15] The transformation matrix can then be derived based on poses generated in the robot-base space and the machine-vision-determined poses in the camera space (operation 314). Various techniques can be used to determine the transformation matrix. For example, equation (4) can be solved based on the predetermined poses in the robot-base space and the camera space using various techniques, including but not limited to: linear least square or SVD techniques, Lie-theory-based techniques, techniques based on quaternion and non-linear minimization or dual quaternion, techniques based on Kronecker product and vectorization, etc.)
acquire, in the second state, the plurality of positions of the robot in the three-dimensional reference coordinate system when the robot is driven based on the plurality of command values,
calculate a theoretical position of the robot in the base coordinate system based on the position of the robot in the three-dimensional reference coordinate system and the transformation matrix when the robot is driven by each of the plurality of command values in the second state, and
calculate the mechanism error parameter so that the command value matches the theoretical position of the robot in the base coordinate system.
(Sivaneth - [col 5, ln 61 thru col 6, ln 3] (21) To reduce positioning/pose errors of a working robot in its entire working space in real time, the concept of an error matrix can be introduced. The error matrix can indicate the difference between a controller-desired pose (i.e., the pose programmed by the robotic controller) of the robot end-effector and the actual pose of the end-effector (which can be captured by the cameras and converted from the camera space to the robot-base space using the transformation matrix) and can vary as the position of the end-effector changes in the working space.
…
[col 6, ln 15-29] In one example, the robotic controller may send a command to move the end-effector to desired TCP pose H.sub.td. However, due to errors (e.g., errors in the actuation of the joints and end-effector) in the robotic system, when the controller instructs the robotic arm to achieve this pose, the resulting pose is often different from H.sub.td. The actual pose of the end-effector measured by the 3D machine-vision module and transformed from the camera space to the robot-base space can be instructed pose H.sub.ti. Hence, given an instructed pose (i.e., a pose known to the camera), if error matrix E({right arrow over (r)}) is known, one can compute the desired pose that can be used by the controller to instruct the robotic arm to move the end-effector to the instructed pose, thus achieving the eye (camera)-to-hand (robotic controller) coordination.)
EXAMINER NOTE: The robot is commanded to move to a desired pose by the controller (command value in the base coordinate system) and the camera records the pose in the camera (reference) coordinate system. Using the transformation matrix, these poses are compared in a common coordinate system and the error matrix (mechanism error parameter) is derived and used such that the desired pose matches the transformed camera pose (theoretical pose). Note that according to Figure 4 the error matrix is constructed by carrying out this process multiple times, indicating a plurality of poses.
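EXAMINER NOTE (illustrative only): the error-matrix relationship described above may be sketched as follows. The numeric poses, variable names, and composition order are hypothetical assumptions for illustration, not taken from Sivaneth; Sivaneth's exact convention is given by its equation (4).

```python
import numpy as np

def pose(R, t):
    """Assemble a 4x4 homogeneous pose from rotation R and translation t."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

# Hypothetical poses in the robot-base space: the controller-desired pose
# and the instructed pose (the camera measurement after transformation to
# the robot-base space).
H_desired = pose(np.eye(3), np.array([0.40, 0.10, 0.25]))
H_instructed = pose(np.eye(3), np.array([0.41, 0.09, 0.25]))  # small positioning error

# Error matrix taken as the transform from the instructed pose to the
# desired pose (one conventional composition order, assumed here).
E = H_desired @ np.linalg.inv(H_instructed)

# Given E, the instructed pose can be corrected to the desired pose.
assert np.allclose(E @ H_instructed, H_desired)
```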
As stated above, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Islam’s calibration with Sivaneth’s error correction techniques in order to more precisely reduce positioning/pose errors of the robot in real time.
(Sivaneth – [col 5, ln 61 thru col 6, ln 5] To reduce positioning/pose errors of a working robot in its entire working space in real time, the concept of an error matrix can be introduced. The error matrix can indicate the difference between a controller-desired pose (i.e., the pose programmed by the robotic controller) of the robot end-effector and the actual pose of the end-effector (which can be captured by the cameras and converted from the camera space to the robot-base space using the transformation matrix) and can vary as the position of the end-effector changes in the working space. In some embodiments, the error matrix can be expressed as the transformation from the instructed pose to the desired pose in the robot-base space:)
Claim 4
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 3 as outlined above. As shown above, the cited combination also teaches
wherein the processor is configured to calculate the transformation matrix so as to minimize a distance between the command value in the base coordinate system and the theoretical position of the robot in the base coordinate system, which is calculated by the transformation matrix from the position of the robot in the three-dimensional reference coordinate system, in the first state.
(Sivaneth - [col 8, ln 3-15] The transformation matrix can then be derived based on poses generated in the robot-base space and the machine-vision-determined poses in the camera space (operation 314). Various techniques can be used to determine the transformation matrix. For example, equation (4) can be solved based on the predetermined poses in the robot-base space and the camera space using various techniques, including but not limited to: linear least square or SVD techniques, Lie-theory-based techniques, techniques based on quaternion and non-linear minimization or dual quaternion, techniques based on Kronecker product and vectorization, etc.)
EXAMINER NOTE: The above techniques, such as linear least squares, minimize error. When used to calculate transformation matrices such as the one described above, these errors correspond to deviations between commanded values and theoretical positions.
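EXAMINER NOTE (illustrative only): one SVD-based least-squares technique of the general kind Sivaneth lists may be sketched as follows. This is a generic Kabsch-style rigid-transform fit, not Sivaneth's equation (4); the point sets and function name are hypothetical assumptions for illustration.

```python
import numpy as np

def fit_rigid_transform(robot_pts, camera_pts):
    """Estimate rotation R and translation t mapping camera-space points
    onto robot-base-space points in the least-squares sense (Kabsch/SVD)."""
    cr = robot_pts.mean(axis=0)   # centroid in robot-base space
    cc = camera_pts.mean(axis=0)  # centroid in camera space
    H = (camera_pts - cc).T @ (robot_pts - cr)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cr - R @ cc
    return R, t

# Hypothetical corresponding points: positions measured in the camera
# space and the same positions expressed in the robot-base space.
rng = np.random.default_rng(0)
camera_pts = rng.random((6, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
robot_pts = camera_pts @ R_true.T + t_true

R, t = fit_rigid_transform(robot_pts, camera_pts)
# With noiseless correspondences the transform is recovered exactly.
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```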
Claim 5
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 1 as outlined above. Islam further teaches
wherein the processor is configured to evaluate accuracy of the position of the robot with respect to the command value of the operation program,
and in response to the robot being driven with the command value, determine a need for calibrating the mechanism error parameter, based on the position of the robot in the three-dimensional reference coordinate system in the first state and the position of the robot in the three-dimensional reference coordinate system in the second state,
(Islam - [0004] b) outputting a first movement command to the communication interface, wherein the communication interface is configured to communicate the first movement command to the robot to cause the robot arm to move the verification symbol, during or after the first calibration operation, to a location within the camera field of view, the location being a reference location of one or more reference locations for verification of the first calibration operation … d) determining a reference image coordinate for the verification symbol …)
EXAMINER NOTE: First movement command is communicated to the robot. According to this command, the robot moves to a reference location, and the camera determines the coordinates of the verification symbol. As established above with respect to claim 1, movement commands include parameter values (command values).
(Islam - [0005] In an embodiment, the control circuit is configured to perform the calibration verification further by: f… g) outputting a third movement command to the communication interface, wherein the communication interface is configured to communicate the third movement command to the robot to cause the robot arm to move the verification symbol to at least the reference location during the idle period, h) receiving via the communication interface an additional image of the verification symbol from the camera, which is configured to capture the additional image of the verification symbol at least at the reference location during the idle period, the additional image being a verification image for the verification, i) determining a verification image coordinate used for the verification, the verification image coordinate being a coordinate at which the verification symbol appears in the verification image, j) determining a deviation parameter value based on an amount of deviation between the reference image coordinate and the verification image coordinate, the reference image coordinate and the verification image coordinate both associated with the reference location, wherein the deviation parameter value is indicative of a change since the first calibration operation…)
EXAMINER NOTE: Third movement command is communicated to the robot. According to this command, the robot moves to the reference location and the camera again determines the coordinates of the verification symbol. The difference between these coordinates is determined via the deviation parameter value.
(Islam - [0005] … k) determining whether the deviation parameter value exceeds a defined threshold, and l) performing, in response to a determination that the deviation parameter value exceeds the defined threshold, a second calibration operation (e.g., a second camera calibration operation) to determine updated calibration information (e.g., updated camera calibration information).)
EXAMINER NOTE: If the deviation parameter exceeds a threshold, the calibration procedure is carried out.
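EXAMINER NOTE (illustrative only): the threshold comparison described above may be sketched as follows. The image coordinates and threshold value are hypothetical assumptions for illustration, not taken from Islam.

```python
import numpy as np

# Hypothetical image coordinates (pixels) of the verification symbol.
reference_coord = np.array([412.0, 288.0])     # from the reference image
verification_coord = np.array([415.5, 286.0])  # from the verification image

# Deviation parameter value: distance between the two image coordinates.
deviation = np.linalg.norm(verification_coord - reference_coord)

THRESHOLD = 3.0  # hypothetical defined threshold, in pixels
needs_recalibration = bool(deviation > THRESHOLD)
# Here the deviation exceeds the threshold, so a second calibration
# operation would be performed.
assert needs_recalibration
```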
Claim 9
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 1 as outlined above. Islam further teaches
wherein the first state is a reference state of the robot before calibration of the mechanism error parameter is performed.
(Islam - [0004] b) outputting a first movement command to the communication interface, wherein the communication interface is configured to communicate the first movement command to the robot to cause the robot arm to move the verification symbol, during or after the first calibration operation, to a location within the camera field of view, the location being a reference location of one or more reference locations for verification of the first calibration operation … d) determining a reference image coordinate for the verification symbol …)
EXAMINER NOTE: First movement command is communicated to robot. According to this command, the robot moves to a reference location, and the camera determines coordinates of verification symbol. As established above with respect to claim 1, movement commands include parameter values (command values).
(Islam - [0005] In an embodiment, the control circuit is configured to perform the calibration verification further by: f… g) outputting a third movement command to the communication interface, wherein the communication interface is configured to communicate the third movement command to the robot to cause the robot arm to move the verification symbol to at least the reference location during the idle period, h) receiving via the communication interface an additional image of the verification symbol from the camera, which is configured to capture the additional image of the verification symbol at least at the reference location during the idle period, the additional image being a verification image for the verification, i) determining a verification image coordinate used for the verification, the verification image coordinate being a coordinate at which the verification symbol appears in the verification image, j) determining a deviation parameter value based on an amount of deviation between the reference image coordinate and the verification image coordinate, the reference image coordinate and the verification image coordinate both associated with the reference location, wherein the deviation parameter value is indicative of a change since the first calibration operation…)
EXAMINER NOTE: Third movement command is communicated to the robot. According to this command, the robot moves to the reference location and the camera again determines the coordinates of the verification symbol. The difference between these coordinates is determined via the deviation parameter value.
(Islam - [0005] … k) determining whether the deviation parameter value exceeds a defined threshold, and l) performing, in response to a determination that the deviation parameter value exceeds the defined threshold, a second calibration operation (e.g., a second camera calibration operation) to determine updated calibration information (e.g., updated camera calibration information).)
EXAMINER NOTE: If the deviation parameter exceeds a threshold, the calibration procedure is carried out.
Claim 14
Islam teaches
a processor configured to: …
acquire a position of the robot in the … reference coordinate system
(Islam - [0004] … The control circuit of the robot control system may be configured to perform the calibration verification by: a) performing a first calibration operation (e.g., a first camera calibration) to determine calibration information (e.g., camera calibration information), b) outputting a first movement command … to cause the robot arm to move the verification symbol, during or after the first calibration operation, to a location within the camera field of view, the location being a reference location of one or more reference locations for verification of the first calibration operation … d) determining a reference image coordinate for the verification symbol, the reference image coordinate being a coordinate at which the verification symbol appears in the reference image,)
and evaluate accuracy of the position of the robot with respect to a command value of the operation program,
(Islam - [0005] … j) determining a deviation parameter value based on an amount of deviation between the reference image coordinate and the verification image coordinate, the reference image coordinate and the verification image coordinate both associated with the reference location, wherein the deviation parameter value is indicative of a change since the first calibration …)
Islam teaches the aspect of periodically checking calibration of the robot by moving the end effector to a known position (i.e. moving the robot according to the same command value in a first state and a second state), and thus suggests the limitations of
wherein a first state is a state in which the robot having a first mechanism error parameter set therein is driven according to a command value of the operation program,
a second state is a state in which the robot is driven according to the same command values of the operation program after the first state,
(Islam - [0004] … The control circuit of the robot control system may be configured to perform the calibration verification by: a) performing a first calibration operation (e.g., a first camera calibration) to determine calibration information (e.g., camera calibration information), b) outputting a first movement command … to cause the robot arm to move the verification symbol, during or after the first calibration operation, to a location within the camera field of view, the location being a reference location of one or more reference locations for verification of the first calibration operation … d) determining a reference image coordinate for the verification symbol, the reference image coordinate being a coordinate at which the verification symbol appears in the reference image,)
EXAMINER NOTE: Robot (in a first state) is moved to the reference location (a known position) to capture a reference image to be used later for comparison in determining positioning errors.
(Islam - [0111] For example, if the reference image (e.g., 1120) is stored along with one or more parameter values of a movement command used to generate the pose associated with the reference image, or more specifically the pose of the robot arm appearing in the reference image, the one or more parameter values and the model may be used to estimate a pose for the robot arm (e.g., 1153) when the robot arm is moved according to the movement command. ... The estimated pose may be used to estimate a location of a verification symbol (e.g., 1130A) on the robot arm (e.g., 1153), and the estimated location may be used to estimate where the verification symbol is likely to appear in the reference image (e.g., 1120).)
EXAMINER NOTE: Parameter values correspond to command values. The robot moves according to the parameter values during calibration.
(Islam - [0005] In an embodiment, the control circuit is configured to perform the calibration verification further by: f… g) outputting a third movement command to the communication interface, wherein the communication interface is configured to communicate the third movement command to the robot to cause the robot arm to move the verification symbol to at least the reference location during the idle period, h) receiving via the communication interface an additional image of the verification symbol from the camera, which is configured to capture the additional image of the verification symbol at least at the reference location during the idle period, the additional image being a verification image for the verification, i) determining a verification image coordinate used for the verification, the verification image coordinate being a coordinate at which the verification symbol appears in the verification image, j) determining a deviation parameter value based on an amount of deviation between the reference image coordinate and the verification image coordinate, the reference image coordinate and the verification image coordinate both associated with the reference location, wherein the deviation parameter value is indicative of a change since the first calibration operation…)
EXAMINER NOTE: The robot (in a second state) returns to the reference location (driven according to the same command value) to determine the amount of positioning error with the current calibration information.
Islam also indicates that calibration may change due to errors of robot components, which suggests
the mechanical error parameter includes a parameter relating to an error of a component in the robot, and
(Islam - [0131] … In some cases, the change in the environment of the camera or to the robot operation system may be a change in a relationship between arm portions of the robot arm (e.g., between the links 1154A-1154E of the robot arm 1153), or in the arm portions themselves. For example, one of the arm portions (e.g., link 1154D or the robot hand 1155) may become bent or otherwise deformed or damaged … The calibration verification discussed above may provide a quick and efficient technique for detecting the change to a camera (e.g., 1170) and/or a change to an environment of the camera or to the robot operation system 1100. … )
While Islam outlines the conditions for checking calibration, and the outcomes of the calibration procedure, Islam does not provide minute details as to how the calibration is performed once it is determined that calibration is required. However, Sivaneth teaches one known way to perform robot calibration, and teaches
the processor is configured to determine a need for calibrating the mechanism error parameter, based on the position of the robot in the three-dimensional reference coordinate system in the first state and the position of the robot in the three-dimensional reference coordinate system in the second state,
(Sivaneth - [col 5, ln 61 thru col 6, ln 3] To reduce positioning/pose errors of a working robot in its entire working space in real time, the concept of an error matrix can be introduced. The error matrix can indicate the difference between a controller-desired pose (i.e., the pose programmed by the robotic controller) of the robot end-effector and the actual pose of the end-effector (which can be captured by the cameras and converted from the camera space to the robot-base space using the transformation matrix) and can vary as the position of the end-effector changes in the working space.
[col 9, ln 13-31] … after the initial training at operation 418, the controller can generate a test desired pose (operation 420) and move the TCP of the gripper according to the test desired pose (operation 422). The 3D machine-vision module measures the pose of the gripper TCP in the camera space (operation 424). The measured pose in the camera space can be converted to the robot-base space using the transformation matrix to obtain a test instructed pose (operation 426). The neural network can predict/infer an error matrix corresponding to the test instructed pose (operation 428). In addition, the system can compute an error matrix using the test desired pose and the test instructed pose (operation 430). The predicted error matrix and the computed error matrix can be compared to determine whether the difference is smaller than a predetermined threshold (operation 432). If so, the training is completed. If not, additional training samples are to be collected by going back to operation 404. The threshold can vary depending on the positioning accuracy needed for the robotic operation.)
EXAMINER NOTE: The error matrix (mechanism error parameter) is updated so that the desired and instructed poses match. The robot is moved to a test pose (similar to Islam's system), a new error matrix is computed, and the error is compared. This process is repeated until the error is small enough (position in first state and second state match) and the new error matrix (second mechanism error parameter) is used for operations. If the error is already small enough, there is no need for further training (no need for calibration).
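EXAMINER NOTE (illustrative only): the comparison of the predicted and computed error matrices may be sketched as follows. The matrices and threshold are hypothetical assumptions for illustration, not taken from Sivaneth.

```python
import numpy as np

def needs_more_calibration(E_predicted, E_computed, threshold=1e-3):
    """Compare a predicted error matrix against one computed from a test
    pose; further calibration (additional training samples) is indicated
    only when the difference exceeds the threshold."""
    return bool(np.linalg.norm(E_predicted - E_computed) > threshold)

# Hypothetical 4x4 error matrices for a test pose.
E_computed = np.eye(4)
E_predicted_good = np.eye(4) + 1e-5   # close match: calibration holds
E_predicted_bad = np.eye(4) + 0.05    # large mismatch: recalibrate

assert not needs_more_calibration(E_predicted_good, E_computed)
assert needs_more_calibration(E_predicted_bad, E_computed)
```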
Sivaneth also teaches
the mechanical error parameter includes a parameter relating to an error of a component in the robot, and
(Sivaneth - [col 6, ln 15-29] In one example, the robotic controller may send a command to move the end-effector to desired TCP pose H.sub.td. However, due to errors (e.g., errors in the actuation of the joints and end-effector) in the robotic system, when the controller instructs the robotic arm to achieve this pose, the resulting pose is often different from H.sub.td. The actual pose of the end-effector measured by the 3D machine-vision module and transformed from the camera space to the robot-base space can be instructed pose H.sub.ti. Hence, given an instructed pose (i.e., a pose known to the camera), if error matrix E({right arrow over (r)}) is known, one can compute the desired pose that can be used by the controller to instruct the robotic arm to move the end-effector to the instructed pose, thus achieving the eye (camera)-to-hand (robotic controller) coordination.)
Islam's reference coordinate system is said to be defined by a world point arbitrarily placed relative to the robot
(Islam - [0053] In an embodiment, the camera calibration information determined from the camera calibration describes a relationship between the camera 270 and the robot 250, or more specifically a relationship between the camera 270 and a world point 294 that is stationary relative to the base 252 of the robot 250. The world point 294 may represent a world or other environment in which the robot 250 is located, and may be any imaginary point that is stationary relative to the base 252.)
Islam may not teach a plurality of members acting as reference points, but Suzuki9005 teaches
a plurality of members acting as reference points and installed at a region where the robot is installed;
(Suzuki9005 - [0068] In step S4, as illustrated in FIG. 3, the user installs a calibration plate (reference member) 10 in a common field which is commonly included in the measurement area of the camera 3 when a plurality of positions and orientations set in each position and orientation group is taken by the robot body 2. The calibration plate 10 is, for example, a planar plate having a plurality of (at least three) circular patterns 10a and 10b of a predetermined size displayed thereon. The camera 3 can measure positions and orientations of the calibration plate 10 having six degrees of freedom.
[0069] Only one of the plurality of circular patterns of the calibration plate 10 is a hollow circular pattern 10a, which is placed at a corner, and others are solid circular patterns 10b. The existence of the hollow circular pattern 10a provides a totally asymmetrical arrangement about a point, and enables unique identification of the position and orientation of the calibration plate 10 having six degrees of freedom. A calibration plate coordinate system P is set to the calibration plate 10 with reference to the calibration plate 10.)
EXAMINER NOTE: The camera detects the features (members) of the calibration plate and sets the coordinate system P accordingly.
a three-dimensional measuring device configured to measure positions of the reference points; and
(Suzuki9005 - [0114] The robot systems 1 according to the first exemplary embodiment and the second exemplary embodiment is described to use the monocular camera 3 as a visual sensor. However, the visual sensor is not limited thereto and may be, for example, a stereo camera or a laser scanner.)
a processor configured to:
set a three-dimensional reference coordinate system in a three-dimensional space, based on the positions of the reference points measured by the three-dimensional measuring device,
(Suzuki9005 - [0069] Only one of the plurality of circular patterns of the calibration plate 10 is a hollow circular pattern 10a, which is placed at a corner, and others are solid circular patterns 10b. The existence of the hollow circular pattern 10a provides a totally asymmetrical arrangement about a point, and enables unique identification of the position and orientation of the calibration plate 10 having six degrees of freedom. A calibration plate coordinate system P is set to the calibration plate 10 with reference to the calibration plate 10.)
EXAMINER NOTE: The camera detects the features (members) of the calibration plate and sets the coordinate system P accordingly.
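EXAMINER NOTE (illustrative only): setting a reference coordinate system from measured reference points may be sketched as follows. This is a generic construction from three non-collinear points; the point values and function name are hypothetical assumptions, not taken from Suzuki9005.

```python
import numpy as np

def frame_from_points(p0, p1, p2):
    """Construct an orthonormal 3D reference frame from three measured,
    non-collinear reference points: origin at p0, x-axis toward p1,
    z-axis normal to the plane of the three points."""
    x = p1 - p0
    x = x / np.linalg.norm(x)
    v = p2 - p0
    z = np.cross(x, v)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z])  # frame axes expressed in world coords
    return R, p0                    # (orientation, origin) of the frame

# Hypothetical measured positions of three reference points.
R, origin = frame_from_points(np.array([0.0, 0.0, 0.0]),
                              np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 1.0, 0.0]))
assert np.allclose(R, np.eye(3)) and np.allclose(origin, 0.0)
```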
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Islam’s calibration with Suzuki9005’s suggestion to utilize reference members in order to provide a well-known means for specifying the arbitrary world point described by Islam, and obvious to incorporate Sivaneth's error matrix calculations in order to provide a known way of performing calibrations shown to reduce errors (Sivaneth - [col 5, ln 61 thru col 6, ln 3], cited above).
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Islam, Sivaneth, and Suzuki9005 as applied to claim 5 above, and further in view of Bergantz (US-20210291375-A1).
Claim 6
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 5 as outlined above. Islam further teaches
in response to the accuracy of the position of the robot deviating from the predetermined determination range, determine that calibration of the mechanism error parameter is needed,
(Islam - [0005] … j) determining a deviation parameter value based on an amount of deviation between the reference image coordinate and the verification image coordinate, the reference image coordinate and the verification image coordinate both associated with the reference location, wherein the deviation parameter value is indicative of a change since the first calibration …)
The cited combination may not explicitly teach the following limitations in combination. However, Bergantz teaches
wherein the processor is configured to determine whether or not the accuracy of the position of the robot deviates from a predetermined determination range per predetermined period, and in response to the accuracy of the position of the robot deviating from the predetermined determination range, determine that calibration of the mechanism error parameter is needed,
(Bergantz - [0119] FIG. 11 is flow chart for a method 1100 of determining whether a transfer sequence is out of calibration, according to embodiments of the present disclosure. A transfer sequence may be calibrated, and the actual position and/or orientation that is achieved for the transfer sequence may slowly drift over time after such calibration. This may be due to wear on one or more robots, for example. … To detect such drift and/or sudden changes, calibration operations may be performed periodically.)
EXAMINER NOTE: Because the calibration is performed periodically, the deviation is checked per period.
(Bergantz - [0121] At block 1115, the system controller compares the characteristic error values between the two or more times that the calibration procedure was performed. … For example, the system controller may determine whether there is a difference between the originally computed characteristic error value(s) and newly computed characteristic error value(s). ... The system controller may determine, based on such comparisons, any drift in the computed characteristic error values or any sudden change in the characteristic error values. If a difference is determined and that difference exceeds a difference threshold, then the method proceeds to block 1120 and system controller determines that the system has changed …)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Islam’s calibration with Bergantz’s suggestion to determine deviation over time in order to detect drift due to component wear.
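EXAMINER NOTE (illustrative only): the periodic drift check described by Bergantz may be sketched as follows. The error values and difference threshold are hypothetical assumptions for illustration.

```python
# Hypothetical characteristic error values (e.g., in mm) recorded each
# time the periodic calibration procedure runs.
error_history = [0.10, 0.11, 0.12, 0.13, 0.35]

DIFF_THRESHOLD = 0.15  # hypothetical difference threshold

def system_changed(history, threshold=DIFF_THRESHOLD):
    """Compare the newest characteristic error value against the original
    one; a difference above the threshold indicates drift or a sudden
    change, so recalibration is needed."""
    return abs(history[-1] - history[0]) > threshold

assert system_changed(error_history)     # 0.25 difference exceeds 0.15
assert not system_changed([0.10, 0.12])  # 0.02 difference: no change
```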
Claim(s) 7-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Islam, Sivaneth, and Suzuki9005 as applied to claim 5 above, and further in view of Shen (US-20200198146-A1).
Claim 7
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 5 as outlined above. The cited combination may not explicitly teach the following limitations in combination. However, Shen teaches
wherein the processor is configured to, in response to replacement of components constituting the robot being detected, determine that calibration of the mechanism error parameter is needed.
(Shen - [0047] As shown in FIG. 3, determination is made manually by user of the robot arm 2 or automatically by the robot controller 20 in the robot arm 2 to know whether the tool 22 of the robot arm 2 needs calibration (step S20). In one embodiment, user/the robot controller 20 determines that the tool 22 needs calibration when the tool 22 is replaced, the using time of the tool 22 exceeds a first threshold, or the preciseness of the tool 22 is smaller than a second threshold.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Islam’s calibration with Shen’s suggestion to recalibrate after replacement of the tool or robot in order to precisely know the position of the tool and enhance precision of the robot arm.
(Shen - [0045] If the tool 22 needs calibration, the calibration system 1 uses the robot controller 20 to control the robot arm 2 to move/rotate such an absolute position of the TWP 221 of the tool 22 in the robot arm coordinate system {B} can be obtained through above calibration method. (step S16) After obtaining the absolute position of the TWP 221 of the tool 22 in the robot arm coordinate system {B}, the robot controller 20 can precisely know the position of the tool 22 and the operation preciseness of the robot arm 2 can be enhanced.)
Claim 8
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 5 as outlined above. The cited combination may not explicitly teach the following limitations in combination. However, Shen teaches
wherein the processor is configured to, in response to replacement of the robot being detected, determine that calibration of the mechanism error parameter is needed,

(Shen - [0043] Refer now to FIG. 2, which shows the flowchart of the calibration method according to a first embodiment of the present invention. At first, user confirms whether the robot arm 2/imaging device 3 in the calibration system 1 needs re-installation or replacement (step S10). If the calibration system 1 is first-time set up, or any one of the robot arm 2 and the imaging device 3 is replaced, the transformation relationship between the robot arm coordinate system {B} and the imaging device coordinate system {I} needs to establish at first (step S12), namely, establish the transformation matrix.
[0053] Refer to FIG. 5, this figure shows the establishing flowchart according to the first embodiment of the present invention. As mentioned above, to accurately obtain the absolute position of the TWP 221 on the robot arm coordinate system {B}, the imaging device coordinate system {I} and the transformation matrix need to be established. Therefore, the user of the robot arm 2 first assembles or replaces the robot arm 2 and/or imaging device 3 in the calibration system 1 (step S40). Only when the robot arm 2 and/or imaging device 3 is first time assembled or replaced, the steps in FIG. 5 need to be executed to re-establish the imaging device coordinate system {I} and the transformation matrix. At this time, the tool 22 is not assembled to the robot arm 2 yet.
[0047] As shown in FIG. 3, determination is made manually by user of the robot arm 2 or automatically by the robot controller 20 in the robot arm 2 to know whether the tool 22 of the robot arm 2 needs calibration (step S20). In one embodiment, user/the robot controller 20 determines that the tool 22 needs calibration when the tool 22 is replaced, the using time of the tool 22 exceeds a first threshold, or the preciseness of the tool 22 is smaller than a second threshold.)
EXAMINER NOTE: Because the tool is not yet installed on the robot when the entire robot is set up/replaced, the calibration would necessarily need to be carried out once the tool is installed.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Islam, Sivaneth, and Suzuki9005 as applied to claim 9 above, and further in view of Shen and Bergantz.
Claim 10
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 9 as outlined above. While the examiner believes Islam inherently teaches wherein the first state is a state immediately after the robot is installed, Islam does not explicitly state this. However, it is notoriously old and well-known to establish an initial calibration when a robot is installed, as evidenced by Shen
(Shen - [0043] Refer now to FIG. 2, which shows the flowchart of the calibration method according to a first embodiment of the present invention. At first, user confirms whether the robot arm 2/imaging device 3 in the calibration system 1 needs re-installation or replacement (step S10). If the calibration system 1 is first-time set up, or any one of the robot arm 2 and the imaging device 3 is replaced, the transformation relationship between the robot arm coordinate system {B} and the imaging device coordinate system {I} needs to establish at first (step S12), namely, establish the transformation matrix.
[0053] Refer to FIG. 5, this figure shows the establishing flowchart according to the first embodiment of the present invention. As mentioned above, to accurately obtain the absolute position of the TWP 221 on the robot arm coordinate system {B}, the imaging device coordinate system {I} and the transformation matrix need to be established. Therefore, the user of the robot arm 2 first assembles or replaces the robot arm 2 and/or imaging device 3 in the calibration system 1 (step S40). Only when the robot arm 2 and/or imaging device 3 is first time assembled or replaced, the steps in FIG. 5 need to be executed to re-establish the imaging device coordinate system {I} and the transformation matrix. At this time, the tool 22 is not assembled to the robot arm 2 yet.
[0047] As shown in FIG. 3, determination is made manually by user of the robot arm 2 or automatically by the robot controller 20 in the robot arm 2 to know whether the tool 22 of the robot arm 2 needs calibration (step S20). In one embodiment, user/the robot controller 20 determines that the tool 22 needs calibration when the tool 22 is replaced, the using time of the tool 22 exceeds a first threshold, or the preciseness of the tool 22 is smaller than a second threshold.)
Islam may not explicitly teach the following limitations in combination. However, Bergantz teaches
the second state is a state when at least some components of the robot deteriorate by using the robot.
(Bergantz - [0119] FIG. 11 is flow chart for a method 1100 of determining whether a transfer sequence is out of calibration, according to embodiments of the present disclosure. A transfer sequence may be calibrated, and the actual position and/or orientation that is achieved for the transfer sequence may slowly drift over time after such calibration. This may be due to wear on one or more robots, for example. … To detect such drift and/or sudden changes, calibration operations may be performed periodically.
[0121] At block 1115, the system controller compares the characteristic error values between the two or more times that the calibration procedure was performed. … For example, the system controller may determine whether there is a difference between the originally computed characteristic error value(s) and newly computed characteristic error value(s). ... The system controller may determine, based on such comparisons, any drift in the computed characteristic error values or any sudden change in the characteristic error values. If a difference is determined and that difference exceeds a difference threshold, then the method proceeds to block 1120 and system controller determines that the system has changed …)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to further modify Islam’s system with Shen’s suggestion to recalibrate after replacement of the tool or robot in order to precisely know the position of the tool and enhance precision of the robot arm, and Bergantz’s suggestion to perform calibration periodically in order to detect drift due to wear.
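EXAMINER NOTE: The drift check Bergantz describes in [0121] amounts to comparing characteristic error values from two calibration runs against a difference threshold. The following minimal sketch illustrates that comparison; all names and the threshold value are hypothetical, as Bergantz discloses no source code.

```python
# Simplified illustration of Bergantz's drift detection ([0121]).
# Names and the threshold value are hypothetical, not drawn from Bergantz.

DIFFERENCE_THRESHOLD = 0.5  # application-specific tolerance

def system_has_changed(original_errors, new_errors,
                       threshold=DIFFERENCE_THRESHOLD):
    """Compare characteristic error values from two calibration runs.

    Returns True if any value drifted by more than the threshold,
    corresponding to Bergantz's block 1120 (system has changed)."""
    return any(abs(new - old) > threshold
               for old, new in zip(original_errors, new_errors))

# Small drift stays within tolerance; large drift is flagged.
print(system_has_changed([0.10, 0.20], [0.15, 0.25]))  # False
print(system_has_changed([0.10, 0.20], [0.15, 0.90]))  # True
```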
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Islam, Sivaneth, and Suzuki9005 as applied to claim 9 above, and further in view of Shen and Suzuki (US-20160059419-A1).
Claim 11
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 9 as outlined above. Islam is silent as to the replacement status of the robot in the first and second states. However, Suzuki teaches
wherein the first state is a state before the robot is replaced
(Suzuki - [0077] The N calibration positions and orientations .sup.RH.sub.T[i] generated as a result of the teaching performed in step S13 are stored in the memory (the ROM 42 or the RAM 43) as calibration position and position data. The teaching need not be performed again, and if the robot arm 1 or the fixed camera 3 is replaced and calibration is performed again, for example, results of a past teaching operation may be used.)
And Shen teaches
and the second state is a state after replacement of the robot, in which a new robot is installed.
(Shen - [0043] Refer now to FIG. 2, which shows the flowchart of the calibration method according to a first embodiment of the present invention. At first, user confirms whether the robot arm 2/imaging device 3 in the calibration system 1 needs re-installation or replacement (step S10). If the calibration system 1 is first-time set up, or any one of the robot arm 2 and the imaging device 3 is replaced, the transformation relationship between the robot arm coordinate system {B} and the imaging device coordinate system {I} needs to establish at first (step S12), namely, establish the transformation matrix.
[0053] Refer to FIG. 5, this figure shows the establishing flowchart according to the first embodiment of the present invention. As mentioned above, to accurately obtain the absolute position of the TWP 221 on the robot arm coordinate system {B}, the imaging device coordinate system {I} and the transformation matrix need to be established. Therefore, the user of the robot arm 2 first assembles or replaces the robot arm 2 and/or imaging device 3 in the calibration system 1 (step S40). Only when the robot arm 2 and/or imaging device 3 is first time assembled or replaced, the steps in FIG. 5 need to be executed to re-establish the imaging device coordinate system {I} and the transformation matrix. At this time, the tool 22 is not assembled to the robot arm 2 yet.
[0047] As shown in FIG. 3, determination is made manually by user of the robot arm 2 or automatically by the robot controller 20 in the robot arm 2 to know whether the tool 22 of the robot arm 2 needs calibration (step S20). In one embodiment, user/the robot controller 20 determines that the tool 22 needs calibration when the tool 22 is replaced, the using time of the tool 22 exceeds a first threshold, or the preciseness of the tool 22 is smaller than a second threshold.)
EXAMINER NOTE: Because the tool is not yet installed on the robot when the entire robot is set up/replaced, the calibration would necessarily need to be carried out once the tool is installed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Islam’s calibration with Shen’s suggestion to recalibrate after replacement of the tool or robot in order to precisely know the position of the tool and enhance precision of the robot arm, and Suzuki’s suggestion to store reference locations in order to eliminate the need to re-teach reference locations when installing a new robot.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Islam, Sivaneth, and Suzuki9005 as applied to claim 9 above, and further in view of Kim (US-20040010345-A1).
Claim 12
The combination of Islam, Sivaneth, and Suzuki9005 teaches the limitations of claim 9 as outlined above. Islam may not explicitly teach the following limitations in combination. However, Kim teaches
the first state is a state before some components of the robot are replaced, and
the second state is a state after some components of the robot are replaced.
(Kim - [0025] FIG. 4 is a control flowchart of a method of calibrating the robot, according to the present invention. As shown in FIG. 4, the control unit 100 performs an initial calibration using a self-calibration program of the robot arm 13, which is stored in the storage unit 140 at operation S100.
[0026] The control unit 100 stores correction data obtained by the performance of the initial calibration in the storage unit 140 at operation S101. The control unit 100 operates the motor 120 through the motor driving unit 110 to allow the robot arm 13 to move to a contact position at operation S102. At this time, the control unit 100 inputs the moving displacement of the robot arm 13, which is fed back through the encoder 130.)
EXAMINER NOTE: Robot is moved to a contact position (reference location, first state)
(Kim - [0028] If it is determined that the robot arm 13 has reached the contact position at operation S103, the control unit 100 stops the movement of the robot arm 13 by stopping the motor 120 through the motor driving unit 110 at operation S104. The control unit 100 stores the moving displacement of the robot arm 13 obtained through the encoder 130 in the storage unit 140 as a first moving displacement at operation S105.)
EXAMINER NOTE: The displacement obtained in moving to the contact position is recorded (reference coordinates).
Kim does not specify a coordinate system. However, it is routine and well-known in the art to utilize coordinate transformations to convert values from one arbitrary coordinate system to another, as evidenced by Suzuki9005
(Suzuki9005 - [0032] Specifically, by multiplying the relative position and orientation .sup.AH.sub.B of the coordinate system B with respect to the coordinate system A by a relative position and orientation .sup.BH.sub.C of the coordinate system C with respect to the coordinate system B, a relative position and orientation .sup.AH.sub.C of the coordinate system C with respect to the coordinate system A can be obtained.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to further modify Islam’s system with Kim’s suggestion to recalibrate after part changes to compensate for changes in offsets resulting from changing parts, and Suzuki9005’s teaching of matrix transformations in order to specify calibration parameters in any arbitrary coordinate system. The choice of coordinate system in the context of this claim is arbitrary and may be considered a matter of design choice.
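EXAMINER NOTE: The transformation chaining described by Suzuki9005 in [0032] corresponds to the standard composition of relative poses (homogeneous transforms). The relation may be written in conventional notation as follows (this is a standard rendering of the relation, not a reproduction of Suzuki9005's figures):

```latex
% Pose of frame C in frame A, obtained by chaining through frame B
% (Suzuki9005 [0032]): multiply the relative pose of B with respect
% to A by the relative pose of C with respect to B.
{}^{A}H_{C} \;=\; {}^{A}H_{B}\,{}^{B}H_{C}
```

Because such compositions hold for any choice of frames, calibration values expressed in one coordinate system may be re-expressed in any other, supporting the position that the choice of coordinate system is a matter of design choice.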
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES MILLER WATTS whose telephone number is (703)756-1249. The examiner can normally be reached 7:30-5:30 M-TH.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached at 571-270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES MILLER WATTS III/Examiner, Art Unit 3657
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657