DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on 12/31/2025, in response to the Non-Final Office Action mailed 09/26/2025, has been received and made of record. Claims 1-24 are pending in the current application. Claims 1, 4, 9, 12 and 17-24 have been amended.
Response to Arguments
Applicant’s arguments filed on 12/31/2025 have been fully considered.
In the Arguments/Remarks:
Re: Rejection of the Claims Under 35 U.S.C. 112(b)
Applicant’s arguments regarding the rejection of claims 8, 16 and 24 under 35 U.S.C. 112(b) have been fully considered but are not persuasive. Applicant’s arguments, beginning on page 8, recite that “although the term “special-shaped object” is not expressly called out with a definition, a person of ordinary skill in the art would be able to interpret the meaning in view of the Specification.” Examiner respectfully disagrees. Examiner submits that the term “special-shaped object” is a relative term whose meaning is subjective and may vary from person to person, even among those of ordinary skill in the same art. Therefore, the rejection of claims 8, 16 and 24 under 35 U.S.C. 112(b) is maintained.
Re: Rejection of the Claims Under 35 U.S.C. 102(a)(1)
Applicant’s amendments introduce newly amended claim limitations that were not previously examined. For the sake of brevity, the examiner will address the arguments with respect to claim 1; however, the response applies equally to its corresponding dependent claims, as well as to independent claims 9 and 17 and their corresponding dependent claims.
Applicant states that the newly amended limitations incorporate “using a probe” and that Troy (US 2019/0242971 A1) fails to disclose the use of a probe. However, examiner respectfully disagrees. Examiner submits that, under the broadest reasonable interpretation (BRI) of the claim language, Troy still discloses or suggests “using a probe”. According to the Cambridge Dictionary (see webpage attached), one definition of a probe is “a device that is put inside something to test or record information”. Examiner submits that the cameras disclosed by Troy operate as a probe under this definition. Paragraph 103 of Troy discloses “The relative object localization process also applies to consumer-level applications, where 3-D data is almost never available. For the average consumer the most common measurement instrument is a simple tape measure, which is difficult to use for moderate to complex measurement activities. The consumer-level applications could include indoor or outdoor uses, such as: measuring a room”. Examiner further submits that in this example Troy discloses using the LPS and its cameras as a probe for measuring the room. Accordingly, after further consideration, applicant’s arguments directed towards the newly amended limitations are not persuasive, and the rejection of claims 1-24 under 35 U.S.C. 102(a)(1) still applies and is maintained.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 8, 16 and 24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claims 8, 16 and 24, each of these dependent claims recites the term “special-shaped object”. The term “special-shaped object” is a relative term which renders the claim indefinite. The term “special-shaped object” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Appropriate correction and/or clarification is earnestly solicited.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Troy (US 2019/0242971 A1).
Regarding claim 1, Troy teaches a method for manipulating a robot comprising: obtaining, using a probe, a first set of data related to at least one of position and orientation of at least three calibration objects, the at least three calibration objects being non-collinear to each other in an object coordinate system when a target object is in a first state, and the at least three calibration objects being in a fixed relation to the target object [(see at least Fig.1, paragraphs 39-42) As in 39 “The local positioning system shown in FIG. 1 further comprises three-dimensional localization software which is loaded into the computer (not shown). For example, the three-dimensional localization software may be of a type that uses multiple non-collinear calibration points 14a-14c on the target object 10 to define the location (position and orientation) of video camera 40 relative to the target object 10. Typically calibration points are selected which correspond to features that can be easily located on the target object 10.” As in 42 “In accordance with the setup shown in FIG. 1, either the target object 10 can be moved from Object Position 1 to Object Position 2 while the LPS is stationary, or the LPS can be moved from Measurement Instrument Position 1 to Measurement Instrument Position 2 while the target object 10 is stationary. In either case, relative location data (position and orientation) can be acquired.”]; determining a second set of data related to at least one of position and orientation of the at least three calibration objects when the target object is in a second state different from the first state [(see at least Fig.1, paragraph 42) “In accordance with the setup shown in FIG. 1, either the target object 10 can be moved from Object Position 1 to Object Position 2 while the LPS is stationary, or the LPS can be moved from Measurement Instrument Position 1 to Measurement Instrument Position 2 while the target object 10 is stationary. In either case, relative location data (position and orientation) can be acquired.”]; determining a transformation relationship between the first set of data and the second set of data [(see at least Figs.2-3, paragraphs 43-44) As in 43 “FIG. 2 shows steps of a method in accordance with one embodiment for measuring the location offset or difference when either the target object 10 or the LPS is moved. First, the system determines the X, Y, Z values for the three measured points 14a-14c when the target object 10 and the LPS are at their initial locations (step 22). Then the system determines the X, Y, Z values for the three measured points 14a-14c after either the target object 10 or the LPS has been moved to a new location (step 24). These values are input into a relative localization process 20 which processes the X, Y, Z values for the initial and moved locations and then outputs relative location data (step 26).” As in 44 “Thereafter the offset position and orientation of the target object 10 (this will be discussed in detail later) are computed (step 58) by the LPS computer, which retrieves and processes the stored measurement data. The results are displayed to the user or sent to other application(s) (step 60).”]; determining a calibrated object coordinate system based on the object coordinate system and the transformation relationship [(see at least paragraph 41) “Using the measured data, the calibration process computes the 4×4 homogeneous transformation matrix that defines the position and orientation of the video camera 40 (and laser range meter) relative to the target object 10.”]; and controlling the robot to process the target object in a predetermined way under the calibrated object coordinate system. [(see at least Figs.6-7, paragraphs 75-78) As in 76 “The LPS seen in FIG. 7 can be used to determine the relative offset between a part 90 and the base 84 of a robotic arm 86 that may carry an end effector 88 on a distal end thereof. The robot controller 80 controls the robotic arm 86 and operates the end effector 88 for performing machining or other operations on the part 90. The LPS computer 48 communicates with the robot controller 80 through a cable 78. The robot controller 80 is preferably a computer programmed with motion control software whereby the location of end effector 88 can be controlled as a function of object location data output by the LPS computer 48.” As in 77 “The location offset between base 84 and part 90 can be determined in the manner previously described with reference to FIG. 6. First, the LPS computer 48 determines the X, Y, Z values for measured points 92a-92c on part 90 and the X, Y, Z values for measured points 94a-94c on base 84 when base 84 and part 90 are at their respective initial locations. After base 84 and part 90 have moved to new locations, again LPS is used to determine the X, Y, Z values for measured points 92a-92c and 94a-94c. The LPS system 48 then uses a relative localization process to produce data representing the location of the part 90 relative to the base 84 at their new locations. This data is output to the robot controller 80 via cable 78.”]
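For the convenience of the record, the relative localization computation mapped above can be illustrated with a short numerical sketch. The sketch follows the reference-frame definition quoted from paragraph 48 of Troy (one point serving as the origin, a second point fixing one axis, and vector cross products supplying the remaining orthogonal axes); the specific point values and variable names below are the examiner's hypothetical illustration and are not taken from Troy or from the claims.

import numpy as np

def frame_from_points(p0, p1, p2):
    # Build a 4x4 homogeneous frame from three non-collinear points,
    # per the reference-frame definition quoted from Troy paragraph 48:
    # p0 serves as the origin, p0->p1 fixes the x-axis, and vector
    # cross products supply the two remaining orthogonal axes.
    x = (p1 - p0) / np.linalg.norm(p1 - p0)
    z = np.cross(x, p2 - p0)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
    return T

# First set of data (target object in the first state) and second set
# (target object in the second state); the values are hypothetical.
first = [np.array(p, dtype=float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
second = [np.array(p, dtype=float) for p in [(2, 1, 0), (2, 2, 0), (1, 1, 0)]]

# The transformation relationship (rotation and translation) between
# the two states is the offset between the two measured frames.
offset = frame_from_points(*second) @ np.linalg.inv(frame_from_points(*first))
print(np.round(offset, 3))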
Regarding claim 2, Troy teaches wherein each of the at least three calibration objects is one of a spherical structure, a hemispherical structure, a square structure, a rectangular structure or a triangle structure. [(see at least Fig.1- elements 14a-14c, paragraph 39) “The local positioning system shown in FIG. 1 further comprises three-dimensional localization software which is loaded into the computer (not shown). For example, the three-dimensional localization software may be of a type that uses multiple non-collinear calibration points 14a-14c on the target object 10 to define the location (position and orientation) of video camera 40 relative to the target object 10. Typically calibration points are selected which correspond to features that can be easily located on the target object 10.”] Examiner notes that the calibration points are spherical in shape as depicted in Fig.1.
Regarding claim 3, Troy teaches wherein the at least three calibration objects are arranged on at least one of the target object and a fixture to which the target object is fixed. [(see at least Fig.1, paragraph 40) “The measured distances to the calibration points 14a-14c may be used in coordination with the pan and tilt angles from the pan-tilt mechanism 42 to solve for the camera position and orientation relative to the target object 10. A method for generating an instrument to target calibration transformation matrix (sometimes referred to as the camera pose) is disclosed in U.S. Pat. No. 7,859,655. Using the measured data, the calibration process computes the 4×4 homogeneous transformation matrix that defines the position and orientation of the video camera 40 (and laser range meter) relative to the target object 10.”]
Regarding claim 4, Troy teaches wherein the object coordinate system is a simulation coordinate system, and the first set of data are obtained from the simulation coordinate system; or wherein the object coordinate system is a physical coordinate system, and the first set of data are determined by the probe of the robot in the physical coordinate system. [(see at least Fig.7, paragraphs 36-43) As in 36 “The LPS comprises a single video camera 40 and a laser range meter (not shown) on a controllable pan-tilt mechanism 42 with angle measurement capability mounted on a tripod 44. The video camera 40 may have automated (remotely controlled) zoom capabilities. The video camera 40 may additionally include an integral crosshair generator to facilitate precise locating of a point within an optical image field display of the video camera. The video camera 40 and pan-tilt mechanism 42 may be operated by a computer (not shown in FIG. 1, but see LPS computer 48 in FIG. 6). The computer communicates with the video camera 40 and the pan-tilt mechanism 42 through a video/control cable. Alternatively, the computer may communicate with video camera 40 and pan-tilt mechanism 42 through a wireless communication pathway.” As in 41 “Once the position and orientation of the video camera 40 with respect to the target object 30 have been determined and a camera pose transformation matrix has been generated, camera pan data (angle of rotation of the video camera 40 about the azimuth axis) and tilt data (angle of rotation of the video camera 40 about the elevation axis) may be used in conjunction with the calculated position and orientation of the video camera 40 to determine the X, Y and Z coordinates of any point of interest on the target object 30 in the coordinate system of the initial location of the target object 10.” As in 43 “FIG. 2 shows steps of a method in accordance with one embodiment for measuring the location offset or difference when either the target object 10 or the LPS is moved. First, the system determines the X, Y, Z values for the three measured points 14a-14c when the target object 10 and the LPS are at their initial locations (step 22). Then the system determines the X, Y, Z values for the three measured points 14a-14c after either the target object 10 or the LPS has been moved to a new location (step 24). These values are input into a relative localization process 20 which processes the X, Y, Z values for the initial and moved locations and then outputs relative location data (step 26).”]
Regarding claim 5, Troy teaches wherein, when the target object is in the second state, at least one of a position and an orientation of the target object is changed with respect to that of the first state. [(see at least Fig.1, paragraphs 41-43) As in 42 “In accordance with the setup shown in FIG. 1, either the target object 10 can be moved from Object Position 1 to Object Position 2 while the LPS is stationary, or the LPS can be moved from Measurement Instrument Position 1 to Measurement Instrument Position 2 while the target object 10 is stationary. In either case, relative location data (position and orientation) can be acquired.”]
Regarding claim 6, Troy teaches wherein the transformation relationship comprises a transformation matrix between the first set of data and the second set of data; and wherein the transformation matrix comprises a translation matrix and a rotation matrix. [(see at least Figs.3-4, paragraphs 48-55) As in 48 “The position and orientation offset determination process starts with specifying or placing a set of three non-collinear points on the target object to be measured by LPS. In one embodiment of the reference frame definition process, one of the points in the set of points may be defined as the origin, another specified as one of the coordinate axes (such as the x-axis), and the third point may be used in specifying the direction of one of the orthogonal axes (such as the z-axis) using a vector cross product calculation. A third axis that is orthogonal to the other two will be determined using another vector cross product step. These steps are shown in Eqs. (1) below:”]
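For clarity of this mapping, a 4×4 homogeneous transformation matrix of the kind Troy computes factors into a translation matrix and a rotation matrix in the standard way; the notation below is the examiner's illustration and is not taken from Troy:

$$T=\begin{bmatrix}R & t\\ 0_{1\times 3} & 1\end{bmatrix}=\underbrace{\begin{bmatrix}I_{3} & t\\ 0_{1\times 3} & 1\end{bmatrix}}_{\text{translation matrix}}\;\underbrace{\begin{bmatrix}R & 0_{3\times 1}\\ 0_{1\times 3} & 1\end{bmatrix}}_{\text{rotation matrix}},$$

where $R$ is a 3×3 rotation matrix and $t$ is a 3×1 translation vector.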
Regarding claim 7, Troy teaches wherein the predetermined way comprises a path; and wherein the path and an origin point of the object coordinate system meet a first relation; wherein the path and an origin point of the calibrated object coordinate system meet the first relation. [(see at least paragraph 45, Clm 1) As in 45 “Some use cases for this process include setting the position and orientation offset for a workpiece in a manufacturing workcell or inspection application. An example of the use of this process could be for object-to-robot alignment in a robotic workcell, where large parts or a robot are moved relative to each other before starting an assembly or inspection task. In one such application the robot may be programmed to move its end effector along a specific path relative to the workpiece (target object) in the workcell. But the location of the workpiece relative to the robot may be variable, and subsequent workpieces of the same type may not be in the same location as the initial workpiece that was used during initial programming of the robot. In this situation, if the difference between the location of the current workpiece and the location of the initial workpiece were known, the offset could be sent to the robot and used as a base or origin location offset in the robot control program. The process described above with reference to FIG. 3 can be used to address this type of application.”]
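As an illustration of how a base/origin offset of the kind described in paragraph 45 of Troy preserves the programmed path relative to the workpiece, the sketch below applies a 4×4 offset of the form computed for claim 1 above to a set of path waypoints; the waypoint values and variable names are hypothetical and offered only for clarity.

import numpy as np

# Hypothetical programmed path: waypoints expressed in homogeneous
# coordinates relative to the original object coordinate system.
path = np.array([[0.0, 0.0, 0.0, 1.0],
                 [0.1, 0.0, 0.0, 1.0],
                 [0.1, 0.2, 0.0, 1.0]])

# 'offset' stands for the 4x4 transformation relationship between the
# first and second states (identity here as a placeholder). Applying
# it as a base/origin offset re-expresses every waypoint in the
# calibrated object coordinate system, so the path keeps the same
# relation (the claimed "first relation") to the workpiece without
# reprogramming the path itself.
offset = np.eye(4)
calibrated_path = (offset @ path.T).T
print(np.round(calibrated_path, 3))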
Regarding claim 8, Troy teaches wherein the target object is a special-shaped object. [(see at least paragraph 39) “For example, the three-dimensional localization software may be of a type that uses multiple non-collinear calibration points 14a-14c on the target object 10 to define the location (position and orientation) of video camera 40 relative to the target object 10. Typically calibration points are selected which correspond to features that can be easily located on the target object 10.”]
Regarding claim 9, Troy teaches an electronic device, comprising: at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions executable by the at least one processing unit, the instructions, when executed by the at least one processing unit, [(see at least paragraph 105) “As used in the claims, the term “computer system” should be construed broadly to encompass a system having at least one computer or processor, and which may have multiple computers or processors that communicate through a network or bus. As used in the preceding sentence, the terms “computer” and “processor” both refer to devices comprising a processing unit (e.g., a central processing unit) and some form of memory (i.e., computer-readable medium) for storing a program which is readable by the processing unit.”] causing the device to perform acts comprising: obtaining, using a probe, a first set of data related to at least one of position and orientation of at least three calibration objects, the at least three calibration objects being non-collinear to each other in an object coordinate system when a target object is in a first state, and the at least three calibration objects being in a fixed relation to the target object [(see at least Fig.1, paragraphs 39-42) As in 39 “The local positioning system shown in FIG. 1 further comprises three-dimensional localization software which is loaded into the computer (not shown). For example, the three-dimensional localization software may be of a type that uses multiple non-collinear calibration points 14a-14c on the target object 10 to define the location (position and orientation) of video camera 40 relative to the target object 10. Typically calibration points are selected which correspond to features that can be easily located on the target object 10.” As in 42 “In accordance with the setup shown in FIG. 1, either the target object 10 can be moved from Object Position 1 to Object Position 2 while the LPS is stationary, or the LPS can be moved from Measurement Instrument Position 1 to Measurement Instrument Position 2 while the target object 10 is stationary. In either case, relative location data (position and orientation) can be acquired.”]; determining a second set of data related to at least one of position and orientation of the at least three calibration objects when the target object is in a second state different from the first state [(see at least Fig.1, paragraph 42) “In accordance with the setup shown in FIG. 1, either the target object 10 can be moved from Object Position 1 to Object Position 2 while the LPS is stationary, or the LPS can be moved from Measurement Instrument Position 1 to Measurement Instrument Position 2 while the target object 10 is stationary. In either case, relative location data (position and orientation) can be acquired.”]; determining a transformation relationship between the first set of data and the second set of data [(see at least Figs.2-3, paragraphs 43-44) As in 43 “FIG. 2 shows steps of a method in accordance with one embodiment for measuring the location offset or difference when either the target object 10 or the LPS is moved. First, the system determines the X, Y, Z values for the three measured points 14a-14c when the target object 10 and the LPS are at their initial locations (step 22). Then the system determines the X, Y, Z values for the three measured points 14a-14c after either the target object 10 or the LPS has been moved to a new location (step 24). These values are input into a relative localization process 20 which processes the X, Y, Z values for the initial and moved locations and then outputs relative location data (step 26).” As in 44 “Thereafter the offset position and orientation of the target object 10 (this will be discussed in detail later) are computed (step 58) by the LPS computer, which retrieves and processes the stored measurement data. The results are displayed to the user or sent to other application(s) (step 60).”]; determining a calibrated object coordinate system based on the object coordinate system and the transformation relationship [(see at least paragraph 41) “Using the measured data, the calibration process computes the 4×4 homogeneous transformation matrix that defines the position and orientation of the video camera 40 (and laser range meter) relative to the target object 10.”]; and controlling the robot to process the target object in a predetermined way under the calibrated object coordinate system. [(see at least Figs.6-7, paragraphs 75-78) As in 76 “The LPS seen in FIG. 7 can be used to determine the relative offset between a part 90 and the base 84 of a robotic arm 86 that may carry an end effector 88 on a distal end thereof. The robot controller 80 controls the robotic arm 86 and operates the end effector 88 for performing machining or other operations on the part 90. The LPS computer 48 communicates with the robot controller 80 through a cable 78. The robot controller 80 is preferably a computer programmed with motion control software whereby the location of end effector 88 can be controlled as a function of object location data output by the LPS computer 48.” As in 77 “The location offset between base 84 and part 90 can be determined in the manner previously described with reference to FIG. 6. First, the LPS computer 48 determines the X, Y, Z values for measured points 92a-92c on part 90 and the X, Y, Z values for measured points 94a-94c on base 84 when base 84 and part 90 are at their respective initial locations. After base 84 and part 90 have moved to new locations, again LPS is used to determine the X, Y, Z values for measured points 92a-92c and 94a-94c. The LPS system 48 then uses a relative localization process to produce data representing the location of the part 90 relative to the base 84 at their new locations. This data is output to the robot controller 80 via cable 78.”]
Regarding claim 10, Troy teaches wherein each of the at least three calibration objects is one of a spherical structure, a hemispherical structure, a square structure, a rectangular structure or a triangle structure. [(see at least Fig.1- elements 14a-14c, paragraph 39) “The local positioning system shown in FIG. 1 further comprises three-dimensional localization software which is loaded into the computer (not shown). For example, the three-dimensional localization software may be of a type that uses multiple non-collinear calibration points 14a-14c on the target object 10 to define the location (position and orientation) of video camera 40 relative to the target object 10. Typically calibration points are selected which correspond to features that can be easily located on the target object 10.”] Examiner notes that the calibration points are spherical in shape as depicted in Fig.1.
Regarding claim 11, Troy teaches wherein the at least three calibration objects are arranged on at least one of the target object and a fixture to which the target object is fixed. [(see at least Fig.1, paragraph 40) “The measured distances to the calibration points 14a-14c may be used in coordination with the pan and tilt angles from the pan-tilt mechanism 42 to solve for the camera position and orientation relative to the target object 10. A method for generating an instrument to target calibration transformation matrix (sometimes referred to as the camera pose) is disclosed in U.S. Pat. No. 7,859,655. Using the measured data, the calibration process computes the 4×4 homogeneous transformation matrix that defines the position and orientation of the video camera 40 (and laser range meter) relative to the target object 10.”]
Regarding claim 12, Troy teaches wherein the object coordinate system is a simulation coordinate system, and the first set of data are obtained from the simulation coordinate system; or wherein the object coordinate system is a physical coordinate system, and the first set of data are determined by the probe of the robot in the physical coordinate system. [(see at least Fig.7, paragraphs 36-43) As in 36 “The LPS comprises a single video camera 40 and a laser range meter (not shown) on a controllable pan-tilt mechanism 42 with angle measurement capability mounted on a tripod 44. The video camera 40 may have automated (remotely controlled) zoom capabilities. The video camera 40 may additionally include an integral crosshair generator to facilitate precise locating of a point within an optical image field display of the video camera. The video camera 40 and pan-tilt mechanism 42 may be operated by a computer (not shown in FIG. 1, but see LPS computer 48 in FIG. 6). The computer communicates with the video camera 40 and the pan-tilt mechanism 42 through a video/control cable. Alternatively, the computer may communicate with video camera 40 and pan-tilt mechanism 42 through a wireless communication pathway.” As in 41 “Once the position and orientation of the video camera 40 with respect to the target object 30 have been determined and a camera pose transformation matrix has been generated, camera pan data (angle of rotation of the video camera 40 about the azimuth axis) and tilt data (angle of rotation of the video camera 40 about the elevation axis) may be used in conjunction with the calculated position and orientation of the video camera 40 to determine the X, Y and Z coordinates of any point of interest on the target object 30 in the coordinate system of the initial location of the target object 10.” As in 43 “FIG. 2 shows steps of a method in accordance with one embodiment for measuring the location offset or difference when either the target object 10 or the LPS is moved. First, the system determines the X, Y, Z values for the three measured points 14a-14c when the target object 10 and the LPS are at their initial locations (step 22). Then the system determines the X, Y, Z values for the three measured points 14a-14c after either the target object 10 or the LPS has been moved to a new location (step 24). These values are input into a relative localization process 20 which processes the X, Y, Z values for the initial and moved locations and then outputs relative location data (step 26).”]
Regarding claim 13, Troy teaches wherein, when the target object is in the second state, at least one of a position and an orientation of the target object is changed with respect to that of the first state. [(see at least Fig.1, paragraphs 41-43) As in 42 “In accordance with the setup shown in FIG. 1, either the target object 10 can be moved from Object Position 1 to Object Position 2 while the LPS is stationary, or the LPS can be moved from Measurement Instrument Position 1 to Measurement Instrument Position 2 while the target object 10 is stationary. In either case, relative location data (position and orientation) can be acquired.”]
Regarding claim 14, Troy teaches wherein the transformation relationship comprises a transformation matrix between the first set of data and the second set of data; and wherein the transformation matrix comprises a translation matrix and a rotation matrix. [(see at least Figs.3-4, paragraphs 48-55) As in 48 “The position and orientation offset determination process starts with specifying or placing a set of three non-collinear points on the target object to be measured by LPS. In one embodiment of the reference frame definition process, one of the points in the set of points may be defined as the origin, another specified as one of the coordinate axes (such as the x-axis), and the third point may be used in specifying the direction of one of the orthogonal axes (such as the z-axis) using a vector cross product calculation. A third axis that is orthogonal to the other two will be determined using another vector cross product step. These steps are shown in Eqs. (1) below:”]
Regarding claim 15, Troy teaches wherein the predetermined way comprises a path; and wherein the path and an origin point of the object coordinate system meet a first relation; wherein the path and an origin point of the calibrated object coordinate system meet the first relation. [(see at least paragraph 45, Clm 1) As in 45 “Some use cases for this process include setting the position and orientation offset for a workpiece in a manufacturing workcell or inspection application. An example of the use of this process could be for object-to-robot alignment in a robotic workcell, where large parts or a robot are moved relative to each other before starting an assembly or inspection task. In one such application the robot may be programmed to move its end effector along a specific path relative to the workpiece (target object) in the workcell. But the location of the workpiece relative to the robot may be variable, and subsequent workpieces of the same type may not be in the same location as the initial workpiece that was used during initial programming of the robot. In this situation, if the difference between the location of the current workpiece and the location of the initial workpiece were known, the offset could be sent to the robot and used as a base or origin location offset in the robot control program. The process described above with reference to FIG. 3 can be used to address this type of application.”]
Regarding claim 16, Troy teaches wherein the target object is a special-shaped object. [(see at least paragraph 39) “For example, the three-dimensional localization software may be of a type that uses multiple non-collinear calibration points 14a-14c on the target object 10 to define the location (position and orientation) of video camera 40 relative to the target object 10. Typically calibration points are selected which correspond to features that can be easily located on the target object 10.”]
Regarding claim 17, Troy teaches a non-transitory computer readable storage medium having computer readable program instructions stored thereon which, when executed by a processing unit, [(see at least paragraph 105) “As used in the claims, the term “computer system” should be construed broadly to encompass a system having at least one computer or processor, and which may have multiple computers or processors that communicate through a network or bus. As used in the preceding sentence, the terms “computer” and “processor” both refer to devices comprising a processing unit (e.g., a central processing unit) and some form of memory (i.e., computer-readable medium) for storing a program which is readable by the processing unit.”] cause the processing unit to perform acts comprising: obtaining, using a probe, a first set of data related to at least one of position and orientation of at least three calibration objects, the at least three calibration objects being non-collinear to each other in an object coordinate system when a target object is in a first state, and the at least three calibration objects being in a fixed relation to the target object [(see at least Fig.1, paragraphs 39-42) As in 39 “The local positioning system shown in FIG. 1 further comprises three-dimensional localization software which is loaded into the computer (not shown). For example, the three-dimensional localization software may be of a type that uses multiple non-collinear calibration points 14a-14c on the target object 10 to define the location (position and orientation) of video camera 40 relative to the target object 10. Typically calibration points are selected which correspond to features that can be easily located on the target object 10.” As in 42 “In accordance with the setup shown in FIG. 1, either the target object 10 can be moved from Object Position 1 to Object Position 2 while the LPS is stationary, or the LPS can be moved from Measurement Instrument Position 1 to Measurement Instrument Position 2 while the target object 10 is stationary. In either case, relative location data (position and orientation) can be acquired.”]; determining a second set of data related to at least one of position and orientation of the at least three calibration objects when the target object is in a second state different from the first state [(see at least Fig.1, paragraph 42) “In accordance with the setup shown in FIG. 1, either the target object 10 can be moved from Object Position 1 to Object Position 2 while the LPS is stationary, or the LPS can be moved from Measurement Instrument Position 1 to Measurement Instrument Position 2 while the target object 10 is stationary. In either case, relative location data (position and orientation) can be acquired.”]; determining a transformation relationship between the first set of data and the second set of data [(see at least Figs.2-3, paragraphs 43-44) As in 43 “FIG. 2 shows steps of a method in accordance with one embodiment for measuring the location offset or difference when either the target object 10 or the LPS is moved. First, the system determines the X, Y, Z values for the three measured points 14a-14c when the target object 10 and the LPS are at their initial locations (step 22). Then the system determines the X, Y, Z values for the three measured points 14a-14c after either the target object 10 or the LPS has been moved to a new location (step 24). These values are input into a relative localization process 20 which processes the X, Y, Z values for the initial and moved locations and then outputs relative location data (step 26).” As in 44 “Thereafter the offset position and orientation of the target object 10 (this will be discussed in detail later) are computed (step 58) by the LPS computer, which retrieves and processes the stored measurement data. The results are displayed to the user or sent to other application(s) (step 60).”]; determining a calibrated object coordinate system based on the object coordinate system and the transformation relationship [(see at least paragraph 41) “Using the measured data, the calibration process computes the 4×4 homogeneous transformation matrix that defines the position and orientation of the video camera 40 (and laser range meter) relative to the target object 10.”]; and controlling the robot to process the target object in a predetermined way under the calibrated object coordinate system. [(see at least Figs.6-7, paragraphs 75-78) As in 76 “The LPS seen in FIG. 7 can be used to determine the relative offset between a part 90 and the base 84 of a robotic arm 86 that may carry an end effector 88 on a distal end thereof. The robot controller 80 controls the robotic arm 86 and operates the end effector 88 for performing machining or other operations on the part 90. The LPS computer 48 communicates with the robot controller 80 through a cable 78. The robot controller 80 is preferably a computer programmed with motion control software whereby the location of end effector 88 can be controlled as a function of object location data output by the LPS computer 48.” As in 77 “The location offset between base 84 and part 90 can be determined in the manner previously described with reference to FIG. 6. First, the LPS computer 48 determines the X, Y, Z values for measured points 92a-92c on part 90 and the X, Y, Z values for measured points 94a-94c on base 84 when base 84 and part 90 are at their respective initial locations. After base 84 and part 90 have moved to new locations, again LPS is used to determine the X, Y, Z values for measured points 92a-92c and 94a-94c. The LPS system 48 then uses a relative localization process to produce data representing the location of the part 90 relative to the base 84 at their new locations. This data is output to the robot controller 80 via cable 78.”]
Regarding claim 18, Troy teaches wherein each of the at least three calibration objects is one of a spherical structure, a hemispherical structure, a square structure, a rectangular structure or a triangle structure. [(see at least Fig.1- elements 14a-14c, paragraph 39) “The local positioning system shown in FIG. 1 further comprises three-dimensional localization software which is loaded into the computer (not shown). For example, the three-dimensional localization software may be of a type that uses multiple non-collinear calibration points 14a-14c on the target object 10 to define the location (position and orientation) of video camera 40 relative to the target object 10. Typically calibration points are selected which correspond to features that can be easily located on the target object 10.”] Examiner notes that the calibration points are spherical in shape as depicted in Fig.1.
Regarding claim 19, Troy teaches wherein the at least three calibration objects are arranged on at least one of the target object and a fixture to which the target object is fixed. [(see at least Fig.1, paragraph 40) “The measured distances to the calibration points 14a-14c may be used in coordination with the pan and tilt angles from the pan-tilt mechanism 42 to solve for the camera position and orientation relative to the target object 10. A method for generating an instrument to target calibration transformation matrix (sometimes referred to as the camera pose) is disclosed in U.S. Pat. No. 7,859,655. Using the measured data, the calibration process computes the 4×4 homogeneous transformation matrix that defines the position and orientation of the video camera 40 (and laser range meter) relative to the target object 10.”]
Regarding claim 20, Troy teaches wherein the object coordinate system is a simulation coordinate system, and the first set of data are obtained from the simulation coordinate system; or wherein the object coordinate system is a physical coordinate system, and the first set of data are determined by the probe of the robot in the physical coordinate system. [(see at least Fig.7, paragraphs 36-43) As in 36 “The LPS comprises a single video camera 40 and a laser range meter (not shown) on a controllable pan-tilt mechanism 42 with angle measurement capability mounted on a tripod 44. The video camera 40 may have automated (remotely controlled) zoom capabilities. The video camera 40 may additionally include an integral crosshair generator to facilitate precise locating of a point within an optical image field display of the video camera. The video camera 40 and pan-tilt mechanism 42 may be operated by a computer (not shown in FIG. 1, but see LPS computer 48 in FIG. 6). The computer communicates with the video camera 40 and the pan-tilt mechanism 42 through a video/control cable. Alternatively, the computer may communicate with video camera 40 and pan-tilt mechanism 42 through a wireless communication pathway.” As in 41 “Once the position and orientation of the video camera 40 with respect to the target object 30 have been determined and a camera pose transformation matrix has been generated, camera pan data (angle of rotation of the video camera 40 about the azimuth axis) and tilt data (angle of rotation of the video camera 40 about the elevation axis) may be used in conjunction with the calculated position and orientation of the video camera 40 to determine the X, Y and Z coordinates of any point of interest on the target object 30 in the coordinate system of the initial location of the target object 10.” As in 43 “FIG. 2 shows steps of a method in accordance with one embodiment for measuring the location offset or difference when either the target object 10 or the LPS is moved. First, the system determines the X, Y, Z values for the three measured points 14a-14c when the target object 10 and the LPS are at their initial locations (step 22). Then the system determines the X, Y, Z values for the three measured points 14a-14c after either the target object 10 or the LPS has been moved to a new location (step 24). These values are input into a relative localization process 20 which processes the X, Y, Z values for the initial and moved locations and then outputs relative location data (step 26).”]
Regarding claim 21, Troy teaches wherein, when the target object is in the second state, at least one of a position and an orientation of the target object is changed with respect to that of the first state. [(see at least Fig.1, paragraphs 41-43) As in 42 “In accordance with the setup shown in FIG. 1, either the target object 10 can be moved from Object Position 1 to Object Position 2 while the LPS is stationary, or the LPS can be moved from Measurement Instrument Position 1 to Measurement Instrument Position 2 while the target object 10 is stationary. In either case, relative location data (position and orientation) can be acquired.”]
Regarding claim 22, Troy teaches wherein the transformation relationship comprises a transformation matrix between the first set of data and the second set of data; and wherein the transformation matrix comprises a translation matrix and a rotation matrix. [(see at least Figs.3-4, paragraphs 48-55) As in 48 “The position and orientation offset determination process starts with specifying or placing a set of three non-collinear points on the target object to be measured by LPS. In one embodiment of the reference frame definition process, one of the points in the set of points may be defined as the origin, another specified as one of the coordinate axes (such as the x-axis), and the third point may be used in specifying the direction of one of the orthogonal axes (such as the z-axis) using a vector cross product calculation. A third axis that is orthogonal to the other two will be determined using another vector cross product step. These steps are shown in Eqs. (1) below:”]
Regarding claim 23, Troy teaches wherein the predetermined way comprises a path; and wherein the path and an origin point of the object coordinate system meet a first relation; wherein the path and an origin point of the calibrated object coordinate system meet the first relation. [(see at least paragraph 45, Clm 1) As in 45 “Some use cases for this process include setting the position and orientation offset for a workpiece in a manufacturing workcell or inspection application. An example of the use of this process could be for object-to-robot alignment in a robotic workcell, where large parts or a robot are moved relative to each other before starting an assembly or inspection task. In one such application the robot may be programmed to move its end effector along a specific path relative to the workpiece (target object) in the workcell. But the location of the workpiece relative to the robot may be variable, and subsequent workpieces of the same type may not be in the same location as the initial workpiece that was used during initial programming of the robot. In this situation, if the difference between the location of the current workpiece and the location of the initial workpiece were known, the offset could be sent to the robot and used as a base or origin location offset in the robot control program. The process described above with reference to FIG. 3 can be used to address this type of application.”]
Regarding claim 24, Troy teaches wherein the target object is a special-shaped object. [(see at least paragraph 39) “For example, the three-dimensional localization software may be of a type that uses multiple non-collinear calibration points 14a-14c on the target object 10 to define the location (position and orientation) of video camera 40 relative to the target object 10. Typically calibration points are selected which correspond to features that can be easily located on the target object 10.”]
The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested of the Applicant, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP § 2123.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED YOUSEF ABUELHAWA whose telephone number is (571)272-3219. The examiner can normally be reached Monday-Friday 8:30-5:00 with flex.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles, can be reached at 571-270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED YOUSEF ABUELHAWA/Examiner, Art Unit 3656
/WADE MILES/Supervisory Patent Examiner, Art Unit 3656