DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is a Final Office Action on the merits. Claims 1-5 are currently pending and are addressed below.
Response to Amendment
1. The amendment filed 10/24/2025 has been entered. Claims 1-5 remain pending in the application.
Response to Arguments
2. Applicant’s arguments filed 10/24/2025 have been fully considered but they are not persuasive.
Regarding the rejection under 35 U.S.C. 102, Applicant argues on page 4 of the remarks that Shirakyan does not disclose that the second calibration range is entirely included in the first calibration range. The examiner respectfully disagrees. Shirakyan teaches a method and system for registration and calibration of a robotic arm and a sensor in which a first calibration and a second calibration are executed. Shirakyan further teaches executing the second calibration at a plurality of positions included in the second calibration range, which is entirely included in the first calibration range and is set with a higher density. Shirakyan states in paragraph [0040]: “The number and placement of the positions 302-316 can be predetermined (e.g., based upon a pre-determined placement grid) or actively identified (e.g., based upon where a larger mapping error is measured or expected). For instance, volumes within the workspace 106 that have (or are expected to have) lower mapping errors can be sparsely sampled, while volumes within the workspace 106 that have (or are expected to have) higher mapping errors can be more densely sampled.” Additionally, Shirakyan states in paragraph [0052]: “According to an example, a pattern used for recalibration can allow for sampling a portion of the workspace 106, whereas a previously used pattern allowed for sampling across the workspace 106. By way of another example, a pattern used for recalibration can allow for more densely sampling a given volume of the workspace 106.” Furthermore, Shirakyan states in paragraph [0053]: “Additionally or alternatively, the calibration component 122 can cause the entire workspace 106 to be resampled responsive to the mapping error exceeding the threshold error value.” Thus, Shirakyan teaches recalibrating a portion (second calibration range) of the workspace 106 (first calibration range).
Therefore, under the broadest reasonable interpretation (BRI), a portion of the workspace (the second calibration range) is entirely included in the workspace (the first calibration range).
Therefore, the prior art meets the claim limitations, and the Applicant’s arguments are not persuasive.
Information Disclosure Statement
3. The information disclosure statement (IDS) filed on 10/10/2025 has been annotated and considered by the examiner.
Claim Rejections - 35 USC § 102
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
5. Claims 1-5 are rejected under 35 U.S.C. 102(a)(2)/(a)(1) as being anticipated by Shirakyan et al. (US 20160059417, hereinafter Shirakyan).
Regarding claim 1, Shirakyan teaches a robot control device (see at least Figs. 1-4) comprising:
a controller configured to control a robot (see at least Fig. 1 and [0022]: “Referring now to the drawings, FIG. 1 illustrates a system 100 that controls a depth sensor 102 and a robotic arm 104 that operate in a workspace 106. The robotic arm 104 can include an end effector. Moreover, the system 100 includes a control system 108. The control system 108 can control the depth sensor 102 and the robotic arm 104; more particularly, the control system 108 can automatically control in-situ calibration and registration of the depth sensor 102 and the robotic arm 104 in the workspace 106.”), the controller configured to execute a first calibration of the robot in a plurality of first calibration positions included in a first calibration range set in an operating space of the robot (see at least [0037]: “During calibration (e.g., recalibration) of the depth sensor 102 and the robotic arm 104, the end effector 202 can be caused to non-continuously traverse through the workspace 106 based on a pattern, where the end effector 202 is stopped at positions within the workspace 106 according to the pattern. For example, the end effector 202 of the robotic arm 104 can be placed at regular intervals in the workspace 106. However, other patterns are intended to fall within the scope of the hereto appended claims (e.g., interval size can be a function of measured mapping error for a given volume in the workspace 106, differing preset intervals can be set in the pattern for a given type of depth sensor, etc.). Further, the depth sensor 102 can detect coordinates of a position of the end effector 202 (e.g., a calibration target on the end effector 202) in the workspace 106 in the sensor coordinate frame, while the robotic arm 104 can detect coordinates of the position of the end effector 202 (e.g., the calibration target) in the workspace 106 in the arm coordinate frame.
Thus, pairs of corresponding points in the sensor coordinate frame and the arm coordinate frame can be captured when the depth sensor 102 and the robotic arm 104 are calibrated (e.g., recalibrated).”); and
execute a second calibration of the robot in a plurality of second calibration positions that is included in a second calibration range which is entirely included in the first calibration range and that is set with a higher density than the at least one first calibration position (see at least [0040]: “The number and placement of the positions 302-316 can be predetermined (e.g., based upon a pre-determined placement grid) or actively identified (e.g., based upon where a larger mapping error is measured or expected). For instance, volumes within the workspace 106 that have (or are expected to have) lower mapping errors can be sparsely sampled, while volumes within the workspace 106 that have (or are expected to have) higher mapping errors can be more densely sampled. The foregoing can reduce an amount of time for performing calibration, while enhancing accuracy of a resulting transformation function.”; [0052]: “Recalibration performed by the calibration component 122, for instance, can include causing the end effector to non-continuously traverse through the workspace 106 based upon a pattern, where the end effector is stopped at positions within the workspace 106 according to the pattern. It is to be appreciated that the pattern used for recalibration can be substantially similar to or differ from a previously used pattern (e.g., a pattern used for calibration, a pattern used for prior recalibration, etc.). According to an example, a pattern used for recalibration can allow for sampling a portion of the workspace 106, whereas a previously used pattern allowed for sampling across the workspace 106.”; [0053]: “Responsive to the mapping error being greater than the threshold error value, the monitor component 126 can cause the calibration component 122 to recalibrate the depth sensor 102 and the robotic arm 104. 
For example, the calibration component 122 can cause a volume of the workspace 106 that includes the location to be resampled or more densely sampled responsive to the mapping error exceeding the threshold error value.” Thus, Shirakyan teaches recalibrating a portion (second calibration range) of the workspace 106 (first calibration range), where the portion is entirely included in the first calibration range and is sampled with a higher density.).
Regarding claim 2, Shirakyan teaches the limitations of claim 1. Shirakyan further teaches wherein the second calibration position is set, in a space in which the robot performs work, with a density that is determined based on accuracy of the work of the robot (see at least [0040]: “The number and placement of the positions 302-316 can be predetermined (e.g., based upon a pre-determined placement grid) or actively identified (e.g., based upon where a larger mapping error is measured or expected). For instance, volumes within the workspace 106 that have (or are expected to have) lower mapping errors can be sparsely sampled, while volumes within the workspace 106 that have (or are expected to have) higher mapping errors can be more densely sampled. The foregoing can reduce an amount of time for performing calibration, while enhancing accuracy of a resulting transformation function.”).
Regarding claim 3, Shirakyan teaches the limitations of claim 1. Shirakyan further teaches wherein the controller executes the first calibration and the second calibration based on a captured image of the operating space (see at least [0023]: “The control system 108 can create a mapping between coordinates extracted from a depth image generated by the depth sensor 102 and a corresponding Cartesian position of the robotic arm 104 (e.g., a position of the end effector of the robotic arm 104) in the workspace 106. Coordinates extracted from a depth image generated by the depth sensor 102 can be referred to herein as coordinates in a sensor coordinate frame (e.g., sensor coordinates in the sensor coordinate frame). Moreover, a Cartesian position of the robotic arm 104 in the workspace 106 can be referred to herein as coordinates in an arm coordinate frame (e.g., arm coordinates in the arm coordinate frame).”; [0035]: “The calibration component 122, at each position from the positions within the workspace 106 at which the end effector is stopped, can collect a sensor calibration point for the position of the end effector within the workspace 106 detected by the depth sensor 102 and an arm calibration point for the position of the end effector within the workspace 106 detected by the robotic arm 104. The sensor calibration point for the position can include coordinates of the end effector at the position within the workspace 106 in the sensor coordinate frame. According to an example, the coordinates of the end effector included as part of the sensor calibration point can be coordinates of a centroid (e.g., of a given portion of the end effector, of an object mechanically attached to the end effector, etc.), where the centroid can be computed based on image moments of a standard deviation image from the depth sensor 102.”).
Regarding claim 4, Shirakyan teaches the limitations of claim 1. Shirakyan further teaches a robot control system comprising: the robot control device according to claim 1; and the robot (see at least Fig. 4 and [0056]: “Turning to FIG. 4, illustrated is another system 400 that includes the control system 108 that controls the depth sensor 102 and the robotic arm 104 during calibration and registration. The control system 108 can include the interface component 116, the sample selection component 118, the interpolation component 120, the calibration component 122, the initialization component 124, the monitor component 126, and the data repository 110 as described herein.”).
Regarding claim 5, Shirakyan teaches a robot control method (see at least Figs. 8-9) comprising:
executing a first calibration of the robot in a plurality of first calibration positions included in a first calibration range set in an operating space of the robot (see at least [0037]: “During calibration (e.g., recalibration) of the depth sensor 102 and the robotic arm 104, the end effector 202 can be caused to non-continuously traverse through the workspace 106 based on a pattern, where the end effector 202 is stopped at positions within the workspace 106 according to the pattern. For example, the end effector 202 of the robotic arm 104 can be placed at regular intervals in the workspace 106. However, other patterns are intended to fall within the scope of the hereto appended claims (e.g., interval size can be a function of measured mapping error for a given volume in the workspace 106, differing preset intervals can be set in the pattern for a given type of depth sensor, etc.). Further, the depth sensor 102 can detect coordinates of a position of the end effector 202 (e.g., a calibration target on the end effector 202) in the workspace 106 in the sensor coordinate frame, while the robotic arm 104 can detect coordinates of the position of the end effector 202 (e.g., the calibration target) in the workspace 106 in the arm coordinate frame. Thus, pairs of corresponding points in the sensor coordinate frame and the arm coordinate frame can be captured when the depth sensor 102 and the robotic arm 104 are calibrated (e.g., recalibrated).”); and
executing a second calibration of the robot in a plurality of second calibration positions that is included in a second calibration range which is entirely included in the first calibration range and that is set with a higher density than the at least one first calibration position (see at least [0040]: “The number and placement of the positions 302-316 can be predetermined (e.g., based upon a pre-determined placement grid) or actively identified (e.g., based upon where a larger mapping error is measured or expected). For instance, volumes within the workspace 106 that have (or are expected to have) lower mapping errors can be sparsely sampled, while volumes within the workspace 106 that have (or are expected to have) higher mapping errors can be more densely sampled. The foregoing can reduce an amount of time for performing calibration, while enhancing accuracy of a resulting transformation function.”; [0052]: “Recalibration performed by the calibration component 122, for instance, can include causing the end effector to non-continuously traverse through the workspace 106 based upon a pattern, where the end effector is stopped at positions within the workspace 106 according to the pattern. It is to be appreciated that the pattern used for recalibration can be substantially similar to or differ from a previously used pattern (e.g., a pattern used for calibration, a pattern used for prior recalibration, etc.). According to an example, a pattern used for recalibration can allow for sampling a portion of the workspace 106, whereas a previously used pattern allowed for sampling across the workspace 106.”; [0053]: “Responsive to the mapping error being greater than the threshold error value, the monitor component 126 can cause the calibration component 122 to recalibrate the depth sensor 102 and the robotic arm 104.
For example, the calibration component 122 can cause a volume of the workspace 106 that includes the location to be resampled or more densely sampled responsive to the mapping error exceeding the threshold error value.” Thus, Shirakyan teaches recalibrating a portion (second calibration range) of the workspace 106 (first calibration range), where the portion is entirely included in the first calibration range and is sampled with a higher density.).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIEN MINH LE whose telephone number is (571)272-3903. The examiner can normally be reached Monday to Friday (8:30am-5:30pm eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran can be reached on (571)272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.M.L./Examiner, Art Unit 3656
/KHOI H TRAN/Supervisory Patent Examiner, Art Unit 3656