DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/16/2024 is in compliance
with the provisions of 37 CFR 1.97 and has been considered by the examiner.
Claim Objections
Claims 4 and 13-14 are objected to because of the following informalities:
Claim 4 should be dependent on claim 3.
In claim 13, line 1, “method of 10” should read “method of claim 10”.
Claim 14 should be dependent on claim 13.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-2, 10-11, 18 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ramirez Luna et al. (US 2019/0327394 A1).
Regarding claim 1, Ramirez Luna discloses a method, comprising: determining 3D positions of an object within a reference coordinate system based on a plurality of stereo-images comprising the object (paragraph 0502: “The stereoscopic visualization camera 300 is set to visualize the spheres at the calibration target simultaneously and determine their position through the use of parallax in the stereoscopic image. The processor 4102 and/or the robotic arm controller 4106 records the positions of the spheres in an initial coordinate system, for example, X, Y, and Z with respect to a fiducial in the camera 300 (i.e., "camera space")”); generating a transformation matrix based on the determined 3D positions of the object with respect to a reference position (paragraph 0502: “The processor 4102 and/or the robotic arm controller 4106 are configured to perform a coordinate transformation between the camera space and robot space based on the positions of the spheres of the calibration target as recorded by the camera, and as the positions of the robotic arm 506 and/or coupling plate 3304”); and calibrating one or more motion control systems based on the determined transformation matrix (paragraph 0496: “The calibration procedure for the robotic arm 506”).
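The camera-space-to-robot-space coordinate transformation described in paragraph 0502 amounts to estimating a rigid transform from matched point sets. A minimal sketch using the standard Kabsch (SVD) method, with hypothetical sphere coordinates rather than anything from the reference:

```python
import numpy as np

def rigid_transform(camera_pts, robot_pts):
    """Least-squares rotation R and translation t mapping camera-space
    points onto robot-space points (Kabsch method)."""
    cc = camera_pts.mean(axis=0)                 # camera-space centroid
    rc = robot_pts.mean(axis=0)                  # robot-space centroid
    H = (camera_pts - cc).T @ (robot_pts - rc)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation, det = +1
    t = rc - R @ cc
    return R, t

# Hypothetical sphere positions recorded in camera space ...
camera = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
# ... and the same spheres in robot space: a 90-degree rotation
# about Z plus a translation of (5, 2, 1).
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
robot = camera @ Rz.T + np.array([5.0, 2.0, 1.0])

R, t = rigid_transform(camera, robot)
recovered = camera @ R.T + t
print(np.allclose(recovered, robot))  # True
```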
Regarding claim 2, Ramirez Luna discloses the method of claim 1, further comprising: acquiring the plurality of stereo-images at a fixed distance between the object and an imaging sensor (paragraph 0501: “The processor 4102 and/or the robotic arm controller 4106 move the stereoscopic visualization camera 300 to a start position, which may include a stow position, a reorientation position, or a surgical position. The stereoscopic visualization camera 300 then moves the camera from the start position to a position that approximately visualizes a calibration target located on the stationary base 3404 of the robotic arm 506”).
Regarding claim 10, Ramirez Luna discloses a method, comprising: determining a 3D position of an object within a reference coordinate system based on a stereo-image of the object (paragraph 0502: “The stereoscopic visualization camera 300 is set to visualize the spheres at the calibration target simultaneously and determine their position through the use of parallax in the stereoscopic image. The processor 4102 and/or the robotic arm controller 4106 records the positions of the spheres in an initial coordinate system, for example, X, Y, and Z with respect to a fiducial in the camera 300 (i.e., "camera space")”); generating a 3D offset value between the determined 3D position and a reference location of the object (paragraph 0502: “The processor 4102 and/or the robotic arm controller 4106 are configured to perform a coordinate transformation between the camera space and robot space based on the positions of the spheres of the calibration target as recorded by the camera, and as the positions of the robotic arm 506 and/or coupling plate 3304”); updating the 3D position using the 3D offset value (paragraphs 0507-0508: “The three-dimensional space shown in FIG. 49 is modeled using a sequence of ten homogeneous transformations, which may include matrix multiplications…to calculate position of the frames R1 to R10 to determine the three-dimensional position of the robotic arm 506, the coupling plate 3304, and/or the camera 300”); and positioning the object based on the corrected 3D position (paragraph 0506: “the processor 4102 and/or the robotic arm controller 4106 may use the mathematical model to determine, for example, a current position of the robotic arm 506 and/or camera 300, which may be used for calculating how joints are to be rotated based on intended movement provided by an operator”).
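The parallax-based position determination quoted from paragraph 0502 reduces to standard rectified-stereo triangulation, in which depth is inversely proportional to disparity. The focal length, baseline, and pixel coordinates below are hypothetical, chosen only to illustrate the geometry:

```python
def triangulate(x_left, x_right, y, focal_px, baseline):
    """Recover a 3D point (camera coordinates) from a rectified
    stereo pair: depth is inversely proportional to disparity."""
    disparity = x_left - x_right          # pixels; > 0 for points in front
    Z = focal_px * baseline / disparity   # depth along the optical axis
    X = x_left * Z / focal_px             # lateral offset
    Y = y * Z / focal_px                  # vertical offset
    return X, Y, Z

# Hypothetical values: 800 px focal length, 60 mm baseline, and a
# feature at (120, 40) in the left image with 16 px of disparity.
X, Y, Z = triangulate(120.0, 104.0, 40.0, 800.0, 60.0)
print(Z)  # depth in mm: 800 * 60 / 16 = 3000.0
```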
Regarding claim 11, Ramirez Luna discloses the method of claim 10, wherein the 3D offset value is obtained by: determining a plurality of 3D positions of the object within the reference coordinate system based on analysis of a plurality of stereo-images comprising the object (paragraph 0502: “The stereoscopic visualization camera 300 is set to visualize the spheres at the calibration target simultaneously and determine their position through the use of parallax in the stereoscopic image. The processor 4102 and/or the robotic arm controller 4106 records the positions of the spheres in an initial coordinate system, for example, X, Y, and Z with respect to a fiducial in the camera 300 (i.e., "camera space")”); generating a transformation matrix based on the determined 3D positions of the object with respect to a reference position (paragraph 0502: “The processor 4102 and/or the robotic arm controller 4106 are configured to perform a coordinate transformation between the camera space and robot space based on the positions of the spheres of the calibration target as recorded by the camera, and as the positions of the robotic arm 506 and/or coupling plate 3304”); and determining the 3D offset value in accordance with the transformation matrix (paragraphs 0507-0508: “The three-dimensional space shown in FIG. 49 is modeled using a sequence of ten homogeneous transformations, which may include matrix multiplications…to calculate position of the frames R1 to R10 to determine the three-dimensional position of the robotic arm 506, the coupling plate 3304, and/or the camera 300”).
Regarding claim 18, Ramirez Luna discloses the method of claim 11, wherein determining the transformation matrix comprises: determining a set of calibration displacement values from the reference position for the plurality of stereo-images (paragraph 0502: “The processor 4102 and/or the robotic arm controller 4106 are configured to perform a coordinate transformation between the camera space and robot space based on the positions of the spheres of the calibration target as recorded by the camera, and as the positions of the robotic arm 506 and/or coupling plate 3304”).
Regarding claim 20, Ramirez Luna discloses the method of claim 11, further comprising: acquiring one or a plurality of stereo-images comprising the object at a fixed distance between the object and an imaging sensor (paragraph 0501: “The processor 4102 and/or the robotic arm controller 4106 move the stereoscopic visualization camera 300 to a start position, which may include a stow position, a reorientation position, or a surgical position. The stereoscopic visualization camera 300 then moves the camera from the start position to a position that approximately visualizes a calibration target located on the stationary base 3404 of the robotic arm 506”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 3-6, 9, 12-14 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ramirez Luna in view of Shimamura et al. (US 2012/0183205 A1).
Regarding claim 3, Ramirez Luna discloses the method of claim 1. However, Ramirez Luna fails to explicitly disclose each of the plurality of stereo-images includes a first image portion and a second image portion, wherein determining the 3D positions comprises determining a first position value for the first image portion and a second position value for the second image portion for each of the plurality of stereo-images; and for each of the plurality of stereo-images, determining a 3D position of the object based on the first and second position values. In the related art of stereo imaging, Shimamura discloses each of the plurality of stereo-images includes a first image portion (Shimamura FIG. 3A: target portion 64a) and a second image portion (Shimamura FIG. 3A: target portion 62a), wherein determining the 3D positions comprises determining a first position value for the first image portion (Shimamura paragraph 0010: “a tracking point corresponding to a position of the 2D image of the target portion in the one of the images may be obtained”) and a second position value for the second image portion for each of the plurality of stereo-images (Shimamura paragraph 0010: “a corresponding point of the tracking point in another of the images that constitutes the stereo image at each of the times may be extracted”); and for each of the plurality of stereo-images, determining a 3D position of the object based on the first and second position values (Shimamura paragraph 0010: “the stereo measurement may be executed relative to the tracking point and the corresponding point to thereby obtain the 3D coordinates of the target portion”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ramirez Luna to incorporate the teachings of Shimamura to efficiently and accurately measure the 3D displacement of a target portion of an object based on successively captured images of the object (Shimamura paragraph 0013).
Regarding claim 4, Ramirez Luna, modified by Shimamura, discloses the method of claim 3, wherein generating the transformation matrix comprises: determining a displacement value from the reference position for each of the first image portion and the second image portion for each of the plurality of stereo-images (Shimamura FIG. 4: 2D displacement vectors 70, 72).
Regarding claim 5, Ramirez Luna, modified by Shimamura, discloses the method of claim 4, wherein determining 3D positions comprises template matching (Shimamura paragraph 0033: “In the pattern matching processing, for example, a brightness distribution pattern of the correlation template resembles which part of a subsequent orthographically projected image 60b is determined”).
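The pattern-matching processing quoted from Shimamura, in which the brightness-distribution pattern of a correlation template is compared against portions of a later image, is conventionally implemented as normalized cross-correlation. The sketch below uses synthetic data and is illustrative only, not the reference's own implementation:

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the top-left
    position with the highest normalized cross-correlation score."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * tn
            score = (w * t).sum() / denom if denom else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# Synthetic 20x20 image with a known 4x4 patch planted at row 7, col 11.
rng = np.random.default_rng(0)
img = rng.random((20, 20))
patch = np.arange(16, dtype=float).reshape(4, 4)
img[7:11, 11:15] = patch

pos, score = match_template(img, patch)
print(pos)  # (7, 11)
```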
Regarding claim 6, Ramirez Luna, modified by Shimamura, discloses the method of claim 5, wherein the template matching is performed via a pre-annotated template (Ramirez Luna paragraph 0234: “a calibration routine may determine positions of the set 724 and 730 on a rail corresponding to when character "E" on a template at the target site 700 is displayed in right and left images as having a height of 10 pixels”).
Regarding claim 9, Ramirez Luna, modified by Shimamura, discloses the method of claim 4, wherein determining 3D positions comprises using a corner or edge detection method (Ramirez Luna paragraph 0373: “The example processor 1562 may measure and verify optimal focus by monitoring a signal relating to the focus of one or both of the right and left images...The signal changes as focus changes and may be determined from…an edge detection analysis program”).
Regarding claim 12, Ramirez Luna discloses the method of claim 10, wherein the stereo-image is acquired via a first light beam and via a second light beam (Ramirez Luna paragraph 0135: “To illuminate the target site 700, the example stereoscopic visualization camera 300 includes one or more lighting sources…The example light sources 708 are configured to generate light, which is projected to the target scene 700”). However, Ramirez Luna fails to explicitly disclose the stereo-image includes a first image portion and a second image portion. In related art, Shimamura discloses the stereo-image includes a first image portion (Shimamura FIG. 3A: target portion 64a) and a second image portion (Shimamura FIG. 3A: target portion 62a). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ramirez Luna to incorporate the teachings of Shimamura to efficiently and accurately measure the 3D displacement of a target portion of an object based on successively captured images of the object (Shimamura paragraph 0013).
Regarding claim 13, Ramirez Luna discloses the method of claim 10. However, Ramirez Luna fails to explicitly disclose the stereo-image includes a first image portion and a second image portion, wherein determining the 3D position comprises: determining a first position value for the first image portion and a second position value for the second image portion. In related art, Shimamura discloses the stereo-image includes a first image portion (Shimamura FIG. 3A: target portion 64a) and a second image portion (Shimamura FIG. 3A: target portion 62a), wherein determining the 3D position comprises: determining a first position value for the first image portion (Shimamura paragraph 0010: “a tracking point corresponding to a position of the 2D image of the target portion in the one of the images may be obtained”) and a second position value for the second image portion (Shimamura paragraph 0010: “a corresponding point of the tracking point in another of the images that constitutes the stereo image at each of the times may be extracted, and the stereo measurement may be executed relative to the tracking point and the corresponding point to thereby obtain the 3D coordinates of the target portion”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ramirez Luna to incorporate the teachings of Shimamura to efficiently and accurately measure the 3D displacement of a target portion of an object based on successively captured images of the object (Shimamura paragraph 0013).
Regarding claim 14, Ramirez Luna, modified by Shimamura, discloses the method of claim 13, wherein determining the 3D offset value comprises: determining a displacement value based on a difference between the first position value and the second position value and respective values of the reference location of the object (Shimamura FIG. 4: 2D displacement vectors 70, 72).
Regarding claim 17, Ramirez Luna, modified by Shimamura, discloses the method of claim 14, wherein the displacement value for each of the first image portion and the second image portion is determined by: using a corner or edge detection method (Ramirez Luna paragraph 0373: “The example processor 1562 may measure and verify optimal focus by monitoring a signal relating to the focus of one or both of the right and left images...The signal changes as focus changes and may be determined from…an edge detection analysis program”).
Claim(s) 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ramirez Luna and Shimamura in view of Matta et al. (US 2019/0367012 A1).
Regarding claim 7, Ramirez Luna, modified by Shimamura, discloses the method of claim 6, wherein each of the first image portion and the second image portion is being matched with the pre-annotated template (Shimamura FIG. 3, paragraph 0033: “using as a correlation template the partial images 66a, 68a containing the 2D images 62a, 64a, respectively, pattern matching processing is executed relative to an orthographically projected image 60b”). However, Ramirez Luna and Shimamura fail to explicitly disclose the pre-annotated template comprises at least two corners of the template pre-annotated for determining a center position of the object. In the related art of template matching, Matta discloses the pre-annotated template comprises at least two corners of the template pre-annotated for determining a center position of the object (Matta paragraph 0048: “At 409, the process finds the center of the marker for which the coordinates of the template region and corners obtained from template matching and Harris detection respectively are used. As shown in FIG. 3(d), four points surrounding the center are obtained by mathematical operation on the dimensions of the template region window to define a square-shaped search window for the center. The corner coordinates output from Harris detection are iterated over, to find the coordinate that lies in this search window and the point obtained is the center of the marker”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ramirez Luna and Shimamura to incorporate the teachings of Matta to determine the center position of the object for further calculations, such as the calculation of the 2D displacement vectors (Shimamura FIG. 4, paragraph 0034).
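Matta's combination of template matching with Harris detection relies on a corner-response map. A simplified Harris-style sketch (box smoothing in place of the usual Gaussian, synthetic input; illustrative only, not Matta's implementation):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response: large positive values at corners,
    negative along edges, near zero in flat regions."""
    # Image gradients via central differences.
    Iy, Ix = np.gradient(img.astype(float))

    def box(a):
        # 3x3 box filter (edge-padded) to smooth the structure tensor.
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# A white square on a black background: the response map should
# peak at the square's corners, not along its edges.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
resp = harris_response(img)
r, c = np.unravel_index(np.argmax(resp), resp.shape)
print(r, c)  # peak lands on a corner of the square
```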
Regarding claim 15, Ramirez Luna, modified by Shimamura, discloses the method of claim 14, wherein the displacement value for each of the first image portion and the second image portion is determined by template matching performed via a pre-annotated template and each of the first image portion and the second image portion is being matched with the pre-annotated template (Shimamura FIG. 3, paragraph 0033: “using as a correlation template the partial images 66a, 68a containing the 2D images 62a, 64a, respectively, pattern matching processing is executed relative to an orthographically projected image 60b”). However, Ramirez Luna and Shimamura fail to explicitly disclose the pre-annotated template comprises at least two corners of the template pre-annotated for determining a center position of the object. In related art, Matta discloses the pre-annotated template comprises at least two corners of the template pre-annotated for determining a center position of the object (Matta paragraph 0048: “At 409, the process finds the center of the marker for which the coordinates of the template region and corners obtained from template matching and Harris detection respectively are used. As shown in FIG. 3(d), four points surrounding the center are obtained by mathematical operation on the dimensions of the template region window to define a square-shaped search window for the center. The corner coordinates output from Harris detection are iterated over, to find the coordinate that lies in this search window and the point obtained is the center of the marker”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ramirez Luna and Shimamura to incorporate the teachings of Matta to determine the center position of the object for further calculations, such as the calculation of the 2D displacement vectors (Shimamura FIG. 4, paragraph 0034).
Claim(s) 8, 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ramirez Luna and Shimamura in view of Qiu (US 2021/0034911 A1).
Regarding claim 8, Ramirez Luna, modified by Shimamura, discloses the method of claim 4, wherein the displacement value for each of the first image portion and the second image portion is determined by: matching an area of interest (AOI) in each of the first image portion and the second image portion with a mask template of the object (Shimamura FIG. 3, paragraph 0033: “using as a correlation template the partial images 66a, 68a containing the 2D images 62a, 64a, respectively, pattern matching processing is executed relative to an orthographically projected image 60b”); determining a center position for each of the first image portion and the second image portion upon matching the AOIs of the first image portion and the second image portion (Shimamura FIG. 4, paragraph 0038: “The 3D coordinate calculation processing unit 32…executes stereo measuring processing relative to the tracking point and the corresponding point to obtain the 3D coordinates of the target portion at the respective times”); and calculating the displacement value by determining a difference between the center position and the reference position of the object in each of the first image portion and the second image portion (Shimamura FIG. 4: 2D displacement vectors 70, 72). However, Ramirez Luna and Shimamura fail to explicitly disclose using a fast Fourier transform (FFT) template method to match the area of interest (AOI) with a mask template of the object. 
In the related art of template matching, Qiu discloses using a fast Fourier transform (FFT) template method to match the area of interest (AOI) with a mask template of the object (Qiu paragraph 0009: “image data for the template image and multi-directional image searching area within the source image may be transformed from the 2-dimensional (2D) domain to 1D representations using…fast Fourier transform (FFT)…Searching for the template image may thus be performed in the searching area along the multiple (e.g., vertical and horizontal) directions of the multi-directional searching pattern by correlating the appropriate 1D representations of the template image and the searching area within the source image”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ramirez Luna and Shimamura to incorporate the teachings of Qiu to facilitate highly efficient searching within the source image (Qiu paragraph 0009).
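Qiu's paragraph 0009 describes correlating 1D FFT representations along multiple search directions; the more common 2D form of frequency-domain template matching, shown below as an illustrative stand-in rather than Qiu's method, multiplies the image's FFT by the conjugate of the template's FFT:

```python
import numpy as np

def fft_correlate(image, template):
    """Locate a template in an image by cross-correlation computed in
    the frequency domain, which is far faster than a sliding window
    for large images."""
    H, W = image.shape
    # Zero-mean both signals so bright flat regions don't dominate.
    img = image - image.mean()
    tpl = template - template.mean()
    # Correlation theorem: corr = IFFT( FFT(image) * conj(FFT(template)) )
    F_img = np.fft.rfft2(img, s=(H, W))
    F_tpl = np.fft.rfft2(tpl, s=(H, W))
    corr = np.fft.irfft2(F_img * np.conj(F_tpl), s=(H, W))
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    return int(r), int(c)

# Synthetic 64x64 image with a known 5x5 template planted at (20, 33).
rng = np.random.default_rng(1)
img = rng.random((64, 64))
tpl = np.arange(25, dtype=float).reshape(5, 5)
img[20:25, 33:38] = tpl

print(fft_correlate(img, tpl))  # (20, 33)
```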
Regarding claim 16, Ramirez Luna, modified by Shimamura, discloses the method of claim 14, wherein the displacement value for each of the first image portion and the second image portion is determined by: matching an area of interest (AOI) in each of the first image portion and the second image portion with a mask template of the object (Shimamura FIG. 3, paragraph 0033: “using as a correlation template the partial images 66a, 68a containing the 2D images 62a, 64a, respectively, pattern matching processing is executed relative to an orthographically projected image 60b”); determining a center position for each of the first image portion and the second image portion (Shimamura FIG. 4, paragraph 0038: “The 3D coordinate calculation processing unit 32…executes stereo measuring processing relative to the tracking point and the corresponding point to obtain the 3D coordinates of the target portion at the respective times”); and calculating the displacement value by determining a difference between the center position and the reference position of the object in each of the first image portion and the second image portion (Shimamura FIG. 4: 2D displacement vectors 70, 72). However, Ramirez Luna and Shimamura fail to explicitly disclose using a fast Fourier transform (FFT) template method to match the area of interest (AOI) with a mask template of the object.
In related art, Qiu discloses using a fast Fourier transform (FFT) template method to match the area of interest (AOI) with a mask template of the object (Qiu paragraph 0009: “image data for the template image and multi-directional image searching area within the source image may be transformed from the 2-dimensional (2D) domain to 1D representations using…fast Fourier transform (FFT)…Searching for the template image may thus be performed in the searching area along the multiple (e.g., vertical and horizontal) directions of the multi-directional searching pattern by correlating the appropriate 1D representations of the template image and the searching area within the source image”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ramirez Luna and Shimamura to incorporate the teachings of Qiu to facilitate highly efficient searching within the source image (Qiu paragraph 0009).
Regarding claim 19, Ramirez Luna discloses the method of claim 18, wherein calculating the set of calibration displacement values utilizes a corner or edge detection method (Ramirez Luna paragraph 0373: “The example processor 1562 may measure and verify optimal focus by monitoring a signal relating to the focus of one or both of the right and left images...The signal changes as focus changes and may be determined from…an edge detection analysis program”). However, Ramirez Luna fails to explicitly disclose using a fast Fourier transform (FFT) template method to match an area of interest (AOI) in each of the plurality of stereo-images with a mask template of the object; determining a center position for each of the plurality of stereo-images; and calculating the set of calibration displacement values by determining a difference between the center position for each of the plurality of stereo-images and the reference position.
In related art, Shimamura discloses determining a center position for each of the plurality of stereo-images (Shimamura FIG. 4, paragraph 0038: “The 3D coordinate calculation processing unit 32…executes stereo measuring processing relative to the tracking point and the corresponding point to obtain the 3D coordinates of the target portion at the respective times”); and calculating the set of calibration displacement values by determining a difference between the center position for each of the plurality of stereo-images and the reference position (Shimamura FIG. 4: 2D displacement vectors 70, 72). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ramirez Luna to incorporate the teachings of Shimamura to efficiently and accurately measure the 3D displacement of a target portion of an object based on successively captured images of the object (Shimamura paragraph 0013). However, Ramirez Luna, modified by Shimamura, still fails to explicitly disclose using a fast Fourier transform (FFT) template method to match an area of interest (AOI) in each of the plurality of stereo-images with a mask template of the object.
In related art, Qiu discloses using a fast Fourier transform (FFT) template method to match an area of interest (AOI) in each of the plurality of stereo-images with a mask template of the object (Qiu paragraph 0009: “image data for the template image and multi-directional image searching area within the source image may be transformed from the 2-dimensional (2D) domain to 1D representations using…fast Fourier transform (FFT)…Searching for the template image may thus be performed in the searching area along the multiple (e.g., vertical and horizontal) directions of the multi-directional searching pattern by correlating the appropriate 1D representations of the template image and the searching area within the source image”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ramirez Luna and Shimamura to incorporate the teachings of Qiu to facilitate highly efficient searching within the source image (Qiu paragraph 0009).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lee et al. (US 2019/0080464 A1) discloses a stereo matching method and apparatus for generating a disparity map.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE ZHAO whose telephone number is (703)756-5986. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.Z./ Examiner, Art Unit 2677
/ANDREW W BEE/ Supervisory Patent Examiner, Art Unit 2677