DETAILED ACTION
Notice of AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objection
2. Claim 11 is objected to because it depends on a cancelled claim. Appropriate correction is required.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Von Berg et al (US Pub: 2018/0116613) in view of Don et al (US Pub: 2017/0007196).
Regarding claim 1, Von Berg et al teaches: A method for posture recognition of a human body part to be imaged with an X-ray machine [p0002], comprising the steps of: inputting preset position information of the body part to be detected [p0053]; adjusting the body part into a field of view of a camera; using the camera to capture a natural image of the body part [p0064]; using a depth camera to capture a depth image of the body part [p0065]; establishing a spatial attitude deviation of the body part relative to a spatial coordinate system where the X-ray tube is located [p0013]; determining a desired position for the body part based on the spatial attitude deviation [p0056]; providing the desired position for the body part in real time to an operator [p0082]; and adjusting the body part to a new position based on the desired position [p0077, p0078]; wherein the camera is an RGB camera [p0007 (Color capture suggests an RGB camera.)].
In the same field of endeavor, Don et al provides a redundant teaching with an explicit focus on the X-ray tube: adjusting the body part into a field of view of a camera; using the camera to capture a natural image of the body part [abstract]; using a depth camera to capture a depth image of the body part [p0075, p0076]; establishing a spatial attitude deviation of the body part relative to a spatial coordinate system where the X-ray tube is located [p0057]; determining a desired position for the body part based on the spatial attitude deviation [p0056]; providing the desired position for the body part in real time to an operator [p0057]; and adjusting the body part to a new position based on the desired position [p0066]; wherein the camera is an RGB camera [p0032].
Therefore, the combined teaching of the two references would have made the claimed invention obvious to one of ordinary skill in the art: it allows an operator to see the positioning view in real time and properly adjust the position of a body part based on the spatial attitude deviation using an RGB depth camera, thereby reducing positioning errors and improving image quality.
Claim 19 has been analyzed and is rejected on the same grounds applied to claim 1.
5. Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Von Berg et al (US Pub: 2018/0116613) in view of Don et al (US Pub: 2017/0007196), and further in view of Cherevatsky et al (US Patent: 10,699,421).
Regarding claim 2 (Original), the rationale applied to the rejection of claim 1 has been incorporated herein. Von Berg et al in view of Don et al does not teach determining a center point of the body part. In the same field of endeavor, Cherevatsky et al teaches: The method of claim 1 further comprising the step of obtaining positioning information of the body part, determining a center point of the body part; and determining a rectangle surrounding the center point [col 9: lines 18-31]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of all the references to determine a rectangle surrounding a center point of the ROI, defining a boundary for robust localization.
Regarding claim 3 (Original), the rationale applied to the rejection of claim 2 has been incorporated herein. Cherevatsky et al further teaches: The method of claim 2 further comprising the step of obtaining the spatial coordinates of the center point, and using the internal parameters of the camera and internal parameters of the depth camera to create a transformation matrix to transform the center point to an RGB space coordinate system [col 24: lines 34-58, col 25: lines 32-46 (Forming a 3D point cloud from the depth frame using depth-camera parameters such as camera intrinsics, together with the extrinsics of the scene, is a form of transformation.)].
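For illustration only (no patentable weight attaches to the mathematics), the recited depth-to-RGB transformation can be sketched as a pinhole back-projection followed by a rigid-body transform. The function name and parameter layout below are illustrative assumptions, not material from any cited reference:

```python
import numpy as np

def depth_pixel_to_rgb_frame(u, v, depth, K_depth, R, t):
    """Back-project a depth pixel (u, v) with range `depth` to a 3D point
    in the depth camera's frame, then apply the extrinsic rotation R and
    translation t to express it in the RGB camera's coordinate system."""
    fx, fy = K_depth[0, 0], K_depth[1, 1]   # focal lengths (intrinsics)
    cx, cy = K_depth[0, 2], K_depth[1, 2]   # principal point (intrinsics)
    # Pinhole back-projection in the depth camera frame
    p_depth = np.array([(u - cx) * depth / fx,
                        (v - cy) * depth / fy,
                        depth])
    # Rigid-body transform (extrinsics) into the RGB camera frame
    return R @ p_depth + t
```

With identity extrinsics, the principal-point pixel maps straight down the optical axis, which is a quick sanity check on the sign conventions.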
6. Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Von Berg et al (US Pub: 2018/0116613), Don et al (US Pub: 2017/0007196), and Cherevatsky et al (US Patent: 10,699,421), and further in view of Herrera et al (Joint Depth and Color Camera Calibration with Distortion Correction, 10/2012).
Regarding claim 4 (Original), the rationale applied to the rejection of claim 2 has been incorporated herein. Von Berg et al in view of Don et al and Cherevatsky et al does not specify spatial coordinates of the rectangle corners. In the same field of endeavor, Herrera et al teaches: The method of claim 3 further comprising the step of obtaining spatial coordinates of corners of the rectangle [page 2059: 1.3, 2.2, page 2060: 2.3, page 2061: 3.1 (Computes 3D corners in depth camera coordinates and transforms them into the RGB camera frame.)]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of all the references to define corners for real-time guidance.
Regarding claim 12 (Original), the rationale applied to the rejection of claim 4 has been incorporated herein. Cherevatsky et al inherently teaches: The method of claim 4 wherein the corners of the rectangle are calculated as follows: P1 = (x - w/2, y - h/2); P2 = (x + w/2, y - h/2); P3 = (x + w/2, y + h/2); P4 = (x - w/2, y + h/2), wherein h and w are respectively the height and width of the rectangle [col 5: lines 6-36 (A seed box centered on a predicted position with buffers around that position.)].
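For illustration only, the recited corner computation is direct arithmetic on the center point and the rectangle dimensions; the function name below is illustrative:

```python
def rectangle_corners(x, y, w, h):
    """Corners of a w-by-h rectangle centered at (x, y), in the
    order P1..P4 recited by claim 12."""
    return [(x - w / 2, y - h / 2),   # P1: top-left
            (x + w / 2, y - h / 2),   # P2: top-right
            (x + w / 2, y + h / 2),   # P3: bottom-right
            (x - w / 2, y + h / 2)]   # P4: bottom-left
```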
7. Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Von Berg et al (US Pub: 2018/0116613), Don et al (US Pub: 2017/0007196), Cherevatsky et al (US Patent: 10,699,421), and Herrera et al (Joint Depth and Color Camera Calibration with Distortion Correction, 10/2012), and further in view of JP’584 (JP Pub: 2016529584).
Regarding claim 5 (Original), the rationale applied to the rejection of claim 4 has been incorporated herein. Von Berg et al in view of Don et al, Cherevatsky et al, and Herrera et al does not teach fitting a reference plane. In the same field of endeavor, JP’584 teaches: The method of claim 4 further comprising the step of performing regional sampling on the natural image to fit a reference plane where the human body surface is located [p0006, p0019, p0021, p0026, p0031 (Sampling a strip of depth data related to a captured image and fitting a reference surface/plane model from the strip’s depth samples.)]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of all the references to choose a sample region from an RGB image and use depth to compute 3D points that fit a reference plane, for reliably sampling an ROI with properly defined region consistency.
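For illustration only, fitting a reference plane to sampled 3D surface points can be sketched with a total-least-squares fit; this is a generic technique, not the specific method of JP’584:

```python
import numpy as np

def fit_reference_plane(points):
    """Fit a plane (normal . p = d) to an (N, 3) array of sampled surface
    points by total least squares: the plane normal is the direction of
    least variance of the centered samples, taken from the SVD."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # right-singular vector of smallest singular value
    return normal, normal @ centroid   # plane equation: normal . p = d
```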
Regarding claim 6 (Original), the rationale applied to the rejection of claim 5 has been incorporated herein. JP’584 in view of Herrera et al further teaches: The method of claim 5 further comprising the step of mapping the corners of the rectangle to the reference plane to obtain a mapping point [see rejections for claims 4 and 5]. JP’584 provides a reference plane estimated from sampled depth data and produces plane parameters, and Herrera et al teaches locating points on a plane along a viewing ray to estimate depth for selected corners. Specifically, Herrera’s calibration can be used to compute the viewing ray, and the intersection of that ray with JP’584’s reference plane represents the mapping point on the plane. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of all the references to determine a mapping point for each rectangle corner, improving computation.
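For illustration only, the corner-mapping concept described above (a viewing ray intersected with the reference plane) can be sketched as follows; writing the plane as normal . p = d and taking the camera center as the ray origin are illustrative assumptions:

```python
import numpy as np

def map_corner_to_plane(ray_dir, normal, d):
    """Intersect a camera viewing ray (origin at the camera center,
    direction ray_dir) with the reference plane normal . p = d, and
    return the 3D mapping point on the plane."""
    denom = normal @ ray_dir
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the reference plane")
    s = d / denom        # scale along the ray to reach the plane
    return s * ray_dir
```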
8. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Von Berg et al (US Pub: 2018/0116613), Don et al (US Pub: 2017/0007196), Cherevatsky et al (US Patent: 10,699,421), Herrera et al (Joint Depth and Color Camera Calibration with Distortion Correction, 10/2012), and JP’584 (JP Pub: 2016529584), and further in view of Olson et al (AprilTag: A robust and flexible visual fiducial system, 05/2011).
Regarding claim 7 (Original), the rationale applied to the rejection of claim 6 has been incorporated herein. Von Berg et al in view of Don et al, Cherevatsky et al, Herrera et al, and JP’584 does not specify an origin of the coordinate system. In the same field of endeavor, Olson et al teaches: The method of claim 6 further comprising the step of establishing a new spatial coordinate system with O' as the origin of the coordinate system using the equation [page 4: B (p04), C]. Olson et al detects a planar quad and obtains its four corners, defines a local tag coordinate system with its origin at the center and axes along the planar tag, and computes the tag’s pose from a corner-derived homography, which corresponds to establishing a new coordinate system from mapped corner points and deriving unit axes. Therefore, even though Olson et al does not print the exact midpoint formula, it teaches the same concept of defining axes from corner geometry and normalizing to obtain unit axes; and the combined teaching of all the references would have made it obvious to one of ordinary skill in the art to derive orthonormal axes and an origin from the four coplanar mapped rectangle corners for real-time positioning guidance. Note that no patentable weight is assessed on the mathematical equations.
[media_image1.png: claimed equations for the new spatial coordinate system with origin O' (greyscale)]
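For illustration only (no patentable weight attaches to the equations), the concept of deriving an origin and unit axes from four coplanar corners can be sketched as follows; the function and formulation are illustrative and are not the claim's printed equations:

```python
import numpy as np

def origin_and_axes(p1, p2, p3, p4):
    """Take O' as the midpoint of four coplanar mapped corners and derive
    unit axes from the rectangle edges; the normal comes from a cross
    product of the in-plane axes."""
    origin = (p1 + p2 + p3 + p4) / 4.0
    x_axis = (p2 - p1) / np.linalg.norm(p2 - p1)   # along one edge
    y_axis = (p4 - p1) / np.linalg.norm(p4 - p1)   # along the adjacent edge
    z_axis = np.cross(x_axis, y_axis)              # plane normal
    return origin, x_axis, y_axis, z_axis
```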
9. Claims 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Von Berg et al (US Pub: 2018/0116613), Don et al (US Pub: 2017/0007196), Cherevatsky et al (US Patent: 10,699,421), Herrera et al (Joint Depth and Color Camera Calibration with Distortion Correction, 10/2012), JP’584 (JP Pub: 2016529584), and Olson et al (AprilTag: A robust and flexible visual fiducial system, 05/2011), and further in view of Jens (EP Pub: 3157435) and Lovberg et al (US Patent: 9,779,502).
Regarding claim 8 (Original), the rationale applied to the rejection of claim 7 has been incorporated herein. Von Berg et al in view of Don et al, Cherevatsky et al, Herrera et al, JP’584, and Olson et al does not specify a rotation matrix.
In the same field of endeavor, Jens teaches: The method of claim 7 further comprising the step of determining a spatial coordinate system of the human body part to be detected [p0045], determining a rotation matrix relative to a coordinate system of an x-ray tube [p0013 (Comparing to a target position relative to an imaging arrangement.)], and determining an attitude deviation of the human body part to be detected relative to the x-ray tube [p0013].
Jens does not explicitly disclose the Euler angle. In the same field of endeavor, Lovberg et al teaches: determining a rotation matrix relative to a coordinate system of an x-ray tube and determining the Euler angle of the spatial coordinate system of the human body to be detected [col 7: lines 56-67]; and determining an attitude deviation of the human body part to be detected relative to the x-ray tube [col 25: lines 59-67, col 26: lines 1-20]. Therefore, the combined teaching of Jens and Lovberg et al would have made the predictable result of an attitude deviation output obvious to one of ordinary skill in the art.
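For illustration only, extracting Euler angles from a rotation matrix is standard geometry; the ZYX (yaw-pitch-roll) convention below is one common choice and is an assumption, not the convention of Jens or Lovberg et al:

```python
import numpy as np

def euler_zyx_from_rotation(R):
    """Extract ZYX Euler angles (yaw, pitch, roll) from a 3x3 rotation
    matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll); gimbal-lock cases where
    |R[2,0]| = 1 are not handled in this sketch."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll
```

In the posture-guidance context above, such angles would express the attitude deviation of the body-part frame relative to the X-ray tube frame in operator-readable terms.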
Regarding claim 11 (Original), the rationale applied to the rejection of claim 8 has been incorporated herein. Lovberg et al further teaches: The method of claim 8 further comprising the step of ignoring the error if the error belongs to a preset error threshold range [col 2: lines 12-20]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of all the references and ignore an error that falls within a desired range, as a matter of design choice.
10. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Von Berg et al (US Pub: 2018/0116613), Don et al (US Pub: 2017/0007196), Cherevatsky et al (US Patent: 10,699,421), Herrera et al (Joint Depth and Color Camera Calibration with Distortion Correction, 10/2012), and JP’584 (JP Pub: 2016529584), and further in view of Jin et al (Sensor Fusion for Fiducial Tags: Highly Robust Pose Estimation from Single Frame RGBD, 2017) and Ray (Ray Tracing Notes CS445 Computer Graphics, Fall 2010).
Regarding claim 13 (Original), the rationale applied to the rejection of claim 6 has been incorporated herein. Von Berg et al in view of Don et al, Cherevatsky et al, Herrera et al, and JP’584 does not disclose setting boundary vertices. In the same field of endeavor, Jin et al teaches: The method of claim 6 wherein the step of mapping the corners comprises the following steps: setting boundary vertices of the reference plane of the human body surface to U1, U2, U3, U4, as follows: wherein wd is the width and hc is the height of a flat panel imaging detector [page 5772: Depth Plane Fitting; page 5773: B p02]. Jin et al teaches fitting a plane, projecting corners onto the plane, and setting boundary vertices from known rectangle dimensions.
And Ray teaches the mapping math of ray-plane intersection in [page 7: 3.2]. Jin et al gives the operational step of mapping corner points onto a fitted plane to obtain p1..p4 and the idea of defining corner coordinates from a known rectangle, whereas Ray gives the plane equation and the ray-plane intersection mapping. Therefore, the combined teaching would have made it obvious to define a bounded planar rectangle region and map its corners onto the plane to obtain the corresponding 3D mapping points.
[media_image2.png: claimed boundary-vertex equations U1-U4 (greyscale)]
11. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Von Berg et al (US Pub: 2018/0116613), Don et al (US Pub: 2017/0007196), Cherevatsky et al (US Patent: 10,699,421), Herrera et al (Joint Depth and Color Camera Calibration with Distortion Correction, 10/2012), JP’584 (JP Pub: 2016529584), and Olson et al (AprilTag: A robust and flexible visual fiducial system, 05/2011), and further in view of Jin et al (Sensor Fusion for Fiducial Tags: Highly Robust Pose Estimation from Single Frame RGBD, 2017) and Ray (Ray Tracing Notes CS445 Computer Graphics, Fall 2010).
Regarding claim 14 (Original), the rationale applied to the rejection of claim 7 has been incorporated herein. Von Berg et al in view of Don et al, Cherevatsky et al, Herrera et al, JP’584, and Olson et al does not teach the claimed coordinate frame construction from the fitted plane. In the same field of endeavor, Jin et al teaches the reference plane, its normal, and the rotation matrix. Jin et al fits a plane from depth data, represents it with Hessian normal parameters, and computes a rigid-body pose whose rotation component is a rotation matrix R [page 5773: p02, B]; whereas Ray shows building an orthonormal frame from vectors using cross products and using a dot-product sign check to ensure the normal faces the desired direction [page 3: 2.2]. Therefore, given Jin et al’s fitted reference plane with its normal and pose rotation matrix, and Ray’s standard geometry for turning a plane normal into an orthonormal coordinate frame, the combined teaching would have made the following operation and orientation output obvious.
The method of claim 7, further comprising the steps of:
[media_image3.png: claimed equations (greyscale)]
[media_image4.png: claimed equations (greyscale)]
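For illustration only, Ray’s geometry for turning a plane normal into an orthonormal frame (cross products plus a dot-product sign check) can be sketched as follows; the seed-axis choice is an illustrative assumption, not a detail of any cited reference:

```python
import numpy as np

def frame_from_normal(normal, view_dir):
    """Build an orthonormal right-handed frame (as a rotation matrix)
    whose z-axis is the unit plane normal, flipped via a dot-product
    sign check so it faces against the viewing direction."""
    n = normal / np.linalg.norm(normal)
    if n @ view_dir > 0:                 # sign check: face the viewer
        n = -n
    # Seed with any axis not parallel to n, then orthogonalize by cross products
    seed = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(seed, n)
    x /= np.linalg.norm(x)
    y = np.cross(n, x)
    return np.column_stack([x, y, n])    # columns are the unit axes
```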
Allowable Subject Matter
12. Claims 15-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Contact
13. Any inquiry concerning this communication or earlier communications from the examiner should be directed to FAN ZHANG whose telephone number is (571)270-3751. The examiner can normally be reached on Mon-Fri 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Tieu can be reached on 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Fan Zhang/
Patent Examiner, Art Unit 2682