Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 5, 7, 12-15, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by EP 4439459 A1 to Huang.
Claim 1. Huang teaches one or more processors comprising processing circuitry to: [FIG. 10] The processing system 71 may e.g. include one or more of a CPU ("Central Processing Unit"), a DSP ("Digital Signal Processor")
extract at least one pair of cross-camera view images from one or more pairs of image frames from at least one pair of cameras having at least partially overlapping fields of view; FIG. 1A, cameras 2;
[0003] multiple cameras with overlapping fields of view and calculating the calibration data
[0035] …cameras 2 may be synchronized to capture a respective image… The cameras 2 are oriented with overlapping fields of view 20 and thus produce at least partly overlapping images of the scene 5.
associate at least one feature point of a feature detected from a first view image of the at least one pair of cross-camera view images with at least one matching feature point from a second view image of the at least one pair of cross-camera view images; [0036] the images captured by the respective camera 2 … to determine one or more keypoints of one or more objects in the respective image
FIG. 1A shows multiple views, and each keypoint LOC – [0037] the ODD represents each camera (view) by a respective view identifier (1-3 in FIG. 1B) … comprises a keypoint position for each keypoint detected for the respective object
[0041] a so-called symmetric epipolar distance (SED) is calculated to quantify the quality of the calibration data using the observations from the two views. The SED for two corresponding (associated) keypoints in first and second views
compute for the second view image at least one epipolar line based at least on a location of the at least one feature point within the first view image; [0041] FIG. 3B shows an example of reprojection in epipolar geometry. As known in the art, using epipolar geometry, a keypoint in one view may be reprojected as a line in another view by the use of the calibration data.
and determine a calibration validation score for the at least one pair of cameras [0059] a candidate track pair in [CTP] is also denoted CTP_i. It is to be understood that, for each CTP_i, keypoints are associated between the first and second views.
[0061] In step 307, [CI] is operated on each CTP_i in [CTP] to generate a set of match scores, [MS]_i, for this CTP_i. Each match score in [MS]_i is generated by operating a respective calibration item in [CI] on the associated keypoints of CTP_i.
based at least on a deviation between a location of the at least one matching feature point and the at least one epipolar line. [0061] The match score is generated to represent the ability of the respective calibration item to spatially match the two tracks in CTP_i to each other. In other words, the match score represents a spatial difference (or equivalently, spatial similarity) when one of the tracks in CTP_i is mapped onto the other track in CTP_i by use of the calibration item.
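For illustration only, the limitations mapped above (computing an epipolar line in the second view from a feature point in the first view, then measuring the deviation of the matching point from that line) can be sketched as follows. This sketch is not part of Huang's disclosure; the function name and the example fundamental matrix are hypothetical.

```python
import numpy as np

def epipolar_deviation(F, pt1, pt2):
    """Perpendicular distance from the matching point in the second view
    to the epipolar line induced in that view by the point in the first view."""
    x1 = np.array([pt1[0], pt1[1], 1.0])    # homogeneous feature point, view 1
    x2 = np.array([pt2[0], pt2[1], 1.0])    # homogeneous matching point, view 2
    line = F @ x1                           # epipolar line a*x + b*y + c = 0 in view 2
    return abs(x2 @ line) / np.hypot(line[0], line[1])

# Hypothetical fundamental matrix modeling a pure horizontal translation.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
# A matching point lying exactly on the epipolar line deviates by zero;
# a well-calibrated pair yields small deviations, a miscalibrated pair large ones.
d = epipolar_deviation(F, (10.0, 5.0), (40.0, 7.0))  # deviation of 2.0 pixels
```

A calibration validation score can then be derived from such deviations, e.g., by averaging or thresholding them over many keypoint pairs.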
Claim 2. Huang teaches wherein the one or more processors are further to: associate the at least one feature point of the feature detected from the first view image with the at least one matching feature point from the second view image based at least on a vector representing at least one feature descriptor of the at least one feature point. [0048] …the respective preliminary 3D pose may be represented by a pose vector of scene point locations.
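For illustration only, associating feature points by descriptor vectors, as recited in claim 2, is commonly done with a nearest-neighbor search over the descriptors. The sketch below is not drawn from Huang; the function name and the use of a ratio test are assumptions.

```python
import numpy as np

def match_by_descriptor(desc1, desc2, ratio=0.8):
    """Associate each keypoint in view 1 with its nearest keypoint in view 2
    by Euclidean distance between descriptor vectors, keeping a match only
    when the best distance clearly beats the second best (ratio test)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

desc1 = np.array([[0.0, 0.0], [10.0, 10.0]])
desc2 = np.array([[10.0, 10.1], [0.0, 0.1], [50.0, 50.0]])
pairs = match_by_descriptor(desc1, desc2)  # [(0, 1), (1, 0)]
```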
Claim 3. Huang teaches wherein the one or more processors are further to: compute the at least one epipolar line based at least on one or more extrinsic camera calibration parameters associated with the at least one pair of cameras. FIG. 3 teaches reprojection of the epipolar line using the extrinsic calibration of the cameras of FIG. 2.
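For illustration only, the standard way an epipolar line follows from extrinsic calibration parameters is via the fundamental matrix F = K2^-T [t]x R K1^-1. This sketch is not drawn from Huang; the function names are hypothetical.

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_extrinsics(K1, K2, R, t):
    """F = K2^-T [t]x R K1^-1; F @ x1 then gives the epipolar line of
    homogeneous pixel x1 in the second view."""
    E = skew(t) @ R                       # essential matrix from extrinsics
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)
```

For any 3D point visible in both views, the two projections x1 and x2 satisfy x2^T F x1 = 0; deviations from this relation are what a calibration validation score measures.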
Claim 5. Huang teaches wherein a first camera of the at least one pair of cameras captures a different angle of view than a second camera of the at least one pair of cameras. FIG. 1, cameras 2 – showing different angles of view;
[0003] process individual video streams from the cameras for detection of keypoints, identify correspondence between keypoints in different views
Claim 7. Huang teaches wherein the one or more processors are further to: aggregate a plurality of calibration validation scores for the at least one pair of cameras to produce a composite validation score based at least on a series of pairs of cross-camera view images captured over a span of time. FIG. 4A, elements 300A, 301, 302, 305, 307, and 308;
[0066] The use of movement tracks improves the ability of the method 300 to discriminate between different objects in the time sequences of images and also facilitates the matching of keypoints in image data from different cameras.
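For illustration only, aggregating per-frame-pair scores captured over a span of time into a composite validation score, as recited in claim 7, might look like the following robust average. The trimming choice is an assumption, not something taught by Huang.

```python
def composite_validation_score(per_frame_scores, trim=0.1):
    """Aggregate calibration validation scores collected over a span of time
    into one composite score, trimming the extremes so a few noisy frame
    pairs do not dominate."""
    s = sorted(per_frame_scores)
    k = int(len(s) * trim)              # number of scores to drop at each end
    kept = s[k:len(s) - k] if len(s) > 2 * k else s
    return sum(kept) / len(kept)

scores = [1.0] * 8 + [100.0, 0.0]       # 10 frame pairs, two outliers
composite = composite_validation_score(scores)  # 1.0 after trimming
```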
Claim 12. Huang teaches wherein the one or more processors are comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for three-dimensional assets; a system for performing deep learning operations; a system for performing remote operations; a system for performing real-time streaming; a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system implementing one or more language models; a system implementing one or more large language models (LLMs); a system for generating synthetic data; a system for generating synthetic data using AI; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources. [0002], [0097] teach various applications.
Claim 13. Reviewed and analyzed in the same way as claim 1. See the above analysis and rationale. [FIG. 2] the system
Claim 14. Reviewed and analyzed in the same way as claim 2. See the above analysis and rationale.
Claim 15. Reviewed and analyzed in the same way as claim 3. See the above analysis and rationale.
Claim 20. Huang teaches A method comprising: [FIG. 2] the method
generating an indication of calibration validation for at least one pair of cameras based at least on associating at least one feature point of a feature detected from a first view image with at least one matching feature point from a second view image [0059] a candidate track pair in [CTP] is also denoted CTP_i. It is to be understood that, for each CTP_i, keypoints are associated between the first and second views.
[0061] In step 307, [CI] is operated on each CTP_i in [CTP] to generate a set of match scores, [MS]_i, for this CTP_i. Each match score in [MS]_i is generated by operating a respective calibration item in [CI] on the associated keypoints of CTP_i. The match score is generated to represent the ability of the respective calibration item to spatially match the two tracks in CTP_i to each other.
and computing a deviation between a location of the at least one matching feature point and at least one epipolar line computed for the second view image based at least on the at least one feature point within the first view image. [0061]…In other words, the match score represents a spatial difference (or equivalently, spatial similarity) when one of the tracks in CTP_i is mapped onto the other track in CTP_i by use of the calibration item.
[0074] the reprojection distance is a symmetric epipolar distance (SED), and the match score is an average of SEDs for the associated keypoints between the first and second movement tracks
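For illustration only, the symmetric epipolar distance cited in [0074], and its average over the associated keypoints of a track pair, can be sketched as follows. The function names are hypothetical and the formulation is one common variant, not necessarily Huang's exact definition.

```python
import numpy as np

def sed(F, pt1, pt2):
    """Symmetric epipolar distance: the point-to-epipolar-line distance
    measured in both views and summed (one common formulation)."""
    x1 = np.array([pt1[0], pt1[1], 1.0])
    x2 = np.array([pt2[0], pt2[1], 1.0])
    l2 = F @ x1                            # epipolar line of pt1 in view 2
    l1 = F.T @ x2                          # epipolar line of pt2 in view 1
    return (abs(x2 @ l2) / np.hypot(l2[0], l2[1])
            + abs(x1 @ l1) / np.hypot(l1[0], l1[1]))

def match_score(F, track1, track2):
    """Average SED over the associated keypoints of two movement tracks."""
    return float(np.mean([sed(F, p1, p2) for p1, p2 in zip(track1, track2)]))
```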
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over EP 4439459 A1 to Huang in view of EP 4141482 A1 to Miao et al., hereinafter, “Miao”.
Claim 4. Huang fails to explicitly teach adjust one or more operations of an ego-machine based at least on the calibration validation score. However, Miao, in the field of autonomous vehicle camera calibration, teaches wherein the one or more processors are further to perform an operation comprising at least one of: adjust one or more operations of an ego-machine based at least on the calibration validation score; [0066] in response to the confidence score being below the threshold, performing an action assessment of the AV. The method may include, based on the action assessment, causing the AV to perform an action. The action may include one or more of the following: recalibrating the camera, altering a trajectory of the AV, or altering a velocity of the AV.
and generate an output indicating at least when the calibration validation score does not satisfy a validation criteria. Miao [0005] In response to the confidence score being below a threshold, the system will generate a signal indicating that the camera is not calibrated.
Huang is in the field of multi-camera, multi-image calibration. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Huang with the teachings of Miao [0003] because the calibration of an AV's cameras improves the accuracy of the images captured by the cameras and, therefore, also improves the accuracy of any object detection analysis performed on the images.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over EP 4439459 A1 to Huang in view of “Color calibration for multi-camera imaging systems” to Gurbuz et al., hereinafter, “Gurbuz”.
Claim 6. Huang fails to explicitly teach process at least one of the first view image and the second view image to correct for one or more distortions to increase a similarity of appearance of one or more features between the first view image and the second view image. Gurbuz, in the field of calibrating multi-camera images, teaches wherein the one or more processors are further to: process at least one of the first view image and the second view image to correct for one or more distortions to increase a similarity of appearance of one or more features between the first view image and the second view image. [III. Multi-Camera Color Calibration] teaches accounting for the different viewing angles to minimize the color differences and discrepancies between views.
Huang is in the field of multi-camera, multi-image calibration. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Huang with the teachings of Gurbuz [Abstract] in order to minimize the color differences between the captured multi-view images.
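For illustration only, a minimal form of the color correction that Gurbuz describes is per-channel statistics matching between two views. This sketch is an assumption about one simple approach, not Gurbuz's actual method; the function name is hypothetical.

```python
import numpy as np

def match_color_stats(src, ref):
    """Linearly remap each color channel of `src` so its mean and standard
    deviation match those of `ref`, reducing color discrepancies between
    overlapping views (a real system would fit this on the overlap region)."""
    src = np.asarray(src, dtype=float)
    ref = np.asarray(ref, dtype=float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) * (r_sd / (s_sd + 1e-9)) + r_mu
    return out
```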
Claim 16. Reviewed and analyzed in the same way as claim 6. See the above analysis and rationale.
Allowable Subject Matter
Claims 8-11 and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The limitation that renders claim 8 allowable is “isolate one or more calibration anomalies to at least a first camera of the at least one pair of cameras based at least on one or more calibration validation scores”.
The limitation that renders claims 9 and 17 allowable is “compute at least one sensitivity metric for the calibration validation score based at least on applying a range of perturbations to at least one extrinsic calibration parameter used to compute the at least one epipolar line”.
Claim 10 is allowable because it depends from claim 9.
The limitation that renders claims 11 and 18 allowable is “compute at least one sensitivity metric for the calibration validation score based at least on applying a range of perturbations to at least one extrinsic calibration parameter used to compute the at least one epipolar line; generate validation score sensitivity data based at least on the at least one sensitivity metric computed by applying the range of perturbations”.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DELOMIA L GILLIARD whose telephone number is (571)272-1681. The examiner can normally be reached from 8am to 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DELOMIA L GILLIARD/Primary Examiner, Art Unit 2661