DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the application filed on 11/26/2024.
Claims 1-6 are currently pending and have been examined.
Claims 1-6 are currently rejected.
This action is made NON-FINAL.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Soon et al. (US 2023/0089832), hereinafter Soon, in view of Han (US 2025/0270726), hereinafter Han.
Regarding claim 1:
Soon teaches:
A roadway sign (the calibration targets can be used as a road sign for vehicles with LiDAR systems or other systems that can be used with the calibration targets. [0134]) comprising:
materials that reflect [specific] wavelengths of light (The calibration target can be configured to receive at least one beam from a LiDAR system and reflect the beam back to the LiDAR system. [0032]) detectable by autonomous vehicle sensors (Autonomous system 202 includes a sensor suite that includes one or more devices such as cameras 202a, LiDAR sensors 202b [0046]) for use in calibration of said sensors (calibration and/or validation of sensors included on vehicles [0028]).
Han also teaches:
A roadway sign (road signs [0003]) comprising:
Soon does not explicitly teach, however Han teaches:
materials that reflect specific wavelengths of light (a coating or coating system applied to objects, typically within the light spectrum just outside of the visible (wavelength of 400-700 nm), usually 700 nm to 2500 nm or 800 nm to 2500 nm, and specifically 905 nm to 1550 nm for standardized LiDAR wavelengths [0017])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Soon to include the teachings of Han, with a reasonable expectation of success. Both Soon and Han are in the same field of endeavor, dealing with road signs that are meant to be used by LiDAR systems in autonomous vehicles. Additionally, Han teaches the benefit that "[w]hen an object is LiDAR visible, there is a higher navigation accuracy for the vehicle" [Han, 0003]. This would have motivated one having ordinary skill in the art, before the effective filing date, to combine the road sign coating of Han with the road sign calibration method of Soon to arrive at the claimed invention.
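For conceptual illustration only, the wavelength bands Han recites at [0017] can be expressed as a simple band-membership check. The sketch below (Python; the band endpoints are taken from Han [0017], and the 905 nm and 1550 nm test values are the standardized LiDAR wavelengths Han names, while the code structure itself is hypothetical) verifies that the standardized LiDAR wavelengths fall within the claimed reflective ranges.

```python
# Band-membership check using the NIR ranges Han recites at [0017].
NIR_BANDS_NM = {
    "broad":  (700.0, 2500.0),   # "usually 700 nm to 2500 nm"
    "narrow": (800.0, 2500.0),   # "or 800 nm to 2500 nm"
    "lidar":  (905.0, 1550.0),   # "specifically 905 nm to 1550 nm"
}

def in_band(wavelength_nm, band):
    lo, hi = band
    return lo <= wavelength_nm <= hi

for wl in (905.0, 1550.0):  # standardized LiDAR wavelengths per Han
    hits = [name for name, band in NIR_BANDS_NM.items() if in_band(wl, band)]
    print(f"{wl} nm falls within bands: {hits}")
```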
Regarding claim 2:
Soon in view of Han teaches all the limitations of claim 1, upon which this claim is dependent.
Han further teaches:
said material is titanium dioxide-based paint (at least one pigment used in the NIR reflective layer is rutile titanium dioxide [0027]).
Regarding claim 3:
Soon in view of Han teaches all the limitations of claim 1, upon which this claim is dependent.
Soon further teaches:
a pattern recognizable by machine learning algorithms (FIG. 12 shows patterns, designs, or materials that can be used with any of the calibration targets described above [0131]); wherein
Han further teaches:
the pattern is not recognizable to the human eye (In another embodiment, the coatings system (200, 300) described and shown herein may be transparent [0037]).
Regarding claim 4:
Soon in view of Han teaches all the limitations of claim 3, upon which this claim is dependent.
Soon further teaches:
the pattern is a checkerboard having a first set of squares alternating with a second set of squares (fig. 8b);
Han further teaches:
the first set of squares in the checkerboard coated in titanium dioxide-based paint (at least one NIR transmitting layer at least partially coating the at least one converter layer [0007]; at least one pigment of the NIR reflective layer is an inorganic pigment. In some embodiments, the inorganic pigment is titanium dioxide [0025]; examiner notes that one would have been motivated to arrange the "partial" NIR coverage in the checkerboard pattern taught by Soon.); and
the second set of squares coated in a non-reflective paint (at least one NIR transmitting layer at least partially coating the at least one converter layer [0007]).
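As a conceptual illustration of the combination mapped above, and not as a characterization of either reference's actual disclosure, the following sketch (Python; the intensity values, noise level, and grid size are hypothetical) shows how an alternating pattern of NIR-reflective and non-reflective squares would register as a machine-recoverable checkerboard in LiDAR return intensities.

```python
import numpy as np

# Hypothetical sketch: an 8x8 checkerboard target whose TiO2-coated
# squares reflect NIR strongly (intensity ~0.9) and whose uncoated,
# non-reflective squares absorb it (intensity ~0.1).
rows, cols = 8, 8
pattern = np.indices((rows, cols)).sum(axis=0) % 2
intensity = np.where(pattern == 1, 0.9, 0.1)
intensity = intensity + np.random.default_rng(0).normal(0.0, 0.02, intensity.shape)

# Machine-side decode: threshold at the midpoint, then verify the
# alternating structure expected of a checkerboard.
decoded = (intensity > 0.5).astype(int)
print("checkerboard recovered:", bool(np.array_equal(decoded, pattern)))
```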
Regarding claim 5:
Soon in view of Han teaches all the limitations of claim 3, upon which this claim is dependent.
Han further teaches:
an image visible to a human (examiner notes that the original image of the road sign would remain visible, showing its information in addition to the pattern produced by the NIR coating.) superimposed over said pattern recognizable by machine learning algorithms (an exterior coating system applied to objects such as vehicles, road signs [0003]).
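Examiner notes that the superimposition mapped above can be modeled as two independent layers on the same sign face. The sketch below (Python; all layer values are hypothetical) stores a human-visible image alongside a separate NIR-reflectance pattern, so that a camera reads one layer while a LiDAR operating in Han's 905-1550 nm band reads the other.

```python
import numpy as np

# Illustrative model: one sign face carrying two independent layers.
# The visible layer is what a human reads; the NIR layer is what a
# LiDAR would see through a visibly transparent coating (per Han
# [0037]). All values here are hypothetical.
h, w = 64, 64
visible_layer = np.full((h, w), 0.8)            # light sign face
visible_layer[20:44, 8:56] = 0.1                # dark legend a human reads
idx = np.indices((h, w))
nir_layer = ((idx[0] // 8 + idx[1] // 8) % 2).astype(float)  # machine pattern

# Each sensing modality samples only its own layer of the same sign.
sign = np.stack([visible_layer, nir_layer])
camera_view, lidar_view = sign[0], sign[1]
print(camera_view.shape, lidar_view.shape)
```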
Regarding claim 6:
Soon in view of Han teaches all the limitations of claim 1, upon which this claim is dependent.
Soon further teaches:
A method for calibrating autonomous vehicle sensors (methods related to calibration courses and calibration targets [abstract]) using the sign of claim 1, the method comprising:
determining, by an autonomous vehicle sensor, that a roadway sign target is in view (the sensors detect the one or more calibration targets [0081]); and
scanning the target (Starting at block 1302, a LiDAR system can direct at least one beam towards a calibration target. Moving to block 1304, the calibration target can receive the at least one beam. Moving to block 1306, the calibration target can reflect the at least one beam back to the LiDAR system. Moving to block 1308, the LiDAR system can receive the reflected at least one beam. [0133]); and
generating a point cloud (sensors 202b generates an image (e.g., a point cloud, a combined point cloud, and/or the like) representing the objects included in a field of view of LiDAR sensors 202b [0049]); and
comparing measured distances and angles of the point cloud (to calibrate LiDAR sensors, a calibration target 504 is provided by the calibration course 500 at a position that allows the calibration target 504 to simultaneously be detectable by at least two different LiDAR sensors included on the vehicle 516. For example, in some instances, calibration of LiDAR sensors comprises correlating or establishing a relationship between the output of two or more LiDAR sensors. Each LiDAR sensor can be configured to generate a point cloud representative of the environment detected by the LiDAR sensor. When a known calibration target 504 is detected within the point clouds of two different LiDAR sensors on the vehicle 516, this data can be used to calibrate the LiDAR sensors. Accordingly, in some embodiments, one or more of the calibration targets 504 that are configured for the calibration or validation of LiDAR sensors should be positioned such that they are detectable within a region of overlap between two LiDAR sensors. [0099]); and
determining the sensor’s intrinsic and extrinsic parameters (The surface patterns or textures can serve as camera targets and can assist in performing camera intrinsic calibration and/or camera extrinsic calibration. The calibration can be completed relative to other sensors. [0132]); and
calibrating the sensor (one or more calibration targets 504 can be configured for calibration of radar sensors on the vehicle 516 [0101]).
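To illustrate the comparison and parameter-determination steps recited above, the following sketch substitutes a standard rigid-alignment solution (the Kabsch/Procrustes algorithm) for whatever specific computation Soon performs; the target corner coordinates and simulated transform are hypothetical. It recovers a sensor's extrinsic rotation and translation by comparing measured target corner positions against the target's known geometry.

```python
import numpy as np

# Hypothetical sketch of the extrinsic step: given measured target
# corners P (sensor frame) and their known positions Q (target frame),
# solve for the rigid transform R, t minimizing ||R @ p + t - q||
# via the Kabsch algorithm. Not Soon's disclosed computation.
def estimate_extrinsics(P, Q):
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Known checkerboard corners and a simulated measured point set.
Q = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
theta = np.radians(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
P = (Q - np.array([0.2, -0.1, 0.05])) @ R_true   # sensor-frame measurements
R_est, t_est = estimate_extrinsics(P, Q)
print(np.allclose(R_est @ P.T + t_est[:, None], Q.T, atol=1e-8))  # True
```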
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Shotan (US 2024/0393443) discloses methods for localizing light detection and ranging (lidar) calibration targets. An example method includes generating a point cloud of a region based on data from a light detection and ranging (lidar) device. The point cloud may include points representing at least a portion of a calibration target. The method also includes determining a presumed location of the calibration target. Further, the method includes identifying, within the point cloud, a location of a first edge of the calibration target. In addition, the method includes performing a comparison between the identified location of the first edge of the calibration target and a hypothetical location of the first edge of the calibration target within the point cloud if the calibration target were positioned at the presumed location. Still further, the method includes revising the presumed location of the calibration target based on at least the comparison.
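Shotan's edge-comparison loop can be sketched as follows (a hypothetical one-dimensional simplification; all function names and values are invented): the measured edge location in the point cloud is compared against the edge location predicted by the presumed target pose, and the presumed location is revised by the residual.

```python
import numpy as np

# Hypothetical 1-D sketch of Shotan-style target localization.
def revise_presumed_location(points_x, presumed_x, half_width):
    measured_edge = points_x.min()               # detected first edge
    hypothetical_edge = presumed_x - half_width  # edge if presumption held
    return presumed_x + (measured_edge - hypothetical_edge)

rng = np.random.default_rng(1)
true_x, half_width = 10.3, 0.5
points_x = rng.uniform(true_x - half_width, true_x + half_width, 500)
print(revise_presumed_location(points_x, presumed_x=10.0, half_width=half_width))
# ~10.3: the presumed location shifts toward the measured target position
```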
Ji (US 2024/0221218) discloses a system that captures a first frame and a second frame for an environment of an autonomous driving vehicle (ADV) from at least a first camera and a second camera mounted on the ADV. The system determines at least two points in the first frame having corresponding points in the second frame. The system determines distance and angle measurement information from the first camera to the at least two points and from the second camera to the corresponding points. The system determines actual positioning angles of the first and second cameras with respect to an orientation of the ADV based on the distance and angle measurement information and pixel information in the first and second frames. The actual positioning angles are used to compensate for misalignments in the positioning angles of the first and second cameras.
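Ji's determination of actual positioning angles can be sketched with the standard pinhole relation between a pixel column and a bearing (a hypothetical simplification; the intrinsics and bearings below are invented values): the camera's actual yaw is the measured world bearing of a point minus the bearing implied by where that point lands in the image.

```python
import numpy as np

# Hypothetical pinhole sketch of Ji-style angle recovery: actual yaw
# equals the measured world bearing to a point minus the in-image
# bearing implied by the point's pixel column.
def camera_yaw(world_bearings, pixel_cols, fx, cx):
    in_image_bearings = np.arctan2(pixel_cols - cx, fx)
    return float(np.mean(world_bearings - in_image_bearings))

fx, cx = 800.0, 640.0                        # invented intrinsics
true_yaw = np.radians(2.0)                   # misalignment to recover
point_bearings = np.radians([10.0, -5.0])    # from distance/angle data
cols = cx + fx * np.tan(point_bearings - true_yaw)  # where points project
print(np.degrees(camera_yaw(point_bearings, cols, fx, cx)))  # ~2.0 degrees
```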
Lehning (US 2024/0183962) discloses a method for position calibration that serves to fuse images of a camera and a LIDAR sensor. The camera records an image of a calibration board, wherein the pose of the calibration board relative to the camera can be determined based on known patterns. The LIDAR sensor records an image of the calibration board, wherein a pose of the calibration board relative to the LIDAR sensor can be determined based on additional reflection regions on the calibration board. Based on both poses, images that are recorded by the camera and/or the LIDAR sensor can subsequently be converted into a common coordinate system or into the coordinate system of the other image. Objects that are detected in one image can thereby be verified in the other image.
Zhang (US 2023/0150518) discloses techniques that enable efficient calibration of a sensing system of an autonomous vehicle (AV). In one implementation, disclosed is a method and a system to perform the method, the system including the sensing system configured to collect sensing data and a data processing system operatively coupled to the sensing system. The data processing system is configured to identify reference point(s) in an environment of the AV, determine multiple estimated locations of the reference point(s), and adjust parameters of the sensing system based on a loss function representative of differences of the estimated locations.
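Zhang's loss-driven adjustment can be sketched as a one-parameter search (hypothetical; the single yaw parameter and the coarse grid search are illustrative stand-ins for whatever parameters and optimizer Zhang's system uses): a reference point's location estimated through a trusted sensor and through a miscalibrated sensor should coincide, so the squared difference of the two estimates serves as the loss.

```python
import numpy as np

# Hypothetical sketch of Zhang-style calibration: a reference point
# estimated via trusted sensor A and via sensor B (whose yaw extrinsic
# is wrong) should coincide; minimize their squared difference.
def rot(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s], [s, c]])

ref = np.array([20.0, 5.0])            # estimate via trusted sensor A
true_yaw = np.radians(3.0)             # B's actual mounting error
obs_b = rot(-true_yaw) @ ref           # what B reports in its own frame

def loss(yaw):
    return float(np.sum((rot(yaw) @ obs_b - ref) ** 2))

# Coarse search over candidate yaw corrections (a stand-in for the
# optimizer Zhang's data processing system would actually use).
candidates = np.radians(np.linspace(-10, 10, 2001))
best = candidates[np.argmin([loss(y) for y in candidates])]
print(np.degrees(best))                # ~3.0 degrees
```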
Abari (US 2022/0066002) discloses improved calibration of a vehicle sensor based on static objects detected within an environment being traversed by the vehicle. A first sensor such as a LiDAR can be calibrated to a global coordinate system via a second, pre-calibrated sensor such as a GPS IMU. A static object present in the environment, such as signage, is detected. A type of the detected object is determined from static map data. Point cloud data representative of the static object is captured by the first sensor, and a first transformation matrix for performing a transformation from a local coordinate system of the first sensor to a local coordinate system of the second sensor is iteratively redetermined until a desired calibration accuracy is achieved. Transformation to the global coordinate system is then achieved via application of the first transformation matrix followed by a second, known transformation matrix.
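The transformation chain Abari describes can be sketched directly (illustrative; the matrices below are invented values, not Abari's): a point in the LiDAR's local frame reaches the global frame by applying the iteratively refined first matrix and then the known second matrix.

```python
import numpy as np

# Illustrative sketch of Abari's two-step transform: LiDAR-local
# coordinates -> GPS/IMU-local coordinates (T1, iteratively refined)
# -> global coordinates (T2, known). These 4x4 homogeneous transforms
# are invented values, not taken from Abari.
def make_T(yaw, tx, ty, tz):
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [tx, ty, tz]
    return T

T1 = make_T(np.radians(0.5), 1.2, 0.0, 1.6)    # LiDAR -> GPS/IMU frame
T2 = make_T(np.radians(90.0), 5e5, 4e6, 30.0)  # GPS/IMU -> global frame

p_lidar = np.array([12.0, -3.0, 0.4, 1.0])     # homogeneous LiDAR point
p_global = T2 @ T1 @ p_lidar                   # chained application
print(p_global[:3])
```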
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott R Jagolinzer whose telephone number is (571)272-4180. The examiner can normally be reached M-Th 8AM - 4PM Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace can be reached at (571)272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Scott R. Jagolinzer
Examiner
Art Unit 3665
/S.R.J./Examiner, Art Unit 3665
/CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665