DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant's amendments to the specification and drawings have overcome the objections previously set forth; those objections are withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 7, 17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Alvarez (US 20200226790 A1) in view of Long (WO 2022233049 A1).
With respect to claim 1, Alvarez teaches a method of calibration for a vehicle (“Various aspects of this disclosure generally relate to sensor calibration for autonomous driving systems and/or the detection of sensor calibration.” Paragraph 0001), the method comprising: obtaining speed data from at least one vehicle controller of the vehicle during travel (“Measurement devices 116 may include other devices for measuring vehicle-state parameters, such as a velocity sensor (e.g., a speedometer) for measuring a velocity of the vehicle” paragraph 0061); from a light detection and ranging (LiDAR) of the vehicle (“light detection and ranging (LIDAR) sensors” page 7 paragraph 0060 lines 9-10), collecting first (“Throughout portions of this disclosure, sensor data may be referred to as “first sensor data” and “second sensor data”. The first sensor data may be understood at least as data from a sensor whose calibration is being tested, and the second sensor data may be understood as data from a database to which the first sensor data is compared. Because of the nature of this process, and at least due to the fact that some or all of the second sensor data may be obtained by the central database before the first central data, the terms “first” and “second” with respect to the sensor data are not intending to suggest a chronology, but rather are used for identification purposes only.” Paragraph 0151 and “light detection and ranging (LIDAR) sensors” page 7 paragraph 0060 lines 9-10 ) and second pluralities of LIDAR data frames (“Throughout portions of this disclosure, sensor data may be referred to as “first sensor data” and “second sensor data”. The first sensor data may be understood at least as data from a sensor whose calibration is being tested, and the second sensor data may be understood as data from a database to which the first sensor data is compared. 
Because of the nature of this process, and at least due to the fact that some or all of the second sensor data may be obtained by the central database before the first central data, the terms “first” and “second” with respect to the sensor data are not intending to suggest a chronology, but rather are used for identification purposes only.” Paragraph 0151 and “light detection and ranging (LIDAR) sensors” page 7 paragraph 0060 lines 9-10 ); calculating a first yaw angle using the first plurality of LIDAR data frames (“for each image, the pattern is detected by one or more sensors (e.g., camera, image sensors, LIDAR, Radar, etc.). Using the detected pattern, one or more processors assess data representing the detected pattern and determine therefrom the extrinsic calibration pose relative to the pattern. The one or more processors may read out the pose (e.g., yaw, pitch, roll angles) of the calibration pattern relative to the underlying host vehicle platform from a code on or associated with the calibration structure.” Paragraph 0117 and “Temporary misalignments of the vehicle heading with the road orientation may be accounted for by averaging across multiple frames. Again, the one or more processors may calculate the intersection of the optical axis of the sensor with the normalized pattern plane. This may yield the yaw and pitch angle of the sensor pose relative to the road orientation. 
The one or more processors may infer the roll angle by comparing the corner positions of the normalized calibration pattern.” Paragraph 0117); calculating a second yaw angle using the second plurality of LIDAR data frames (“In Example 71, a sensor calibration detection device, including: one or more processors, configured to: determine a difference between first sensor data detected during movement of a first sensor along a route of travel and stored second sensor data; and if the difference is outside of a predetermined range, switch from a first operational mode to a second operational mode.” Paragraph 0221); determining whether the first and second yaw angles satisfy a consistency criterion (“In Example 71, a sensor calibration detection device, including: one or more processors, configured to: determine a difference between first sensor data detected during movement of a first sensor along a route of travel and stored second sensor data; and if the difference is outside of a predetermined range, switch from a first operational mode to a second operational mode.” Paragraph 0221); in response to the consistency criterion being satisfied, determining a third yaw angle for the LiDAR using the first and second yaw angles (“In Example 97, the method of sensor calibration detection of any one of Examples 45 to 96, in which the second operational mode includes the one or more processors determining an affine transformation of the first sensor data relative to a mean of the second sensor data, and sending an instruction to adjust a sensor corresponding to the first sensor data by the determined affine transformation.” Paragraph 0248 and “Although the specific implementation of the mechanism by which the sensor is calibrated exceeds the scope of this disclosure, it is generally anticipated that a sensor to be calibrated (e.g., a camera, a LIDAR sensor, a Radar sensor, etc.) may be equipped with a motorized mount, which may be controlled by one or more processors. 
When the one or more processors receive a calibration instruction, they may control the motorized mount to alter the calibration of the sensor (e.g., the pitch, yaw, and/or roll of the sensor).” Paragraph 0150); and calibrating the LiDAR using the third yaw angle (“In Example 97, the method of sensor calibration detection of any one of Examples 45 to 96, in which the second operational mode includes the one or more processors determining an affine transformation of the first sensor data relative to a mean of the second sensor data, and sending an instruction to adjust a sensor corresponding to the first sensor data by the determined affine transformation.” Paragraph 0248 and “Although the specific implementation of the mechanism by which the sensor is calibrated exceeds the scope of this disclosure, it is generally anticipated that a sensor to be calibrated (e.g., a camera, a LIDAR sensor, a Radar sensor, etc.) may be equipped with a motorized mount, which may be controlled by one or more processors. When the one or more processors receive a calibration instruction, they may control the motorized mount to alter the calibration of the sensor (e.g., the pitch, yaw, and/or roll of the sensor).” Paragraph 0150).
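For orientation only, the sequence of steps recited in claim 1 as mapped above (two sets of LiDAR frames, two yaw estimates, a consistency check, and a combined third yaw angle used for calibration) can be sketched as follows. This sketch is not code from Alvarez or Long; the function names, the stand-in yaw computation, and the consistency tolerance are all assumptions introduced for illustration.

```python
def estimate_yaw(frame_yaws):
    """Placeholder yaw estimator: averages per-frame yaw estimates (degrees).

    In the claimed method the yaw angle would be derived from the LiDAR
    point cloud itself; a simple mean stands in for that computation here.
    """
    return sum(frame_yaws) / len(frame_yaws)


def calibrate_lidar_yaw(first_frames, second_frames, tolerance_deg=0.5):
    """Illustrative outline of claim 1's flow (names/tolerance assumed).

    Returns the third yaw angle if the consistency criterion is satisfied,
    otherwise None (e.g., the data would be discarded and re-collected).
    """
    yaw1 = estimate_yaw(first_frames)    # first yaw angle, first plurality of frames
    yaw2 = estimate_yaw(second_frames)   # second yaw angle, second plurality of frames
    if abs(yaw1 - yaw2) > tolerance_deg: # consistency criterion not satisfied
        return None
    yaw3 = (yaw1 + yaw2) / 2.0           # third yaw angle (claim 17: an average)
    return yaw3                          # value applied to calibrate the LiDAR
```

The averaging in the final step corresponds to the claim 17 limitation addressed below; the tolerance check is only one possible form of the claimed consistency criterion.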
Alvarez does not teach a speed threshold, or determining that the speed data satisfies a speed threshold.
Long teaches speed data thresholds and how they influence sensor behavior (“Here, the high-speed motion scene may be a scene where the allowable driving speed of the vehicle is higher than a preset threshold, for example, the preset threshold may be 60km/h, and the scene where the speed limit is higher than 60km/h may be a high-speed motion scene, such as a highway, expressways, roads on viaducts, etc. Since high-speed motion scenes usually have relatively simple road conditions, such as no pedestrians or few pedestrians, no crossroads or few crossroads, etc., you can focus on the middle area within the detection range and increase the point cloud density in the middle area. In order to improve the recognition accuracy of objects in the middle area, the range of the middle area can also be increased to improve the measurement accuracy of objects in the middle area. The middle area may be an area located in the middle of the detection range. In one example, the middle area may be an area where the center point coincides with the center point of the detection range.” pages 9-10, last paragraph of page 9 and first paragraph of page 10, and “Here, the low-speed motion scene may include a scene where the vehicle speed is lower than a preset threshold, such as urban areas, villages, and parks. When the vehicle is driving in a low-speed motion scene, due to the complex road conditions of the low-speed motion scene, such as many pedestrians and small distances between vehicles, a more comprehensive perception of the environment is required. Therefore, the detection range can be increased in the horizontal and/or the scanning angle in the vertical direction, i.e. increasing the horizontal FOV and/or vertical FOV of the detection device. Moreover, the point clouds within the detection range can be evenly distributed to give enough attention to each area in the scene. In addition, considering that the objects in the low-speed motion scene are relatively close, and the laser energy needs to be limited by safety regulations, the range corresponding to the detection range can be reduced, that is, the detection range can be scanned with a lower luminous power.” Page 10 paragraph 5).
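Long's speed-threshold teaching quoted above reduces to a simple selection: a preset threshold (e.g., 60 km/h) distinguishes high-speed from low-speed scenes, each with different detection-range attributes. The sketch below is an illustration of that logic only; the profile fields and their values are assumptions, not disclosure from Long.

```python
def select_scan_profile(speed_kmh, threshold_kmh=60.0):
    """Illustrative sketch of Long's preset speed threshold (values assumed).

    High-speed scene: concentrate point cloud density in the middle area.
    Low-speed scene: widen FOV, distribute points evenly, reduce luminous power.
    """
    if speed_kmh > threshold_kmh:
        return {"scene": "high-speed", "focus": "middle",
                "fov": "narrow", "power": "normal"}
    return {"scene": "low-speed", "focus": "even",
            "fov": "wide", "power": "reduced"}
```

The examiner's mapping relies only on the existence of the threshold comparison, not on any particular profile contents.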
Long is analogous art in the same field of endeavor as the claimed invention. Long is directed to adjusting vehicle sensors (detection devices) such as lidar sensors (“Adjust the attributes of the detection range of the detection device mounted on the vehicle body according to the road condition information, where the attributes of the detection range at least include the position of the detection range and/or the distribution of point clouds within the detection range;” page 2 paragraph 4 and “In one example, the detection device may be a lidar or a laser ranging device.” Page 5 paragraph 8). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the sensor calibration system of Alvarez with the sensor adjustments of Long by utilizing Long’s teachings of sensor detection and vehicle perception at different vehicle speeds, in combination with Alvarez’s lidar images, with the expectation that doing so would enable the combined system’s lidar to adapt to different road conditions (“The embodiment of the present application provides a control method, which can adjust the attributes of the detection range of the detection device according to the road condition information of the currently driving road, so that the detection range of the detection device can adapt to the current road conditions” last paragraph of page 4), thus improving the lidar data used in the calibration process described in Alvarez.
With respect to claim 2, Alvarez and Long teach the method of claim 1. Alvarez further teaches obtaining vehicle yaw rate data from the at least one vehicle controller during the travel (“gyroscope” paragraph 0045), and collecting first and second pluralities of LiDAR data frames (“Throughout portions of this disclosure, sensor data may be referred to as “first sensor data” and “second sensor data”. The first sensor data may be understood at least as data from a sensor whose calibration is being tested, and the second sensor data may be understood as data from a database to which the first sensor data is compared. Because of the nature of this process, and at least due to the fact that some or all of the second sensor data may be obtained by the central database before the first central data, the terms “first” and “second” with respect to the sensor data are not intending to suggest a chronology, but rather are used for identification purposes only.” Paragraph 0151 and “light detection and ranging (LIDAR) sensors” page 7 paragraph 0060 lines 9-10), but does not teach determining that the vehicle yaw rate data satisfies a yaw rate threshold.
Long teaches yaw rate data satisfying a yaw rate threshold (“For example, if the calculated slope of the slope is positive and greater than the preset upper limit of the slope, it can be determined that the slope corresponds to the uphill. If the slope of the slope is negative and less than the preset lower limit of the slope, it can be determined that the slope corresponds to the downhill.” Page 8 paragraph 3), and vehicle yaw rate affecting calibrated sensor position (“In one embodiment, whether there is a slope on the road ahead may be determined according to the calibrated position and attitude of the detection device and at least one frame of point cloud frames scanned by the detection device. Here, the position of the detection device may be the installation position of the detection device relative to the vehicle—three-dimensional coordinate information (x, y, z), and the posture of the detection device may be the installation posture of the detection device relative to the vehicle—three-dimensional rotation information (row , pitch, yaw). The calibrated position and attitude of the detection device can accurately reflect the installation position and installation attitude of the detection device relative to the vehicle, thereby ensuring that the point cloud information obtained by scanning is true and accurate.” Second to last paragraph on page 11) and perception (“When the determined road condition information indicates that there is a slope on the road ahead, in one embodiment, the point cloud density of the region of interest within the detection range may be increased. Specifically, if the slope of the road ahead corresponds to an uphill, the region of interest may be located in the upper part of the detection range, that is, the point cloud density in the upper area of the detection range may be increased. 
If the slope of the road ahead corresponds to a downslope, the region of interest can be located in the lower part of the detection range, that is, the density of the point cloud in the lower part of the detection range can be increased. By increasing the point cloud density of the region of interest, the vehicle's perception accuracy of objects on the slope can be improved” page 8 paragraph 5.)
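Long's slope teaching quoted above likewise reduces to a sign-and-threshold decision: the slope's sign and magnitude select which part of the detection range receives increased point cloud density. The sketch below illustrates that decision only; the function name and the numeric slope limits are assumptions introduced for illustration.

```python
def region_of_interest(slope, upper_limit=0.05, lower_limit=-0.05):
    """Illustrative sketch of Long's slope-based adjustment (limits assumed).

    Positive slope above the preset upper limit -> uphill: densify the upper
    part of the detection range. Negative slope below the preset lower limit
    -> downhill: densify the lower part. Otherwise no slope-based adjustment.
    """
    if slope > upper_limit:
        return "upper"
    if slope < lower_limit:
        return "lower"
    return "center"
```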
With respect to claim 3, Alvarez and Long teach the method of claim 2. Alvarez further teaches discarding lidar data (“In Example 75, the sensor calibration detection device of Example 69 or 74, in which the second operational mode includes discarding the first sensor data.” Paragraph 0225) if the calibration process sends adjustment instructions to the sensor (“In Example 69, the sensor calibration system of any one of Examples 46 to 68, in which the calibration instruction includes instructions to cause the sensor to be adjusted by the calibration adjustment.” Paragraph 0219).
Long teaches sensor adjustments necessitated by a speed threshold (“Here, the high-speed motion scene may be a scene where the allowable driving speed of the vehicle is higher than a preset threshold…” and “Here, the low-speed motion scene may include a scene where the vehicle speed is lower than a preset threshold…” pages 9-10 and page 10 paragraph 5; quoted in full with respect to claim 1 above) or a yaw rate threshold (“For example, if the calculated slope of the slope is positive and greater than the preset upper limit of the slope, it can be determined that the slope corresponds to the uphill. If the slope of the slope is negative and less than the preset lower limit of the slope, it can be determined that the slope corresponds to the downhill.” Page 8 paragraph 3 and “…The calibrated position and attitude of the detection device can accurately reflect the installation position and installation attitude of the detection device relative to the vehicle, thereby ensuring that the point cloud information obtained by scanning is true and accurate.” Second to last paragraph on page 11).
With respect to claim 7, Alvarez and Long teach the method of claim 1. Alvarez further teaches wherein the first plurality of LIDAR data frames is collected during a first session, and wherein the second plurality of LIDAR data frames is collected during a second session (“Throughout portions of this disclosure, sensor data may be referred to as “first sensor data” and “second sensor data”. The first sensor data may be understood at least as data from a sensor whose calibration is being tested, and the second sensor data may be understood as data from a database to which the first sensor data is compared. Because of the nature of this process, and at least due to the fact that some or all of the second sensor data may be obtained by the central database before the first central data, the terms “first” and “second” with respect to the sensor data are not intending to suggest a chronology, but rather are used for identification purposes only.” Paragraph 0151 and “light detection and ranging (LIDAR) sensors” page 7 paragraph 0060 lines 9-10 ).
With respect to claim 17, Alvarez and Long teach the method of claim 1. Alvarez further teaches wherein determining the third yaw angle comprises calculating an average of the first and second yaw angles (“For example, the one or more processors may determine a mean/variance of the received sensor data as compared to the data of the central data, and if the mean/variance is outside of a predetermined range (i.e., the mean/variance is “abnormal”), the one or more processors may detect the data outside of the permitted region of the distribution (e.g., using the Wald test), and the one or more processors may generate and/or send a calibration instruction for one more sensors that detected the data having a mean/variance outside of the predetermined range.” Paragraph 0133).
With respect to claim 19, Alvarez and Long teach the method of claim 1. Alvarez teaches wherein the third yaw angle is calculated and used in calibrating the LiDAR (“In Example 97, the method of sensor calibration detection of any one of Examples 45 to 96, in which the second operational mode includes the one or more processors determining an affine transformation of the first sensor data relative to a mean of the second sensor data, and sending an instruction to adjust a sensor corresponding to the first sensor data by the determined affine transformation.” Paragraph 0248 and “Although the specific implementation of the mechanism by which the sensor is calibrated exceeds the scope of this disclosure, it is generally anticipated that a sensor to be calibrated (e.g., a camera, a LIDAR sensor, a Radar sensor, etc.) may be equipped with a motorized mount, which may be controlled by one or more processors. When the one or more processors receive a calibration instruction, they may control the motorized mount to alter the calibration of the sensor (e.g., the pitch, yaw, and/or roll of the sensor).” Paragraph 0150).
Long teaches that each time an event is detected by the vehicle, sensor adjustments are prompted (“Here, the high-speed motion scene may be a scene where the allowable driving speed of the vehicle is higher than a preset threshold…” and “Here, the low-speed motion scene may include a scene where the vehicle speed is lower than a preset threshold…” pages 9-10 and page 10 paragraph 5; quoted in full with respect to claim 1 above).
With respect to claim 20, Alvarez and Long teach the method of claim 19. Long teaches wherein the event comprises that an output of an inertial measurement unit satisfies a criterion (“Here, the high-speed motion scene may be a scene where the allowable driving speed of the vehicle is higher than a preset threshold…” and “Here, the low-speed motion scene may include a scene where the vehicle speed is lower than a preset threshold…” pages 9-10 and page 10 paragraph 5; quoted in full with respect to claim 1 above).
Claims 4-6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Alvarez and Long as applied to claims 3, 7, and 1 above, and further in view of Liao (US 20210109205 A1).
With respect to claim 4, Alvarez and Long teach the method of claim 3. Alvarez teaches collecting first and second pluralities of LiDAR data frames (“Throughout portions of this disclosure, sensor data may be referred to as “first sensor data” and “second sensor data”. The first sensor data may be understood at least as data from a sensor whose calibration is being tested, and the second sensor data may be understood as data from a database to which the first sensor data is compared. Because of the nature of this process, and at least due to the fact that some or all of the second sensor data may be obtained by the central database before the first central data, the terms “first” and “second” with respect to the sensor data are not intending to suggest a chronology, but rather are used for identification purposes only.” Paragraph 0151 and “light detection and ranging (LIDAR) sensors” page 7 paragraph 0060 lines 9-10). Alvarez does not explicitly teach first and second sets of ground points.
Liao teaches extracting ground points from each LiDAR data frame (“The painted lane markings 1240 can be used for dynamically calibrating the position and orientation of the LiDAR sensor 1210 relative to the vehicle 1220 based on LiDAR images acquired while the vehicle is traveling along the road 1230.” Paragraph 0099) and calculating yaw angles (“Thus, by analyzing the LiDAR images of the lane markings 1240 with respect to the vehicle path 1250, the amount of mis-alignment (e.g., the yaw error) of the LiDAR sensor 1210 with respect to the vehicle 1220 can be estimated” paragraph 0100).
Liao is analogous art to the claimed invention. Liao is directed towards lidar sensor calibration (“According to some embodiments, methods of calibrating a LiDAR sensor mounted on a vehicle are provided.” Paragraph 0025). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to modify the combined system of Alvarez and Long with Liao’s teachings of road-based indicators of yaw, in combination with the combined system’s calibration code, vehicle pose estimation processes, and lidar calibration process, with the expectation that doing so would lead to improved lidar calibration by making the system more adaptable to various road conditions that may affect calibration code perception (“The mis-alignment may result in inaccurate measurements of a position of an obstacle (e.g., a person 260) relative to the vehicle 220. Thus, re-calibration of the LiDAR sensor 210 may be required.” Paragraph 0035) and by improving the comparison process of the first and second lidar data (“Once the relationship of the LiDAR 410 to the vehicle 420 is determined, any discrepancy of this relationship from the current LiDAR calibration can be used to correct the LiDAR calibration.” Paragraph 0061).
With respect to claim 5, Alvarez, Long, and Liao teach the method of claim 4. Alvarez teaches a first and second plurality of lidar data frames (“Throughout portions of this disclosure, sensor data may be referred to as “first sensor data” and “second sensor data”. The first sensor data may be understood at least as data from a sensor whose calibration is being tested, and the second sensor data may be understood as data from a database to which the first sensor data is compared. Because of the nature of this process, and at least due to the fact that some or all of the second sensor data may be obtained by the central database before the first central data, the terms “first” and “second” with respect to the sensor data are not intending to suggest a chronology, but rather are used for identification purposes only.” Paragraph 0151 and “light detection and ranging (LIDAR) sensors” page 7 paragraph 0060 lines 9-10). Alvarez does not explicitly teach first and second sets of ground points.
Liao teaches extracting ground points frame-by-frame (“Calibrations can be performed periodically or continuously while the vehicle is parked or even during driving.” Paragraph 0025 and “The painted lane markings 1240 can be used for dynamically calibrating the position and orientation of the LiDAR sensor 1210 relative to the vehicle 1220 based on LiDAR images acquired while the vehicle is traveling along the road 1230.” Paragraph 0099).
With respect to claim 6, Alvarez, Long, and Liao teach the method of claim 4. Alvarez teaches a first and second plurality of lidar data frames (“Throughout portions of this disclosure, sensor data may be referred to as “first sensor data” and “second sensor data”. The first sensor data may be understood at least as data from a sensor whose calibration is being tested, and the second sensor data may be understood as data from a database to which the first sensor data is compared. Because of the nature of this process, and at least due to the fact that some or all of the second sensor data may be obtained by the central database before the first central data, the terms “first” and “second” with respect to the sensor data are not intending to suggest a chronology, but rather are used for identification purposes only.” Paragraph 0151 and “light detection and ranging (LIDAR) sensors” page 7 paragraph 0060 lines 9-10). Alvarez does not explicitly teach first and second sets of ground points.
Liao teaches ground points are extracted based on being in the region with regard to the vehicle (“The painted lane markings 1240 can be used for dynamically calibrating the position and orientation of the LiDAR sensor 1210 relative to the vehicle 1220 based on LiDAR images acquired while the vehicle is traveling along the road 1230. Using the painted lane markings for calibration can be advantageous, as they exist almost on all roads. Also, the distance between a pair of lane markings 1240 is usually a standard distance.” Paragraph 0099).
With respect to claim 18, Alvarez and Long teach the method of claim 1, Alvarez teaches calculating the third yaw angle (“In Example 97, the method of sensor calibration detection of any one of Examples 45 to 96, in which the second operational mode includes the one or more processors determining an affine transformation of the first sensor data relative to a mean of the second sensor data, and sending an instruction to adjust a sensor corresponding to the first sensor data by the determined affine transformation.” Paragraph 0248 and “Although the specific implementation of the mechanism by which the sensor is calibrated exceeds the scope of this disclosure, it is generally anticipated that a sensor to be calibrated (e.g., a camera, a LIDAR sensor, a Radar sensor, etc.) may be equipped with a motorized mount, which may be controlled by one or more processors. When the one or more processors receive a calibration instruction, they may control the motorized mount to alter the calibration of the sensor (e.g., the pitch, yaw, and/or roll of the sensor).” Paragraph 0150).
Liao teaches repeatedly calculated values and using them in calibrating the LiDAR according to a calibration schedule (“Calibrations can be performed periodically or continuously while the vehicle is parked or even during driving” paragraph 0025).
Liao is analogous art to the claimed invention. Liao is directed towards lidar sensor calibration (“According to some embodiments, methods of calibrating a LiDAR sensor mounted on a vehicle are provided.” Paragraph 0025). A person of ordinary skill before the effective filing date of the claimed invention would have found it obvious to combine the system of Alvarez and Long with Liao by utilizing Liao’s teachings of road indicators of yaw in combination with the combined system’s calibration code and vehicle pose estimation processes and lidar calibration process, with the expectation that doing so would lead to improved lidar calibration by making the system more adaptable to various road conditions that may affect calibration code perception (“The mis-alignment may result in inaccurate measurements of a position of an obstacle (e.g., a person 260) relative to the vehicle 220. Thus, re-calibration of the LiDAR sensor 210 may be required.” Paragraph 0035) and by improving the comparison process of first and second lidar data (“Once the relationship of the LiDAR 410 to the vehicle 420 is determined, any discrepancy of this relationship from the current LiDAR calibration can be used to correct the LiDAR calibration.” Paragraph 0061).
Claims 8, 10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Alvarez and Long as applied to claim 7 above, and further in view of Liao and Jiang (US 20210373138 A1).
With respect to claim 8, Alvarez and Long teach the method of claim 7. Alvarez teaches first and second sessions (“Throughout portions of this disclosure, sensor data may be referred to as “first sensor data” and “second sensor data”. The first sensor data may be understood at least as data from a sensor whose calibration is being tested, and the second sensor data may be understood as data from a database to which the first sensor data is compared. Because of the nature of this process, and at least due to the fact that some or all of the second sensor data may be obtained by the central database before the first central data, the terms “first” and “second” with respect to the sensor data are not intending to suggest a chronology, but rather are used for identification purposes only.” Paragraph 0151 and “light detection and ranging (LIDAR) sensors” page 7 paragraph 0060 lines 9-10). Alvarez does not explicitly teach first and second sets of ground points.
Liao teaches extracting ground points from each LiDAR data frame (“The painted lane markings 1240 can be used for dynamically calibrating the position and orientation of the LiDAR sensor 1210 relative to the vehicle 1220 based on LiDAR images acquired while the vehicle is traveling along the road 1230.” Paragraph 0099) and calculating yaw angles (“Thus, by analyzing the LiDAR images of the lane markings 1240 with respect to the vehicle path 1250, the amount of mis-alignment (e.g., the yaw error) of the LiDAR sensor 1210 with respect to the vehicle 1220 can be estimated” paragraph 0100).
Liao is analogous art to the claimed invention. Liao is directed towards lidar sensor calibration (“According to some embodiments, methods of calibrating a LiDAR sensor mounted on a vehicle are provided.” Paragraph 0025). A person of ordinary skill before the effective filing date of the claimed invention would have found it obvious to combine the system of Alvarez and Long with Liao by utilizing Liao’s teachings of road indicators of yaw in combination with the combined system’s calibration code and vehicle pose estimation processes and lidar calibration process, with the expectation that doing so would lead to improved lidar calibration by making the system more adaptable to various road conditions that may affect calibration code perception (“The mis-alignment may result in inaccurate measurements of a position of an obstacle (e.g., a person 260) relative to the vehicle 220. Thus, re-calibration of the LiDAR sensor 210 may be required.” Paragraph 0035) and by improving the comparison process of first and second lidar data (“Once the relationship of the LiDAR 410 to the vehicle 420 is determined, any discrepancy of this relationship from the current LiDAR calibration can be used to correct the LiDAR calibration.” Paragraph 0061).
Jiang teaches extracting lane marker points based on light intensity (“Ground points are extracted based on ground fitting and filtering at 530. Lane mark points are extracted based on intensity change detection and filtering at 540” paragraph 0072); fitting a line to the lane marker points (“Ground points are extracted based on ground fitting and filtering at 530. Lane mark points are extracted based on intensity change detection and filtering at 540” paragraph 0072); and evaluating a fit of the line to the lane marker points (“Ground points are extracted based on ground fitting and filtering at 530. Lane mark points are extracted based on intensity change detection and filtering at 540. Potential lane mark points are extracted based on spatial filtering (a>x>b, c>y>d) at 550 based on vehicle locations and the reference lane mark line information from maps, crowd sourcing, and history data. Noise points are removed by line model fitting at 560” paragraph 0072).
Jiang is analogous art in the same field of endeavor as the claimed invention. Jiang is directed towards vehicular lidar systems (“The present disclosure generally relates to lidar systems, and more particularly relates to systems and methods for lidars of a vehicle” paragraph 0001). A person of ordinary skill before the effective filing date of the claimed invention would have found it obvious to combine the system of Alvarez, Long, and Liao with Jiang by utilizing Jiang’s teachings of noise points and line fittings in combination with Liao’s lane marking calibration system, with the expectation that doing so would improve the accuracy of lane marking detection, and thus the accuracy of the combined system’s calibration process, by eliminating outlier measurements (“Noise points are removed by line model fitting at 560.” Paragraph 0072).
With respect to claim 10, Alvarez, Long, Liao, and Jiang teach the method of claim 8. Alvarez further teaches wherein the determination whether the first and second yaw angles satisfy the consistency criterion (“In Example 97, the method of sensor calibration detection of any one of Examples 45 to 96, in which the second operational mode includes the one or more processors determining an affine transformation of the first sensor data relative to a mean of the second sensor data, and sending an instruction to adjust a sensor corresponding to the first sensor data by the determined affine transformation.” Paragraph 0248 and “Although the specific implementation of the mechanism by which the sensor is calibrated exceeds the scope of this disclosure, it is generally anticipated that a sensor to be calibrated (e.g., a camera, a LIDAR sensor, a Radar sensor, etc.) may be equipped with a motorized mount, which may be controlled by one or more processors. When the one or more processors receive a calibration instruction, they may control the motorized mount to alter the calibration of the sensor (e.g., the pitch, yaw, and/or roll of the sensor).” Paragraph 0150) is performed in response to having at least the first and second sessions of the first and second plurality of LIDAR data frames, respectively (“Throughout portions of this disclosure, sensor data may be referred to as “first sensor data” and “second sensor data”. The first sensor data may be understood at least as data from a sensor whose calibration is being tested, and the second sensor data may be understood as data from a database to which the first sensor data is compared. Because of the nature of this process, and at least due to the fact that some or all of the second sensor data may be obtained by the central database before the first central data, the terms “first” and “second” with respect to the sensor data are not intending to suggest a chronology, but rather are used for identification purposes only.” Paragraph 0151 and “light detection and ranging (LIDAR) sensors” page 7 paragraph 0060 lines 9-10 and “In Example 71, a sensor calibration detection device, including: one or more processors, configured to: determine a difference between first sensor data detected during movement of a first sensor along a route of travel and stored second sensor data; and if the difference is outside of a predetermined range, switch from a first operational mode to a second operational mode.” Paragraph 0221).
With respect to claim 12, Alvarez, Long, Liao, and Jiang teach the method of claim 8. Alvarez further teaches comprising determining whether a threshold number of frames have been accumulated in each session (“Once the data is collected, one or more processors may determine a mean and/or variance of the data. The one or more processors may determine out-of-distribution data using the Wald test. Should outlying data be found, the one or more processors may attribute the outlying data to a sensor problem (i.e., sensor calibration).” Paragraph 0136 and “If the subsequent data uploaded by the vehicle is continuously problematic (e.g., determined to be suspect T times in a predetermined time frame, or determined to be suspect in T number of subsequent iterations), the sensor may be designated as a sensor requiring calibration.” Paragraph 0142).
Claims 9, 11, and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Alvarez, Long, Liao, and Jiang as applied to claim 8 above, and further in view of Chen (CN 107577996 A).
With respect to claim 9, Alvarez, Long, Liao, and Jiang teach the method of claim 8. Chen teaches wherein in response to the fit of the line to the lane marker points not satisfying a criterion, the method further comprises discarding a current frame (“In a continuous video sequence, the change of lane between each frame of video should be smooth and continuous, so the invention uses smooth filter, the one period of time fitting the lane in a buffer area, and when a new frame picture input, combining multiple image processing result with the result of the current detection smoothing filter as the output of the final lane line.” Page 9 paragraph 9 And “However, the smoothing filter will bring accumulation of error, with the continuous increase of the error may be the lane tracking failure. In order to solve this problem, the method of difference between two lane coordinate sets a threshold value, if the difference value is greater than the threshold, the lane tracking deviation occurs, then discarding the data in the buffer area, re-searching the full image thereby tracking the lane again” pages 9-10 last paragraph of page 9, first paragraph of page 10).
Chen is analogous art in the same field of endeavor as the claimed invention. Chen is directed towards vehicle sensor calibration (“camera calibration algorithm” page 7 paragraph 3). A person of ordinary skill before the effective filing date of the claimed invention would have found it obvious to combine the system of Alvarez, Long, Liao, and Jiang with Chen by utilizing Chen’s lane tracking teachings, namely its deviation and discard system, in conjunction with Liao’s and Jiang’s teachings of lane and ground point extraction and of discarding outlier and noise data, as well as with Alvarez’s teaching of frame discarding due to sensor misalignment, with the expectation that doing so would lead to a more accurate calibration process due to more accurate lane marker information.
With respect to claim 11, Alvarez, Long, Liao, and Jiang teach the method of claim 8, Chen teaches wherein evaluating the fit comprises evaluating a standard deviation of a point-to-line distance (“considering the lane is parallel to each other, can calculate two detection lane all point coordinate of the standard deviation, if the standard deviation is greater than a certain threshold value, the lane tracking deviation, re-performing a sliding window search the whole image to determine lane position.” Page 9 Paragraph 1).
Chen is analogous art in the same field of endeavor as the claimed invention. Chen is directed towards vehicle sensor calibration (“camera calibration algorithm” page 7 paragraph 3). A person of ordinary skill before the effective filing date of the claimed invention would have found it obvious to combine the system of Alvarez, Long, Liao, and Jiang with Chen by utilizing Chen’s lane tracking teachings, namely its deviation and discard system, in conjunction with Liao’s and Jiang’s teachings of lane and ground point extraction and of discarding outlier and noise data, as well as with Alvarez’s teaching of frame discarding due to sensor misalignment, with the expectation that doing so would lead to a more accurate calibration process due to more accurate lane marker information.
With respect to claim 13, Alvarez, Long, Liao, and Jiang teach the method of claim 8. Alvarez further teaches wherein in response to the consistency criterion not being satisfied (“Once the data is collected, one or more processors may determine a mean and/or variance of the data. The one or more processors may determine out-of-distribution data using the Wald test. Should outlying data be found, the one or more processors may attribute the outlying data to a sensor problem (i.e., sensor calibration).” Paragraph 0136), outlying data is deleted from the database (“Once an outlier has been identified, a notification can be sent that the sensor requires calibration; the data from said sensor may be deleted from the database; and/or steps may be taken to improve the sensor's calibration” paragraph 0103). Alvarez further teaches saving sessions that don’t show sensors in need of calibration (“In Example 75, the sensor calibration detection device of Example 69 or 74, in which the second operational mode includes discarding the first sensor data.” Paragraph 0225 and “In Example 69, the sensor calibration system of any one of Examples 46 to 68, in which the calibration instruction includes instructions to cause the sensor to be adjusted by the calibration adjustment.” Paragraph 0219).
Chen teaches data showing a worse fit of the line to the lane marker points being deleted (“In order to solve this problem, the method of difference between two lane coordinate sets a threshold value, if the difference value is greater than the threshold, the lane tracking deviation occurs, then discarding the data in the buffer area” pages 9-10 last paragraph of page 9, first paragraph of page 10).
Chen is analogous art in the same field of endeavor as the claimed invention. Chen is directed towards vehicle sensor calibration (“camera calibration algorithm” page 7 paragraph 3). A person of ordinary skill before the effective filing date of the claimed invention would have found it obvious to combine the system of Alvarez, Long, Liao, and Jiang with Chen by utilizing Chen’s lane tracking teachings, namely its deviation and discard system, in conjunction with Liao’s and Jiang’s teachings of lane and ground point extraction and of discarding outlier and noise data, as well as with Alvarez’s teaching of frame discarding due to sensor misalignment, with the expectation that doing so would lead to a more accurate calibration process due to more accurate lane marker information.
With respect to claim 14, Alvarez, Long, Liao, Jiang and Chen render obvious all limitations in consideration of claim 13. Additionally, Alvarez teaches after saving the session having the better fit as the previous session the method further comprises determining whether a threshold number of frames have been processed (“Once the data is collected, one or more processors may determine a mean and/or variance of the data. The one or more processors may determine out-of-distribution data using the Wald test. Should outlying data be found, the one or more processors may attribute the outlying data to a sensor problem (i.e., sensor calibration).” Paragraph 0136 and “If the subsequent data uploaded by the vehicle is continuously problematic (e.g., determined to be suspect T times in a predetermined time frame, or determined to be suspect in T number of subsequent iterations), the sensor may be designated as a sensor requiring calibration.” Paragraph 0142).
With respect to claim 15, Alvarez, Long, Liao, Jiang, and Chen teach the method of claim 14. Alvarez further teaches wherein in response to the threshold number of frames having been processed (“Once the data is collected, one or more processors may determine a mean and/or variance of the data. The one or more processors may determine out-of-distribution data using the Wald test. Should outlying data be found, the one or more processors may attribute the outlying data to a sensor problem (i.e., sensor calibration).” Paragraph 0136 and “If the subsequent data uploaded by the vehicle is continuously problematic (e.g., determined to be suspect T times in a predetermined time frame, or determined to be suspect in T number of subsequent iterations), the sensor may be designated as a sensor requiring calibration.” Paragraph 0142), the method further comprises using a yaw angle from the previous session (“Temporary misalignments of the vehicle heading with the road orientation may be accounted for by averaging across multiple frames. Again, the one or more processors may calculate the intersection of the optical axis of the sensor with the normalized pattern plane. This may yield the yaw and pitch angle of the sensor pose relative to the road orientation. The one or more processors may infer the roll angle by comparing the corner positions of the normalized calibration pattern.” Paragraph 0117 and “In Example 97, the method of sensor calibration detection of any one of Examples 45 to 96, in which the second operational mode includes the one or more processors determining an affine transformation of the first sensor data relative to a mean of the second sensor data, and sending an instruction to adjust a sensor corresponding to the first sensor data by the determined affine transformation.” Paragraph 0248 and “Although the specific implementation of the mechanism by which the sensor is calibrated exceeds the scope of this disclosure, it is generally anticipated that a sensor to be calibrated (e.g., a camera, a LIDAR sensor, a Radar sensor, etc.) may be equipped with a motorized mount, which may be controlled by one or more processors. When the one or more processors receive a calibration instruction, they may control the motorized mount to alter the calibration of the sensor (e.g., the pitch, yaw, and/or roll of the sensor).” Paragraph 0150, second sensor data coming from database).
With respect to claim 16, Alvarez, Long, Liao, and Jiang teach the method of claim 8, Chen teaches wherein the lane marker points correspond to a curved road (“to determine m pixels corresponding to the lane line, the m lane pixel point curve equation by least squares fitting, sliding window search according to a series of coordinate points obtained by solving a curve y=p (x) to express lane.” Page 8 paragraph 2), and wherein the line fitted to the lane marker points is a curved line (“to determine m pixels corresponding to the lane line, the m lane pixel point curve equation by least squares fitting, sliding window search according to a series of coordinate points obtained by solving a curve y=p (x) to express lane.” Page 8 paragraph 2).
Chen is analogous art in the same field of endeavor as the claimed invention. Chen is directed towards vehicle sensor calibration (“camera calibration algorithm” page 7 paragraph 3). A person of ordinary skill before the effective filing date of the claimed invention would have found it obvious to combine the system of Alvarez, Long, Liao, and Jiang with Chen by utilizing Chen’s lane tracking teachings, namely its deviation and discard system, in conjunction with Liao’s and Jiang’s teachings of lane and ground point extraction and of discarding outlier and noise data, as well as with Alvarez’s teaching of frame discarding due to sensor misalignment, with the expectation that doing so would lead to a more accurate calibration process due to more accurate lane marker information.
Response to Arguments
Applicant’s arguments filed 01/02/2026 have been fully considered but they are not persuasive.
In the remarks, applicant argues that Alvarez does not disclose various limitations of claim 1. Specifically, applicant argues that Alvarez does not teach collecting, from the vehicle LiDAR, first and second pluralities of LIDAR data frames from which respective first and second yaw angles are calculated (see claim 1, lines 4-8). The examiner disagrees. The mapped language corresponds to Example 71, which is directed to the system determining calibration inconsistencies based on sensor data and initiating a recalibration (a change in operational mode). Applicant disputes this interpretation largely for two reasons: Point 1, that the sensor data does not correspond to the sensor data as recited in the claim 1 limitations (see page 7, section 103, paragraph 5 and page 9, paragraphs 4-7 of remarks); and Point 2, that the operational mode mentioned in Example 71 does not correspond to Modes A and B (see pages 7 (bottom) – 9, paragraph 3 of remarks).
Regarding Point 1, the applicant argues that the sensor data is not described as originating from the same movement or coming from the same vehicle (see page 9, paragraph 6). The examiner disagrees. Firstly, Alvarez paragraph 0151 describes the distinction between first and second data as data from a sensor whose calibration is being tested and stored data being compared against it. The specification mentions many instances where calibration is tested, including in Modes A and B (see Alvarez paragraphs 0105, 0082, and 0085). Although the specification does mention second sensor data coming from additional vehicles, it does so as an example. Nowhere does the specification of Alvarez limit the comparison such that the second sensor data from the database must concern a different vehicle, a different sensor, or a different manner of travel than the first sensor data. Additionally, the specification supports calibration data being stored inside vehicle-based memory (see Alvarez paragraph 0098).
Regarding Point 2, applicant argues that Modes A and B cannot be interpreted as first and second operational modes (see page 8, paragraph 2 of remarks). The examiner disagrees. Modes A and B are two of the explicitly defined modes in which the specification's sensors operate. Thus, at the very least, Modes A and B can be interpreted as operational modes. Applicant further argues that the structure of the examples merits an interpretation akin to that of claim language, and that because Example 71 does not depend from a previous example, it should be treated as an independent claim (see page 8, paragraph 3 of remarks). The examiner does not view this as an appropriate interpretation; however, even if the example were interpreted as a claim, under its broadest reasonable interpretation it still does not exclude Modes A and B. The applicant additionally references further examples, referred to as dependent claims, and argues that they contain references to limitations not present in Modes A and B (see page 8, paragraphs 4-9 and page 9, paragraphs 1-3). The examiner points out that nowhere within the mapping of claim 1 were these additional examples cited. Furthermore, under the applicant’s claim-based interpretation, these dependent claim limitations would not be read into the parent claim, meaning that a reference teaching the parent claim need not teach the dependent claims.
Additionally, the examiner notes that Modes A and B describe pose reconstruction techniques for the process of calibrating systems utilizing a comparison of various parameters, including yaw (see Alvarez paragraphs 0115, 0116, 0117, and 0118).
Finally, the applicant argues that Alvarez does not teach where the comparison takes place, pointing to the language of claim 1 reciting that the processes occur while the vehicle is under a specific speed threshold (see page 9 (bottom) of remarks). The examiner disagrees and points out that Alvarez is not cited to teach the speed threshold limitation, rendering this point moot. However, the examiner also notes that Modes A and B both describe calibration involving a vehicle in motion (see Alvarez paragraph 0078).
Because the above arguments are not persuasive, the examiner declines to withdraw the rejections previously set forth.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REBECCA C WILLIAMS whose telephone number is (571)272-7074. The examiner can normally be reached M-F 7:30am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew W Bee can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/REBECCA COLETTE WILLIAMS/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677