Prosecution Insights
Last updated: April 18, 2026
Application No. 18/150,707

ONLINE CALIBRATION OF MISALIGNMENT BETWEEN VEHICLE SENSORS

Non-Final OA (§102, §103)
Filed: Jan 05, 2023
Examiner: WIGGER, BENJAMIN DAVID
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: GM Cruise Holdings LLC
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 2y 12m

Examiner Intelligence

Grants only 0% of cases.
Career Allow Rate: 0% (0 granted / 0 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift among resolved cases with interview)
Avg Prosecution: 2y 12m (typical timeline)
Career History: 20 total applications across all art units; 20 currently pending

Statute-Specific Performance

§103: 48.6% (+8.6% vs TC avg)
§102: 24.3% (-15.7% vs TC avg)
§112: 25.7% (-14.3% vs TC avg)
Tech Center averages are estimates; figures based on career data from 0 resolved cases.

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Claim Objections

Claim 14 is objected to because of the following informalities: the phrase "associated the first object" at the end of claim 14 is grammatically incorrect. The grammatical error could be fixed by changing it to read "associated with the first object". Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraph of 35 U.S.C. 102 that forms the basis for the rejections under this section made in this Office action:

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 12-15 and 18-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2020/0089971 (hereinafter Li).

Regarding Claim 12, Li teaches a computer-implemented method for performing online calibration of misalignment of vehicle sensors, comprising: filtering a segmented light detection and ranging (LIDAR) point cloud captured by a LIDAR sensor to generate a filtered point cloud ([0029] describes filtering the point cloud data to include only a static object); identifying a contour corresponding to a silhouette of a first object in a camera image captured by a camera ([0030] describes performing feature extraction on an identified edge of a static object), wherein the LIDAR sensor and the camera are mounted on a vehicle, and the point cloud and the camera image are captured while the vehicle is operating normally ([0016] describes how, during travel of a vehicle, the vehicle's sensors can include camera and LIDAR sensors used to detect surrounding objects); determining a distance between points of the filtered point cloud and the contour, wherein the distance corresponds to a misalignment error between the LIDAR sensor and the camera ([0034] describes determining a translation vector between LIDAR and camera data to achieve alignment); and performing correction based on the misalignment error ([0034] describes performing an alignment operation).

Regarding Claim 13, Li teaches the computer-implemented method of claim 12, wherein the segmented LIDAR point cloud is clustered, and clusters of points in the segmented LIDAR point cloud correspond to certain objects ([0021] describes classifying objects in the point cloud into types including cars, houses, trees, and utility poles).

Regarding Claim 14, Li teaches the computer-implemented method of claim 12, wherein filtering the segmented LIDAR point cloud comprises: removing points which are not associated the first object ([0004] describes filtering the point cloud data for static objects. Since the first object is a static object, points associated with moving objects would not be associated with the first object and would be removed).

Regarding Claim 15, Li teaches the computer-implemented method of claim 12, wherein the filtered point cloud has points corresponding to a silhouette of the first object ([0030] describes performing feature extraction on an identified edge of LIDAR data associated with a static object. An edge of a static object is considered to be part of its silhouette).
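For orientation, the claim 12 mapping above reduces to a simple geometric loop: project the filtered LIDAR points into the image and measure how far they land from the object's contour. Below is a minimal sketch of that idea, assuming a pinhole camera model; the function names, array layouts, and use of NumPy are illustrative assumptions, not drawn from Li or the application.

import numpy as np

def project_points(points_xyz, K, T):
    # Homogenize Nx3 LIDAR points, map LIDAR frame -> camera frame via the
    # 4x4 extrinsic T, then into pixels via the 3x3 intrinsic K.
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T @ homo.T).T[:, :3]
    cam = cam[cam[:, 2] > 0]            # keep points in front of the camera
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]     # perspective divide -> Nx2 pixels

def misalignment_error(points_xyz, contour_px, K, T):
    # Mean distance from each projected point to its nearest contour pixel,
    # used here as a proxy for the LIDAR-camera misalignment error.
    proj = project_points(points_xyz, K, T)
    d = np.linalg.norm(proj[:, None, :] - contour_px[None, :, :], axis=2)
    return float(d.min(axis=1).mean())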
Regarding Claim 18, Li teaches the computer-implemented method of claim 12, wherein performing the correction comprises: performing a post-processing procedure which transforms one or more of: (1) point clouds from the LIDAR sensor and (2) camera images from the camera, based on the misalignment error ([0034] describes determining a translation vector between the camera and the LIDAR and performing an alignment operation).

Regarding Claim 19, Li teaches the computer-implemented method of claim 12, wherein identifying the contour comprises: performing object classification or image segmentation to identify the first object in the camera image ([0019] of Li describes classification of objects in the point cloud using the image acquired by the camera and deep learning, and [0031] of Li describes how the contour is recognized based on image recognition technology).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5, 7-9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of the non-patent literature "Automatic Online Calibration of Cameras and Lasers" (hereinafter Levinson).
Regarding Claim 1, Li teaches a computer-implemented method for monitoring and addressing misalignment of vehicle sensors, comprising: receiving a light detection and ranging (LIDAR) point cloud captured by a LIDAR sensor, and a camera image captured by a camera, wherein the LIDAR sensor and the camera are mounted on a vehicle, and the LIDAR point cloud and the camera image are captured while the vehicle is operating on a road ([0016] describes how, during travel of a vehicle, the vehicle's sensors can include camera and LIDAR sensors used to detect surrounding objects); and performing online calibration based on the LIDAR point cloud and the camera image to determine a misalignment error between the LIDAR sensor and the camera ([0025] describes how sensor calibration is implemented based on camera and LIDAR data; [0034] describes determining a translation vector between the camera and the LIDAR and performing an alignment operation).

Li fails to teach: in response to the misalignment error meeting a first threshold, raising a diagnostic error; and in response to the misalignment error failing to meet the first threshold, performing misalignment correction based on the misalignment error. However, Levinson teaches: in response to the misalignment error meeting a first threshold, raising a diagnostic error (on page 5, the final paragraph of Section III describes alerting a command center if the calibration falls below a threshold); and in response to the misalignment error failing to meet the first threshold, performing misalignment correction based on the misalignment error (the same paragraph describes pausing to perform offline calibration before resuming).

Li and Levinson are both directed to online calibration systems for alignment of LIDAR and camera sensors on mobile autonomous platforms and are therefore analogous art. It would have been obvious for a person having ordinary skill in the art to improve the teachings of Li with the failure mode responses taught by Levinson in order to avoid prolonged operation in a degraded sensing state. Levinson teaches three different responses to cases of severe misalignment that vary in severity from reporting to complete cessation of activity.

Regarding Claim 2, the combination of Li and Levinson teaches the computer-implemented method of claim 1, wherein the LIDAR point cloud is segmented by an object classification process ([0019] of Li describes classification of objects in the point cloud using the image acquired by the camera and deep learning).

Regarding Claim 3, the combination of Li and Levinson teaches the computer-implemented method of claim 1, wherein performing online calibration comprises: filtering the LIDAR point cloud to remove points which are not associated with vehicles ([0021] describes how static objects include cars, meaning that the system would filter out many non-car returns).

Regarding Claim 4, the combination of Li and Levinson teaches the computer-implemented method of claim 1, wherein performing online calibration comprises: filtering the LIDAR point cloud to keep points which are associated with edges ([0030] describes performing analysis on edge portions of the detected objects).
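The claim 1 rationale above combines Li's online calibration with Levinson's threshold-gated failure responses. A hedged sketch of that control flow; the threshold value and all function names are hypothetical, since neither reference specifies them.

def raise_diagnostic_error(error):
    # Placeholder: in practice this would alert a monitoring/command system.
    print(f"DIAGNOSTIC: misalignment {error:.3f} exceeds threshold")

def apply_misalignment_correction(error):
    # Placeholder: in practice this would transform subsequent frames.
    print(f"Applying online correction for misalignment {error:.3f}")

def handle_misalignment(error, first_threshold=0.5):
    # first_threshold is an assumed value for illustration only.
    if error >= first_threshold:
        raise_diagnostic_error(error)         # claim 1: error meets the threshold
    else:
        apply_misalignment_correction(error)  # claim 1: error fails to meet it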
Regarding Claim 5, the combination of Li and Levinson teaches the computer-implemented method of claim 1, wherein performing online calibration comprises: filtering the LIDAR point cloud to remove points which are associated with moving objects ([0029] describes recognizing a static object from the surrounding objects in the point cloud).

Regarding Claim 7, the combination of Li and Levinson teaches the computer-implemented method of claim 1, wherein performing online calibration comprises: filtering the LIDAR point cloud to remove points which are beyond a threshold depth (on page 3, in the Laser Processing section, Levinson describes how depth discontinuities of greater than 30 cm are used to identify edges of objects in a point cloud and filter all other points out. Consequently, any LIDAR points adjacent to but not on the edge of a detected object would be more than a threshold 30 cm behind the detected object and would be filtered out).

Regarding Claim 8, the combination of Li and Levinson teaches the computer-implemented method of claim 1, further comprising: in response to the diagnostic error being raised, causing the vehicle to enter a first degraded state and to perform a safe stop maneuver (page 5, final paragraph of Section III of Levinson describes suspending operation when sensor calibration falls below a threshold).

Regarding Claim 9, the combination of Li and Levinson teaches the computer-implemented method of claim 1, further comprising: in response to the diagnostic error being raised, causing the vehicle to perform a safe stop maneuver and to perform offline calibration for misalignment between the LIDAR sensor and the camera (page 5, final paragraph of Section III of Levinson describes pausing to perform offline calibration before resuming).

Regarding Claim 11, the combination of Li and Levinson teaches the computer-implemented method of claim 1, wherein the LIDAR point cloud and the camera image are captured at substantially the same time ([0025] of Li describes simultaneous acquisition of data by the camera and the LIDAR).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Levinson as applied to claim 1, and further in view of CN 117368943 (hereinafter Yan).

Regarding Claim 6, the combination of Li and Levinson teaches the computer-implemented method of claim 1; however, the combination fails to teach wherein performing online calibration comprises: filtering the LIDAR point cloud to remove points which are associated with vegetation. Yan teaches filtering the LIDAR point cloud to remove points which are associated with vegetation (at page 6, lines 16-19, Yan describes performing point cloud registration … to remove noise objects, where the noise objects include vegetation). Yan, Li and Levinson all describe the manipulation of point cloud data collected by a LIDAR device and are therefore analogous art. Yan teaches that vegetation can introduce noise into a point cloud and suggests removal of vegetation from a point cloud, and at page 6, lines 43-47, goes on to state how removal of such data points ensures the quality and consistency of the point cloud data. Consequently, a person having ordinary skill in the art would have found it obvious to improve the invention of Li and Levinson by removing vegetation detections in order to obtain a higher quality LIDAR point cloud.
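The claim 7 citation relies on Levinson's use of roughly 30 cm depth discontinuities to isolate edge returns. A sketch of that filter, under the assumption that each scanline arrives as an ordered array of per-beam depths; the 30 cm value comes from the cited passage, while the array layout is assumed.

import numpy as np

def edge_points_by_depth_discontinuity(ranges, threshold_m=0.30):
    # ranges: 1D array of per-beam depths along one scanline, in meters.
    # Returns a boolean mask of beams adjacent to a depth jump > threshold_m,
    # i.e., likely object edges; all other beams would be filtered out.
    jumps = np.abs(np.diff(ranges)) > threshold_m
    mask = np.zeros(len(ranges), dtype=bool)
    mask[:-1] |= jumps   # beam just before the jump
    mask[1:] |= jumps    # beam just after the jump
    return mask

# Example: a wall at 10 m with an object edge at about 7 m.
scan = np.array([10.0, 10.0, 7.0, 7.1, 10.0])
print(edge_points_by_depth_discontinuity(scan))  # flags beams around both jumps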
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Levinson as applied to claim 1, and further in view of US 2023/0152431 (hereinafter Zhang).

Regarding Claim 10, the combination of Li and Levinson teaches the computer-implemented method of claim 1, but fails to teach further comprising: in response to the misalignment error meeting a second threshold greater than the first threshold, causing the vehicle to enter a second degraded state and to navigate to a maintenance facility. However, Zhang teaches this further limitation ([0109] of Zhang describes navigating the vehicle to a maintenance facility to address hardware issues with sensors of an autonomous vehicle). Zhang and Levinson both describe ways of managing degradation of sensors enabling self-driving operation of an autonomous platform. As described above in the rejection of claim 1, a person having ordinary skill in the art would have found it obvious to add the responses to a degraded state taught by Levinson to the teachings of Li. That same person having ordinary skill in the art would also have found it obvious to further improve the combination of Li and Levinson by adding the additional response to sensor degradation taught by Zhang in order to give the autonomous vehicle additional ways to deal with sensor degradation problems.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Li in view of US 2024/0095960 (hereinafter Qian).

Regarding Claim 16, Li teaches the computer-implemented method of claim 12, wherein determining the distance comprises: matching points in the filtered point cloud to points in the contour ([0035] describes rotating the 3D point cloud to correspond to the image plane of the camera-derived imagery); forming residual vectors measuring distances between the points in the filtered point cloud and the points in the contour; and summing magnitudes of the residual vectors ([0035] describes an iterative process performed between coordinates on the camera-derived imagery and the point cloud), wherein the summed magnitudes correspond to the misalignment error ([0035] describes translation vectors and rotation matrices derived from the iterative process corresponding to the misalignment error). Li does not teach applying a random sample consensus algorithm to match points in the filtered point cloud with the contour. However, Qian teaches applying a random sample consensus algorithm to match points in the filtered point cloud with the contour ([0052] describes the use of a RANSAC framework to match detected objects (lane blocks) from the camera in successive frames, which is part of the process for calibrating a LIDAR 112A with a camera 110A, as described in [0036]). Li and Qian are both directed to calibration of LIDAR and camera sensors during operation of an autonomous vehicle and are therefore analogous art. A person having ordinary skill in the art would have recognized that applying the known RANSAC framework described in Qian to improve the matching techniques of Li would have yielded predictable results. In particular, doing so would improve the point-to-point correlation taught by Li, thereby helping to disregard erroneous or outlying data points during the correlation.
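Claim 16 pairs Li's residual-vector error with Qian's RANSAC matching. The sketch below shows one way those pieces could fit together: fit a 2D translation on random minimal samples, keep the consensus set, and sum the inlier residual magnitudes as the misalignment error. The iteration count, inlier tolerance, and assumption of index-aligned putative matches are illustrative, not values from either reference.

import numpy as np

def ransac_translation(pts, contour_pts, iters=100, tol=2.0, seed=0):
    # pts, contour_pts: Nx2 arrays of putative point/contour matches (pixels).
    rng = np.random.default_rng(seed)
    best_t = np.zeros(2)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts))          # minimal sample: one matched pair
        t = contour_pts[i] - pts[i]         # candidate 2D translation
        residuals = np.linalg.norm(pts + t - contour_pts, axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    # Misalignment error: summed magnitudes of the inlier residual vectors.
    error = float(np.linalg.norm(
        pts[best_inliers] + best_t - contour_pts[best_inliers], axis=1).sum())
    return best_t, error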
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Li in view of US 2021/0270948 (hereinafter Villalobos-Martinez).

Regarding Claim 17, Li teaches the computer-implemented method of claim 12, wherein performing the correction comprises: performing a physical adjustment of the LIDAR sensor and the camera which reduces the misalignment error.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Yan and further in view of US 2018/0284279 (hereinafter Campbell).

Regarding Claim 20, Li teaches a vehicle, comprising: one or more light detection and ranging (LIDAR) sensors; one or more cameras ([0016] describes the first embodiment in which a vehicle has multiple sensors including a camera and a LIDAR); one or more processors (processors 416; FIG. 4 and [0048] describe a computer device with one or more processors 416 usable with the first embodiment); and one or more storage devices (storage device 428) to store point clouds generated by the one or more LIDAR sensors, camera images generated by the one or more cameras, and instructions, which when executed by the one or more processors, cause the one or more processors to perform misalignment calibration ([0054] describes a program being stored in storage device 428 for implementing the sensor calibration method described in embodiment 1) comprising: generating a filtered point cloud from a light detection and ranging (LIDAR) point cloud captured by the one or more LIDAR sensors ([0020]-[0021] describe filtering the point cloud collected by the LIDAR sensor for static objects), wherein generating comprises maintaining points corresponding to edges and removing points corresponding to moving objects ([0020]-[0021] describe filtering the point cloud collected by the LIDAR sensor for static objects); identifying a contour corresponding to a silhouette of a first object in a camera image captured by the one or more cameras ([0032] describes identifying a contour of the static object (analogous to the first object) based on object recognition), wherein the LIDAR point cloud and the camera image are aligned in time; determining a misalignment error between points of the filtered point cloud and the contour ([0030] describes performing feature extraction on an identified edge of a static object, and [0034] describes determining a translation vector and rotation matrix characterizing the misalignment between the camera and LIDAR sensor readings corresponding to the static object); and performing correction based on the misalignment error ([0034]-[0035] describe performing an alignment operation to correct the misalignment).

Li fails to teach: (1) wherein generating comprises removing points corresponding to vegetation, and (2) wherein generating comprises removing points corresponding to distant objects. However, Yan teaches wherein generating comprises removing points corresponding to vegetation (page 6, lines 16-19 of Yan describes performing point cloud registration … to remove noise objects on the target slope that do not belong to the target slope, where the noise objects include vegetation). Yan and Li both describe the manipulation of point cloud data collected by a LIDAR device and are therefore analogous art. Yan teaches that vegetation can introduce noise into a point cloud and suggests removal of vegetation from a point cloud, and at page 6, lines 43-47, Yan goes on to state how removal of such data points ensures the quality and consistency of the point cloud data. Consequently, a person having ordinary skill in the art would have found it obvious to improve the invention of Li by removing vegetation detections, as taught by Yan, in order to obtain a higher quality, less noisy LIDAR point cloud.

Li as modified by Yan still does not teach wherein generating comprises removing points corresponding to distant objects. However, Campbell teaches wherein generating comprises removing points corresponding to distant objects ([0093] describes using range-gating to filter out any point cloud data arriving from beyond a distance of 50-100 m from the LIDAR system; [0042] identifies exemplary maximum ranges beyond which a LIDAR system does not operate). Campbell and Li as modified by Yan both pertain to the detection of objects using LIDAR devices. Campbell at [0093] teaches that range-gating can be used with LIDAR devices to limit the collection of point cloud data past a particular range from the LIDAR device. Doing so prevents weak returns and other ambient noise sources from adversely affecting the collection of point cloud data (Campbell at [0034] describes problems caused by background noise). Consequently, a person having ordinary skill in the art would have found it obvious to improve the invention of Li as modified by Yan by removing distant detections using range-gating techniques in order to obtain higher quality, cleaner LIDAR point cloud data.
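The Campbell citation for claim 20 turns on range-gating: discarding returns beyond a maximum range so weak, noisy distant echoes do not pollute the cloud. A minimal sketch; the 100 m gate is chosen from the 50-100 m span in the cited passage, and the array layout is an assumption.

import numpy as np

def range_gate(points_xyz, max_range_m=100.0):
    # Keep Nx3 points whose Euclidean distance from the sensor origin is
    # within max_range_m; anything farther is treated as unreliable.
    dist = np.linalg.norm(points_xyz, axis=1)
    return points_xyz[dist <= max_range_m]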
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN WIGGER, whose telephone number is (571) 272-4208. The examiner can normally be reached 9:30am to 7:00pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Yuqing Xiao, can be reached at (571) 270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BENJAMIN DAVID WIGGER/
Examiner, Art Unit 3645

/YUQING XIAO/
Supervisory Patent Examiner, Art Unit 3645

Prosecution Timeline

Jan 05, 2023
Application Filed
Jan 05, 2026
Non-Final Rejection — §102, §103
Mar 19, 2026
Interview Requested
Mar 25, 2026
Applicant Interview (Telephonic)
Mar 25, 2026
Examiner Interview Summary
Mar 30, 2026
Response Filed


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 12m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
