Prosecution Insights
Last updated: April 19, 2026
Application No. 17/212,218

HYBRID LIDAR SYSTEM

Non-Final OA §103
Filed
Mar 25, 2021
Examiner
CHILTON, CLARA GRACE
Art Unit
3645
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Goodrich Corporation
OA Round
5 (Non-Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y
With Interview: 67%

Examiner Intelligence

Career Allow Rate: 56% (grants 56% of resolved cases: 31 granted / 55 resolved; +4.4% vs TC avg)
Interview Lift: +10.6% (moderate, ~+11% lift for resolved cases with an interview vs without)
Typical Timeline: 4y average prosecution; 43 applications currently pending
Career History: 98 total applications across all art units
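These headline numbers are simple arithmetic on the examiner's record. Below is a minimal sketch of how the figures fit together, assuming the page divides grants by resolved cases and adds the stated lifts directly (our reading of the display, not a documented formula):

    # Hypothetical reconstruction of the dashboard's headline figures.
    granted, resolved = 31, 55

    allow_rate = granted / resolved        # 0.5636... -> displayed as 56%
    tc_average = allow_rate - 0.044        # "+4.4% vs TC avg" implies TC avg ~= 52%
    with_interview = allow_rate + 0.106    # "+10.6% interview lift" -> ~67%

    print(f"Career allow rate:  {allow_rate:.1%}")      # 56.4%
    print(f"Implied TC average: {tc_average:.1%}")      # 52.0%
    print(f"With interview:     {with_interview:.1%}")  # 67.0%

The 67% "with interview" figure shown elsewhere on this page is consistent with this simple additive model.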

Statute-Specific Performance

§101: 1.4% (-38.6% vs TC avg)
§103: 58.1% (+18.1% vs TC avg)
§102: 23.4% (-16.6% vs TC avg)
§112: 15.6% (-24.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 55 resolved cases
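Because each rate is quoted with its delta against the Tech Center average, the implied baseline can be recovered by subtraction. A quick back-of-the-envelope check, assuming the deltas are plain percentage-point differences:

    # Examiner statute-specific rates and stated deltas vs TC average.
    stats = {
        "§101": (0.014, -0.386),
        "§103": (0.581, +0.181),
        "§102": (0.234, -0.166),
        "§112": (0.156, -0.244),
    }
    for statute, (rate, delta) in stats.items():
        # Implied TC average = examiner rate minus the stated delta.
        print(f"{statute}: examiner {rate:.1%}, implied TC avg {rate - delta:.1%}")

Every statute's implied baseline works out to 40.0%, which suggests the page compares all four statutes against a single pooled Tech Center estimate.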

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/19/2025 has been entered.

Response to Arguments

Applicant's arguments filed 12/19/2025 have been fully considered but they are not persuasive.

Applicant argues the cited prior art teaches only moving prisms to desired locations, capturing points only after the prism has been moved into position. Examiner disagrees. Applicant cites the language, used in the final office action, of "moving the prisms to desired positions, stopping motion, capturing points, then moving to a new position." This clearly states that prisms are moved after points are captured. Thus, this argument is not persuasive.

Applicant argues the prior art says nothing about including position data of the prism with captured points. Examiner disagrees. Being able to move the prism in response to measured points implies that the position data of the prism must be known. Thus, this argument is not persuasive.

Applicant argues the prior art says nothing about generating LiDAR data and TOF data as claimed. Examiner disagrees. This argument is unclear: applicant admits Spickermann teaches LiDAR data at [0015] and TOF data at [0022], but fails to explain how this does not meet the claims. Thus, this argument is not persuasive.

Applicant argues the cited reference does not teach a 3D point cloud generated in the specific way of flash-based LiDAR generating varying TOF for different detection points. As stated in the final office action dated 9/25/2025, Spickermann teaches, in [0014], the generation of a 3D point cloud, and further teaches, in [0008], creating a 3D image based on light detected by a 2D pixel array. Further, Spickermann describes moving prisms, illuminating a spot, and moving prisms again ([0019]) – thus, flash-based LiDAR. This meets the limitations of the claim, and the argument is not persuasive.

Claim Objections

Claims 9, 14, 15, and 17-19 are objected to because of the following informalities: they recite the limitation "the flash-based LiDAR," for which there is insufficient antecedent basis, as there is no previous mention of "a flash-based LiDAR." Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 5-11, 14-20, 23, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Spickermann (US 20190041518 A1) in view of Justice (US 20110285981 A1).

Regarding Claim 1, Spickermann teaches a method of generating LIDAR data comprising: directing a broad beam from a broad beam laser emitter ([0035] - up to 45 degree spread; Fig. 2, light source 201) through a first beam steering mechanism including a first Risley prism pair (Fig. 1, illumination prisms 123, 114; Fig. 2, first Risley prisms 205; [0017] - two prisms); scanning a scene with the broad beam from the broad beam laser emitter ([0035]); directing returns of the broad beam from the scene with a second beam steering mechanism to a LiDAR detector array ([0038] and [0051] – directing beams to image sensor 209), the second beam steering mechanism including a second Risley prism pair having at least one Risley prism co-aligned with the first Risley prism pair (Fig. 1, prisms 111, 109; Fig. 2, prisms 212; [0038]; [0020] – synchronizing positions); detecting prism positions for one or more prisms of the first Risley prism pair and one or more prisms of the second Risley prism pair ([0017] lines 1-4 - setting prism angles implies being able to detect positions; [0019] – moving prisms to desired positions implies position detection); […] generating time of flight data for the returns of the broad beam for a plurality of detector points in the LIDAR detector array for each scanning pulse of the broad beam ([0051] lines 16-18), wherein time of flight varies for each pulse from a first detector point in the LiDAR detector array to a second detector point in the LiDAR detector array to form three-dimensional (3D) data representing the scene for each scanning pulse ([0014] – 3D point cloud implies different time of flight for each pulse; [0008] – 3D image created from 2D array); and generating LiDAR data that combine each of the prism positions, […] detected positional data with the time of flight data ([0019]).

Spickermann does not teach generating geo-location data using an inertial navigation system (INS), and using the geo-location data […] to generate LiDAR data. Justice teaches a sensor system which registers sensor data over a map of the area being scanned using GPS/INS ([0069] – GPS/INS; [0071] - registering sensor data over map). It would have been obvious to use the GPS/INS system and registration of points over a map as taught by Justice in the method as taught by Spickermann because Justice's method allows for real-time location tracking of each image, allowing issues with the map to be mitigated by the real-time sensor data (see Justice [0070]-[0071]).

Regarding Claim 2, Spickermann, as modified in view of Justice, teaches the method as recited in claim 1, wherein directing the returns of the broad beam includes steering the broad beam with the beam steering mechanism over a conical field of regard (Spickermann [0037] - solid angle implies conical).

Regarding Claim 5, Spickermann, as modified in view of Justice, teaches the method as recited in claim 1, further comprising controlling the first and second beam steering mechanisms to maintain alignment of the LIDAR detector array and the broad beam laser emitter (Spickermann [0020]).
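The claim 1 mapping turns on a specific data flow: per-pixel time of flight from a single flash return, tagged with the detected Risley prism positions and, per Justice, an INS geo-location fix. A minimal sketch of that flow follows; the function names, flat detector geometry, and calibration inputs are illustrative assumptions, not anything taken from Spickermann or Justice:

    from dataclasses import dataclass

    C = 299_792_458.0  # speed of light, m/s

    @dataclass
    class LidarPoint:
        x: float             # scene coordinates, m
        y: float
        z: float
        prism_angles: tuple  # detected Risley prism rotations at capture, rad
        geo_fix: tuple       # INS-derived (lat, lon, alt) of the sensor

    def build_frame(tof_frame, prism_angles, geo_fix, pixel_dirs):
        """Combine one flash return into geo-taggable LiDAR data.

        tof_frame[i][j]  -- time of flight per detector point, s (varies
                            from detector point to detector point)
        prism_angles     -- rotations of the co-aligned prism pairs
        geo_fix          -- (lat, lon, alt) from the GPS/INS
        pixel_dirs[i][j] -- unit line-of-sight vector per pixel, assumed
                            known from calibration plus the prism solution
        """
        cloud = []
        for i, row in enumerate(tof_frame):
            for j, tof in enumerate(row):
                rng = C * tof / 2.0  # round trip -> one-way range
                ux, uy, uz = pixel_dirs[i][j]
                cloud.append(LidarPoint(rng * ux, rng * uy, rng * uz,
                                        prism_angles, geo_fix))
        return cloud

In this reading, the claimed 3D structure comes entirely from the per-pixel variation in tof_frame within a single pulse, which is what the examiner takes [0014]'s 3D point cloud to imply.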
Regarding Claim 6, Spickermann, as modified in view of Justice, teaches the method as recited in claim 1, further comprising controlling LIDAR actuation with a LIDAR controller operatively connected to the LIDAR detector array and to the broad beam laser emitter (Spickermann Fig. 2: light source modulator 214 would be understood to be able to actuate the light source).

Regarding Claim 7, Spickermann, as modified in view of Justice, teaches the method as recited in claim 1, further comprising generating a raw 3D point cloud with all metadata required for geo-registration of detected LIDAR points (Justice [0071]).

Regarding Claim 8, Spickermann, as modified in view of Justice, teaches the method as recited in claim 7, wherein the INS is operatively connected to the LIDAR detector array (Justice [0069]).

Regarding Claim 9, Spickermann, as modified in view of Justice, teaches the method as recited in claim 7, further comprising aligning metadata with the LIDAR data using a real-time computer operatively connected to the flash-based LIDAR detector array (Justice [0071]).

Regarding Claim 10, Spickermann teaches a method of generating LIDAR data comprising: directing a broad beam from a broad beam laser emitter ([0035] - up to 45 degree spread; Fig. 2, light source 201) through a first Risley prism pair (Fig. 2, first Risley prisms 205; [0017] - two prisms); scanning a scene with the broad beam from the broad beam laser emitter (Fig. 2, showing fields of view 207 created by laser beams; [0051]); directing returns of the broad beam from the scene with a second Risley prism pair to a LIDAR detector array (Fig. 2, pixel-array image sensor 209), the second Risley prism pair having at least one Risley prism co-aligned with the first Risley prism pair ([0020] – synchronizing positions); detecting prism positions for one or more prisms of the first Risley prism pair and the second Risley prism pair ([0017] lines 1-4 - setting prism angles implies being able to detect positions; [0019] – moving prisms to desired positions implies position detection); generating time of flight data for the returns of the broad beam for a plurality of detector points in the LIDAR detector array for each scanning pulse of the broad beam ([0051] lines 16-18), wherein time of flight varies for each pulse from a first detector point in the LiDAR detector array to a second detector point in the LiDAR detector array to form three-dimensional (3D) data representing the scene for each scanning pulse ([0014] – 3D point cloud implies different time of flight for each pulse); maintaining alignment of the LIDAR detector array and the broad beam laser emitter using at least one controller ([0020]); and generating LiDAR data that combine each of the prism positions, […] and the time of flight data ([0019]).

Spickermann does not teach generating geo-location data using an inertial navigation system (INS) including a global positioning system (GPS) and inertial measurement unit (IMU), and using the geo-location data […] to generate LiDAR data. Justice teaches a sensor system which registers sensor data over a map of the area being scanned using GPS/INS and associates this data with each laser pulse ([0069] – GPS/INS; [0071] - registering sensor data over map – obvious that INS includes IMU).
It would have been obvious to use the GPS/INS system and registration of points over a map as taught by Justice in the method as taught by Spickermann because Justice's method allows for real-time location tracking of each image, allowing issues with the map to be mitigated by the real-time sensor data (see Justice [0070]-[0071]).

Regarding Claim 11, Spickermann, as modified in view of Justice, teaches the method as recited in claim 10, wherein directing the returns of the broad beam includes steering the broad beam with the first Risley prism pair over a conical field of regard (Spickermann [0037] - solid angle implies conical).

Regarding Claim 14, Spickermann, as modified in view of Justice, teaches the method as recited in claim 10, further comprising controlling the first and second Risley prism pairs to maintain alignment of the flash-based LIDAR detector array and the broad beam laser emitter (Spickermann [0020]).

Regarding Claim 15, Spickermann, as modified in view of Justice, teaches the method as recited in claim 10, further comprising controlling LIDAR actuation with a LIDAR controller operatively coupled to the flash-based LIDAR detector array and to the broad beam laser emitter (Spickermann Fig. 2: light source modulator 214 would be understood to be able to actuate the light source).

Regarding Claim 16, Spickermann, as modified in view of Justice, teaches the method as recited in claim 10, further comprising generating a raw 3D point cloud with all metadata required for geo-registration of detected LIDAR points (Justice [0071] - registering sensor data over map).

Regarding Claim 17, Spickermann, as modified in view of Justice, teaches the method as recited in claim 16, wherein the INS is operatively connected to the flash-based LIDAR detector array (Justice [0069]).

Regarding Claim 18, Spickermann, as modified in view of Justice, teaches the method as recited in claim 16, further comprising aligning metadata with the LIDAR data using a real-time computer operatively connected to the flash-based LIDAR detector array (Justice [0071]).

Regarding Claim 19, Spickermann, as modified in view of Justice, teaches the method as recited in claim 10. Spickermann does not teach further comprising generating metadata associated with the returns of the broad beam detected by the flash-based LIDAR detector array using at least one sensor subsystem, but Justice teaches this limitation (Justice [0069] – GPS/INS; [0071] - registering sensor data over map).
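The conical field of regard in claims 2 and 11, which the rejection reads out of Spickermann's solid-angle disclosure, follows from standard Risley prism geometry: each thin wedge deflects the beam by roughly (n - 1) * alpha, and the pair's two deflections add as 2D vectors set by the rotation angles, sweeping the beam over a cone. A short sketch under the thin-prism approximation; the refractive index and wedge angle are illustrative values, not Spickermann's:

    import math

    def risley_pointing(theta1, theta2, n=1.5, alpha=math.radians(10)):
        """Net beam pointing for a Risley prism pair (thin-prism model).

        Each wedge deflects by delta ~= (n - 1) * alpha; the two deflections
        add as 2D vectors at rotation angles theta1, theta2 (radians).
        Returns (azimuth, off-axis angle) of the steered beam in radians.
        """
        delta = (n - 1.0) * alpha
        dx = delta * (math.cos(theta1) + math.cos(theta2))
        dy = delta * (math.sin(theta1) + math.sin(theta2))
        return math.atan2(dy, dx), math.hypot(dx, dy)

    # Aligned prisms give the maximum off-axis angle (2 * delta, the cone
    # edge); opposed prisms cancel to zero, so the pair covers the full cone.
    _, off_axis = risley_pointing(math.radians(30), math.radians(30))
    print(f"max off-axis ~ {math.degrees(off_axis):.1f} deg")  # ~10 deg here

Synchronizing the transmit and receive pairs, the point for which Spickermann's [0020] is repeatedly cited, keeps the detector's line of sight co-aligned with the illuminated spot while both cones are swept.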
Regarding Claim 20, Spickermann teaches a method of generating LIDAR data comprising: directing a broad beam from a broad beam laser emitter ([0035] - up to 45 degree spread; Fig. 2, light source 201) through a first Risley prism pair (Fig. 2, first Risley prisms 205; [0017] - two prisms); scanning a scene with the broad beam from the broad beam laser emitter (Fig. 2, showing fields of view 207 created by laser beams; [0051]); directing returns of the broad beam from the scene with a second Risley prism pair to a flash-based LIDAR detector array (Fig. 2, pixel-array image sensor 209), the second Risley prism pair having at least one Risley prism co-aligned with the first Risley prism pair (Fig. 1, prisms 111, 109; Fig. 2, prisms 212; [0038]; [0020] – synchronizing positions); […] generating time of flight data for the returns of the broad beam for a plurality of detector points in the flash-based LIDAR detector array for each scanning pulse of the broad beam ([0051] lines 16-18), wherein time of flight varies for each pulse from detector point to detector point to form three-dimensional (3D) data representing the scene for each scanning pulse ([0014] – 3D point cloud implies different time of flight for each pulse); detecting prism positions for one or more prisms of the first Risley prism pair and one or more prisms of the second Risley prism pair ([0017] lines 1-4 - setting prism angles implies being able to detect positions; [0019] – moving prisms to desired positions implies position detection); generating LiDAR data that combine each of the prism positions, […] detected positional data with the time of flight data ([0019]); and controlling the first and second Risley prism pairs to maintain alignment of the flash-based LIDAR detector array and the broad beam laser emitter ([0020]).

Spickermann does not teach generating geo-location data using an inertial navigation system (INS) including a global positioning system (GPS) and an inertial measurement unit (IMU). Justice teaches a sensor system which registers sensor data over a map of the area being scanned using GPS/INS and associates this data with each laser pulse ([0069] – GPS/INS; [0071] - registering sensor data over map – obvious that INS includes IMU). It would have been obvious to use the GPS/INS system and registration of points over a map as taught by Justice in the method as taught by Spickermann because Justice's method allows for real-time location tracking of each image, allowing issues with the map to be mitigated by the real-time sensor data (see Justice [0070]-[0071]).

Regarding Claim 23, Spickermann, as modified in view of Justice, teaches the method as recited in claim 1, further comprising merging metadata with the LIDAR data using a real-time computer operatively connected to the flash-based LIDAR detector array (Justice [0071] – building map).

Regarding Claim 24, Spickermann, as modified in view of Justice, teaches the method as recited in claim 10, further comprising merging metadata with the LIDAR data using a real-time computer operatively connected to the flash-based LIDAR detector array (Justice [0071] – building map).

Claims 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Spickermann (US 20190041518 A1), in view of Justice, and further in view of Kusevic (US 20100157280 A1).

Regarding Claim 21, Spickermann, as modified in view of Justice, teaches the method as recited in claim 1 but does not teach detecting context data associated with the scene using a context imaging module and merging the context data with the LIDAR data. Kusevic teaches a LiDAR device which is aligned with a scanning camera (Fig. 1, camera scan 112 and laser scan 102; [0020]). It would have been obvious to use the camera integration, as taught by Kusevic, with the method as taught by Spickermann, as modified in view of Justice, because, as Kusevic teaches, image data enhances the value of LiDAR data by adding elements such as color or greyscale (Kusevic [0004]), allowing more information to be gained by a user.
Regarding Claim 22, Spickermann, as modified in view of Justice, teaches the method as recited in claim 10 but does not teach detecting context data associated with the scene using a context imaging module and merging the context data with the LIDAR data. Kusevic teaches a LiDAR device which is aligned with a scanning camera (Fig. 1, camera scan 112 and laser scan 102; [0020]). It would have been obvious to use the camera integration, as taught by Kusevic, with the method as taught by Spickermann, as modified in view of Justice, because, as Kusevic teaches, image data enhances the value of LiDAR data by adding elements such as color or greyscale (Kusevic [0004]), allowing more information to be gained by a user.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLARA CHILTON, whose telephone number is (703) 756-1080. The examiner can normally be reached Monday-Friday, 6-2 MT.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Hodge, can be reached at (571) 272-2097. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CLARA G CHILTON/
Examiner, Art Unit 3645

/ROBERT W HODGE/
Supervisory Patent Examiner, Art Unit 3645

Prosecution Timeline

Mar 25, 2021
Application Filed
Sep 09, 2024
Non-Final Rejection — §103
Nov 25, 2024
Response Filed
Jan 13, 2025
Final Rejection — §103
Mar 27, 2025
Response after Non-Final Action
Apr 18, 2025
Request for Continued Examination
Apr 21, 2025
Response after Non-Final Action
May 05, 2025
Non-Final Rejection — §103
Aug 04, 2025
Response Filed
Sep 22, 2025
Final Rejection — §103
Nov 13, 2025
Response after Non-Final Action
Dec 19, 2025
Request for Continued Examination
Jan 22, 2026
Response after Non-Final Action
Jan 29, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566251
INTEGRATED AND COMPACT LIDAR MEASUREMENT SYSTEM
2y 5m to grant • Granted Mar 03, 2026
Patent 12523748
DETECTOR HAVING QUANTUM DOT PN JUNCTION PHOTODIODE
2y 5m to grant • Granted Jan 13, 2026
Patent 12481040
LOW POWER LiDAR SYSTEM WITH SMART LASER INTERROGATION
2y 5m to grant • Granted Nov 25, 2025
Patent 12474454
SENSOR WITH CROSS TALK SUPPRESSION
2y 5m to grant • Granted Nov 18, 2025
Patent 12461208
DIFFRACTIVE LIGHT DISTRIBUTION FOR PHOTOSENSOR ARRAY-BASED LIDAR RECEIVING SYSTEM
2y 5m to grant • Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 56% (67% with interview, +10.6%)
Median Time to Grant: 4y
PTA Risk: High
Based on 55 resolved cases by this examiner. Grant probability derived from career allow rate.
