DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
1. This action is in response to the applicant’s filing on March 5, 2026. Claims 1-20 are pending. Applicant’s amendments to the Claims have overcome the 112(d) and 103 rejections previously set forth in the Non-Final Office Action mailed December 5, 2025.
Claim Rejections – 35 USC § 112 (a)
2. The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL. The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
3. Claims 1 and 12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
4. Regarding Claims 1 and 12, the following analysis applies the factors set forth in In re Wands.
5. State of the prior art; level of predictability in the art:
Claims 1 and 12 (Currently Amended) recite: calculating correlation coefficients between the measurement data and each of the reference data candidates; determining a final distance offset based on the correlation coefficients. The specification states: processor 110 may estimate a distance offset c based on a correlation coefficient between the measurement data and the reference data in the range of the distance offset. In this case, the correlation coefficient refers to a relationship between a data aspect of the measurement data and a data aspect of the reference data. More specifically, the correlation coefficient represents a relationship of the increase or decrease trend of the reference data according to the increase or decrease trend of the measurement data and has a value between 0 and 1. As the value approaches 1, the data aspects of the measurement data and the reference data coincide with each other. The prior art is silent on the use of a correlation coefficient in the way described by the inventor. That is, in the art a correlation coefficient may be used to determine which alignment parameters of a LiDAR system might affect correction factors concerning data fusion or LiDAR-LiDAR calibration; special attention can be paid to those parameters with high error correlation [Design and Evaluation of a Permanently Installed Plane-Based Calibration Field for Mobile Laser Scanning Systems]. The correlation coefficient can also be used to determine whether there is any correlation in the distance offset between different axes in a 3D point cloud, i.e., the correlation between different parameters [The Universal LiDAR Error Model]. The current application uses atypical data (same axis, same channel, same parameter variables) and applies it in an atypical way (selection of a specific distance correction based on a comparison of the same variables, rather than a test for how different variables relate to one another linearly).
When viewed in light of its typical use in the art, a lack of predictability arises.
6. Amount of direction provided by the inventor:
The inventor provides no direction on the atypical use of the above-named function to be employed in the disclosed method or to be executed by the processor in the disclosed apparatus.
7. Existence of working examples:
The disclosed examples appear to be prophetic rather than working examples due to the predominant use of the present tense in example descriptions. This is further indicated by the lack of variation in the data sets provided in the figures. Specifically, Figures 8 and 13 appear to show modeled data.
8. Quantity of experimentation needed to make or use the invention based on the content of the disclosure:
At the time of filing, the level of predictability in the art for using the correlation coefficient, as described in the specification, to correct LiDAR data was low. Applicant’s disclosure provides no clear direction on how to implement the method. Further, no working examples that utilize this atypical method are known in the art. In light of the above factors, undue experimentation would be required by any person skilled in the art to practice the full scope of the claims.
9. Claims 2-11 and 13-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
10. Regarding Claims 2-11 and 13-20:
Claims 2-11 are dependent on Claim 1, and Claims 13-20 are dependent on Claim 12. A dependent claim incorporates all limitations of the independent claim it relies on and must be supported by the same enablement as the independent claim.
Claim Rejections - 35 USC § 103
11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
12. Claims 1-3, 5-9, 12-14, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Hrabe et al. (US 20210146942 A1), hereinafter Hrabe, in view of Dimsdale et al. (US 20060007422 A1), hereinafter Dimsdale, and further in view of Rodarmel et al. (The Universal LiDAR Error Model), hereinafter Rodarmel.
13. Regarding Claims 1 & 12:
Hrabe teaches a method of correcting distance distortion to improve detection accuracy of a Light Detection and Ranging (LiDAR) sensor, ([0005]: By means of non-limiting illustrative examples, to calibrate a robot with multiple light detection and ranging (LIDAR) sensors may require positioning and repositioning multiple target objects around the robot to calibrate the sensors); the method comprising: acquiring measurement data for detecting a target by the LiDAR ([0017]: The first data set corresponding to a set of coordinates generated by the at least one sensor based on at least one respective reference target along a first path of the at least one sensor).
Hrabe does not teach, determining reference data candidates from among a plurality of reference data based on differences between the measurement data and each of the plurality of reference data, wherein each of the plurality of reference data is distortion data calculated by adding a respective preset distance offset to an assumed accurate position of the target.
However, Dimsdale teaches ([0055] In use, the calibrator compensates for the apparent ranges of objects measured in the field. At the time of system construction, timing circuit 106 or 138 will be used to measure the apparent ranges of the pulses coming from resonator/attenuator 114. These values are recorded. One can assume that the fibers in the resonator/attenuator 114 will not change length or refractive index over time. Then, when a new object is measured, the measured ranges of the object will be altered depending on the apparent ranges from the calibration. For instance, if the calibrator measurements suggest that time intervals are all being measured 0.5% longer, then the same amount will be added to the measurements for the real object). Dimsdale further teaches, ([0056] The calibrator can also be used to determine variations in measured range due to variations in return intensity. Since the spacing of the pulses from the calibrator is independent of intensity, the attenuation can be varied and both pulse intensity measurements and time interval measurements can be recorded. This can be used to produce a table of measured intensity vs. measured time. This table can then be used as a correction to measurements taken from real objects, so that objects of differing intensity but the same range will appear to have the same range as long as the environmental conditions of the sensor remain unchanged. When the environment does change, a new table of corrections must be acquired. This procedure adds significantly to the system resolution and accuracy).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Hrabe with Dimsdale to include, determining reference data candidates based on differences between measurement data and reference data, wherein reference data is distortion data, since they are the same field of endeavor and results would have been predictable. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Hrabe with Dimsdale since, ([0062]: While timing interpolators are designed to provide an accurate, linear measure of the elapsed time between asynchronous events and system clock pulses, real circuits can only provide an approximation. The difference can become exceedingly important when attempting to make measurements with picosecond precision, and also when the interpolation ratios are large).
Regarding the use of correlation coefficients: Examiner has made the following analysis based on the application of correlation coefficients known in the art, i.e., the comparison of how different variables relate to one another linearly, rather than the disclosed distance correction based on a comparison of the same variables. This analysis will continue to be used for any instance of correlation coefficients occurring in the following claims.
Hrabe as modified by Dimsdale does not teach, calculating correlation coefficients between the measurement data and each of the reference data candidates; determining a final distance offset based on the correlation coefficients, wherein the final distance offset corresponds to a distance offset of a reference data candidate selected based on the correlation coefficients and correcting the measurement data using the final distance offset.
However, Rodarmel teaches ([P. 548]: Correlation coefficients (ρ) are computed based on the spatial distance between points in each dimension (Δu, Δv, Δw). In most cases, UE is determined by empirical means, such as performing data adjustments in areas with much control data, providing initial estimates of the UE and its correlations, and refining these estimates until the resulting reference variance approaches unity). Rodarmel further teaches, ([P. 546]: ULEM was developed to provide support for rigorous error propagation and adjustability in the essential lidar point cloud scenarios while efficiently storing the requisite metadata). Rodarmel also teaches, ([P. 548]: this ULEM metadata can then be passed through and utilized in the subsequent processing architecture used by lidar practitioners).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Hrabe with Rodarmel to include estimating the distance offset based on a correlation coefficient between the measurement data and the reference data in the range of the distance offset; and correcting the measurement data using the estimated distance offset, since it is the same field of endeavor and results would have been predictable. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Hrabe with Rodarmel since, (Rodarmel [P. 543]: for some applications it is necessary to quantify the accuracy (or uncertainty) of the lidar data and/or fuse the lidar data with other lidar datasets and/or with image products, sometimes in the absence of independent ground-coordinate check points. This necessitates error propagation methods applied to an error model to predict the lidar data uncertainty, or to weight the adjustable parameters).
14. Regarding Claim 12:
Hrabe teaches a processor, ([0017]: Further, the at least one processor is configured to execute the computer readable instructions to, calculate the value by comparing a respective coordinate in the first data set with a respective coordinate in the reference data set). Hrabe continues to teach, ([0032]: FIG. 5 is a process flow diagram of a method for operating a calibration room to calibrate a robot by a specialized processor according to an exemplary embodiment). Hrabe further teaches, ([0117]: Block 510 illustrates sensor units 214 receiving data from the location and orientation of sensor targets 102, 104, 106, 108, 110, and/or 112. According to at least one non-limiting exemplary embodiment, this sensor data may comprise any one or more of lateral orientation, horizontal orientation, vertical orientation, pitch, yaw, and/or roll and may be stored in a matrix within memory 224).
15. Regarding claims 2 & 13:
Hrabe as modified by Dimsdale and Rodarmel teaches, the method is performed for each channel to apply a respective final distance offset for each channel. (Hrabe: [0017]: In another non-limiting example embodiment, a system for calibrating at least one sensor a device is disclosed. Further, wherein the at least one processor is further configured to execute the computer readable instructions to receive a second data set from a different respective sensor of the plurality of sensors, the second data set corresponding to a set of coordinates generated by the respective sensor of the plurality of sensors based on a second reference target along a second path. Wherein, the first reference target is different from the second reference target, the first data set is different from the second data set, and the second path is different from the first path. The first reference target and the second reference target being spaced apart from the device).
16. Regarding Claims 3 & 14:
Hrabe as modified by Dimsdale and Rodarmel teaches, the plurality of reference data have respective preset distance offsets that are equally spaced. (Hrabe: [0067]: Each one of the plurality of standoff targets 102 may be equidistant from the pyramid target 104).
17. Regarding Claims 5 & 16:
Hrabe as modified by Dimsdale does not teach, the final distance offset is determined by selecting one of the reference data candidates corresponding to a highest correlation coefficient from among the calculated correlation coefficients.
However, Rodarmel teaches ([P. 548]: Correlation coefficients (ρ) are computed based on the spatial distance between points in each dimension (Δu, Δv, Δw). In most cases, UE is determined by empirical means, such as performing data adjustments in areas with much control data, providing initial estimates of the UE and its correlations, and refining these estimates until the resulting reference variance approaches unity). Rodarmel further teaches, ([P. 548]: A list of CU polynomial parameters, based on the seven Sensor-Space ULEM general adjustable parameters, is provided below for each CU i:
• Δxi, Δyi, Δzi: PCS positional offsets
• θ1i, θ2i, θ3i: PCS angular offsets
• Δri: range offsets
• Δẋi, Δẏi, Δżi: PCS positional offset rates
• θ̇1i, θ̇2i, θ̇3i: PCS angular offset rates
• Δṙi: range offset rate). Rodarmel also teaches, ([P. 546]: ULEM was developed to provide support for rigorous error propagation and adjustability in the essential lidar point cloud scenarios while efficiently storing the requisite metadata).
Obvious/Motivation analysis: See Claim 1.
18. Regarding Claims 6 & 17:
Hrabe as modified by Dimsdale and Rodarmel teaches, correcting of the measurement data includes subtracting the final distance offset from the measurement data and replacing the measurement data with the subtracting result. (Hrabe: [0063]: According to at least one non-limiting exemplary embodiment, the non-transitory computer-readable storage medium further contains computer readable instructions that, when executed by a specialized processing apparatus, applies transformations to sensor data to cause the sensor coupled or affixed to the robot to physically reorient in its position or apply a digital transformation to data from the sensor. The degree or measure of transformations of the sensor data being the difference between the ideal or representative measurements (corresponding to the CAD model) and the measurements acquired by the sensor).
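For illustration only, the subtract-and-replace correction recited in these claims can be sketched as follows (the function name and data values are hypothetical, not taken from the disclosure):

```python
# Sketch of the claimed correction step: subtract the final distance
# offset from each measurement and replace the measurement data with
# the result. Names and values are hypothetical.
def correct_measurements(measurement, final_offset):
    """Return the measurement data with the final offset removed."""
    return [m - final_offset for m in measurement]

measurement = [10.7, 11.6, 12.8]                 # hypothetical ranges (m)
corrected = correct_measurements(measurement, 0.5)  # hypothetical offset (m)
```

Under the claims, the corrected values would then replace the original measurement data for subsequent processing.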
19. Regarding Claims 7 & 18:
Hrabe as modified by Dimsdale and Rodarmel teaches, estimating a horizontal angle between the LiDAR and the target using the measurement data; identifying an error in the measurement data using an error map related to an error generated by a distance resolution of the LiDAR; and correcting the measurement data using the identified error. ([0085]: Sensor units 214 may include one or more exteroceptive sensors, such as sonars, light detection and ranging (“LIDAR”) sensors, radars, lasers, cameras. According to exemplary embodiments, sensor units 214 may collect raw measurements (e.g., currents, voltages, resistances, gate logic, etc.) and/or transformed measurements (e.g., distances, angles, detected points in obstacles, etc.). In some cases, measurements may be aggregated and/or summarized. Sensor units 214 may generate data based at least in part on measurements. Such data may be stored in data structures, such as matrices, arrays, queues, lists, arrays, stacks, bags, etc.). Hrabe further teaches, ([0063]: According to at least one non-limiting exemplary embodiment, the non-transitory computer-readable storage medium further contains computer readable instructions that, when executed by a specialized processing apparatus, applies transformations to sensor data to cause the sensor coupled or affixed to the robot to physically reorient in its position or apply a digital transformation to data from the sensor. The degree or measure of transformations of the sensor data being the difference between the ideal or representative measurements (corresponding to the CAD model) and the measurements acquired by the sensor. One skilled in the art would appreciate that this transformation may be virtual, adjusted by actuator units, and/or adjusted manually by an operator).
20. Regarding Claim 8:
Hrabe as modified by Dimsdale and Rodarmel teaches, generating the error map by mapping the error for each coordinate within a detection area of the LiDAR. ([0085]: According to exemplary embodiments, sensor units 214 may collect raw measurements (e.g., currents, voltages, resistances, gate logic, etc.) and/or transformed measurements (e.g., distances, angles, detected points in obstacles, etc.). In some cases, measurements may be aggregated and/or summarized. Sensor units 214 may generate data based at least in part on measurements. Such data may be stored in data structures, such as matrices, arrays, queues, lists, arrays, stacks, bags, etc.). Hrabe further teaches, ([0063]: According to at least one non-limiting exemplary embodiment, the non-transitory computer-readable storage medium further contains computer readable instructions that, when executed by a specialized processing apparatus, applies transformations to sensor data to cause the sensor coupled or affixed to the robot to physically reorient in its position or apply a digital transformation to data from the sensor. One skilled in the art would appreciate that this transformation may be virtual, adjusted by actuator units, and/or adjusted manually by an operator).
21. Regarding Claims 9 & 19:
Hrabe as modified by Dimsdale and Rodarmel teaches, generating of the error map includes generating the error map for each horizontal angle between the LiDAR and the target. ([0085]: According to exemplary embodiments, sensor units 214 may collect raw measurements (e.g., currents, voltages, resistances, gate logic, etc.) and/or transformed measurements (e.g., distances, angles, detected points in obstacles, etc.). In some cases, measurements may be aggregated and/or summarized. Sensor units 214 may generate data based at least in part on measurements. Such data may be stored in data structures, such as matrices, arrays, queues, lists, arrays, stacks, bags, etc.). Hrabe further teaches, ([0063]: According to at least one non-limiting exemplary embodiment, the non-transitory computer-readable storage medium further contains computer readable instructions that, when executed by a specialized processing apparatus, applies transformations to sensor data to cause the sensor coupled or affixed to the robot to physically reorient in its position or apply a digital transformation to data from the sensor. The degree or measure of transformations of the sensor data being the difference between the ideal or representative measurements (corresponding to the CAD model) and the measurements acquired by the sensor. One skilled in the art would appreciate that this transformation may be virtual, adjusted by actuator units, and/or adjusted manually by an operator).
22. Claims 4, 10, 15, & 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hrabe et al. (US 20210146942 A1), hereinafter Hrabe, in view of Dimsdale et al. (US 20060007422 A1), hereinafter Dimsdale, further in view of Rodarmel et al. (The Universal LiDAR Error Model), hereinafter Rodarmel, as applied to Claims 1, 7, 12-13 & 18, further in view of Meinherz et al. (US 20160124089 A1), hereinafter Meinherz.
23. Regarding Claims 4 & 15, Hrabe as modified by Dimsdale and Rodarmel does not teach, each of the differences between the measurement data and each of the plurality of reference data is calculated for a center pixel of a channel.
However, Meinherz teaches, ([0065]: Image data is received at TOF sensor device corresponding to an image of a viewing area monitored by the device. At 504, pixel array information is generated by the imaging sensor device based on the image data received at step 502. At 506, TOF analysis is performed on one or more pixels in order to determine distance information for an object or surface corresponding to the one or more pixels). Meinherz further teaches, ([0066]: At 508, a current focal length of a receiving lens element of the TOF sensor device is determined. The TOF sensor device uses auto-focus capabilities to focus the lens on the object or surface corresponding to the one or more pixels prior to performing the TOF distance analysis at step 506. As such, the current focal length is indicative of the distance of the object or surface from the TOF sensor device. At 510, a determination is made regarding whether the TOF distance matches the focal length. In this regard, the TOF distance may be assumed to match the focal length if the two values are within a defined tolerance range of one another). Meinherz also teaches, ([FIG. 1A & FIG. 1B]: showing that the object corresponds to a center pixel of a channel).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Hrabe, in view of Dimsdale, further in view of Rodarmel, with Meinherz to include calculating the difference between the measurement data and reference data using a center pixel of a channel, since it is the same field of endeavor and results would have been predictable. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Hrabe, in view of Rodarmel, with Meinherz in order to provide a robust onboard calibration reference for any device requiring range-finding feedback, especially considering that, (Meinherz, [0040]: There are a number of factors that can compromise measurement accuracy of TOF sensors. For example, many TOF sensors are sensitive to temperature, in that temperatures outside a rated tolerance can introduce distance measurement offset errors).
24. Regarding Claims 10 & 20, Hrabe as modified by Dimsdale and Rodarmel does not teach, the correcting of the measurement data using the identified error includes correcting the measurement data for each pixel using the error identified for each pixel.
However, Meinherz teaches, ([0066]: At 508, a current focal length of a receiving lens element of the TOF sensor device is determined. The TOF sensor device uses auto-focus capabilities to focus the lens on the object or surface corresponding to the one or more pixels prior to performing the TOF distance analysis at step 506. As such, the current focal length is indicative of the distance of the object or surface from the TOF sensor device. At 510, a determination is made regarding whether the TOF distance matches the focal length. In this regard, the TOF distance may be assumed to match the focal length if the two values are within a defined tolerance range of one another. If the TOF distance matches the focal length, the methodology ends, and no correction factor is applied. Alternatively, if it is determined at step 510 that the TOF distance does not match the focal length, the methodology moves to step 512, where a correction factor is applied to the TOF distance determined at step 506 based on a difference between the TOF distance and the focal length).
Obvious/Motivation analysis: See Claims 4 & 15.
25. Regarding Claim 20, Meinherz as discussed above is incorporated herein, Meinherz further teaches a processor ([0004]: A time-of-flight (TOF) sensor comprising at least one processor).
26. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Hrabe et al. (US 20210146942 A1), hereinafter Hrabe, in view of Dimsdale et al. (US 20060007422 A1), hereinafter Dimsdale, further in view of Rodarmel et al. (The Universal LiDAR Error Model), hereinafter Rodarmel, as applied to Claims 1 & 7, further in view of Kaempchen et al. (US 20060290920 A1), hereinafter Kaempchen.
27. Regarding Claim 11, Hrabe as modified by Dimsdale and Rodarmel does not teach, estimating of the horizontal angle includes estimating the horizontal angle through a regression line calculated using linear regression from the measurement data.
However, Kaempchen teaches, ([0135]: For the determination of the yaw angle straight regression lines extending on the second calibration surfaces 64 and their angle relative to the longitudinal axis 45 of the vehicle, which corresponds to the yaw angle, are again determined by using distance image points on the second calibration surfaces 64. Here also it is essentially distance data that is used so that errors in the angular determination are not significant).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Hrabe, in view of Dimsdale, further in view of Rodarmel, with Kaempchen to include estimating the horizontal angle through a regression line, since it is the same field of endeavor and results would have been predictable. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Hrabe, in view of Dimsdale, further in view of Rodarmel with Kaempchen since, (Kaempchen, [0005]: In order to be able to precisely determine the position of detected articles relative to the vehicle the position and alignment of the distance image sensor and thus also of the scanned area relative to the vehicle must be precisely known).
Response to arguments
28. Applicant has traversed the Examiner's original grounds for rejection of Claims 1 & 12 based on amended claim language. (See Applicant's response, pages 8-9, "Claim Rejections 35 U.S.C. 103".) Applicant's amendments filed 03/05/2026 have rendered Applicant's arguments moot in view of the new grounds of rejection.
Conclusion
29. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES W NAPIER whose telephone number is (571)272-7451. The examiner can normally be reached Monday - Friday 8:00 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Helal Algahaim can be reached at (571) 270-5227. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.W.N./Examiner, Art Unit 3645
/HELAL A ALGAHAIM/SPE, Art Unit 3645