DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 27 January 2026 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 27 January 2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
The amendments filed 27 January 2026 have been entered. Claims 1-9, 11-20, and newly added claims 21 and 22 are pending in the application; claim 10 has been cancelled. Applicant's amendments to the claims overcome each and every rejection previously set forth in the Final Rejection mailed 27 August 2025.
Response to Arguments
Applicant’s arguments, see pages 7-12, filed 27 January 2026, with respect to the rejections of claims 1, 19, and 20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejections have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Heinonen (USPGPub 20210116567 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7, 12, and 19-22 are rejected under 35 U.S.C. 103 as being unpatentable over Becker et al. (WO 2017116585 A1) in view of Hsu et al. (USPGPub 20110307206 A1), Heidemann et al. (USPGPub 20170067734 A1), Heinonen (USPGPub 20210116567 A1), and Pivac et al. (USPGPub 20200173777 A1).
Regarding claim 1, Becker teaches a method, comprising: causing, by a processing system of a distance sensor including at least one processor, a light projecting system of the distance sensor to project a three-dimensional pattern onto an object, wherein the three-dimensional pattern comprises a plurality of points of light (¶91, By noting the position of the intersection point relative to the position of the camera lens optical axis 4544, the distance from the projector (and camera) to the object surface can be determined using the principles of triangulation; ¶94, light from a scanner may be projected in a line pattern to collect 3D coordinates over a line. Alternatively, light from a scanner may be projected to cover an area, thereby obtaining 3D coordinates over an area on an object surface; and ¶89, The electrical circuit 219 may include one or more microprocessors, digital signal processors, memory, and other types of signal conditioning and/or storage circuits); causing, by the processing system, a light receiving system of the distance sensor to capture an image of the three-dimensional pattern projected onto the object (¶101, the cameras are configured to image points of light on an object or in an environment); and calculating, by the processing system, a first set of three-dimensional coordinates for a first point of the plurality of points of light, wherein the calculating is based on an appearance of the first point in the image and knowledge of a trajectory of the first point (¶94, light from a scanner may be projected in a line pattern to collect 3D coordinates over a line. 
Alternatively, light from a scanner may be projected to cover an area, thereby obtaining 3D coordinates over an area on an object surface; ¶102, using coded patterns, different characters, different shapes, different thicknesses or sizes, or different colors, for example, may be used to provide distinctive elements; ¶111, The three dimensional coordinates of the workpiece 2528 is measured by the scanner camera 2530 by using the principles of triangulation; and see remainder of ¶111 for further details). However, Becker fails to explicitly teach wherein the distance sensor includes a temperature sensor; retrieving, by the processing system, a first distance measurement characteristic for the first point from a memory of the distance sensor, wherein the first distance measurement characteristic includes a temperature that is measured by the temperature sensor during a calibration of the distance sensor; wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature; appending, by the processing system, the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting, by the processing system, a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Hsu teaches wherein the distance sensor (DMD) includes a temperature sensor (TS/1610) (see figure 12; and ¶60, The temperature sensor 1610 is utilized for measuring the ambient temperature TEMP.sub.AMB of the distance-measuring device DMD); and wherein a first distance measurement characteristic includes a temperature that is measured by the temperature sensor (TS/1610) during a calibration of the distance sensor (DMD) (¶¶53-55, step 1210: providing a temperature sensor TS for measuring the ambient temperature TEMP.sub.AMB of the distance-measuring device DMD; step 1220: calculating a calibrated imaging location D.sub.CS.sub.--.sub.CAB according to the ambient temperature TEMP.sub.AMB and the imaging location D.sub.CS1; step 1230: calculating a calibrated measured distance D.sub.M according to the calibrated imaging location D.sub.CS.sub.--.sub.CAB.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Becker to incorporate the teachings of Hsu to further include a temperature sensor that assists in calibration because, [i]n this way, when the distance-measuring device measures the measured object, the error due to the variation of the ambient temperature is avoided according to the calibrating method (Hsu, abstract). However, the combination fails to explicitly teach retrieving, by the processing system, the first distance measurement characteristic for the first point from a memory of the distance sensor, wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature; appending, by the processing system, the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting, by the processing system, a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Heidemann teaches retrieving, by the processing system, a first distance measurement characteristic for the first point from a memory of the distance sensor, wherein the first distance measurement characteristic is measured during a calibration of the distance sensor (¶35, The point 220 is corrected in the read-out data by applying a correction factor to remove the effects of lens aberrations; and ¶46, The triangular arrangement of the 3D imager 300 may also be used to automatically update compensation/calibration parameters. Compensation parameters are numerical values stored in memory, for example, in an internal electrical system of an 3D measurement device or in another external computing unit. Such parameters may include the relative positions and orientations of the cameras and projector in the 3D imager. The compensation parameters may relate to lens characteristics such as lens focal length and lens aberrations. They may also relate to changes in environmental conditions such as temperature. Sometimes the term calibration is used in place of the term compensation. Often compensation procedures are performed by the manufacturer to obtain compensation parameters for a 3D imager. In addition, compensation procedures are often performed by a user. User compensation procedures may be performed when there are changes in environmental conditions such as temperature).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker and Hsu to incorporate the teachings of Heidemann to further include a calibration and calibration parameters (i.e. distance characteristics) because [c]ompensation parameters are used to correct imperfections or nonlinearities in the mechanical, optical, or electrical system to improve measurement accuracy (Heidemann, ¶47). However, the combination fails to explicitly teach wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature; appending, by the processing system, the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting, by the processing system, a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Heinonen teaches wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature (¶99, This is being done to compensate for the varying operating conditions, like the operating temperature of the LEDs in the first emitter and the second emitter, ambient temperature, wind conditions, operating time of the LiDAR device, other sources of temperature variation (such as thermal connections between different components of the LiDAR device as part of the integration), etc. over a period of time of operation of the LiDAR device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, and Heidemann to incorporate the teachings of Heinonen to sense both ambient temperature and the temperature of components of the LiDAR device because, [t]his way the LiDAR device may always be configured to operate at optimum adjusted parameters for the first emitter and the second emitter in spite of time varying operating conditions of the LiDAR device (Heinonen, ¶99). However, the combination fails to explicitly teach appending, by the processing system, the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting, by the processing system, a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Pivac teaches appending, by the processing system, the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting, by the processing system, a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic (¶172, the system also typically includes a look-up table of calibration data stored in memory of the one or more electronic processing devices, the calibration data including pixel position values and range correlated to camera focusing data, so that observed target pixel array coordinates have camera focusing data applied to thereby apply range correction in the determination of distance to targets; and NOTE: in order to correct for errors, the data must be output).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, Heidemann, and Heinonen to incorporate the teachings of Pivac to further apply calibration parameters to the coordinate system because [t]his enables the pixel array coordinates to be corrected for lens distortion and camera errors (Pivac, ¶172).
Regarding claim 2, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the method of claim 1, further comprising: repeating, by the processing system, the calculating, the appending, and the outputting for a second point of the plurality of points, independently of performing the calculating, the appending, and the outputting for the first point (Becker, ¶91, The distance from the projector to other points on the line of light 4526, that is points on the line of light that do not lie in the plane of the paper of FIG. 4, may similarly be found; and ¶101, wherein the cameras are configured to image points of light on an object or in an environment).
Regarding claim 3, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the method of claim 1, wherein the first distance measurement characteristic comprises a characteristic of the first point that affects an ability of the processing system to accurately detect the first point within the three-dimensional pattern and to identify the first point from among the plurality of points (Heidemann, see ¶46; and ¶47, Compensation parameters are used to correct imperfections or nonlinearities in the mechanical, optical, or electrical system to improve measurement accuracy).
Regarding claim 4, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the method of claim 3, wherein the first distance measurement characteristic comprises a brightness of the first point (Heidemann, ¶56, the diffraction grating is configured to give some of the projected spots more power than the others, thereby enabling some of the spots to be distinguished from the others).
Regarding claim 5, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the method of claim 3, wherein the first distance measurement characteristic comprises a physical profile of the first point (Becker, ¶102, using coded patterns, different characters, different shapes, different thicknesses or sizes, or different colors, for example, may be used to provide distinctive elements, also known as coded elements or coded features… A coded feature on the source pattern of light 2570 may be identified on the photosensitive array 2580).
Regarding claim 6, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the method of claim 3, wherein the first distance measurement characteristic comprises a capture optics factor associated with the first point (Heidemann, see figure 2, lenses 214 and 234 (i.e. capture optics); and ¶35, The point 220 is corrected in the read-out data by applying a correction factor to remove the effects of lens aberrations).
Regarding claim 7, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the method of claim 3, wherein the first distance measurement characteristic comprises a calibration specification associated with the first point (Heidemann, ¶46, The triangular arrangement of the 3D imager 300 may also be used to automatically update compensation/calibration parameters. Compensation parameters are numerical values stored in memory, for example, in an internal electrical system of an 3D measurement device or in another external computing unit. Such parameters may include the relative positions and orientations of the cameras and projector in the 3D imager. The compensation parameters may relate to lens characteristics such as lens focal length and lens aberrations. They may also relate to changes in environmental conditions such as temperature. Sometimes the term calibration is used in place of the term compensation. Often compensation procedures are performed by the manufacturer to obtain compensation parameters for a 3D imager. In addition, compensation procedures are often performed by a user. User compensation procedures may be performed when there are changes in environmental conditions such as temperature).
Regarding claim 12, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the method of claim 1, wherein the first distance measurement characteristic comprises a unique identifier of the first point (Becker, ¶102, using coded patterns, different characters, different shapes, different thicknesses or sizes, or different colors, for example, may be used to provide distinctive elements, also known as coded elements or coded features… A coded feature on the source pattern of light 2570 may be identified on the photosensitive array 2580).
Regarding claim 19, Becker teaches a distance sensor including a processing system with at least one processor, wherein the processing system performs operations (¶89, The electrical circuit 219 may include one or more microprocessors, digital signal processors, memory, and other types of signal conditioning and/or storage circuits), the operations comprising: causing a light projecting system of the distance sensor to project a three-dimensional pattern onto an object, wherein the three-dimensional pattern comprises a plurality of points of light (¶91, By noting the position of the intersection point relative to the position of the camera lens optical axis 4544, the distance from the projector (and camera) to the object surface can be determined using the principles of triangulation; and ¶94, light from a scanner may be projected in a line pattern to collect 3D coordinates over a line. Alternatively, light from a scanner may be projected to cover an area, thereby obtaining 3D coordinates over an area on an object surface); causing a light receiving system of the distance sensor to capture an image of the three-dimensional pattern projected onto the object (¶101, the cameras are configured to image points of light on an object or in an environment); and calculating a first set of three-dimensional coordinates for a first point of the plurality of points of light, wherein the calculating is based on an appearance of the first point in the image and knowledge of a trajectory of the first point (¶94, light from a scanner may be projected in a line pattern to collect 3D coordinates over a line. 
Alternatively, light from a scanner may be projected to cover an area, thereby obtaining 3D coordinates over an area on an object surface; ¶102, using coded patterns, different characters, different shapes, different thicknesses or sizes, or different colors, for example, may be used to provide distinctive elements; ¶111, The three dimensional coordinates of the workpiece 2528 is measured by the scanner camera 2530 by using the principles of triangulation; and see remainder of ¶111 for further details). However, Becker fails to explicitly teach a non-transitory machine-readable storage medium encoded with instructions executable by a processing system; a temperature sensor; retrieving a first distance measurement characteristic for the first point from a memory of the distance sensor, wherein the first distance measurement characteristic includes a temperature that is measured by the temperature sensor during a calibration of the distance sensor; wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature; appending the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Hsu teaches a temperature sensor (TS/1610) (see figure 12; and ¶60, The temperature sensor 1610 is utilized for measuring the ambient temperature TEMP.sub.AMB of the distance-measuring device DMD); and wherein a first distance measurement characteristic includes a temperature that is measured by the temperature sensor (TS/1610) during a calibration of the distance sensor (DMD) (¶¶53-55, step 1210: providing a temperature sensor TS for measuring the ambient temperature TEMP.sub.AMB of the distance-measuring device DMD; step 1220: calculating a calibrated imaging location D.sub.CS.sub.--.sub.CAB according to the ambient temperature TEMP.sub.AMB and the imaging location D.sub.CS1; step 1230: calculating a calibrated measured distance D.sub.M according to the calibrated imaging location D.sub.CS.sub.--.sub.CAB.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Becker to incorporate the teachings of Hsu to further include a temperature sensor that assists in calibration because, [i]n this way, when the distance-measuring device measures the measured object, the error due to the variation of the ambient temperature is avoided according to the calibrating method (Hsu, abstract). However, the combination fails to explicitly teach a non-transitory machine-readable storage medium encoded with instructions executable by a processing system; retrieving the first distance measurement characteristic for the first point from a memory of the distance sensor, wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature; appending the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Heidemann teaches a non-transitory machine-readable storage medium encoded with instructions executable by a processing system (¶7, One or more processors are provided that are configured to execute computer readable instructions); and retrieving a first distance measurement characteristic for the first point from a memory of the distance sensor, wherein the first distance measurement characteristic is measured during a calibration of the distance sensor (¶35, The point 220 is corrected in the read-out data by applying a correction factor to remove the effects of lens aberrations; and ¶46, The triangular arrangement of the 3D imager 300 may also be used to automatically update compensation/calibration parameters. Compensation parameters are numerical values stored in memory, for example, in an internal electrical system of an 3D measurement device or in another external computing unit. Such parameters may include the relative positions and orientations of the cameras and projector in the 3D imager. The compensation parameters may relate to lens characteristics such as lens focal length and lens aberrations. They may also relate to changes in environmental conditions such as temperature. Sometimes the term calibration is used in place of the term compensation. Often compensation procedures are performed by the manufacturer to obtain compensation parameters for a 3D imager. In addition, compensation procedures are often performed by a user. User compensation procedures may be performed when there are changes in environmental conditions such as temperature).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker and Hsu to incorporate the teachings of Heidemann to further include a calibration and calibration parameters (i.e. distance characteristics) because [c]ompensation parameters are used to correct imperfections or nonlinearities in the mechanical, optical, or electrical system to improve measurement accuracy (Heidemann, ¶47). Additionally, it would have been obvious to provide instructions in order for the device to function properly. However, the combination fails to explicitly teach wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature; appending the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Heinonen teaches wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature (¶99, This is being done to compensate for the varying operating conditions, like the operating temperature of the LEDs in the first emitter and the second emitter, ambient temperature, wind conditions, operating time of the LiDAR device, other sources of temperature variation (such as thermal connections between different components of the LiDAR device as part of the integration), etc. over a period of time of operation of the LiDAR device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, and Heidemann to incorporate the teachings of Heinonen to sense both ambient temperature and the temperature of components of the LiDAR device because, [t]his way the LiDAR device may always be configured to operate at optimum adjusted parameters for the first emitter and the second emitter in spite of time varying operating conditions of the LiDAR device (Heinonen, ¶99). However, the combination fails to explicitly teach appending the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Pivac teaches appending the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic (¶172, the system also typically includes a look-up table of calibration data stored in memory of the one or more electronic processing devices, the calibration data including pixel position values and range correlated to camera focusing data, so that observed target pixel array coordinates have camera focusing data applied to thereby apply range correction in the determination of distance to targets; and NOTE: in order to correct for errors, the data must be output).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, Heidemann, and Heinonen to incorporate the teachings of Pivac to further apply calibration parameters to the coordinate system because [t]his enables the pixel array coordinates to be corrected for lens distortion and camera errors (Pivac, ¶172).
Regarding claim 20, Becker teaches a distance sensor, comprising: a processing system including at least one processor, the processing system performing operations (¶89, The electrical circuit 219 may include one or more microprocessors, digital signal processors, memory, and other types of signal conditioning and/or storage circuits); the operations comprising: causing a light projecting system of the distance sensor to project a three-dimensional pattern onto an object, wherein the three-dimensional pattern comprises a plurality of points of light (¶91, By noting the position of the intersection point relative to the position of the camera lens optical axis 4544, the distance from the projector (and camera) to the object surface can be determined using the principles of triangulation; and ¶94, light from a scanner may be projected in a line pattern to collect 3D coordinates over a line. Alternatively, light from a scanner may be projected to cover an area, thereby obtaining 3D coordinates over an area on an object surface); causing a light receiving system of the distance sensor to capture an image of the three-dimensional pattern projected onto the object (¶101, the cameras are configured to image points of light on an object or in an environment); and calculating a first set of three-dimensional coordinates for a first point of the plurality of points of light, wherein the calculating is based on an appearance of the first point in the image and knowledge of a trajectory of the first point (¶94, light from a scanner may be projected in a line pattern to collect 3D coordinates over a line. 
Alternatively, light from a scanner may be projected to cover an area, thereby obtaining 3D coordinates over an area on an object surface; ¶102, using coded patterns, different characters, different shapes, different thicknesses or sizes, or different colors, for example, may be used to provide distinctive elements; ¶111, The three dimensional coordinates of the workpiece 2528 is measured by the scanner camera 2530 by using the principles of triangulation; and see remainder of ¶111 for further details). However, Becker fails to explicitly teach a temperature sensor; a non-transitory machine-readable storage medium encoded with instructions executable by the processing system, retrieving a first distance measurement characteristic for the first point from a memory of the distance sensor, wherein the first distance measurement characteristic includes a temperature that is measured by the temperature sensor during a calibration of the distance sensor; wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature; appending the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Hsu teaches a temperature sensor (TS/1610) (see figure 12; and ¶60, The temperature sensor 1610 is utilized for measuring the ambient temperature TEMP.sub.AMB of the distance-measuring device DMD); and wherein a first distance measurement characteristic includes a temperature that is measured by the temperature sensor (TS/1610) during a calibration of the distance sensor (DMD) (¶¶53-55, step 1210: providing a temperature sensor TS for measuring the ambient temperature TEMP.sub.AMB of the distance-measuring device DMD; step 1220: calculating a calibrated imaging location D.sub.CS.sub.--.sub.CAB according to the ambient temperature TEMP.sub.AMB and the imaging location D.sub.CS1; step 1230: calculating a calibrated measured distance D.sub.M according to the calibrated imaging location D.sub.CS.sub.--.sub.CAB.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Becker to incorporate the teachings of Hsu to further include a temperature sensor that assists in calibration because, [i]n this way, when the distance-measuring device measures the measured object, the error due to the variation of the ambient temperature is avoided according to the calibrating method (Hsu, abstract). However, the combination fails to explicitly teach a non-transitory machine-readable storage medium encoded with instructions executable by the processing system, retrieving the first distance measurement characteristic for the first point from a memory of the distance sensor, wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature; appending the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Heidemann teaches a non-transitory machine-readable storage medium encoded with instructions executable by the processing system (¶7, One or more processors are provided that are configured to execute computer readable instructions), and retrieving a first distance measurement characteristic for the first point from a memory of the distance sensor, wherein the first distance measurement characteristic is measured during a calibration of the distance sensor (¶35, The point 220 is corrected in the read-out data by applying a correction factor to remove the effects of lens aberrations; and ¶46, The triangular arrangement of the 3D imager 300 may also be used to automatically update compensation/calibration parameters. Compensation parameters are numerical values stored in memory, for example, in an internal electrical system of an 3D measurement device or in another external computing unit. Such parameters may include the relative positions and orientations of the cameras and projector in the 3D imager. The compensation parameters may relate to lens characteristics such as lens focal length and lens aberrations. They may also relate to changes in environmental conditions such as temperature. Sometimes the term calibration is used in place of the term compensation. Often compensation procedures are performed by the manufacturer to obtain compensation parameters for a 3D imager. In addition, compensation procedures are often performed by a user. User compensation procedures may be performed when there are changes in environmental conditions such as temperature).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker and Hsu to incorporate the teachings of Heidemann to further include a calibration and calibration parameters (i.e. distance characteristics) because [c]ompensation parameters are used to correct imperfections or nonlinearities in the mechanical, optical, or electrical system to improve measurement accuracy (Heidemann, ¶47). Additionally, it would have been obvious to provide instructions in order for the device to function properly. However, the combination fails to explicitly teach wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature; appending the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Heinonen teaches wherein the temperature measured includes an ambient temperature and at least one distance sensor component temperature (¶99, This is being done to compensate for the varying operating conditions, like the operating temperature of the LEDs in the first emitter and the second emitter, ambient temperature, wind conditions, operating time of the LiDAR device, other sources of temperature variation (such as thermal connections between different components of the LiDAR device as part of the integration), etc. over a period of time of operation of the LiDAR device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, and Heidemann to incorporate the teachings of Heinonen to sense both ambient temperature and the temperature of components of the LiDAR device because, [t]his way the LiDAR device may always be configured to operate at optimum adjusted parameters for the first emitter and the second emitter in spite of time varying operating conditions of the LiDAR device (Heinonen, ¶99). However, the combination fails to explicitly teach appending the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic.
However, Pivac teaches appending the first distance measurement characteristic to the first set of three-dimensional coordinates; and outputting a set of data including the first set of three-dimensional coordinates appended with the distance measurement characteristic (¶172, the system also typically includes a look-up table of calibration data stored in memory of the one or more electronic processing devices, the calibration data including pixel position values and range correlated to camera focusing data, so that observed target pixel array coordinates have camera focusing data applied to thereby apply range correction in the determination of distance to targets; and NOTE: in order to correct for errors, the data must be output).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, Heidemann, and Heinonen to incorporate the teachings of Pivac to further apply calibration parameters to the coordinate system because [t]his enables the pixel array coordinates to be corrected for lens distortion and camera errors (Pivac, ¶172).
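For illustration only, the claimed "appending" and "outputting" steps addressed above can be sketched as follows. The data structure and field names are hypothetical assumptions, not drawn from any cited reference; the sketch only shows a measurement characteristic retrieved from sensor memory being attached to a point's three-dimensional coordinates before output.

```python
# Hypothetical sketch of appending a distance measurement characteristic
# (e.g., a calibration temperature) to a set of 3D coordinates. Names and
# types are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MeasuredPoint:
    x: float
    y: float
    z: float
    characteristic: Optional[float] = None  # e.g., temperature during calibration

def append_characteristic(coords, characteristic):
    """Attach the characteristic retrieved from sensor memory to the point."""
    x, y, z = coords
    return MeasuredPoint(x, y, z, characteristic)

# The output set of data: coordinates with the characteristic appended.
point = append_characteristic((1.0, 2.0, 3.0), characteristic=24.5)
```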
Regarding claim 21, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the method of claim 2 further comprising: calculating a second set of three-dimensional coordinates for the second point; and determining a second distance measurement characteristic based on the second point; wherein the second set of three-dimensional coordinates is calculated independently from the first set of three-dimensional coordinates (Becker, see ¶¶91, 94, 101, 102, and 111 for details; and ¶91, The distance from the projector to other points on the line of light 4526, that is points on the line of light that do not lie in the plane of the paper of FIG. 4, may similarly be found; and ¶101, wherein the cameras are configured to image points of light on an object or in an environment).
Regarding claim 22, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the method of claim 1 wherein the at least one distance sensor component temperature includes a temperature of a light source (Heinonen, ¶99, This is being done to compensate for the varying operating conditions, like the operating temperature of the LEDs in the first emitter and the second emitter, ambient temperature, wind conditions, operating time of the LiDAR device, other sources of temperature variation (such as thermal connections between different components of the LiDAR device as part of the integration), etc. over a period of time of operation of the LiDAR device).
Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Becker et al. (WO 2017116585 A1) in view of Hsu et al. (USPGPub 20110307206 A1), Heidemann et al. (USPGPub 20170067734 A1), Heinonen (USPGPub 20210116567 A1), and Pivac et al. (USPGPub 20200173777 A1) as applied to claims 7 and 3 above, and further in view of Schneider et al. (USPGPub 20190389365 A1).
Regarding claim 8, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the calibration specification (Heidemann, ¶46). However, the combination fails to explicitly teach wherein the calibration specification comprises a number of measured points forming a trajectory associated with the first point during calibration of the distance sensor.
However, Schneider teaches wherein the calibration specification comprises a number of measured points forming a trajectory associated with the first point during calibration of the distance sensor (¶41, scenario illustrated in FIG. 1 at the same time describes the basic principle according to which the initial calibration is able to be performed in order to determine the 3D beam characteristic of the segments of the headlight S…the initial calibration may be performed at four different distances, for instance 5 m, 10 m, 15 m and 25 m. The pattern MU is projected at each of the set distances A, and the characteristic points CP1′, CP2′, CP3′ are detected in each image B of the projection of the pattern MU, which characteristic points preferably correspond to corners of the light field ST of the pattern MU. The same characteristic points CP1′, CP2′, CP3′ of all of the evaluated projections of the light pattern MU give 3D sets of points from which the associated trajectories TR1, TR2, TR3 are able to be formed by linear modeling (for example linear regression in the 3D space). These may ultimately be transformed from the image coordinate system KS′ into the 3D world coordinate system KS, and then give the corresponding beams S1, S2, S3. After initial calibration has been performed, the linear equations of the beams S1, S2, S3 are then known, that is to say their receptor points and direction vectors).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, Heidemann, Heinonen, and Pivac to incorporate the teachings of Schneider to further include forming a trajectory in order to continually calibrate the machine even when the components of said machine are moving or have been altered, providing a device that is able to be correctly calibrated despite movement/offset.
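For illustration only, the trajectory formation cited from Schneider (linear modeling of characteristic points observed at several calibration distances) can be sketched as a least-squares line fit. The specific fitting form and sample points below are hypothetical assumptions, not taken from the reference.

```python
# Illustrative sketch: form a trajectory from a characteristic point measured
# at several calibration distances by fitting x(z) and y(z) as lines (linear
# regression), loosely in the spirit of Schneider's initial calibration.

def fit_trajectory(points):
    """Least-squares fit of x and y as linear functions of z.

    Returns ((x_intercept, dx_dz), (y_intercept, dy_dz)) describing a 3D line.
    """
    n = len(points)
    zs = [p[2] for p in points]
    z_mean = sum(zs) / n
    var_z = sum((z - z_mean) ** 2 for z in zs)

    def fit(axis):
        vals = [p[axis] for p in points]
        v_mean = sum(vals) / n
        slope = sum((z - z_mean) * (v - v_mean) for z, v in zip(zs, vals)) / var_z
        return v_mean - slope * z_mean, slope

    return fit(0), fit(1)

# One characteristic point observed at four calibration distances (e.g., 5-25 m).
observations = [(0.1, 0.2, 5.0), (0.2, 0.4, 10.0), (0.3, 0.6, 15.0), (0.5, 1.0, 25.0)]
x_fit, y_fit = fit_trajectory(observations)
```

Once the fitted line (the trajectory) is stored, later observations of the same characteristic point can be compared against it, which is the basis for the deviation measurement addressed under claim 9.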
Regarding claim 9, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the distance measurement characteristic (Heidemann, ¶46). However, the combination fails to explicitly teach wherein the first distance measurement characteristic comprises an amount by which the first point deviates from a stored trajectory associated with the first point.
However, Schneider teaches wherein the first distance measurement characteristic comprises an amount by which the first point deviates from a stored trajectory associated with the first point (¶5, the incorrect position of the headlight can result from the determination of an offset of characteristic points in a light pattern in comparison with corresponding initial characteristic points in the image from the vehicle camera. The incorrect position of the headlight may likewise result from the determination of a rotation of trajectories of the characteristic points in comparison with corresponding initial trajectories).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, Heidemann, Heinonen, and Pivac to incorporate the teachings of Schneider to include the measurement of deviation in order to continually calibrate the machine even when the components of said machine are moving or have been altered, providing a device that is able to be correctly calibrated despite movement/offset.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Becker et al. (WO 2017116585 A1) in view of Hsu et al. (USPGPub 20110307206 A1), Heidemann et al. (USPGPub 20170067734 A1), Heinonen (USPGPub 20210116567 A1), and Pivac et al. (USPGPub 20200173777 A1) as applied to claim 1 above, and further in view of Haas et al. (DE 102017117614 A1).
Regarding claim 11, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the distance measurement characteristic (Heidemann, ¶46). However, the combination fails to explicitly teach wherein the first distance measurement characteristic comprises at least one selected from a group of: a minimum value for a z coordinate of the first set of three-dimensional coordinates and a maximum value for the z coordinate of the first set of three-dimensional coordinates.
However, Haas teaches wherein the first distance measurement characteristic comprises at least one selected from a group of: a minimum value for a z coordinate of the first set of three-dimensional coordinates and a maximum value for the z coordinate of the first set of three-dimensional coordinates (¶12, The two positions can also be selected so that they correspond to a minimum and a maximum distance at which the distance measurement should work in real operation of the vehicle; and ¶13, If the first position and the second position on the trajectory of a structure are determined, the section of the trajectory lying between these positions describes all possible positions of the associated structure in the image plane of the vehicle camera (i.e. in an image captured by the vehicle camera) at distances between the vehicle and the projection surface that lie within the functional range of the distance measurement).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, Heidemann, Heinonen, and Pivac to incorporate the teachings of Haas to further include calibration through the minimum and maximum distances in order to determine the functional range of the device.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Becker et al. (WO 2017116585 A1) in view of Hsu et al. (USPGPub 20110307206 A1), Heidemann et al. (USPGPub 20170067734 A1), Heinonen (USPGPub 20210116567 A1), and Pivac et al. (USPGPub 20200173777 A1) as applied to claim 1 above, and further in view of Karaoguz et al. (USPGPub 20200219242 A1).
Regarding claim 13, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the distance measurement characteristic (Heidemann, ¶46). However, the combination fails to explicitly teach wherein the first distance measurement characteristic comprises an indicator that describes a confidence in a detection and recognition of the first point.
However, Karaoguz teaches wherein the first distance measurement characteristic comprises an indicator that describes a confidence in a detection and recognition of the first point (¶103, the additional information item moreover includes one or more supplementary indicators, such as a confidence index indicating level of confidence in detection of the one or more obstacle(s) 62).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, Heidemann, Heinonen, and Pivac to incorporate the teachings of Karaoguz to include confidence indication in order to account for the precision and/or standardization of the detected object, as well as alerting the user to changes in the variability of the determined objects.
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Becker et al. (WO 2017116585 A1) in view of Hsu et al. (USPGPub 20110307206 A1), Heidemann et al. (USPGPub 20170067734 A1), Heinonen (USPGPub 20210116567 A1), Pivac et al. (USPGPub 20200173777 A1), and Karaoguz et al. (USPGPub 20200219242 A1) as applied to claim 13 above, and further in view of Schuster (U.S. Patent No. 10325485 B1).
Regarding claim 14, Becker as modified by Hsu, Heidemann, Heinonen, Pivac, and Karaoguz teaches confidence in a detection (Karaoguz, ¶103). However, the combination fails to explicitly teach wherein the confidence is based on conditions under which the detection and recognition occurred.
However, Schuster teaches wherein the confidence is based on conditions under which the detection and recognition occurred (col. 14, lines 65-67 and col. 15, lines 1-22, after raw measured data 302 has been received and object detection component 206 has identified objects or people corresponding to the items of measured data 302, weighing component 208 applies weights to each source of the raw measured data 302 indicating a confidence in the reliability or accuracy of the data source to yield weighed sensor data 406… the reflectivity of an object within the sensor's field of view can affect the accuracy with which the sensor can detect the object. An imaging sensor's detection abilities may also be less reliable when detecting objects that are near the end of the sensor's detection range).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, Heidemann, Heinonen, Pivac, and Karaoguz to incorporate the teachings of Schuster to include a confidence indicator for distance and reflectivity in order to account for variations in both the sensing environment and the objects being sensed as well as to evaluate future states or behaviors.
Regarding claim 15, Becker as modified by Hsu, Heidemann, Heinonen, Pivac, Karaoguz, and Schuster teaches the method of claim 14, wherein the conditions include a reflectance of the object and a distance of the object (Schuster, col. 14, lines 65-67 and col. 15, lines 1-22, after raw measured data 302 has been received and object detection component 206 has identified objects or people corresponding to the items of measured data 302, weighing component 208 applies weights to each source of the raw measured data 302 indicating a confidence in the reliability or accuracy of the data source to yield weighed sensor data 406… the reflectivity of an object within the sensor's field of view can affect the accuracy with which the sensor can detect the object. An imaging sensor's detection abilities may also be less reliable when detecting objects that are near the end of the sensor's detection range).
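For illustration only, the condition-based confidence addressed under claims 14-15 can be sketched as follows. The functional form, thresholds, and parameter names are hypothetical assumptions, not taken from Schuster; the sketch only shows confidence decreasing with low object reflectance and with proximity to the end of the sensor's detection range.

```python
# Hypothetical sketch of a confidence indicator that depends on the conditions
# of detection (object reflectance and object distance). All values and the
# linear falloff model are illustrative assumptions.

def detection_confidence(reflectance, distance, max_range=10.0):
    """Return a 0..1 confidence for a detected point.

    Confidence degrades linearly as the object approaches the end of the
    sensor's detection range, scaled by the object's reflectance.
    """
    range_factor = max(0.0, 1.0 - distance / max_range)
    return max(0.0, min(1.0, reflectance)) * range_factor

# A moderately reflective object well within range yields high confidence;
# the same object at the edge of the detection range yields zero confidence.
near = detection_confidence(reflectance=0.8, distance=2.5)
far = detection_confidence(reflectance=0.8, distance=10.0)
```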
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Becker et al. (WO 2017116585 A1) in view of Hsu et al. (USPGPub 20110307206 A1), Heidemann et al. (USPGPub 20170067734 A1), Heinonen (USPGPub 20210116567 A1), and Pivac et al. (USPGPub 20200173777 A1) as applied to claim 1 above, and further in view of Akagi (USPGPub 20190271778 A1).
Regarding claim 17, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the outputting of data (Pivac, ¶110, FPGA that combines the data from all of the cameras in the array to calculate a resulting position and velocity of the camera array and outputs the data; and NOTE (for all three references): in order for these devices to communicate distance and position data, they must output the data). However, the combination fails to explicitly teach outputting, for a second point of the plurality of points that the processing system fails to detect, an identifier without a distance measurement characteristic.
However, Akagi teaches outputting, for a second point of the plurality of points that the processing system fails to detect, an identifier without a distance measurement characteristic (¶27, even if the first optical distance measurement sensor fails, the determination device still determines whether an intrusion occurs and outputs a signal indicating intrusion detection when determining that there is an intrusion. Since the determination device determines whether an intrusion occurs and outputs a signal indicating intrusion detection when determining that there is an intrusion even if the first optical distance measurement sensor fails, it has high reliability in determining intrusion).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, Heidemann, Heinonen, and Pivac to incorporate the teachings of Akagi to allow for the outputting of an indicator despite a sensing failure in order to provide the user with as much information as possible even when a distance cannot be determined.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Becker et al. (WO 2017116585 A1) in view of Hsu et al. (USPGPub 20110307206 A1), Heidemann et al. (USPGPub 20170067734 A1), Heinonen (USPGPub 20210116567 A1), and Pivac et al. (USPGPub 20200173777 A1) as applied to claim 1 above, and further in view of Smith et al. (USPGPub 20200341116 A1).
Regarding claim 18, Becker as modified by Hsu, Heidemann, Heinonen, and Pivac teaches the distance measurement characteristic (Heidemann, ¶46). However, the combination fails to explicitly teach wherein the first distance measurement characteristic is re-measured and updated in the memory prior to the causing the light projecting system to project the three-dimensional pattern.
However, Smith teaches wherein the first distance measurement characteristic is re-measured and updated in the memory prior to the causing the light projecting system to project the three-dimensional pattern (¶48, the system 10 can recalibrate the relative positions of the field of illumination FOI and field of view FOV in the field, e.g., before, during, and/or after operation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Becker, Hsu, Heidemann, Heinonen, and Pivac to incorporate the teachings of Smith to perform a remeasurement for calibration purposes before use of the device, as changes in alignment may arise at any time; permitting recalibration before operation therefore provides flexibility of the device without wasted time and energy.
Allowable Subject Matter
Claim 16 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 16, the prior art of record, individually or in combination, fails to teach the method of claim 1 as claimed, more specifically in combination with wherein the first distance measurement characteristic comprises a repair history of the distance sensor.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIN R GARBER whose telephone number is (571)272-4663. The examiner can normally be reached M-F 0730-1730.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Georgia Y Epps can be reached at (571)272-2328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIN R GARBER/Examiner, Art Unit 2878