DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The following addresses applicant’s remarks/amendments dated 2 December 2025.
Claims 17, 24, 26-29, 31, and 34-36 were amended. Claims 23, 25, and 33 were cancelled. New claims 37-39 were added. Therefore, claims 17-22, 24, 26-32, and 34-39 are currently pending in the application and are addressed below.
Response to Arguments
Applicant’s arguments, see pages 9-12 of the Remarks, filed 2 December 2025, with respect to the rejections of claims 17, 28, and 31 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Niclass et al., US 20120075615 A1 in view of Masuda, US 20180232897 A1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 17-22, 26-28, 31-32, and 36-39 are rejected under 35 U.S.C. 103 as being unpatentable over Niclass et al., US 20120075615 A1 (“Niclass”) in view of Masuda, US 20180232897 A1 (“Masuda”).
Regarding claim 17, Niclass discloses a solid-state lidar device, comprising: a laser generator configured to generate a pulsed laser beam that is directed on a target (Fig. 1, laser diodes LD 30, LD driver parts 40, object 70, Paragraph [0048]); an optical lens arrangement configured to collect the laser beam after it is reflected by the target to form a reflected laser beam, the optical lens arrangement having a focal length and providing a rear focal plane (Fig. 1, lens 90, Paragraph [0048]); a solid-state sensing array positioned at the rear focal plane of the optical lens arrangement (Fig. 1, 2D array of photo detectors 100, Paragraph [0048]), the solid-state sensing array comprising at least a first sensor and a second sensor configured to detect the reflected laser beam, wherein the first sensor and the second sensor are spaced from each other by a first sensor distance (Fig. 2, photodetector array contains 2 sensors, photo sensitive area 200 and APD 210, spaced by a peripheral circuit 220, Paragraph [0049]); and at least one processor configured to: obtain multiple measured distances of the target from pulsed time-of-flight measurements utilizing the laser generator and a plurality of sensors of the solid-state sensing array (Fig. 3, TDC 330, histogram circuit 340, signal processing circuit 350, Paragraph [0050], Fig. 4, measurements 1-M, Paragraph [0057]), […].
Niclass does not teach:
each measured distance of the multiple measured distances corresponding to a different sensor of the solid-state sensing array; and
obtain at least one spatial coordinate for the target from the multiple measured distances using a calibration parameter indicative of a ratio of the first sensor distance and the focal length, comprising:
calculating an optimized value of the calibration parameter by fitting a fitting function to a point cloud function that comprises provisional spatial coordinates for the different spatial locations of the target, wherein the provisional spatial coordinates are obtained from the multiple measured distances using provisional values for the calibration parameter, and the optimized value of the calibration parameter is a provisional value of the calibration parameter which optimizes the fitting; and
obtaining the at least one spatial coordinate for the target using the optimized value of the calibration parameter.
However, Masuda teaches an imaging element with a plurality of imaging pixels (Fig. 2, imaging element 60, imaging pixel group 60A, imaging pixels 60A1, Paragraph [0110]). Masuda also teaches a three-dimensional coordinate calculation function based on two pixel coordinates, the imaging position distance, and the focal length of the imaging lens. The depth coordinate specifically includes a ratio between the focal length and the distance between pixel coordinates (Expression (1), Paragraph [0152]-[0154]). The use of two pixel coordinates in the depth calculation also implies that the successive measurements hit different pixels in the imaging element.
Masuda also teaches a method for deriving an imaging position distance from the designated pixel coordinates (Fig. 9, derivation unit 112, Paragraph [0167]). In deriving the imaging position distance, the derivation unit first decides the plane equation. Then, using the plane equation, the derivation unit derives the imaging position distance (Expression (3), Paragraph [0172]-[0174]). Specifically, using the pixel coordinates, which depend on the ratio of the focal length to the pixel distance, the derivation unit optimizes the parameters of the plane equation. The plane equation, along with the focal length and pixel coordinates, is then used to calculate the imaging distance (Fig. 12, steps 228-234, Paragraph [0203]-[0210]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Niclass’ optical rangefinder by including a derivation unit that uses a plane equation and multiple pixel coordinates to calculate distance, which is disclosed by Masuda. One of ordinary skill in the art would have been motivated to make this modification in order to “derive the imaging position distance with a high level of accuracy”, as suggested by Masuda (Paragraph [0022]).
Regarding claim 18, Niclass, as modified in view of Masuda, discloses the device according to claim 17, wherein the first sensor and the second sensor are single-photon avalanche diodes (SPADs) arranged on a common substrate of the solid-state sensing array (Niclass, Fig. 2, APD 201, Paragraph [0054]: APD may operate in Geiger mode).
Regarding claim 19, Niclass, as modified in view of Masuda, discloses the device according to claim 18, wherein the solid-state sensing array further comprises a third sensor configured to detect the reflected laser beam (Niclass, Fig. 2, photo sensitive area 200, APD 201, Paragraph [0049]), and wherein the first sensor, the second sensor and the third sensor are arranged in a one-dimensional arrangement (Niclass, Fig. 2, photo sensitive area 200, APD 201, pixel 230, 3 pixels aligned in one row or column, Paragraph [0049]; See also Paragraph [0048]: receiver embodiments may be a line of detectors).
Regarding claim 20, Niclass, as modified in view of Masuda, discloses the device according to claim 17, wherein the solid-state sensing array further comprises a third sensor configured to detect the reflected laser beam, and wherein the first sensor, the second sensor and the third sensor are arranged in a one-dimensional arrangement (Niclass, Fig. 2, photo sensitive area 200, APD 201, pixel 230, 3 pixels aligned in one row or column, Paragraph [0049]; See also Paragraph [0048]: receiver embodiments may be a line of detectors).
Regarding claim 21, Niclass, as modified in view of Masuda, discloses the device according to claim 20, wherein the second sensor and the third sensor define a second sensor distance that is equal to the first sensor distance (Niclass, Fig. 2, photo sensitive area 200, APD 201, peripheral circuit, each pixel 230 has same layout, Paragraph [0049]).
Regarding claim 22, Niclass, as modified in view of Masuda, discloses the device according to claim 17, wherein the solid-state sensing array further comprises a third sensor configured to detect the reflected laser beam, and wherein the second sensor and the third sensor define a second sensor distance that is equal to the first sensor distance (Niclass, Fig. 2, photo sensitive area 200, APD 201, peripheral circuit, each pixel 230 has same layout, Paragraph [0049]).
Regarding claim 26, Niclass, as modified in view of Masuda, discloses the device according to claim 17, wherein the fitting function is a linear function representable as a flat plane (Masuda, Expression (3), Paragraph [0172]-[0174], Fig. 12, steps 228-234, Paragraph [0203]-[0210]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Niclass’ optical rangefinder by including a derivation unit that uses a plane equation and multiple pixel coordinates to calculate distance, which is disclosed by Masuda. One of ordinary skill in the art would have been motivated to make this modification in order to “derive the imaging position distance with a high level of accuracy”, as suggested by Masuda (Paragraph [0022]).
Regarding claim 27, Niclass, as modified in view of Masuda, discloses the device according to claim 17, wherein obtaining the at least one spatial coordinate for the target further comprises modifying at least one measured distance by at least one additional sensor-specific calibration parameter indicative of inaccuracy for the at least one measured distance for at least one sensor of the solid-state sensing array (Masuda, Expression (1), Paragraph [0152]-[0154]; See also Fig. 12, steps 228-234, Paragraph [0203]-[0210]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Niclass’ optical rangefinder by including a derivation unit that uses a plane equation and multiple pixel coordinates to calculate distance, which is disclosed by Masuda. One of ordinary skill in the art would have been motivated to make this modification in order to “derive the imaging position distance with a high level of accuracy”, as suggested by Masuda (Paragraph [0022]).
Claims 28, 31-32, and 36-37 are method claims corresponding to apparatus claims 17-18 and 26-27, and are rejected for the same reasons.
Regarding claim 38, Niclass, as modified in view of Masuda, discloses the device according to claim 17, further comprising a diffuser configured to spread out the laser beam from the laser generator (Niclass, Fig. 1, diffuser 10, Paragraph [0048]).
Regarding claim 39, Niclass, as modified in view of Masuda, discloses the device according to claim 17, wherein the solid-state sensing array comprises a two-dimensional arrangement of sensors (Niclass, Fig. 1, 2D array of photodetectors 100, Paragraph [0048]).
Claims 24, 29-30, and 34-35 are rejected under 35 U.S.C. 103 as being unpatentable over Niclass, as modified in view of Masuda, in further view of Kusevic et al., US 20100157280 A1 (“Kusevic”).
Regarding claim 24, Niclass, as modified in view of Masuda, discloses the device according to claim 17.
Niclass, as modified in view of Masuda, does not disclose the following; however, Kusevic teaches: wherein the fitting function is a linear function representable as a straight line (Kusevic, Fig. 5, steps 514 to 520, Paragraph [0029]: “for three distances the fit would be a linear model”).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the optical rangefinder and dimension deriving function, disclosed by Niclass and Masuda, by performing a least squares adjustment to determine the best fit parameters, which is disclosed by Kusevic. One of ordinary skill in the art would have been motivated to make this modification in order to correct for alignment differences, as suggested by Kusevic (abstract).
Regarding claim 29, Niclass, as modified in view of Masuda, discloses the method according to claim 28.
Niclass, as modified in view of Masuda, does not teach: wherein the target comprises a flat surface facing the laser generator, and wherein the laser beam is reflected at the flat surface.
However, Kusevic teaches using a flat wall with positioned targets to determine correction parameters for a LIDAR reference coordinate system (Fig. 2, target surface 140, scanning targets 142, Paragraph [0023]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the optical rangefinder and dimension deriving function, disclosed by Niclass and Masuda, by setting up calibration targets along a flat wall, which is disclosed by Kusevic. One of ordinary skill in the art would have been motivated to make this modification in order to correct for alignment differences, as suggested by Kusevic (abstract).
Regarding claim 30, Niclass, as modified in view of Masuda, discloses the method according to claim 28.
Niclass, as modified in view of Masuda, does not teach: wherein the scanning is performed with a major surface of the solid-state sensing array being positioned non-parallel with respect to the target.
However, Kusevic teaches using a flat wall with positioned targets to determine correction parameters for a LIDAR reference coordinate system, where the camera is vertically offset from the scanner (Fig. 2, target surface 140, scanning targets 142, Paragraph [0023]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the optical rangefinder and dimension deriving function, disclosed by Niclass and Masuda, by setting up calibration targets along a flat wall, which is disclosed by Kusevic. One of ordinary skill in the art would have been motivated to make this modification in order to correct for alignment differences, as suggested by Kusevic (abstract).
Claim 34 is a method claim corresponding to apparatus claim 24 and is rejected for the same reasons.
Regarding claim 35, Niclass, as modified in view of Masuda and Kusevic, discloses the method according to claim 34, wherein obtaining the at least one spatial coordinate for the target further comprises modifying at least one measured distance by at least one additional sensor-specific calibration parameter indicative of inaccuracy for the at least one measured distance for at least one sensor of the solid-state sensing array (Masuda, Expression (1), Paragraph [0152]-[0154]; See also Fig. 12, steps 228-234, Paragraph [0203]-[0210]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Niclass’ optical rangefinder by including a derivation unit that uses a plane equation and multiple pixel coordinates to calculate distance, which is disclosed by Masuda. One of ordinary skill in the art would have been motivated to make this modification in order to “derive the imaging position distance with a high level of accuracy”, as suggested by Masuda (Paragraph [0022]).
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL N NGUYEN whose telephone number is (571)270-5405. The examiner can normally be reached Monday - Friday 8 am - 5:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yuqing Xiao can be reached at (571) 270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RACHEL NGUYEN/Examiner, Art Unit 3645
/YUQING XIAO/Supervisory Patent Examiner, Art Unit 3645