Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 7, 9, and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Minear et al. (United States Patent Application Publication 20090232388 A1), hereinafter Minear.
Regarding claim 1, Minear teaches a computer-implemented method for generating a densified LiDAR point cloud ([0084] A person skilled in the art will further appreciate that the present invention may be embodied as a data processing system or a computer program product.), the method comprising:
receiving a plurality of LiDAR point clouds including a reference LiDAR point cloud and remaining LiDAR point clouds, wherein the plurality of LiDAR point clouds is obtained based on measurements by a LiDAR device of a vehicle at subsequent measurement times ([0040] An overview of the process for registering a plurality of frames i, j of 3D point cloud data will now be described in reference to FIG. 3. The process begins in step 302 and continues to step 304. Steps 302 and 304 involve obtaining 3D point cloud data 200-i, 200-j comprising frame i and j, where frame j is designated as a reference frame.);
generating the densified LiDAR point cloud by combining the reference LiDAR point cloud and the remaining LiDAR point clouds ([0041] The process continues in step 400, which involves performing a coarse registration of the data contained in frame i and j with respect to the x, y plane.; [0044] For example, a sensor may collect 25 to 40 consecutive frames consisting of 3D measurements during a collection interval. All of these frames can be aligned with the process described in FIG. 3. The process thereafter terminates in step 900 and the aggregated data from a sequence of frames can be displayed.),
wherein each remaining LiDAR point cloud is transformed by correcting for a movement of the LiDAR device (5) between the measurement time of the remaining LiDAR point cloud and the measurement time of the reference LiDAR point cloud ([0005] Usually this involves a registration process by which a sequence of image frames for a specific target taken from different sensor poses are corrected so that a single composite image can be constructed from the sequence; [0030] Sensors 102-i, 102-j can be physically different sensors of the same type, or they can represent the same sensor at two different times. [0041] The process continues in step 400, which involves performing a coarse registration of the data contained in frame i and j with respect to the x, y plane. Thereafter, a similar coarse registration of the data in frames i and j is performed in step 500 with respect to the x, z plane.), and
wherein only those points of the remaining LiDAR point clouds are combined into the densified LiDAR point cloud which are located in a predefined neighborhood around a point of the reference LiDAR point cloud ([0053] In practice, the mask is slid over the image and the center pixel contained within the mask is examined to determine if it has similar values as compared to its neighboring pixels. If not, this is often an indication that the particular pixel has been corrupted by noise.); and
further enhancing the densified LiDAR point cloud by including a plurality of further points, selected based on a statistical distribution around the points of the densified LiDAR point cloud ([0073] Referring once again to FIG. 3, it will be recalled that a fine registration process is performed in step 700 following the coarse registration process in steps 400, 500 and 600... Such an approach can involve finding x, y and z transformations that best explain the positional relationships between the data points in frame i and frame j after coarse registration has been completed.).
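For purposes of illustration only, the limitations of claim 1 mapped above can be sketched as follows. This is a minimal Python sketch of the claimed densification steps (transform into the reference frame, retain points within a predefined neighborhood, then add statistically distributed points); the function name, radius, and distribution parameters are assumptions of this sketch and are not disclosures of Minear:

```python
import numpy as np

def densify(reference, others, transforms, radius=0.5, extra_per_point=2, sigma=0.05):
    """Sketch of the claimed densification (names and constants assumed).

    reference  : (N, 3) array, the reference LiDAR point cloud
    others     : list of (M_i, 3) arrays, the remaining point clouds
    transforms : list of (R, t) pairs correcting for sensor motion
    """
    # Transform each remaining cloud into the reference frame.
    aligned = [pts @ R.T + t for pts, (R, t) in zip(others, transforms)]

    # Keep only points that fall inside a predefined neighborhood
    # (here: a fixed radius) around some point of the reference cloud.
    kept = []
    for pts in aligned:
        dists = np.linalg.norm(pts[:, None, :] - reference[None, :, :], axis=2)
        kept.append(pts[dists.min(axis=1) < radius])
    dense = np.vstack([reference] + kept)

    # Enhance further with points drawn from a statistical (here Gaussian)
    # distribution around the densified points.
    extra = dense[None].repeat(extra_per_point, 0).reshape(-1, 3)
    extra = extra + np.random.normal(0.0, sigma, extra.shape)
    return np.vstack([dense, extra])
```

The neighborhood test above is a fixed-radius assumption; claims 2-5, addressed below, instead make the spatial extension a function of measurement-time difference and depth.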
Regarding claim 7, Minear teaches the method according to claim 1, wherein the plurality of LiDAR point clouds comprises 2N+1 LiDAR point clouds, wherein the measurement times of N of the LiDAR point clouds are before the measurement time of the reference LiDAR point cloud and the measurement times of N of the LiDAR point clouds are after the measurement time of the reference LiDAR point cloud ([0060] For a sequence of frames (as is collected for objects located under a tree canopy, for instance) the center frame works best as the reference frame.).
Regarding claim 9, Minear teaches the method according to claim 1, wherein correcting for a movement of the LiDAR device between the measurement time of the remaining LiDAR point cloud and the measurement time of the reference LiDAR point cloud is performed using an iterative closest point algorithm (Fig. 3; [0042] In step 600, a determination is made as to whether coarse registration has been completed for all n frames in a sequence of frames which are to be registered. If not, then the value of j is incremented in step 602 and the process returns to step 304 to acquire the point cloud data for the next frame j.).
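For illustration of the iterative closest point algorithm recited in claim 9, a single iteration of a basic point-to-point ICP is sketched below. This sketch (nearest-neighbor matching followed by a closed-form Kabsch/SVD alignment) is a generic formulation of the algorithm, not the registration process of Minear:

```python
import numpy as np

def icp_step(source, target):
    """One iteration of basic point-to-point ICP (illustrative sketch).

    Pairs each source point with its nearest target point, then solves
    for the rigid rotation/translation minimizing squared error.
    """
    # Nearest-neighbour correspondences (brute force for clarity).
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[d.argmin(axis=1)]

    # Closed-form rigid alignment of source onto its matches (Kabsch).
    mu_s, mu_m = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return source @ R.T + t, R, t
```

In practice the step is repeated until the transform converges, which is the iterative behavior the claim language contemplates.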
Regarding claim 11, Minear teaches a device for generating a densified LiDAR point cloud ([0033] One example of a 3D imaging system that generates one or more frames of 3D point cloud data is a conventional LIDAR imaging system.), comprising:
an interface configured to receive a plurality of LiDAR point clouds including a reference LiDAR point cloud and remaining LiDAR point clouds, wherein the plurality of LiDAR point clouds is obtained based on measurements by a LiDAR device of a vehicle at subsequent measurement times ([0040] An overview of the process for registering a plurality of frames i, j of 3D point cloud data will now be described in reference to FIG. 3. The process begins in step 302 and continues to step 304. Steps 302 and 304 involve obtaining 3D point cloud data 200-i, 200-j comprising frame i and j, where frame j is designated as a reference frame.); and
a computing device configured to generate a densified LiDAR point cloud by combining the reference LiDAR point cloud and the remaining LiDAR point clouds ([0041] The process continues in step 400, which involves performing a coarse registration of the data contained in frame i and j with respect to the x, y plane.; [0044] For example, a sensor may collect 25 to 40 consecutive frames consisting of 3D measurements during a collection interval. All of these frames can be aligned with the process described in FIG. 3. The process thereafter terminates in step 900 and the aggregated data from a sequence of frames can be displayed.),
wherein the computing device is adapted to transform each remaining LiDAR point cloud by correcting for a movement of the LiDAR device between the measurement time of the remaining LiDAR point cloud and the measurement time of the reference LiDAR point cloud ([0005] Usually this involves a registration process by which a sequence of image frames for a specific target taken from different sensor poses are corrected so that a single composite image can be constructed from the sequence; [0030] Sensors 102-i, 102-j can be physically different sensors of the same type, or they can represent the same sensor at two different times. [0041] The process continues in step 400, which involves performing a coarse registration of the data contained in frame i and j with respect to the x, y plane. Thereafter, a similar coarse registration of the data in frames i and j is performed in step 500 with respect to the x, z plane.),
wherein the computing device is further configured to combine only those points of the remaining LiDAR point clouds into the densified LiDAR point cloud which are located in a predefined neighborhood around a point of the reference LiDAR point cloud ([0053] In practice, the mask is slid over the image and the center pixel contained within the mask is examined to determine if it has similar values as compared to its neighboring pixels. If not, this is often an indication that the particular pixel has been corrupted by noise.), and
wherein the computing device is further configured to further enhance the densified LiDAR point cloud by including a plurality of further points, selected based on a statistical distribution around the points of the densified LiDAR point cloud ([0073] Referring once again to FIG. 3, it will be recalled that a fine registration process is performed in step 700 following the coarse registration process in steps 400, 500 and 600... Such an approach can involve finding x, y and z transformations that best explain the positional relationships between the data points in frame i and frame j after coarse registration has been completed.).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Minear in view of Briggs et al. (United States Patent 11,105,905 B2), hereinafter Briggs.
Regarding claim 2, Minear teaches the method according to claim 1,
Minear fails to teach the method wherein a spatial extension of the predefined neighborhood around a point of the reference LiDAR point cloud for determining whether a point of a remaining LiDAR point cloud is to be combined into the densified LiDAR point cloud depends on a difference between the measurement time of said remaining LiDAR point cloud and the measurement time of the reference LiDAR point cloud.
However, Briggs teaches the method wherein a spatial extension of the predefined neighborhood around a point of the reference LiDAR point cloud for determining whether a point of a remaining LiDAR point cloud is to be combined into the densified LiDAR point cloud depends on a difference between the measurement time of said remaining LiDAR point cloud and the measurement time of the reference LiDAR point cloud ([Col. 9, lines 9-13] As discussed above, both the camera-LiDAR calibration process, and the LiDAR-LiDAR calibration process requires the use of at least three calibration surfaces that are orientated in different directions, or at least three images of the same calibration surface at different times).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of this invention to modify the invention of Minear to comprise the neighborhood based on different measurement times of LiDAR point clouds similar to Briggs, with a reasonable expectation of success. This would have the predictable result of allowing for a single position detection taken across different times to be compared to generate an accurate environmental image.
Regarding claim 3, Minear, as modified above, teaches the method according to claim 2,
Minear fails to teach the method wherein the spatial extension depends linearly on the difference between the measurement time of said remaining LiDAR point cloud and the measurement time of the reference LiDAR point cloud.
However, Briggs teaches the method wherein the spatial extension depends linearly on the difference between the measurement time of said remaining LiDAR point cloud and the measurement time of the reference LiDAR point cloud ([Col. 10, lines 7-12] This camera 3D point cloud 340 includes the 3D positions of all of the decoded ArUco markers 330 and allows the camera sensors to correlate the 3D position of each decoded ArUco marker 330 to each pixel of the 3D image of calibration surface 130 taken by the camera sensors.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of this invention to modify the invention of Minear to comprise the linear correlation similar to Briggs, with a reasonable expectation of success. This would have the predictable result of using a direct correlation between the time interval of the detection as a way to accurately measure and relate point clouds.
Regarding claim 4, Minear teaches the method according to claim 1,
Minear fails to teach the method wherein a spatial extension of the predefined neighborhood around a point of the reference LiDAR point cloud depends on a depth of said point of the reference LiDAR point cloud.
However, Briggs teaches the method wherein a spatial extension of the predefined neighborhood around a point of the reference LiDAR point cloud depends on a depth of said point of the reference LiDAR point cloud ([Col. 11, lines 11-21] In particular embodiments, the filtering may be accomplished by first generating a plane based on the camera 3D point cloud (e.g., a best-fit plane), and then determining, for each point on the LiDAR 3D point cloud, a distance of the point to the plane generated based on the camera 3D point cloud. If it is determined that the distance of any of the points on the LiDAR 3D point cloud are more than a threshold distance from the plane generated based on the camera 3D point cloud, these points are determined to be irrelevant points and thus removed from the LiDAR 3D point cloud).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of this invention to modify the invention of Minear to comprise the depth defined neighborhood similar to Briggs, with a reasonable expectation of success. This would have the predictable result of utilizing the distance inherently determined by a LiDAR device as the metric by which to eliminate noise in the point cloud.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Minear in view of Briggs, and further in view of van Hoff et al. (United States Patent Application Publication 20200134911 A1), hereinafter van Hoff.
Regarding claim 5, Minear, as modified above, teaches the method according to claim 4,
Minear fails to teach the method wherein the spatial extension depends exponentially on the depth of said point of the reference LiDAR point cloud.
However, van Hoff teaches the method wherein the spatial extension depends exponentially on the depth of said point of the reference LiDAR point cloud ([0058] In some examples, voxelization system 502 may create a point cloud from the 2D images that corresponds to each video capture device 402.; [0146] First, the nearest neighbors are identified for each video capture device by comparing the dot product between normals of the video capture devices with an empirical threshold. Second, the normals and depth values...once a candidate neighboring video capture device has been selected, exponential blending may be performed to average pixels from the two video capture devices.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of this invention to modify the invention of Minear to comprise the exponential relationship of point clouds similar to van Hoff, with a reasonable expectation of success. This would have the predictable result of using a known mathematical relationship to determine the correlation between depth data in a LiDAR point cloud.
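For illustration of the spatial-extension limitations of claims 2-5 discussed above, one possible radius model is sketched below. The function name and the constants r0, k, and a are assumptions of this sketch, not disclosures of Minear, Briggs, or van Hoff; the sketch merely shows a radius that grows linearly with the time offset from the reference measurement (claim 3) and exponentially with the depth of the reference point (claim 5):

```python
import math

def neighborhood_radius(dt, depth, r0=0.10, k=0.05, a=0.02):
    """Assumed radius model for the predefined neighborhood:
    linear in the measurement-time difference dt and exponential
    in the depth of the reference point."""
    return (r0 + k * dt) * math.exp(a * depth)
```

A point of a remaining cloud would then be combined into the densified cloud only if it lies within neighborhood_radius(dt, depth) of a reference point.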
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Minear in view of Newman et al. (United States Patent Application Publication 20200371240 A1), hereinafter Newman.
Regarding claim 6, Minear teaches the method according to claim 1, wherein the plurality of further points for enhancing the densified LiDAR point cloud is selected based on the points of the densified LiDAR point cloud ([0073] In this regard, the optimization routine can iterate between finding the various positional transformations of data points that explain the correspondence of points in the frames i, j, and then finding the closest points given a particular iteration of a positional transformation. Various mathematical techniques that are known in the art can be applied to this problem.).
Minear fails to teach the selection based on a Gaussian distribution around the points of the densified LiDAR point cloud.
However, Newman teaches the selection based on a Gaussian distribution around the points of the densified LiDAR point cloud ([0017] In the present disclosure, however, a 3D Gaussian filter is used to produce a density point cloud.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of this invention to modify the invention of Minear to comprise the selection based on Gaussian distribution points of a point cloud similar to Newman, with a reasonable expectation of success. This would have the predictable result of utilizing a known mathematical method of statistical analysis that utilizes a normalized distribution of points relative to a reference frame.
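For illustration of the Gaussian selection recited in claim 6, synthetic points may be drawn from a normal distribution centered on points of the densified cloud. The following is a minimal sketch under assumed parameters (the function name, the random seed, and sigma are not taken from Minear or Newman):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_enhance(cloud, n_extra, sigma=0.05):
    """Draw n_extra synthetic points from a Gaussian distribution
    centred on randomly chosen points of the densified cloud
    (sigma is an assumed standard deviation)."""
    centres = cloud[rng.integers(0, len(cloud), size=n_extra)]
    return np.vstack([cloud, centres + rng.normal(0.0, sigma, (n_extra, 3))])
```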
Claims 8, 10, 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Minear in view of Ditty et al. (United States Patent Application Publication 20190258251 A1), hereinafter Ditty.
Regarding claim 8, Minear teaches the method according to claim 1, wherein correcting for a movement of the LiDAR device (5) between the measurement time of the remaining LiDAR point cloud and the measurement time of the reference LiDAR point cloud ([0005] Usually this involves a registration process by which a sequence of image frames for a specific target taken from different sensor poses are corrected so that a single composite image can be constructed from the sequence; [0030] Sensors 102-i, 102-j can be physically different sensors of the same type, or they can represent the same sensor at two different times. [0041] The process continues in step 400, which involves performing a coarse registration of the data contained in frame i and j with respect to the x, y plane. Thereafter, a similar coarse registration of the data in frames i and j is performed in step 500 with respect to the x, z plane.).
Minear fails to teach the method wherein the correcting is performed using sensor data obtained from an inertial measurement unit of the vehicle.
However, Ditty teaches the method wherein the correcting is performed using sensor data obtained from an inertial measurement unit of the vehicle ([0122] The bus can be read to find steering wheel angle, ground speed, engine RPM, button positions, and other vehicle status indicators.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of this invention to modify the invention of Minear to comprise the measurement data from an inertial measurement unit similar to Ditty, with a reasonable expectation of success. This would have the predictable result of correcting for the unit's motion relative to the external environment, thereby reducing noise in the registered point cloud.
Regarding claim 10, Minear teaches the method as set forth in claim 1,
Minear fails to teach the method further comprising: providing output training data from the plurality of densified LiDAR point clouds; providing input training data given by the reference LiDAR point clouds corresponding to the plurality of densified LiDAR point clouds; and training an artificial neural network using the input training data as input and the output training data as output.
However, Ditty teaches the method further comprising: providing output training data from the plurality of densified LiDAR point clouds ([0043] The neural network is comprised of an input layer (6010), a plurality of hidden layers ( 6020), and an output layer (6030));
providing input training data given by the reference LiDAR point clouds corresponding to the plurality of densified LiDAR point clouds ([0043] The neural network is comprised of an input layer (6010)); and
training an artificial neural network using the input training data as input and the output training data as output ([0043] FIG. 3 illustrates the training of a neural network...After sufficient training, the neural network can accurately identify images, with even greater precision than humans.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of this invention to modify the invention of Minear to comprise the neural network training similar to Ditty, with a reasonable expectation of success. This would have the predictable result of creating a system that learns how to calibrate on its own for faster real-time data correction in a real-world environment.
Regarding claim 12, Minear teaches a device for generating a densified LiDAR point cloud ([0033] One example of a 3D imaging system that generates one or more frames of 3D point cloud data is a conventional LIDAR imaging system.), comprising
an interface configured to receive a plurality of LiDAR point clouds including a reference LiDAR point cloud and remaining LiDAR point clouds, wherein the plurality of LiDAR point clouds is obtained based on measurements by the LiDAR device of the vehicle at subsequent measurement times ([0040] An overview of the process for registering a plurality of frames i, j of 3D point cloud data will now be described in reference to FIG. 3. The process begins in step 302 and continues to step 304. Steps 302 and 304 involve obtaining 3D point cloud data 200-i, 200-j comprising frame i and j, where frame j is designated as a reference frame.), and
a computing device configured to generate a densified LiDAR point cloud by combining the reference LiDAR point cloud and the remaining LiDAR point clouds ([0041] The process continues in step 400, which involves performing a coarse registration of the data contained in frame i and j with respect to the x, y plane.; [0044] For example, a sensor may collect 25 to 40 consecutive frames consisting of 3D measurements during a collection interval. All of these frames can be aligned with the process described in FIG. 3. The process thereafter terminates in step 900 and the aggregated data from a sequence of frames can be displayed.),
wherein the computing device is adapted to transform each remaining LiDAR point cloud by correcting for a movement of the LiDAR device between the measurement time of the remaining LiDAR point cloud and the measurement time of the reference LiDAR point cloud ([0005] Usually this involves a registration process by which a sequence of image frames for a specific target taken from different sensor poses are corrected so that a single composite image can be constructed from the sequence; [0030] Sensors 102-i, 102-j can be physically different sensors of the same type, or they can represent the same sensor at two different times. [0041] The process continues in step 400, which involves performing a coarse registration of the data contained in frame i and j with respect to the x, y plane. Thereafter, a similar coarse registration of the data in frames i and j is performed in step 500 with respect to the x, z plane.), and
wherein the computing device is further configured to combine only those points of the remaining LiDAR point clouds into the densified LiDAR point cloud which are located in a predefined neighborhood around a point of the reference LiDAR point cloud ([0053] In practice, the mask is slid over the image and the center pixel contained within the mask is examined to determine if it has similar values as compared to its neighboring pixels. If not, this is often an indication that the particular pixel has been corrupted by noise.),
wherein the computing device is further configured to further enhance the densified LiDAR point cloud by including a plurality of further points, selected based on a statistical distribution around the points of the densified LiDAR point cloud ([0073] Referring once again to FIG. 3, it will be recalled that a fine registration process is performed in step 700 following the coarse registration process in steps 400, 500 and 600... Such an approach can involve finding x, y and z transformations that best explain the positional relationships between the data points in frame i and frame j after coarse registration has been completed.);
Minear fails to teach a driver assistance system for a vehicle, comprising: a LiDAR device configured to generate LiDAR measurement data; a control unit configured to control at least one function of the vehicle based on the generated densified LiDAR point cloud.
However, Ditty teaches a driver assistance system for a vehicle ([0120] FIG. 4 shows an example self-driving vehicle (50)), comprising:
a LiDAR device configured to generate LiDAR measurement data ([0124] one or more Light Detection and Ranging (“LIDAR”) sensors (70),);
a control unit configured to control at least one function of the vehicle based on the generated densified LiDAR point cloud ([0283] Each of the processors may independently process the sensor data, and independently provides actuation information and/or control signals that may be used to control the vehicle actuators.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of this invention to modify the invention of Minear to comprise the vehicle implementation and control unit similar to Ditty, with a reasonable expectation of success. This would have the predictable result of utilizing the point cloud registers outlined in Minear as a substitute point cloud register for Ditty that uses the process to control a vehicle through real-world environments.
Regarding claim 13, Minear, as modified above, teaches the driver assistance system according to claim 12, wherein the device for generating the densified LiDAR point cloud is further configured to correct for a movement of the LiDAR device between the measurement time of the remaining LiDAR point cloud and the measurement time of the reference LiDAR point cloud ([0005] Usually this involves a registration process by which a sequence of image frames for a specific target taken from different sensor poses are corrected so that a single composite image can be constructed from the sequence; [0030] Sensors 102-i, 102-j can be physically different sensors of the same type, or they can represent the same sensor at two different times. [0041] The process continues in step 400, which involves performing a coarse registration of the data contained in frame i and j with respect to the x, y plane. Thereafter, a similar coarse registration of the data in frames i and j is performed in step 500 with respect to the x, z plane.).
Minear fails to teach the system further comprising an inertial measurement unit (“IMU”) configured to generate sensor data, wherein the correcting uses the sensor data obtained from the IMU.
However, Ditty teaches the system further comprising an inertial measurement unit (“IMU”) configured to generate sensor data, wherein the correcting uses the sensor data obtained from the IMU ([0122] The bus can be read to find steering wheel angle, ground speed, engine RPM, button positions, and other vehicle status indicators.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of this invention to modify the invention of Minear to comprise the inertial measurement unit as presented similar to Ditty, with a reasonable expectation of success. This would have the predictable result of correcting for the unit's motion relative to the external environment, thereby reducing noise in the registered point cloud.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT WILLIAM VASQUEZ JR whose telephone number is (571)272-3745. The examiner can normally be reached Monday thru Thursday, Flex Friday, 7:00-4:00 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ROBERT HODGE can be reached at (571)272-2097. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT W VASQUEZ/Examiner, Art Unit 3645
/ROBERT W HODGE/Supervisory Patent Examiner, Art Unit 3645