DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on 08/01/2025 has been entered. Claims 26-49 are pending. Claims 27, 28, 30, 31, 34, and 43 are amended. Claims 46-49 are new.
Response to Arguments
Applicant’s arguments, see Remarks, filed 08/01/2025, with respect to the rejection of claims 26-45 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration and search, a new ground of rejection under 35 U.S.C. 103 is made in view of Wei. Therefore, a new non-final Office action is set forth herein.
Specification
The disclosure is objected to because of the following informalities: Paragraph 044, line 14, contains a typo; the line recites “the the”.
Appropriate correction is required.
The abstract of the disclosure is objected to because the abstract contains more than 150 words. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 26, 28-31, 33-36, 38-40, 42-45, and 47-49 are rejected under 35 U.S.C. 103 as being unpatentable over Wee et al. (U.S. Publication No. 2012/0081544 A1) hereinafter Wee in view of Oxford et al. (EP No. 2,728,376 A1) hereinafter Oxford further in view of Wei et al. (U.S. Publication No. 2018/0316873 A1) hereinafter Wei.
Regarding claim 26, Wee discloses a navigational system for a host vehicle, the system comprising: at least one processor comprising circuitry and a memory, wherein the memory includes instructions that when executed by the circuitry cause the at least one processor to [see Figure 7 below – depicts an image acquisition unit (processing unit) with a memory (RAM)]:
[Figure 7 of Wee]
receive a stream of images captured by a camera onboard the host vehicle, wherein the captured images are representative of an environment surrounding the host vehicle [see Paragraphs 0042 and 0049 - discusses using a video camera to monitor the front of a vehicle (lane markings), the camera mounted on a vehicle];
receive an output of a LIDAR onboard the host vehicle, wherein the output of the LIDAR is representative of a plurality of laser reflections from at least a portion of the environment surrounding the host vehicle [see Paragraph 0051 - discusses that the LIDAR output is reflected back to the detector, the LIDAR is mounted on the vehicle];
determine at least one indicator of relative alignment between the output of the LIDAR and at least one image captured by the camera [see Paragraphs 0064-0065 - discusses that a parameter of alignment of the LIDAR and camera is determined during calibration, see Paragraph 0032 - discusses how the camera pixels and LIDAR pixels are matched (aligned) during co-registration (calibration)].
However, Wee fails to disclose:
determining the at least one indicator of relative alignment comprises determining a transform between the at least one image captured by the camera and the output of the LIDAR and is based on aligning, in image space, the output of the LIDAR and the at least one image; and
determine, using aligned LIDAR output based on the at least one indicator of relative alignment, a distance to at least one object in the environment surrounding the host vehicle.
Oxford discloses determining at least one indicator of relative alignment comprises determining a transform [see Paragraph 0041 – discusses determining a metric which reflects the quality of alignment of a LIDAR and camera] between the at least one image captured by the camera and the output of the LIDAR and is based on aligning, in image space, the output of the LIDAR and the at least one image [see Paragraphs 0036-0038 – discusses determining the alignment in image space by transforming LIDAR data to image space (camera's frame of reference)].
Oxford suggests that determining validity of calibration (of a LIDAR and camera) is needed after bumps, knocks, or vibrations [see Paragraphs 0005-0006].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the navigational system as taught by Wee to determine a transform between the at least one image captured by the camera and the output of the LIDAR in image space based on aligning the output of the LIDAR and the at least one image as taught by Oxford in order to validate calibrations of a LIDAR and a camera after bumps, knocks, or vibrations [Oxford, see Paragraphs 0005-0006].
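As an illustrative aid only, the following Python sketch shows one conventional way to align LIDAR output with a camera image in image space by transforming LIDAR points into the camera's frame of reference, consistent with the technique Oxford describes; the function name, parameters, and the use of pinhole intrinsics are hypothetical and are not drawn from the cited references.

```python
# Minimal sketch, assuming known (hypothetical) LIDAR-to-camera extrinsics
# R, t and camera intrinsics K; not taken from Wee or Oxford.
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project Nx3 LIDAR points into pixel coordinates (image space)."""
    # Rigid transform into the camera's frame of reference.
    points_cam = points_lidar @ R.T + t
    # Keep only points in front of the camera.
    points_cam = points_cam[points_cam[:, 2] > 0]
    # Pinhole projection into image space.
    pixels_h = points_cam @ K.T
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]
    return pixels, points_cam[:, 2]  # pixel coordinates and depths
```

An indicator of relative alignment can then be computed in image space, for example by scoring how well the projected points coincide with corresponding image features.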
Wei discloses determine, using aligned LIDAR output based on at least one indicator of relative alignment, a distance to at least one object in an environment surrounding a host vehicle [see Paragraphs 0003 and 0016 – discusses a system where determining distance to an object is based on a reflected LIDAR signal (LIDAR output), the distance is determined based on alignment between a camera and a LIDAR].
Wei suggests that the system is used to assist a human operator as needed to avoid an object (for example a vehicle) and to follow the object (vehicle) at a safe distance [see Paragraph 0008].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the processor as taught by Wee to determine, using aligned LIDAR output based on the at least one indicator of relative alignment, a distance to the at least one object in an environment surrounding the host vehicle as taught by Wei in order to assist a human operator as needed to avoid another vehicle and to follow the other object (vehicle) at a safe distance [Wei, see Paragraph 0008].
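As an illustrative aid only, the following sketch shows one way a distance to a detected object could be determined from LIDAR output already aligned to the image (e.g., via a projection such as the one sketched above), in the manner Wei suggests; the bounding-box input and all names are hypothetical.

```python
# Minimal sketch, assuming projected LIDAR pixels/depths and a (hypothetical)
# camera-based detection bounding box; not taken from Wei.
import numpy as np

def object_distance(pixels, depths, bbox):
    """Median LIDAR depth of aligned points inside an object's bounding box."""
    x_min, y_min, x_max, y_max = bbox
    inside = ((pixels[:, 0] >= x_min) & (pixels[:, 0] <= x_max) &
              (pixels[:, 1] >= y_min) & (pixels[:, 1] <= y_max))
    if not inside.any():
        return None  # no LIDAR returns support this detection
    return float(np.median(depths[inside]))  # median is robust to stray returns
```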
Regarding claim 28, Wee, Oxford, and Wei disclose the invention with respect to claim 26. Wee further discloses determine, using the aligned LIDAR output, an elevation of a road surface at a point where the at least one object contacts the road surface [see Paragraph 0026 – discusses determining heights using LIDAR].
Regarding claim 29, Wee, Oxford, and Wei disclose the invention with respect to claim 26. Wee further discloses wherein the at least one object includes another vehicle [see Paragraph 0039 – discusses other vehicles].
Regarding claim 30, Wee, Oxford, and Wei disclose the invention with respect to claim 26. Wee further discloses determine a road plane associated with a road on which the host vehicle travels [see Paragraph 0026 - discusses determining the center of the road (keep vehicle center) using the lane markers identified by the camera and object analysis and distance to lane marker measured by the LIDAR].
Regarding claim 31, Wee, Oxford, and Wei disclose the invention with respect to claim 26. Wee further discloses wherein execution of the instructions included in the memory further causes the at least one processor to determine a first plurality of points associated with the at least a portion of the road identified in the at least one image [see Paragraph 0023 – discusses RGB data from a camera, RGB is associated with different points on a road] and interleave the first plurality of points with a second plurality of points [see Paragraph 0029 – discusses that the RGB pixels (points) from the camera are matched with the depth (D) and intensity (I) pixels (points)] derived from the LIDAR reflection information indicative of distances between the host vehicle and a plurality of locations on the road [see Paragraph 0026 – discusses determining distances to locations on the road (to identify edges and curves of a road by recognizing difference in height between roads and curbs) using the LIDAR (the depth information (D) of positions of curbs or barriers (see Paragraph 0004))].
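As an illustrative aid only, the following sketch shows interleaving per-pixel camera RGB data with co-registered LIDAR depth (D) and intensity (I) data into a single fused array, in the style of the co-registration Wee describes; array shapes and names are hypothetical.

```python
# Minimal sketch, assuming the LIDAR depth and intensity channels have been
# resampled onto the camera pixel grid; not taken from Wee.
import numpy as np

def fuse_rgbdi(rgb, depth, intensity):
    """Stack co-registered channels into one HxWx5 RGB-D-I image."""
    assert rgb.shape[:2] == depth.shape == intensity.shape
    return np.dstack([rgb, depth, intensity])  # interleaved per-pixel channels
```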
Regarding claim 33, Wee, Oxford, and Wei disclose the invention with respect to claim 26. Wee further discloses wherein the at least one image captured by the camera comprises pixels [see Paragraph 0024 – discusses the video camera images have pixels];
the output of the LIDAR comprises at least one of a point cloud or a depth map [see Paragraph 0024 – discusses that each LIDAR pixel will have depth data associated with it (the LIDAR scan will have a depth map)];
the transform aligns at least one of the pixels with corresponding LIDAR reflection information in the point cloud or the depth map [see Paragraph 0029 – discusses matching (aligning) the pixels with the depth data of the LIDAR during co-registration].
Regarding claim 34, Wee, Oxford, and Wei disclose the invention with respect to claim 26. Wee further discloses wherein execution of the instructions included in the memory further causes the at least one processor to determine the at least one navigational characteristic associated with the host vehicle, the at least one navigational characteristic being based on a correlation of depth or distance information of the output of the LIDAR with the at least one object [see Paragraph 0039 – discusses searching and correlating objects using a library of objects detected by the camera, see Paragraph 0040 - discusses determining a depth of the object using the LIDAR reflection information, see Paragraph 0040 – discusses determining a distance to an object based on the camera and LIDAR data, and see Paragraph 0026 – discusses determining a vehicle's position when determining an object (such as a lane marking) in order to center the vehicle].
Regarding claim 35, Wee, Oxford, and Wei disclose the invention with respect to claim 26. Wee further discloses wherein a first portion of the output of the LIDAR and a second portion of the at least one image are associated with a same lane marking [see Paragraph 0040 – discusses combining the LIDAR output and image data (combined signal) to determine differently colored lane markings].
Oxford further discloses wherein the transform results in overlap of a first portion of the output of the LIDAR and a second portion of the at least one image [see Paragraph 0036 - discusses transforming the LIDAR (3D point cloud data) into the camera's image].
Oxford suggests that determining validity of calibration (of a LIDAR and camera) is needed after bumps, knocks, or vibrations [see Paragraphs 0005-0006].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the navigational system as taught by Wee to apply a transform resulting in overlap of a first portion of the output of the LIDAR and a second portion of the at least one image as taught by Oxford in order to validate calibrations of a LIDAR and a camera after bumps, knocks, or vibrations [Oxford, see Paragraphs 0005-0006].
Claims 36, 38-40, and 42-44 are analogous to claims 26, 28, 30-31, and 33-35 and are therefore rejected under 35 U.S.C. 103 as being unpatentable over Wee in view of Oxford in view of Wei.
Claims 45, 47, 48, and 49 are analogous to claims 36, 38, 42, and 34 and are therefore rejected under 35 U.S.C. 103 as being unpatentable over Wee in view of Oxford in view of Wei.
Regarding claim 48, Wee, Oxford, and Wei disclose the invention with respect to claim 45. Wee further discloses wherein the at least one image captured by the camera comprises pixels [see Paragraph 0024 – discusses the video camera images have pixels];
the output of the LIDAR comprises at least one of a point cloud or a depth map [see Paragraph 0024 – discusses that each LIDAR pixel will have depth data associated with it (the LIDAR scan will have a depth map)];
the transform aligns at least one of the pixels with corresponding LIDAR reflection information in the point cloud or the depth map [see Paragraph 0029 – discusses matching (aligning) the pixels with the depth data of the LIDAR during co-registration].
Claims 27 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Wee in view of Oxford in view of Wei further in view of Zeng et al. (U.S. Publication No. 2010/0017128 A1) hereinafter Zeng.
Regarding claim 27, Wee, Oxford, and Wei disclose the invention with respect to claim 26.
However, the combination of Wee, Oxford, and Wei fails to disclose wherein execution of the instructions included in the memory further causes the at least one processor to determine, using the distance to the at least one object, a speed of the host vehicle.
Zeng discloses determine, using the distance to the at least one object, a speed of the host vehicle [see Paragraph 0008 - discusses a system using vehicle dynamics to determine a vehicle speed using a stationary object, see Paragraph 0016 - discusses using a camera and LIDAR to match images to track objects and determine vehicle velocity (if the vehicle is moving and tracking a stationary object, there will be a change in distance between the two), and see Paragraph 0017 - discusses the object sensors (LIDAR and camera) determine the ego-motion of the vehicle from the measurement of stationary objects].
Zeng suggests that wheel speed sensor performance is reduced due to slippage from cornering and swerving [see Paragraph 0005].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention as taught by Wee to determine, using the distance to the at least one object, a speed of the host vehicle as taught by Zeng in order to improve velocity estimation of the host vehicle during slippage due to cornering and/or swerving [Zeng, see Paragraph 0005].
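As an illustrative aid only, the following sketch shows the ego-speed estimate that Zeng's approach implies: when a tracked object is known to be stationary, the host vehicle's speed is the rate at which the measured distance to it changes; the function name and the two-sample form are hypothetical.

```python
# Minimal sketch, assuming two range measurements (meters) to a stationary
# object taken at timestamps t0 < t1 (seconds); not taken from Zeng.
def host_speed_from_stationary_object(range_t0, range_t1, t0, t1):
    """Approximate host-vehicle speed (m/s) from the change in range."""
    dt = t1 - t0
    if dt <= 0:
        raise ValueError("timestamps must be strictly increasing")
    return abs(range_t1 - range_t0) / dt

# Example: closing from 50 m to 35 m over 1 s gives 15 m/s.
```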
Claim 37 is analogous to claim 27 and is therefore rejected under 35 U.S.C. 103 as being unpatentable over Wee in view of Oxford in view of Wei further in view of Zeng.
Claims 32 and 41 are rejected under 35 U.S.C. 103 as being unpatentable over Wee in view of Oxford in view of Wei in view of Napier et al. (U.S. Publication No. 2015/0317781 A1) hereinafter Napier.
Regarding claim 32, Wee, Oxford, and Wei disclose the invention with respect to claim 26.
However, the combination of Wee, Oxford, and Wei fails to disclose wherein the determination of the at least one indicator of relative alignment between the output of the LIDAR and at least one image captured by the camera is performed periodically over time.
Napier discloses wherein the determination of the at least one indicator of relative alignment between the output of the LIDAR and at least one image captured by the camera is performed periodically over time [see Paragraph 0079 - discusses calibrating sensors (LIDAR and camera) after a bump, see Paragraph 0009 - discusses comparing image and LIDAR data to determine extrinsic calibration parameters for the sensor devices, and see Paragraph 0079 - discusses performing the calibration at intervals].
Napier teaches that consistent results from an experiment [see Paragraphs 0080-0087] for calibration using LIDAR data and image data show that the calibration is repeatable [see Paragraph 0080].
Napier also suggests that on-the-fly calibration (while the vehicle is moving) is possible using a camera and LIDAR [see Paragraph 0088].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the LIDAR and camera as taught by Wee to be calibrated at intervals (periods of time) as taught by Napier because the experimental results showed that the calibration of LIDAR and camera sensors is repeatable [Napier, see Paragraph 0080] and can be done on the fly under general vehicle motion [Napier, see Paragraph 0088].
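As an illustrative aid only, the following sketch shows one way the alignment determination could be re-run at intervals, as Napier suggests; the interval value and the estimate_alignment() hook are hypothetical.

```python
# Minimal sketch, assuming a caller invokes maybe_recalibrate() once per
# frame; the 60 s interval is an assumed value, not taken from Napier.
import time

class PeriodicCalibrator:
    """Re-estimate the LIDAR/camera alignment at fixed time intervals."""

    def __init__(self, interval_s=60.0):
        self.interval_s = interval_s
        self.last = time.monotonic()

    def maybe_recalibrate(self, estimate_alignment):
        """Return a new transform when the interval has elapsed, else None."""
        now = time.monotonic()
        if now - self.last >= self.interval_s:
            self.last = now
            return estimate_alignment()  # e.g., an image-space alignment search
        return None
```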
Claim 41 is analogous to claim 32 and is therefore rejected under 35 U.S.C. 103 as being unpatentable over Wee in view of Oxford in view of Wei in view of Napier.
Claim 46 is rejected under 35 U.S.C. 103 as being unpatentable over Wee in view of Oxford in view of Wei further in view of Schwindt et al. (U.S. Publication No. 2018/0025645 A1) hereinafter Schwindt.
Regarding claim 46, Wee, Oxford, and Wei disclose the invention with respect to claim 45.
However, the combination of Wee, Oxford, and Wei fails to disclose wherein execution of the instructions included in the memory further causes the at least one processor to determine, based on changes in distance information in the aligned LIDAR output over time, a rate at which the at least one object is approaching the host vehicle.
Schwindt discloses wherein execution of instructions included in a memory further causes at least one processor to determine, based on changes in distance information in the aligned LIDAR output over time, a rate at which at least one object is approaching a host vehicle [see Paragraphs 0022 and 0050-0052 – discusses using a LIDAR sensor to detect changes in distance and a controller determines a relative velocity (rate of change in displacement over time) of an approaching vehicle].
Schwindt suggests that when an approaching object is determined and a lane change is too risky/inappropriate, then a vehicle is prevented from moving or steering due to the risk of a collision of the approaching vehicle [see Paragraph 0052].
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the processor as taught by Wee to determine, based on changes in distance information in an aligned LIDAR output over time, a rate at which at least one object is approaching a host vehicle as taught by Schwindt in order to prevent a collision with an approaching vehicle [Schwindt, see Paragraph 0052].
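As an illustrative aid only, the following sketch shows the closing-rate computation that Schwindt's relative-velocity determination suggests: a finite difference over successive aligned LIDAR distance measurements; all names are hypothetical.

```python
# Minimal sketch, assuming paired lists of aligned-LIDAR ranges (meters) and
# measurement times (seconds, strictly increasing); not taken from Schwindt.
def approach_rate(distances, timestamps):
    """Rate (m/s) at which an object approaches; positive means closing."""
    if len(distances) < 2 or len(distances) != len(timestamps):
        raise ValueError("need at least two paired measurements")
    d0, d1 = distances[-2], distances[-1]
    t0, t1 = timestamps[-2], timestamps[-1]
    return (d0 - d1) / (t1 - t0)
```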
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Shayne M Gilbertson whose telephone number is (571)272-4862. The examiner can normally be reached Tuesday - Friday: 10:30 AM - 9:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace can be reached on 571-272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHAYNE M. GILBERTSON/Examiner, Art Unit 3665