DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Status of Claims
Claims 1-18 are pending in this application.
Claim 6 is amended.
Claims 1-18 are presented for examination.
Claim Interpretation
The examiner is interpreting “front swing filter” found in claims 2 and 11 as a filter to contain the most likely paths along a lane to account for any front swing of a vehicle.
Response to Amendments
Applicant’s amendments, filed 10 November 2025, with respect to the rejection of claim 6 under 35 U.S.C. § 112(b) or 35 U.S.C. § 112 (pre-AIA), second paragraph, have been fully considered, and the rejection is withdrawn.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Rosenblum et al. (US Publication 2022/0333932 A1) in view of Wu et al. (Foreign Reference CN113345237A) and in further view of Xiao et al. (US Publication 2021/0370968 A1).
Regarding claim 1, Rosenblum teaches a path planning system, comprising: a sensing device, comprising: a camera being configured to take a road image of a target road (Rosenblum: Para. 9, 154; receive images from the wide FOV camera and process the images to detect vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects); and a light detection and ranging (LIDAR) scanner being configured to capture a plurality of distance data points of the target road and generate a road distance point cloud image (Rosenblum: Para. 9, 381; vehicle may collect various types of information related to the road; receive point cloud information from a LIDAR system; point cloud information may be representative of distances to various objects in the environment of the host vehicle); a processing device being electrically connected to the sensing device (Rosenblum: Para. 9; processor programmed to receive from a camera onboard the host vehicle at least one captured image) and comprising: a camera LIDAR calibration module being configured to receive the road image from the camera (Rosenblum: Para. 9; a host vehicle may include at least one processor programmed to receive from a camera; processor may also be programmed to receive point cloud information from a LIDAR system; correlate the point cloud information with the at least one captured image to provide per-pixel depth information for one or more regions of the at least one captured image), receive the road distance point cloud image from the LIDAR scanner (Rosenblum: Para. 9; receive point cloud information from a LIDAR system onboard the host vehicle; point cloud information may be representative of distances to various objects in the environment of the host vehicle), and project the road distance point cloud image onto the road image to generate a road and point cloud composite image (Rosenblum: Para. 9; correlate the point cloud information with the at least one captured image to provide per-pixel depth information for one or more regions of the at least one captured image); a drivable area detection module being configured to receive the road image from the camera (Rosenblum: Para. 163; navigational response module may store software executable by processing unit to determine a desired navigational response based on data derived from execution of monocular image analysis module and/or stereo image analysis module), and generate a road segmentation image based on a deep learning model and a probabilistic graphical model (Rosenblum: Para. 272, 303; to conclude the short range road model at each frame, one or more detection modules may be used; second module is an end-to-end deep neural network, which may be trained to predict the correct short range path from an input image; generated for a sparse map within a segment of a map skeleton; probability optimization may be used); an adjacent lane information acquisition module being electrically connected to the camera LIDAR calibration module and the drivable area detection module (Rosenblum: Para. 159; memory may store a monocular image analysis module, a stereo image analysis module, a velocity and acceleration module, and a navigational response module), and being configured to receive the road and point cloud composite image and the road segmentation image (Rosenblum: Para. 
9; correlate the point cloud information with the at least one captured image to provide per-pixel depth information for one or more regions of the at least one captured image), calculate an adjacent lane connected component for the road segmentation image to obtain an adjacent lane connected area (Rosenblum: Para. 173; detect a set of objects by scanning one or more images; detect segments of lane markings, lane geometry information; processing unit may group together the segments detected in step 550 belonging to the same road mark or lane mark; develop a model to represent the detected segments, such as a mathematical model), and generate an adjacent lane information point cloud of the adjacent lane connected area based on the road and point cloud composite image (Rosenblum: Para. 224, 381; a vehicle or a driver may navigate a vehicle along a road segment in the environment; collect LIDAR reflections from one or more objects from its environment; a non-moving object (e.g., a light pole) may move in the field of view of the LIDAR system, which may appear in the collected data (e.g., a point cloud generated based on reflections of the laser emissions by the object)); a main lane information acquisition module being electrically connected to the camera LIDAR calibration module and the drivable area detection module (Rosenblum: Para. 159; memory may store a monocular image analysis module, a stereo image analysis module, a velocity and acceleration module, and a navigational response module), and being configured to receive the road and point cloud composite image and the road segmentation image (Rosenblum: Para. 8; receiving, from an entity remotely located relative to a vehicle, a sparse map; comparing the received point cloud information with at least one of the plurality of mapped navigational landmarks in the sparse map to provide a lidar-based localization of the vehicle relative to the at least one target trajectory), calculate a main lane connected component for the road segmentation image to obtain a main lane connected area (Rosenblum: Para. 106, 213; a sparse data model including polynomial representations of certain road features (e.g., lane markings) or target trajectories for the host vehicle; target trajectories may represent the preferred or recommended paths for each available lane of a roadway), and generate a main lane information point cloud of the main lane connected area based on the road and point cloud composite image (Rosenblum: Para. 8, 202, 224; comparing the received point cloud information with at least one of the plurality of mapped navigational landmarks in the sparse map to provide a lidar-based localization of the vehicle relative to the at least one target trajectory); a lane changing planning module being electrically connected to the adjacent lane information acquisition module and configured to receive the adjacent lane information point cloud (Rosenblum: Para. 159; memory may store a monocular image analysis module, a stereo image analysis module, a velocity and acceleration module, and a navigational response module), and ….. , and convert a LIDAR coordinate of the adjacent lane group center into a vehicle coordinate (Rosenblum: Para. 
175, 222, 224; the position information may include one or more 2D image positions (e.g., X-Y pixel locations) in a captured image; such image positions may correspond to a center of the feature; processing unit may identify road marks appearing within the set of captured images and derive lane geometry information; each lane of a multi-lane road may be associated with its own target trajectory), and ….. ; a lane maintaining planning module being electrically connected to the main lane information acquisition module and configured to receive the main lane information point cloud (Rosenblum: Para. 159; memory may store a monocular image analysis module, a stereo image analysis module, a velocity and acceleration module, and a navigational response module), and ……… , and convert a LIDAR coordinate of the main lane group center into a vehicle coordinate (Rosenblum: Para. 175, 222, 224; the position information may include one or more 2D image positions (e.g., X-Y pixel locations) in a captured image; such image positions may correspond to a center of the feature; processing unit may identify road marks appearing within the set of captured images and derive lane geometry information; each lane of a multi-lane road may be associated with its own target trajectory), and …….. ; and a path selection module being electrically connected to the lane changing planning module and the lane maintaining planning module (Rosenblum: Para. 159; memory may store a monocular image analysis module, a stereo image analysis module, a velocity and acceleration module, and a navigational response module) and configured to receive an obstacle information, the lane changing path point cloud and the lane maintaining path point cloud (Rosenblum: Para. 11, 475; receive point cloud information from a LIDAR system; point cloud information may be representative of distances to various objects; based on the per-pixel depth information added into the image, the system may cause one or more navigational responses in the host vehicle, such as maintaining or changing a current heading direction).
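For clarity of the record, the following Python sketch illustrates one conventional way to project a LIDAR point cloud onto a camera image to generate the claimed road and point cloud composite image with per-pixel depth, consistent with the correlation described in Rosenblum at Para. 9. The function name, the intrinsic matrix K, and the extrinsic transform are the examiner's hypothetical illustrations and are not taken from any cited reference.

    import numpy as np

    def project_lidar_to_image(points_lidar, K, T_cam_from_lidar, image_shape):
        """Return a per-pixel depth map aligned with the camera image."""
        n = points_lidar.shape[0]
        pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coordinates
        pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]      # LIDAR frame -> camera frame
        pts_cam = pts_cam[pts_cam[:, 2] > 0]                 # keep points in front of the camera
        uvw = (K @ pts_cam.T).T                              # pinhole projection
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = image_shape[:2]
        depth = np.full((h, w), np.inf)
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        for ui, vi, zi in zip(u[valid], v[valid], pts_cam[valid, 2]):
            depth[vi, ui] = min(depth[vi, ui], zi)           # keep the nearest return per pixel
        return depth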
Rosenblum doesn’t explicitly teach cluster the adjacent lane information point cloud based on a clustering algorithm, and calculate an adjacent lane group center after clustering the adjacent lane information point cloud, and ….. cluster the main lane information point cloud based on a clustering algorithm, and calculate a main lane group center after clustering the main lane information point cloud.
However Wu, in the same field of endeavor, teaches cluster the adjacent lane information point cloud based on a clustering algorithm (Wu: Pg. 5 Lines 21-29; when the lidar scans the vehicle target, find the coordinates of the front key point of the vehicle, locate it in the corresponding small square, match the vehicle to the corresponding lane, and calculate the number of adjacent lanes of the vehicle), and calculate an adjacent lane group center after clustering the adjacent lane information point cloud (Wu: Pg. 5 Lines 21-29; the point cloud on the lane is distributed within 1.5m from the center line of each lane), and ….. cluster the main lane information point cloud based on a clustering algorithm (Wu: Pg. 5 Lines 21-29; when the lidar scans the vehicle target, find the coordinates of the front key point of the vehicle, locate it in the corresponding small square, match the vehicle to the corresponding lane, and calculate the number of adjacent lanes of the vehicle), and calculate a main lane group center after clustering the main lane information point cloud (Wu: Pg. 5 Lines 21-29; the point cloud on the lane is distributed within 1.5m from the center line of each lane).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) with a reasonable expectation of success because point cloud information is concentrated in the center of the lane and sparse at the lane lines (Wu: Pg. 10 Lines 7-11).
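As a non-limiting illustration of the clustering step the rejection draws from Wu, the following Python sketch applies DBSCAN to a lane information point cloud and takes each cluster centroid as a group center. The eps and min_samples values are hypothetical tuning parameters assumed for illustration; they are not taught by Wu.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def lane_group_centers(lane_points_xy, eps=0.5, min_samples=5):
        """Cluster 2-D lane points; return one centroid per cluster (noise excluded)."""
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(lane_points_xy)
        centers = []
        for label in sorted(set(labels) - {-1}):             # label -1 marks noise points
            cluster = lane_points_xy[labels == label]
            centers.append(cluster.mean(axis=0))             # group center = cluster centroid
        return np.array(centers)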
Rosenblum and Wu don’t explicitly teach smooth the vehicle coordinate to obtain a lane changing path point cloud; ……. smooth the vehicle coordinate to obtain a lane maintaining path point cloud; and …… select one of the lane changing path point cloud and the lane maintaining path point cloud as a driving path according to the obstacle information.
However Xiao, in the same field of endeavor, teaches smooth the vehicle coordinate to obtain a lane changing path point cloud (Xiao: Para. 63, 75, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; the planning and control data may instruct vehicle to move 10 meters at a speed of 30 mile per hour (mph), then change to a right lane at the speed of 25 mph; generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV); ……. smooth the vehicle coordinate to obtain a lane maintaining path point cloud (Xiao: Para. 63, 73, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; decision module decides how to encounter the object (e.g., overtake, yield, stop, pass); generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV); and …… select one of the lane changing path point cloud and the lane maintaining path point cloud as a driving path according to the obstacle information (Xiao: Para. 63, 75, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; the planning and control data may instruct vehicle to move 10 meters at a speed of 30 mile per hour (mph), then change to a right lane at the speed of 25 mph; generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) and the point cloud map based on road partitions (Xiao: Para. 165) with a reasonable expectation of success because the point cloud map of the driving environment surrounding the ADV supports deciding whether to yield to or pass an encountered object (Xiao: Para. 73, 165).
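To illustrate the path selection limitation mapped to Xiao above, a minimal Python sketch follows; it prefers the lane maintaining path and falls back to the lane changing path when the obstacle information shows the former is blocked. The clearance threshold and all names are hypothetical assumptions, not disclosures of any cited reference.

    import numpy as np

    def select_driving_path(maintain_path, change_path, obstacles_xy, clearance=1.0):
        """Return the first candidate path whose points stay clear of all obstacles."""
        def blocked(path):
            if len(obstacles_xy) == 0:
                return False
            d = np.linalg.norm(path[:, None, :] - obstacles_xy[None, :, :], axis=2)
            return bool((d.min(axis=1) < clearance).any())   # any path point too close?
        if not blocked(maintain_path):
            return maintain_path                             # keep the current lane
        if not blocked(change_path):
            return change_path                               # change lanes around the roadblock
        return None                                          # neither candidate is clear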
Regarding claim 2, Rosenblum teaches the path planning system as claimed in claim 1, wherein the lane maintaining planning module obtains a primary path point cloud after converting the LIDAR coordinate of the main lane group center into the vehicle coordinate (Rosenblum: Para. 175, 222, 224; the position information may include one or more 2D image positions (e.g., X-Y pixel locations) in a captured image; such image positions may correspond to a center of the feature; processing unit may identify road marks appearing within the set of captured images and derive lane geometry information; each lane of a multi-lane road may be associated with its own target trajectory).
Rosenblum doesn’t explicitly teach inputs the primary path point cloud to a lane width filter, a front swing filter and a lane line filter to output a secondary path point cloud, and obtain the lane maintaining path point cloud according to the secondary path point cloud.
However Wu, in the same field of endeavor, teaches inputs the primary path point cloud to a lane width filter, a front swing filter and a lane line filter to output a secondary path point cloud (Wu: Pg. 11 Lines 56-57; Pg. 12 Lines 5-14; automatically filter out the independent small squares; the square on the actual lane line should have the lowest point cloud density; change the point cloud number threshold A until the small squares located on the actual lane line are selected; the vehicle point cloud density in the middle of the lane reaches the maximum), and obtain the lane maintaining path point cloud according to the secondary path point cloud (Wu: Pg. 11 Lines 7-11; the vehicle does not change lanes continuously, but drives in the middle of the same lane; point cloud information characteristics of the set vehicle are concentrated in the center of the lane 1.5m on the left and right sides of the lane).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) with a reasonable expectation of success because point cloud information is concentrated in the center of the lane and sparse at the lane lines (Wu: Pg. 10 Lines 7-11).
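Because claims 2 and 11 recite the lane width, front swing, and lane line filters only functionally, the following Python sketch is offered solely as one plausible reading; every threshold and the internal logic of each filter is a hypothetical assumption of the examiner, not a teaching of Rosenblum, Wu, or Xiao.

    import numpy as np

    def lane_width_filter(path_xy, lane_center_y=0.0, half_width=1.5):
        # keep points within half a lane width of the lane center line
        return path_xy[np.abs(path_xy[:, 1] - lane_center_y) <= half_width]

    def front_swing_filter(path_xy, max_lateral_step=0.3):
        # bound lateral jumps between consecutive points to account for front swing
        keep, last_y = [], None
        for pt in path_xy:
            if last_y is None or abs(pt[1] - last_y) <= max_lateral_step:
                keep.append(pt)
                last_y = pt[1]
        return np.array(keep)

    def lane_line_filter(path_xy, lane_line_ys=(-1.75, 1.75), margin=0.2):
        # drop points that fall on or too near a detected lane line
        mask = np.ones(len(path_xy), dtype=bool)
        for y in lane_line_ys:
            mask &= np.abs(path_xy[:, 1] - y) > margin
        return path_xy[mask]

    def secondary_path(primary_path_xy):
        # chain the three filters to output the secondary path point cloud
        return lane_line_filter(front_swing_filter(lane_width_filter(primary_path_xy)))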
Regarding claim 3, Rosenblum doesn’t explicitly teach wherein the lane maintaining planning module further obtains the lane maintaining path point cloud.
However Wu, in the same field of endeavor, teaches wherein the lane maintaining planning module further obtains the lane maintaining path point cloud (Wu: Pg. 11 Lines 7-11; the vehicle does not change lanes continuously, but drives in the middle of the same lane; point cloud information characteristics of the set vehicle are concentrated in the center of the lane 1.5m on the left and right sides of the lane).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) with a reasonable expectation of success because point cloud information is concentrated in the center of the lane and sparse at the lane lines (Wu: Pg. 10 Lines 7-11).
Rosenblum and Wu don’t explicitly teach smoothing the secondary path point cloud through a Bezier curve.
However Xiao, in the same field of endeavor, teaches smoothing the secondary path point cloud through a Bezier curve (Xiao: Para. 63, 169; plan an optimal route and drive vehicle; processing logic further generates a curve to represent each of the point cluster, where the curve includes a Cubic Bezier curve, and where the road is partitioned into the one or more road partitions based on the road markings and the generated Cubic Bezier curve).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) and the point cloud map based on road partitions (Xiao: Para. 165) with a reasonable expectation of success because the point cloud map of the driving environment surrounding the ADV supports deciding whether to yield to or pass an encountered object (Xiao: Para. 73, 165).
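For illustration of the Cubic Bezier smoothing mapped to Xiao (Para. 169), the following Python sketch fits one cubic Bezier segment through four control points taken along the path; the control point selection (first, one-third, two-thirds, last) is a hypothetical simplification, not a disclosure of Xiao.

    import numpy as np

    def cubic_bezier_smooth(path_xy, num_samples=50):
        """Sample a cubic Bezier curve defined by four points along the path."""
        p0, p3 = path_xy[0], path_xy[-1]
        p1 = path_xy[len(path_xy) // 3]
        p2 = path_xy[2 * len(path_xy) // 3]
        t = np.linspace(0.0, 1.0, num_samples)[:, None]
        # B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3
        return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)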
Regarding claim 4, Rosenblum teaches the path planning system as claimed in claim 1, …… after convert the LIDAR coordinate of the adjacent lane group center to the vehicle coordinate (Rosenblum: Para. 175, 222, 224; the position information may include one or more 2D image positions (e.g., X-Y pixel locations) in a captured image; such image positions may correspond to a center of the feature; processing unit may identify road marks appearing within the set of captured images and derive lane geometry information; each lane of a multi-lane road may be associated with its own target trajectory).
Rosenblum and Wu don’t explicitly teach wherein the lane changing planning module further obtains the lane changing path point cloud by smoothing the vehicle coordinate through a Bezier curve.
However Xiao, in the same field of endeavor, teaches wherein the lane changing planning module further obtains the lane changing path point cloud (Xiao: Para. 63, 75, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; the planning and control data may instruct vehicle to move 10 meters at a speed of 30 mile per hour (mph), then change to a right lane at the speed of 25 mph; generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV) by smoothing the vehicle coordinate through a Bezier curve (Xiao: Para. 63, 169; plan an optimal route and drive vehicle; processing logic further generates a curve to represent each of the point cluster, where the curve includes a Cubic Bezier curve, and where the road is partitioned into the one or more road partitions based on the road markings and the generated Cubic Bezier curve).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) and the point cloud map based on road partitions (Xiao: Para. 165) with a reasonable expectation of success because the point cloud map of the driving environment surrounding the ADV supports deciding whether to yield to or pass an encountered object (Xiao: Para. 73, 165).
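The conversion of a group center from the LIDAR coordinate to the vehicle coordinate, recited in claims 1, 4, and 10, is in conventional practice a rigid transform; the Python sketch below assumes hypothetical mounting calibration values for illustration only.

    import numpy as np

    def lidar_to_vehicle(center_lidar, R_vehicle_from_lidar, t_vehicle_from_lidar):
        """Apply x_vehicle = R @ x_lidar + t to a 3-D group center."""
        return R_vehicle_from_lidar @ center_lidar + t_vehicle_from_lidar

    # Example: LIDAR mounted 1.2 m forward of the vehicle origin, axes aligned.
    R = np.eye(3)
    t = np.array([1.2, 0.0, 0.0])
    center_vehicle = lidar_to_vehicle(np.array([10.0, -1.5, 0.0]), R, t)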
Regarding claim 5, Rosenblum doesn’t explicitly teach wherein the clustering algorithm is a density-based spatial clustering of applications with noise (DBSCAN).
However Wu, in the same field of endeavor, teaches wherein the clustering algorithm is a density-based spatial clustering of applications with noise (DBSCAN) (Wu: Pg. 2 Lines 42-45, Pg. 14 Lines 48-52; DBSCAN algorithm is a density-based spatial clustering algorithm; algorithm divides areas with sufficient density into clusters, and finds clusters of arbitrary shapes in a noisy spatial database).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) with a reasonable expectation of success because point cloud information is concentrated in the center of the lane and sparse at the lane lines (Wu: Pg. 10 Lines 7-11).
Regarding claim 6, Rosenblum teaches the path planning system as claimed in claim 1, wherein the path selection module receives the distance data points from the sensing device (Rosenblum: Para. 9; receive point cloud information from a LIDAR system onboard the host vehicle; point cloud information may be representative of distances to various objects in the environment of the host vehicle), and calculates a distance between the LIDAR scanner and each of the distance data points according to a time and a light speed, captured by the LIDAR scanner, of each of the distance data points (Rosenblum: Para. 9, 405; LIDAR system; point cloud information may be representative of distances to various objects in the environment of the host vehicle; with a single laser pulse, and a sensor including rows and columns of pixels to record returned light intensity and time of flight/depth information), and obtains the obstacle information according to each of the distances (Rosenblum: Para. 9; receive point cloud information from a LIDAR system onboard the host vehicle; point cloud information may be representative of distances to various objects in the environment of the host vehicle).
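The distance computation recited in claim 6 follows the standard time-of-flight relation d = (c × t) / 2, where t is the round-trip time recorded by the LIDAR sensor and c is the speed of light; a short Python sketch of that arithmetic follows for illustration.

    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def range_from_time_of_flight(tof_seconds):
        """Convert a round-trip laser time of flight into a one-way distance."""
        return SPEED_OF_LIGHT * tof_seconds / 2.0

    # Example: a 400 ns round trip corresponds to roughly 60 m.
    print(range_from_time_of_flight(400e-9))  # ~59.96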
Regarding claim 7, Rosenblum and Wu don’t explicitly teach wherein when the path selection module determines that at least one roadblock exists on the lane maintaining path point cloud according to the obstacle information, the lane changing path point cloud is selected as the driving path.
However Xiao, in the same field of endeavor, teaches wherein when the path selection module determines that at least one roadblock exists on the lane maintaining path point cloud according to the obstacle information, the lane changing path point cloud is selected as the driving path (Xiao: Para. 63, 75, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; the planning and control data may instruct vehicle to move 10 meters at a speed of 30 mile per hour (mph), then change to a right lane at the speed of 25 mph; generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) and the point cloud map based on road partitions (Xiao: Para. 165) with a reasonable expectation of success because the point cloud map of the driving environment surrounding the ADV supports deciding whether to yield to or pass an encountered object (Xiao: Para. 73, 165).
Regarding claim 8, Rosenblum and Wu don’t explicitly teach wherein when the path selection module determines that at least one roadblock exists on the lane changing path point cloud according to the obstacle information, the lane maintaining path point cloud is selected as the driving path.
However Xiao, in the same field of endeavor, teaches wherein when the path selection module determines that at least one roadblock exists on the lane changing path point cloud according to the obstacle information, the lane maintaining path point cloud is selected as the driving path (Xiao: Para. 63, 73, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; decision module decides how to encounter the object (e.g., overtake, yield, stop, pass); generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) and the point cloud map based on road partitions (Xiao: Para. 165) with a reasonable expectation of success because the point cloud map of the driving environment surrounding the ADV supports deciding whether to yield to or pass an encountered object (Xiao: Para. 73, 165).
Regarding claim 9, Rosenblum teaches the path planning system as claimed in claim 1, wherein the road segmentation image includes a main lane, at least one auxiliary lane, and a plurality of lane lines (Rosenblum: Para. 489, Fig. 39A, 39B; correlate the LIDAR data with the image to provide per-pixel depth information for one or more regions of the image).
Regarding claim 10, Rosenblum teaches a path planning method being used on a path planning system, the path planning system comprising a sensing device and a processing device, the sensing device being configured to take a road image of a target road, and capture a plurality of distance data points of the target road to generate a road distance point cloud image, the path planning method being executed by the processing device, and comprising: receiving the road image and the road distance point cloud image (Rosenblum: Para. 9; receive from a camera onboard the host vehicle at least one captured image representative of an environment of the host vehicle; receive point cloud information from a LIDAR system; point cloud information may be representative of distances to various objects), and projecting the road distance point cloud image onto the road image to generate a road and point cloud composite image (Rosenblum: Para. 9; correlate the point cloud information with the at least one captured image to provide per-pixel depth information for one or more regions of the at least one captured image); inputting the road image to a deep learning model and a probabilistic graphical model to generate a road segmentation image (Rosenblum: Para. 272, 303; to conclude the short range road model at each frame, one or more detection modules may be used; second module is an end-to-end deep neural network, which may be trained to predict the correct short range path from an input image; generated for a sparse map within a segment of a map skeleton; probability optimization may be used); calculating an adjacent lane connected component on the road segmentation image to obtain an adjacent lane connected area (Rosenblum: Para. 173; detect a set of objects by scanning one or more images; detect segments of lane markings, lane geometry information; processing unit may group together the segments detected in step 550 belonging to the same road mark or lane mark; develop a model to represent the detected segments, such as a mathematical model), and generating an adjacent lane information point cloud of the adjacent lane connected area based on the road and point cloud composite image (Rosenblum: Para. 224, 381; a vehicle or a driver may navigate a vehicle along a road segment in the environment; collect LIDAR reflections from one or more objects from its environment; a non-moving object (e.g., a light pole) may move in the field of view of the LIDAR system, which may appear in the collected data (e.g., a point cloud generated based on reflections of the laser emissions by the object)); calculating a main lane connected component on the road segmentation image to obtain a main lane connected area (Rosenblum: Para. 106, 213; a sparse data model including polynomial representations of certain road features (e.g., lane markings) or target trajectories for the host vehicle; target trajectories may represent the preferred or recommended paths for each available lane of a roadway), and generating a main lane information point cloud of the main lane connected area based on the road and point cloud composite image (Rosenblum: Para. 8, 202, 224; comparing the received point cloud information with at least one of the plurality of mapped navigational landmarks in the sparse map to provide a lidar-based localization of the vehicle relative to the at least one target trajectory); ……. , converting a LIDAR coordinate of the adjacent lane group center into a vehicle coordinate (Rosenblum: Para.
175, 222, 224; the position information may include one or more 2D image positions (e.g., X-Y pixel locations) in a captured image; such image positions may correspond to a center of the feature; processing unit may identify road marks appearing within the set of captured images and derive lane geometry information; each lane of a multi-lane road may be associated with its own target trajectory), and ………. , converting a LIDAR coordinate of the main lane group center into a vehicle coordinate (Rosenblum: Para. 175, 222, 224; the position information may include one or more 2D image positions (e.g., X-Y pixel locations) in a captured image; such image positions may correspond to a center of the feature; processing unit may identify road marks appearing within the set of captured images and derive lane geometry information; each lane of a multi-lane road may be associated with its own target trajectory), and …… ; and receiving an obstacle information (Rosenblum: Para. 11, 475; receive point cloud information from a LIDAR system; point cloud information may be representative of distances to various objects; based on the per-pixel depth information added into the image, the system may cause one or more navigational responses in the host vehicle, such as maintaining or changing a current heading direction).
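To illustrate the connected component calculation recited in claim 10, the following Python sketch labels connected regions of a road segmentation image and keeps the largest region of a given lane class; the class identifiers are hypothetical and not taken from any cited reference.

    import numpy as np
    from scipy import ndimage

    MAIN_LANE, ADJACENT_LANE = 1, 2                          # assumed class labels

    def largest_connected_area(segmentation, class_id):
        """Return a boolean mask of the largest connected region of one class."""
        labels, count = ndimage.label(segmentation == class_id)
        if count == 0:
            return np.zeros_like(segmentation, dtype=bool)
        sizes = ndimage.sum(labels > 0, labels, index=range(1, count + 1))
        return labels == (int(np.argmax(sizes)) + 1)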
Rosenblum doesn’t explicitly teach clustering the adjacent lane information point cloud based on a clustering algorithm, and calculating an adjacent lane group center after clustering the adjacent lane information point cloud, …… clustering the main lane information point cloud based on a clustering algorithm, and calculating a main lane group center after clustering the main lane information point cloud.
However Wu, in the same field of endeavor, teaches clustering the adjacent lane information point cloud based on a clustering algorithm (Wu: Pg. 5 Lines 21-29; when the lidar scans the vehicle target, find the coordinates of the front key point of the vehicle, locate it in the corresponding small square, match the vehicle to the corresponding lane, and calculate the number of adjacent lanes of the vehicle), and calculating an adjacent lane group center after clustering the adjacent lane information point cloud (Wu: Pg. 5 Lines 21-29; the point cloud on the lane is distributed within 1.5m from the center line of each lane), …… clustering the main lane information point cloud based on a clustering algorithm (Wu: Pg. 5 Lines 21-29; when the lidar scans the vehicle target, find the coordinates of the front key point of the vehicle, locate it in the corresponding small square, match the vehicle to the corresponding lane, and calculate the number of adjacent lanes of the vehicle), and calculating a main lane group center after clustering the main lane information point cloud (Wu: Pg. 5 Lines 21-29; the point cloud on the lane is distributed within 1.5m from the center line of each lane).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) with a reasonable expectation of success because point cloud information is concentrated in the center of the lane and sparse at the lane lines (Wu: Pg. 10 Lines 7-11).
Rosenblum and Wu don’t explicitly teach smoothing the vehicle coordinate to obtain a lane changing path point cloud; …… smoothing the vehicle coordinate to obtain a lane maintaining path point cloud; and ……… selecting one of the lane changing path point cloud and the lane maintaining path point cloud as a driving path according to the obstacle information.
However Xiao, in the same field of endeavor, teaches smoothing the vehicle coordinate to obtain a lane changing path point cloud (Xiao: Para. 63, 75, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; the planning and control data may instruct vehicle to move 10 meters at a speed of 30 mile per hour (mph), then change to a right lane at the speed of 25 mph; generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV); …… smoothing the vehicle coordinate to obtain a lane maintaining path point cloud (Xiao: Para. 63, 73, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; decision module decides how to encounter the object (e.g., overtake, yield, stop, pass); generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV); and ……… selecting one of the lane changing path point cloud and the lane maintaining path point cloud as a driving path according to the obstacle information (Xiao: Para. 63, 75, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; the planning and control data may instruct vehicle to move 10 meters at a speed of 30 mile per hour (mph), then change to a right lane at the speed of 25 mph; generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) and the point cloud map based on road partitions (Xiao: Para. 165) with a reasonable expectation of success because the point cloud map of the driving environment surrounding the ADV supports deciding whether to yield to or pass an encountered object (Xiao: Para. 73, 165).
Regarding claim 11, Rosenblum teaches the path planning method as claimed in claim 10, further comprising: obtaining a primary path point cloud after converting the LIDAR coordinate of the main lane group center to the vehicle coordinate (Rosenblum: Para. 175, 222, 224; the position information may include one or more 2D image positions (e.g., X-Y pixel locations) in a captured image; such image positions may correspond to a center of the feature; processing unit may identify road marks appearing within the set of captured images and derive lane geometry information; each lane of a multi-lane road may be associated with its own target trajectory).
Rosenblum doesn’t explicitly teach inputting the primary path point cloud to a lane width filter, a front swing filter and a lane line filter to output the secondary path point cloud; and obtaining the lane maintaining path point cloud according to the secondary path point cloud.
However Wu, in the same field of endeavor, teaches inputting the primary path point cloud to a lane width filter, a front swing filter and a lane line filter to output the secondary path point cloud (Wu: Pg. 11 Lines 56-57; Pg. 12 Lines 5-14; automatically filter out the independent small squares; the square on the actual lane line should have the lowest point cloud density; change the point cloud number threshold A until the small squares located on the actual lane line are selected; the vehicle point cloud density in the middle of the lane reaches the maximum); and obtaining the lane maintaining path point cloud according to the secondary path point cloud (Wu: Pg. 11 Lines 7-11; the vehicle does not change lanes continuously, but drives in the middle of the same lane; point cloud information characteristics of the set vehicle are concentrated in the center of the lane 1.5m on the left and right sides of the lane).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) with a reasonable expectation of success because point cloud information is concentrated in the center of the lane and sparse at the lane lines (Wu: Pg. 10 Lines 7-11).
Regarding claim 12, Rosenblum doesn’t explicitly teach obtaining the lane maintaining path point cloud.
However Wu, in the same field of endeavor, teaches obtaining the lane maintaining path point cloud (Wu: Pg. 11 Lines 7-11; the vehicle does not change lanes continuously, but drives in the middle of the same lane; point cloud information characteristics of the set vehicle are concentrated in the center of the lane 1.5m on the left and right sides of the lane).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) with a reasonable expectation of success because point cloud information is concentrated in the center of the lane and sparse at the lane lines (Wu: Pg. 10 Lines 7-11).
Rosenblum and Wu don’t explicitly teach smoothing the secondary path point cloud through a Bezier curve.
However Xiao, in the same field of endeavor, teaches smoothing the secondary path point cloud through a Bezier curve (Xiao: Para. 63, 169; plan an optimal route and drive vehicle; processing logic further generates a curve to represent each of the point cluster, where the curve includes a Cubic Bezier curve, and where the road is partitioned into the one or more road partitions based on the road markings and the generated Cubic Bezier curve).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) and the point cloud map based on road partitions (Xiao: Para. 165) with a reasonable expectation of success because the point cloud map of the driving environment surrounding the ADV supports deciding whether to yield to or pass an encountered object (Xiao: Para. 73, 165).
Regarding claim 13, Rosenblum teaches the path planning method as claimed in claim 10, further comprising: …….. converting the LIDAR coordinate of the adjacent lane group center to the vehicle coordinate (Rosenblum: Para. 175, 222, 224; the position information may include one or more 2D image positions (e.g., X-Y pixel locations) in a captured image; such image positions may correspond to a center of the feature; processing unit may identify road marks appearing within the set of captured images and derive lane geometry information; each lane of a multi-lane road may be associated with its own target trajectory).
Rosenblum and Wu don’t explicitly teach obtaining the lane changing path point cloud by ……… smoothing the vehicle coordinate through a Bezier curve.
However Xiao, in the same field of endeavor, teaches obtaining the lane changing path point cloud (Xiao: Para. 63, 75, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; the planning and control data may instruct vehicle to move 10 meters at a speed of 30 mile per hour (mph), then change to a right lane at the speed of 25 mph; generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV) by ……… smoothing the vehicle coordinate through a Bezier curve (Xiao: Para. 63, 169; plan an optimal route and drive vehicle; processing logic further generates a curve to represent each of the point cluster, where the curve includes a Cubic Bezier curve, and where the road is partitioned into the one or more road partitions based on the road markings and the generated Cubic Bezier curve).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) and the point cloud map based on road partitions (Xiao: Para. 165) with a reasonable expectation of success because the point cloud map of the driving environment surrounding the ADV supports deciding whether to yield to or pass an encountered object (Xiao: Para. 73, 165).
Regarding claim 14, Rosenblum doesn’t explicitly teach wherein the clustering algorithm is a density-based spatial clustering of applications with noise (DBSCAN).
However Wu, in the same field of endeavor, teaches wherein the clustering algorithm is a density-based spatial clustering of applications with noise (DBSCAN) (Wu: Pg. 2 Lines 42-45, Pg. 14 Lines 48-52; DBSCAN algorithm is a density-based spatial clustering algorithm; algorithm divides areas with sufficient density into clusters, and finds clusters of arbitrary shapes in a noisy spatial database).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) with a reasonable expectation of success because point cloud information is concentrated in the center of the lane and sparse at the lane lines (Wu: Pg. 10 Lines 7-11).
Regarding claim 15, Rosenblum teaches the path planning method as claimed in claim 10, further comprising: receiving the distance data points; calculating a distance between a LIDAR scanner and each of the distance data points according to a time and a light speed, captured by the LIDAR scanner, of each of the distance data points (Rosenblum: Para. 9, 405; LIDAR system; point cloud information may be representative of distances to various objects in the environment of the host vehicle; with a single laser pulse, and a sensor including rows and columns of pixels to record returned light intensity and time of flight/depth information); and obtaining the obstacle information according to the distances (Rosenblum: Para. 9; receive point cloud information from a LIDAR system onboard the host vehicle; point cloud information may be representative of distances to various objects in the environment of the host vehicle).
Regarding claim 16, Rosenblum and Wu don’t explicitly teach determining that at least one roadblock exists on the lane maintaining path point cloud according to the obstacle information, and selecting the lane changing path point cloud as a driving path.
However Xiao, in the same field of endeavor, teaches determining that at least one roadblock exists on the lane maintaining path point cloud according to the obstacle information, and selecting the lane changing path point cloud as a driving path (Xiao: Para. 63, 75, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; the planning and control data may instruct vehicle to move 10 meters at a speed of 30 mile per hour (mph), then change to a right lane at the speed of 25 mph; generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) and the point cloud map based on road partitions (Xiao: Para. 165) with a reasonable expectation of success because the point cloud map of the driving environment surrounding the ADV supports deciding whether to yield to or pass an encountered object (Xiao: Para. 73, 165).
Regarding claim 17, Rosenblum and Wu don’t explicitly teach determining that at least one roadblock exists on the lane changing path point cloud according to the obstacle information, and selecting the lane maintaining path point cloud as a driving path.
However Xiao, in the same field of endeavor, teaches determining that at least one roadblock exists on the lane changing path point cloud according to the obstacle information, and selecting the lane maintaining path point cloud as a driving path (Xiao: Para. 63, 73, 165; plan an optimal route and drive vehicle, for example, via control system, according to the planned route to reach the specified destination; decision module decides how to encounter the object (e.g., overtake, yield, stop, pass); generates a point cloud map based on the road partitions, where the point cloud map is utilized to perceive a driving environment surrounding the ADV).
It would have been obvious to one having ordinary skill in the art to modify the map correlating camera and lidar data (Rosenblum: Para. 9) with the density-based spatial clustering algorithm of point cloud information (Wu: Pg. 2 Lines 38-45) and the point cloud map based on road partitions (Xiao: Para. 165) with a reasonable expectation of success because the point cloud map of the driving environment surrounding the ADV supports deciding whether to yield to or pass an encountered object (Xiao: Para. 73, 165).
Regarding claim 18, Rosenblum teaches the path planning method as claimed in claim 10, wherein the road segmentation image includes a main lane, at least one auxiliary lane, and a plurality of lane lines (Rosenblum: Para. 489, Fig. 39A, 39B; correlate the LIDAR data with the image to provide per-pixel depth information for one or more regions of the image).
Response to Arguments
Applicant’s arguments with respect to the rejection of claims 1-18 under 35 U.S.C. 103 have been fully considered, but they are not persuasive.
The applicant’s attorney argues that ‘Rosenblum does not disclose the relevant technical features of "an adjacent lane information point cloud of the adjacent lane connected area" and "a main lane information point cloud of the main lane connected area" of the claimed inventions.’
In response to the applicant’s argument above, Rosenblum teaches a vehicle LIDAR system that obtains point cloud information representing distances to various objects for one or more regions in the environment of the host vehicle (Rosenblum: Para. 9, 381). The system correlates the point cloud information with a captured image to provide per-pixel depth information for one or more regions (Rosenblum: Para. 8-9). Rosenblum teaches a LIDAR-based localization of the vehicle relative to the at least one target trajectory (Rosenblum: Para. 8, 202). Rosenblum teaches a target trajectory of each lane of a multi-lane road (Rosenblum: Para. 224).
Therefore, Rosenblum teaches creating an information point cloud for each lane, adjacent and main, of a multi-lane road.
The applicant next argues that “neither Wu nor Xiao discloses the technical features of generating and processing an adjacent lane information point cloud and a main lane information point cloud.”
In response to the applicant’s argument above, the prior art Rosenblum teaches generating and processing an adjacent lane information point cloud and a main lane information point cloud.
The applicant next argues that “Rosenblum, Wu, and Xiao, alone or in combination, fail to disclose or suggest the technical features relating to the generation and processing of the adjacent lane information point cloud and the main lane information point cloud as recited in independent Claims 1 and 10.”
In response to the applicant’s argument above, the examiner explains how Rosenblum teaches the generation and processing of the adjacent lane information point cloud and the main lane information point cloud.
The applicant next argues that “a person having ordinary skill in the art to which the claimed invention pertains would not have been motivated to combine the teachings of Rosenblum, Wu, and Xiao to arrive at the claimed inventions with a reasonable expectation of success”
In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Rosenblum teaches a correlation of LIDAR and camera information to aid in vehicle navigation (Rosenblum: Para. 8-9). Wu teaches clustering historical LIDAR scan data into traffic flows (Wu: Pg. 5 Lines 21-29). Xiao teaches planning a vehicle’s optimal route, including location and velocity changes (Xiao: Para. 63, 75, 165). It would have been obvious to one of ordinary skill in the art to modify Rosenblum’s LIDAR and camera driven vehicle navigation by exploiting the concentration of historical point cloud data in the center of the lane (Wu: Pg. 10 Lines 7-11) in order to improve yield or pass decisions for an encountered object (Xiao: Para. 73, 165).
The applicant’s arguments have failed to point out the distinguishing characteristics of the amended claim language over the prior art. For the above reasons, the combination of Rosenblum’s LIDAR and camera information collection with Wu’s clustering and Xiao’s smoothing reads on applicant’s path planning system and path planning method thereof. The rejection is maintained.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yang et al. US Publication 2021/0272304 A1 teaches a LIDAR and camera data combination to predict the locations of obstacles with lateral and longitudinal vehicle control.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAURA E LINHARDT whose telephone number is (571)272-8325. The examiner can normally be reached M-F, 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Angela Ortiz can be reached on (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/L.E.L./Examiner, Art Unit 3663
/ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663