DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
This office action is in response to amendments filed 01/05/2026. Claims 1-20 are pending.
Applicant’s arguments and amendments to the claims with respect to rejections of Claims 1-20 under 35 USC 101 have been fully considered and are persuasive. The rejections of Claims 1-20 under 35 USC 101 have been withdrawn.
Applicant's arguments and amendments with respect to the prior art rejections of Claims 1-20 under 35 USC 102/103 have been fully considered but they are not persuasive.
With respect to applicant’s arguments that Okano fails to teach “determining a first travelable lane in which the host vehicle can be driven, wherein the first travelable lane is used for at least one of driving planning, navigation, or collision warning of the host vehicle,” the examiner respectfully disagrees. Okano teaches in par. 0049 that the determined lane can be used either to warn the driver against potential lane departure or to automatically assist with steering to keep the vehicle within the lane, which falls within the scope of the claims as recited.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3, 13, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Okano et al (US 20170147889, hereinafter Okano).
Regarding Claim 1, Okano teaches:
a method for predicting a travelable lane (see at least "Images captured by the left side camera 10L, the right side camera 10R, the front camera 11F, and the rear camera 11B, and an output from the vehicle speed sensor 14 are input to the lane detection apparatus 100 to detect a lane by a method described below. The lane detection apparatus 100 outputs information on the detected lane to a vehicle control apparatus 200." in par. 0048), comprising:
receiving, by a computing device and from a plurality of sensors of a host vehicle, forward lane line information of a host vehicle and backward lane line information of the host vehicle (see at least "FIG. 1 illustrates a vehicle 1 equipped with a lane detection apparatus 100. As illustrated in FIG. 1, the vehicle 1 includes a left side camera 10L which images an area (a surround of the left side) around the left side from the vehicle 1, and a right side camera 10R which images an area (a surround of the right side) around the right side, a front camera 11F which images an area (a surround of the front) around the front from the vehicle 1, a rear camera 11B which images an area (a surround of the rear) around the rear, and a vehicle speed sensor 14 which detects a traveling speed of the vehicle 1, in addition to the lane detection apparatus 100." in par. 0046 and “The front detection portion 102 acquires an image of the front of the vehicle 1 from the front camera 11F, and analyzes the acquired image to detect a lane division existing in front of the vehicle 1.” in par. 0054 and “The rear detection portion 103 acquires an image of the rear of the vehicle 1 from the rear camera 11B, and analyzes the acquired image to detect a lane division existing in the rear of the vehicle 1.” in par. 0055);
determining, by the computing device, a first lane model based on the forward lane line information of the host vehicle (see at least “The front detection portion 102 acquires an image of the front of the vehicle 1 from the front camera 11F, and analyzes the acquired image to detect a lane division existing in front of the vehicle 1.” in par. 0054);
determining, by the computing device, a second lane model based on the backward lane line information of the host vehicle (see at least “The rear detection portion 103 acquires an image of the rear of the vehicle 1 from the rear camera 11B, and analyzes the acquired image to detect a lane division existing in the rear of the vehicle 1.” in par. 0055 and “The lane division detected in front of the vehicle 1 corresponds to a front lane division. The lane division detected in the rear of the vehicle 1 corresponds to a rear lane division.” in par. 0056); and
determining, by the computing device and based on the first lane model and the second lane model, a first travelable lane in which the host vehicle can be driven (see at least "The checkup portion 109 compares the position of the lane division detected by the side detection portion 101, the position of the lane division corrected by the front correction portion 105, and the position of the lane division corrected by the rear correction portion 106, so that the checkup portion 109 determines whether these positions of the lane divisions are matched with each other." in par. 0064 and “When it is determined that the positions of the lane divisions are alignment each other, the position of the lane division detected by the side detection portion 101 is output to the vehicle control apparatus 200 as a lane detection result.” in par. 0065),
wherein the first travelable lane is used for at least one of driving planning, navigation, or collision warning of the host vehicle (see at least "The vehicle control apparatus 200 gives a warning to a driver, or assists the driver to operate a steering wheel in a direction for maintaining the vehicle within the lane, for example, when the vehicle control apparatus 200 determines based on the received information that the driver is to deviate from the lane against the intention of the driver." in par. 0049).
Regarding Claim 3, Okano teaches:
the method according to claim 1, wherein determining the first travelable lane based on the first lane model and the second lane model comprises:
associating, by the computing device, lane lines in the first lane model and the second lane model with lane line sequence numbers (see at least "the time lag Tlf of imaging timing of the front camera 11F from imaging timing of the left side camera 10L (refer to FIG. 9), and the time lag Tlb of imaging timing of the rear camera 11B from imaging timing of the left side camera 10L (refer to FIG. 9) are acquired (S109). These time lags have been acquired beforehand in S100 and have been stored in the time lag memory portion 108 (refer to FIG. 2) at the start of the lane detection process." in par. 0099 and Fig. 9);
determining, by the computing device, whether lane lines associated with a same lane line sequence number in the first lane model and the second lane model are matched (see at least "Furthermore, the position of the lane division (the white line 2) in each of the front and the rear of the vehicle 1 is corrected such that the corresponding line is shifted in the X direction in the X-Y coordinate system. Accordingly, these lines are only compared in the X-Y coordinate system to determine whether the respective lines are matched (i.e., matched within tolerance).” in par. 0104);
supplementing, by the computing device, matched lane lines in the first lane model and the second lane model (see at least "When it is determined that the detected positions of the lane divisions (white lines 2) are matched at three points on the left side, in front, and in the rear of the vehicle 1 (S112: yes), this condition of alignment is recognized as a state that the lane division (the white line 2) on the left side has been correctly detected." in par. 0105 and “When it is determined that the detected positions of the lane divisions (white lines 2) are not matched at three points on the left side, in front, and in the rear of the vehicle 1 (S112: no), this condition of non-alignment is considered most likely to come from false detection of the lane division (the white line 2) on the left side. Accordingly, the position of the lane division (the white line 2) on the left side is switched to a position estimated based on the position of the lane division (the white line 2) on the left side of the vehicle 1 detected in a previous process (S113).” in par. 0106); and
determining, by the computing device, the first travelable lane based on supplemented lane lines (see at least "When it is determined that the positions of the lane divisions are alignment each other, the position of the lane division detected by the side detection portion 101 is output to the vehicle control apparatus 200 as a lane detection result." in par. 0065).
Regarding Claim 13, Okano also teaches:
an apparatus, comprising at least one processor and one or more memories coupled to the at least one processor, wherein the one or more memories store programming instructions for execution by the at least one processor to perform operations (see at least "It should not be understood that the lane detection apparatus 100 is physically sectioned into ten portions. Accordingly, these “portions” may be realized as a computer program executed by a CPU, an electronic circuit including an LSI and a memory, or a combination of these program and electronic circuit." in par. 0051), comprising:
implementing the method of Claim 1 (see the Claim 1 analysis for rejection of the method).
Regarding Claim 15, Okano also teaches:
an apparatus for implementing the method of Claim 3 (see the Claim 3 analysis for rejection of the method).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 4, 6-8, 12, 14, 16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Okano et al (US 20170147889, hereinafter Okano) in view of Englard et al (US 20190113927, hereinafter Englard).
Regarding Claim 2, Okano teaches:
the method according to claim 1,
Okano does not appear to explicitly teach all of the following, but Englard does teach:
wherein the forward lane line information of the host vehicle is detected by using a lidar (see at least "As another example, the lidar system 402 may include eight sensor heads 412, and each of the sensor heads 412 may provide a 45° to 60° horizontal FOR. As yet another example, the lidar system 402 may include six sensor heads 412, where each of the sensor heads 412 provides a 70° horizontal FOR with an overlap between adjacent FORs of approximately 10°. As another example, the lidar system 402 may include two sensor heads 412 which together provide a forward-facing horizontal FOR of greater than or equal to 30°." in par. 0120 and “Still other objects may be difficult to identify, classify and/or track due to their small size and/or low profile and/or their distance away from the lidar system. For example, while not shown in FIG. 8B, the segmentation module 210 may identify (and the classification module 212 may classify) lane markings within the point cloud 490. The lane markings may appear as small but abrupt deviations in the path of the scan lines, for example, with those deviations collectively forming a line pattern that aligns with the direction of travel of the autonomous vehicle (e.g., approximately normal to the curve of the scan lines). The lane markings may be classified according to type (e.g., broken white line, broken yellow line, solid yellow line, double solid yellow line, etc.).” in par. 0129).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Okano to incorporate the teachings of Englard wherein the forward lane line information is detected by a lidar system. The motivation to incorporate the teachings of Englard would be to improve the diversity of the available sensor data (see par. 0097).
Regarding Claim 4, Okano teaches:
the method according to claim 1,
wherein the method further comprises:
Okano does not appear to explicitly teach all of the following, but Englard does teach:
obtaining, by the computing device, a border of a freespace in a target direction of the host vehicle, wherein the target direction comprises at least one of a forward direction or a backward direction (see at least "Within the example cost map 650, heavily shaded areas 652 (i.e., areas filled with more closely spaced slanted lines) have a relatively low cost, non-shaded areas 654 have a relatively high cost, and medium-shaded areas 656 (i.e., areas filled with sparser slanted lines) have an intermediate cost. While FIG. 12 only shows three discrete cost levels for different areas for ease of explanation, it is understood that many more cost levels may be applied within a single cost map. For example, each “pixel” of the cost map 650 may be associated with its own cost level that is selected from among a virtually continuous range (e.g., 256 cost levels corresponding to 256 greyscale values, etc.)." in par. 0155 and Fig. 12), and
the border of the freespace is determined based on at least one of a road border, a dynamic obstacle border, or a static obstacle border (see at least "In the simplified example of FIG. 12, lanes corresponding to the autonomous vehicle's current direction of travel (or potential direction of travel, after a single, 90-degree right or left turn) can be seen to generally have a much lower cost than lanes corresponding to the opposite direction of travel (i.e., where the autonomous vehicle would be driving straight into oncoming traffic). Moreover, the lane divider within a single direction of travel (e.g., at area 656A) may have a higher cost than the lanes on either side of the divider, but may not have a cost so high as to prevent all lane switching. The cost for locations within a lane in which the autonomous vehicle is traveling may vary across the lane (e.g., in a linear, exponential, Gaussian, or parabolic manner) so that the cost is low in the center of the lane and increases towards the edges of the lane. Areas in which other objects reside (e.g., the vehicle at area 658, which corresponds to the vehicle 556C in the occupancy grid 550 of FIG. 10) may have a very high cost to prevent collisions, with associated areas of gradually decreasing costs around the objects (e.g., the area 656B) to reflect the gradually decreasing risk of collision." in par. 0156);
determining, by the computing device, a second travelable lane based on the border of the freespace in the target direction of the host vehicle (see at least "Within the example cost map 650, heavily shaded areas 652 (i.e., areas filled with more closely spaced slanted lines) have a relatively low cost" in par. 0155 and “In the simplified example of FIG. 12, lanes corresponding to the autonomous vehicle's current direction of travel (or potential direction of travel, after a single, 90-degree right or left turn) can be seen to generally have a much lower cost than lanes corresponding to the opposite direction of travel (i.e., where the autonomous vehicle would be driving straight into oncoming traffic). Moreover, the lane divider within a single direction of travel (e.g., at area 656A) may have a higher cost than the lanes on either side of the divider, but may not have a cost so high as to prevent all lane switching. The cost for locations within a lane in which the autonomous vehicle is traveling may vary across the lane (e.g., in a linear, exponential, Gaussian, or parabolic manner) so that the cost is low in the center of the lane and increases towards the edges of the lane.” in par. 0156); and
correcting, by the computing device, the first travelable lane based on the second travelable lane (see at least "After the autonomous vehicle stops at the stop sign, the cost of the crosswalk may be reduced so that the autonomous vehicle may proceed. As another example, if a pedestrian is located in or near a crosswalk, the cost of the crosswalk may be assigned a relatively high value until the crosswalk is free of pedestrians. As another example, an adjacent lane (e.g., a yield lane) may have a relatively high cost if a vehicle is located in that lane, and the cost may be reduced when the vehicle moves ahead of or behind the autonomous vehicle." in par. 0156).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Okano to incorporate the teachings of Englard wherein the vehicle’s current and adjacent lanes are broken up into cells with cost values that are updated dynamically to avoid obstacles within the lanes. The motivation to incorporate the teachings of Englard would be to improve safety and efficiency of autonomous driving decision making (see par. 0196).
Regarding Claim 6, Okano teaches:
the method according to claim 1, wherein the method further comprises:
Okano does not appear to explicitly teach all of the following, but Englard does teach:
obtaining, by the computing device, a traveling trajectory of a vehicle in a target direction of the host vehicle, wherein the target direction comprises at least one of a forward direction or a backward direction; determining, by the computing device, a third travelable lane based on the traveling trajectory of the vehicle in the target direction of the host vehicle; and correcting, by the computing device, the first travelable lane based on the third travelable lane (see at least "The numerical value, or “cost,” for a given cell of the cost map grid (for a cost map corresponding to time t) may represent a risk associated with the autonomous vehicle being in the area of the environment represented by that cell at time t. In some embodiments, the value/cost may also represent a deviation from some desired “target” location (e.g., from a waypoint along the intended route of the vehicle). The deviation may correspond to a distance from the target location, and the value/cost may increase with distance from the target location. In some embodiments, the value/cost may represent multiple deviations from multiple respective target locations (e.g., the target locations may represent waypoints along a route)" in par. 0213).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Okano to incorporate the teachings of Englard wherein the costs of cells along a path are updated based on deviation from target waypoints of the vehicle navigating to its destination. The motivation to incorporate the teachings of Englard would be to improve safety and efficiency of autonomous driving decision making (see par. 0196).
Regarding Claim 7, Okano as modified by Englard teaches:
the method according to claim 6,
Okano does not appear to explicitly teach all of the following, but Englard does teach:
wherein the determining, by the computing device, the third travelable lane based on the traveling trajectory of the vehicle in the target direction of the host vehicle comprises:
determining, by the computing device, a trajectory model based on the traveling trajectory of the vehicle in the target direction of the host vehicle (see at least "The numerical value, or “cost,” for a given cell of the cost map grid (for a cost map corresponding to time t) may represent a risk associated with the autonomous vehicle being in the area of the environment represented by that cell at time t. In some embodiments, the value/cost may also represent a deviation from some desired “target” location (e.g., from a waypoint along the intended route of the vehicle). The deviation may correspond to a distance from the target location, and the value/cost may increase with distance from the target location. In some embodiments, the value/cost may represent multiple deviations from multiple respective target locations (e.g., the target locations may represent waypoints along a route)" in par. 0213); and
determining, by the computing device, the third travelable lane based on the trajectory model and a preset lateral deviation (see at least “The cost for locations within a lane in which the autonomous vehicle is traveling may vary across the lane (e.g., in a linear, exponential, Gaussian, or parabolic manner) so that the cost is low in the center of the lane and increases towards the edges of the lane.” in par. 0156 and "a third term may reflect the objective of staying at least some predetermined distance (e.g., 0.25 m, 0.5 m, etc.) away from any observed lane markings," in par. 0221).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Okano to incorporate the teachings of Englard wherein the costs of the vehicle laterally deviating from the center of a current lane or waypoint increases at a predetermined rate toward the edges of the lane. The motivation to incorporate the teachings of Englard would be to improve safety and efficiency of autonomous driving decision making (see par. 0196).
Regarding Claim 8, Okano teaches:
the method according to claim 1,
Okano does not appear to explicitly teach all of the following, but Englard does teach:
wherein the method further comprises:
calculating, by the computing device, a first curvature of the first travelable lane based on the first travelable lane and a heading angle of the host vehicle (see at least "A road or a lane marking may be tracked by tracking a geometric property (e.g., a shape, curvature, direction, or slope) of the road or lane marking over time." in par. 0091 and “The lane markings may appear as small but abrupt deviations in the path of the scan lines, for example, with those deviations collectively forming a line pattern that aligns with the direction of travel of the autonomous vehicle (e.g., approximately normal to the curve of the scan lines). The lane markings may be classified according to type (e.g., broken white line, broken yellow line, solid yellow line, double solid yellow line, etc.).” in par. 0129); and
correcting, by the computing device, the first travelable lane based on the first curvature (see at least " maintaining the heading of the autonomous vehicle based on the curvature of the lane or road on which the autonomous vehicle is traveling (e.g., turning the autonomous vehicle based on the curvature of the lane ahead)" in par. 0164).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Okano to incorporate the teachings of Englard wherein the curvature of the lane markings is tracked over time and used to steer the vehicle along the center of a curved lane. The motivation to incorporate the teachings of Englard would be to improve safety and efficiency of autonomous driving decision making (see par. 0196).
Regarding Claim 12, Okano teaches:
the method according to claim 1,
Okano does not appear to explicitly teach all of the following, but Englard does teach:
wherein the method further comprises:
predicting, based on a historical traveling trajectory of a vehicle in a target direction of the host vehicle, a traveling trajectory of the vehicle in the target direction of the host vehicle in a future target time period, wherein the target direction comprises at least one of a forward direction or a backward direction (see at least "For a given object, for example, the prediction component 220 may analyze the type/class of the object (as determined by the classification module 212) along with the recent tracked movement of the object (as determined by the tracking module 214) to predict one or more future positions of the object… In some embodiments, the prediction component 220 also predicts movement of objects based on more complex behaviors. For example, the prediction component 220 may assume that an object that has been classified as another vehicle will follow rules of the road (e.g., stop when approaching a red light), and will react in a certain way to other dynamic objects (e.g., attempt to maintain some safe distance from other vehicles). The prediction component 220 may inherently account for such behaviors by utilizing a neural network or other machine learning model, for example." in par. 0092); and
performing path planning based on the first travelable lane and the traveling trajectory of the vehicle in the target direction of the host vehicle in the future target time period (see at least "Referring again now to FIG. 11, the cost map generator 640 may generate a current cost map (“Cost Map 1” in FIG. 11) as well as a number of future/predicted cost maps (“Cost Map 2” through “Cost Map T” in FIG. 11, with T being any suitable integer greater than one). In some embodiments, each cost map corresponds to a particular, different one of the occupancy grids. If the perception signals 608 include a current occupancy grid corresponding to the time t0, and if prediction signals 622 include future occupancy grids corresponding to the times t1, t2 and t3, for example, then the cost maps 644 may include a current cost map corresponding to the time t0 as well as future cost maps corresponding to the times t1, t2 and t3. In an alternative embodiment, the SDCA 600 does not include the prediction component 620 or signals 609, 622, and prediction functions are accomplished entirely by the cost map generator 640. For example, the cost map generator 640 may receive only a current occupancy grid (in signals 608), and use that to generate both the current cost map “Cost Map 1” and the future cost maps “Cost Map 2” through “Cost Map T.”" in par. 0157).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Okano to incorporate the teachings of Englard wherein the future motion of obstacles in front of the host vehicle is predicted based on the tracked movement of the obstacles and path planning costs are updated accordingly. The motivation to incorporate the teachings of Englard would be to improve safety and efficiency of autonomous driving decision making (see par. 0196).
Regarding Claim 14, Okano as modified by Englard also teaches:
an apparatus for implementing the method of Claim 2 (see the Claim 2 analysis for rejection of the apparatus).
Regarding Claim 16, Okano as modified by Englard also teaches:
an apparatus for implementing the method of Claim 4 (see the Claim 4 analysis for rejection of the apparatus).
Regarding Claim 18, Okano as modified by Englard also teaches:
an apparatus for implementing the method of Claim 6 (see the Claim 6 analysis for rejection of the apparatus).
Regarding Claim 19, Okano as modified by Englard also teaches:
an apparatus for implementing the method of Claim 7 (see the Claim 7 analysis for rejection of the apparatus).
Regarding Claim 20, Okano as modified by Englard also teaches:
an apparatus for implementing the method of Claim 8 (see the Claim 8 analysis for rejection of the apparatus).
Claims 5, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Okano et al (US 20170147889, hereinafter Okano) in view of Englard et al (US 20190113927, hereinafter Englard) and Tran (US 20210108926, hereinafter Tran).
Regarding Claim 5, Okano as modified by Englard teaches:
the method according to claim 4,
Okano and Englard do not appear to explicitly teach all of the following, but Tran does teach:
wherein the determining, by the computing device, the second travelable lane based on the border of the freespace in the target direction of the host vehicle comprises:
determining, by the computing device, a border model based on feature information of the border of the freespace in the target direction of the host vehicle (see at least "The system includes capturing a point cloud from a vehicle street view and converting the point cloud to a 3D model; applying a trained neural network to detect street signs, cross walks, obstacles, or bike lanes; and update a high definition (HD) map with the neural network output" in par. 0123),
wherein the feature information comprises at least one of a border position (see at least "The system automatically detects and updates the HD map with the following. The obstacles include rock, construction, or semi-permanent structures on a street. The system can apply the neural network for detecting people or bicycles in the bike lane, detecting traffic lights. Text on the street can be detected and then converted into a rule, for example no crossing rule or stop. The system can detect the cross walk by detecting bars between two facing street sides. A concrete or painted street divider can be detected. The system includes detecting a street curb or railways on a street. The system can detect a transition zone from a street to grass or pavement and identifies the beginning of the lane by applying a minimum lane width as required by regulation." in par. 0124),
an angle (see at least "In another embodiment, the occupancy map 530 is represented using a 3D volumetric grid of cells at 5-10 cm resolution. Each cell indicates whether or not a surface exists at that cell, and if the surface exists, a direction along which the surface is oriented." in par. 0110),
a type (see at least "The HD map comprises information describing lanes including spatial location of lanes and semantic information about each lane. The spatial location of a lane comprises the geometric location in latitude, longitude and elevation at high precision, for example, at or below 10 cm precision. The semantic information of a lane comprises restrictions such as direction, speed, type of lane (for example, a lane for going straight, a left turn lane, a right turn lane, an exit lane, and the like), restriction on crossing to the left, connectivity to other lanes and so on. The landmark map may further comprise information describing stop lines, yield lines, spatial location of crosswalks, safely navigable space, spatial location of speed bumps, curb, and road signs comprising spatial location and type of all signage that is relevant to driving restrictions." in par. 0109); and
determining, by the computing device, the second travelable lane based on the border model, a position of the host vehicle, and a preset lane width (see at least “The software can determine the current location of the vehicle, for example, when the vehicle starts and as the vehicle moves along a route. A localize software determines an accurate location of the vehicle within the HD Map based on location provided by GPS, vehicle motion data provided by IMU, LIDAR scanner data, and camera images.” in par. 0072 and "The system can detect a transition zone from a street to grass or pavement and identifies the beginning of the lane by applying a minimum lane width as required by regulation." in par. 0124 and “In still other embodiments, the lane information may be derived from a leading vehicle that is in front of the vehicle in the lane and correlation with other information such as map data and independent lane analysis to prevent the blind-following-the-blind situation… For example, the computer system may estimate the location of the lane to include the estimated path (e.g., extend by half of a predetermined lane width on either side of the estimated path). Other examples are possible as well.” in par. 0188).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Okano as modified by Englard to incorporate the teachings of Tran, wherein online lidar data is used to update an occupancy grid with information such as lane marking positions, lane type, and lane direction, and to determine open space to travel based on a predetermined lane width when needed. The motivation to incorporate the teachings of Tran would be to determine the vehicle's location relative to lane geometry more precisely and to extract features of a given lane more efficiently (see par. 0114).
Regarding Claim 10, Okano as modified by Englard teaches:
the method according to claim 8,
Okano and Englard do not appear to explicitly teach all of the following, but Tran does teach:
wherein the heading angle of the host vehicle is obtained by using a global positioning system (GPS) (see at least "In an embodiment, the driver monitoring system may also include a GPS receiver (or other similar technology designed to track location) configured to track the location and directional movement of the driver in either real-time or over-time modes. As is well known in the art, GPS signals may be used to calculate the latitude and longitude of a driver as well as allowing for tracking of driver movement by inferring speed and direction from positional changes. Signals from GPS satellites also allow for calculating the elevation and, hence, vertical movement, of the driver." in par. 0208).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Okano as modified by Englard to incorporate the teachings of Tran wherein GPS traces over time are used to determine the direction of travel of the vehicle. The motivation to incorporate the teachings of Tran would be to better infer vehicle behavior with speed and direction of travel information (see par. 0208).
Regarding Claim 17, Okano as modified by Englard and Tran also teaches:
An apparatus for implementing the method of Claim 5 (see the Claim 5 analysis for the rejection of the corresponding apparatus).
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okano et al (US 20170147889, hereinafter Okano) in view of Englard et al (US 20190113927, hereinafter Englard) and Ku (US 20190193785, hereinafter Ku).
Regarding Claim 9, Okano as modified by Englard teaches:
the method according to claim 8,
wherein calculating, by the computing device, the first curvature of the first travelable lane based on the first travelable lane and the heading angle of the host vehicle comprises:
Okano further teaches: calculating, by the computing device, a lateral distance from the host vehicle to a lane line based on the first travelable lane (see at least "[T]he left white line 2 is detected as a distance Lf by the front camera 11F (refer to FIG. 4B), and as a distance Lb by the rear camera 11B (refer to FIG. 4C)." in par. 0072); and
Okano and Englard do not appear to explicitly teach all of the following, but Ku does teach:
calculating, by the computing device, the first curvature of the first travelable lane by using the heading angle of the host vehicle and the lateral distance as input variables and using a vehicle dynamics model and a correlation between a road curvature and a steering wheel angle (see at least "Firstly, the target distance calculating unit 420 calculates the vehicle dynamic parameters 410 of the vehicle 110 (e.g., the vehicle speed or the present steering angle) and a target time period T.sub.F to generate the target distance D… The faster the vehicle speed is, the larger the target distance D is. However, if the present steering angle of a steering wheel of the vehicle 110 is large, the system reduces target time period T.sub.F and the target distance D. The target distance D can be one or more limiting ranges, or one or multiple values. Secondly, the target distance D is transmitted from the target distance calculating unit 420 to the image processing device 300… After that, the curve fitting unit 340 of the image processing device 300 multiplies the coordinate data (x.sub.i,y.sub.i) of each of the lane markers 310 with the target weighting w.sub.image(x.sub.i), and then uses a weighted least squares method to generate the lane fitting curve y." in par. 0024).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Okano as modified by Englard to incorporate the teachings of Ku, wherein the vehicle's relative lateral position to lane lines and current steering angle are used to determine the lane fitting curve. The motivation to incorporate the teachings of Ku would be to perform steering control more smoothly and improve safety (see par. 0024).
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okano et al (US 20170147889, hereinafter Okano) in view of Tran (US 20210108926, hereinafter Tran).
Regarding Claim 11, Okano teaches:
the method according to claim 1, wherein the method further comprises:
Okano does not appear to explicitly teach all of the following, but Tran does teach:
determining, by the computing device, a second curvature of the first travelable lane based on a navigation map and a GPS (see at least "Once the cluster is selected, the unmatched segment module may produce a single road geometry for a cluster of unmatched GPS traces using a centerline fitting procedure in which the single road geometry describes a new road segment with the obstacle which is not described in the current map database. In one example, a polygonal principal curve algorithm or a Trace Clustering Algorithm (TC1) algorithm can be used. The digital map can be modified to include the new road, including possibly new intersections in the base map and any associated pointers or indices updated." in par. 0183 and "As another example, in embodiments where the at least one neighboring vehicle is traveling behind the vehicle in the lane in which the vehicle is traveling, the computer system may determine the estimated location of the lane to be an extrapolation (e.g., with constant curvature) of the estimated path." in par. 0195); and
correcting, by the computing device, the first travelable lane based on the second curvature (see at least "The system includes a map update module that updates previously computed map data by receiving more recent information from vehicles that recently travelled along routes on which map information changed. For example, if certain road signs have changed or lane information has changed as a result of construction in a region, the map update module updates the maps accordingly." in par. 0071 and "If vehicle sensors indicate a discrepancy in the map information provided by the online HD map, the discrepancy is reported and that may result in the online HD map updating the map data provided to other vehicles." in par. 0072 and "The process to update maps using crowd-sourced data may begin with the unmatched segment module clustering the unmatched GPS traces received from the map matching module." in par. 0183).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Okano to incorporate the teachings of Tran, wherein GPS traces collected over time are used to update the curvature of a lane when discrepancies are identified in the HD map. The motivation to incorporate the teachings of Tran would be to reduce discrepancies between the vehicle's HD map and the real world (see par. 0072).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN M KATZ whose telephone number is (571)272-2776. The examiner can normally be reached Mon-Thurs. 8:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin can be reached on (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DYLAN M KATZ/Examiner, Art Unit 3657