DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/21/2025 has been entered.
Response to Arguments
Applicant’s arguments (see Remarks), filed 11/21/2025, with respect to claims 1-20 have been fully considered but are not found persuasive.
Claim Rejections Under 35 U.S.C. §103
The applicant argues on page 11, “Jia fails to meet the requirements of "setting a plurality of lane grids each including a cell having a width of a predetermined lane width on a grid map" (emphasis added), as recited in claim 1.”
In response, the Office agrees. However, this argument does not apply to the current grounds for rejection and/or current combination of references.
The applicant argues on page 12, “it is respectfully submitted that Jia does not suggest "determining a road boundary candidate based on occupation percentages of objects for each of the plurality of lane grids calculated based on the object information and distributions of the freespace point data; and outputting road boundary information by correcting the calculated road boundary candidate based on information on a road boundary candidate determined at a previous time point" (emphasis added), as recited in claim 1.”
In response, the Office respectfully does not find this argument to be persuasive. Based on the breadth of the claim language, the prior art by JIA et al. (US 20220274601 A1), explicitly teaches determining a road boundary candidate based on occupation percentages of objects for each of the plurality of lane grids calculated based on the object information and distributions of the freespace point data (Fig. 3. Paragraph [0044]-JIA discloses the layer module 112 can generate multiple layer hypotheses 324 for multiple FOD layers. The grid-based road model 322 can include seven FODs, including a cell-category layer, lane-number layer, lane-marker-type layer, lane-marker-color layer, traffic-sign layer, pavement-marking layer, and lane-type layer. In paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein. A cell with low velocity and high statically-occupied probability is more likely to be a barrier. A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability); and
outputting road boundary information by correcting the calculated road boundary candidate based on information on a road boundary candidate determined at a previous time point (Fig. 3. Paragraph [0032]-JIA discloses the processor 202 can execute the input processing module 206 to extract mass values associated with different input data. The mass values indicate the confidence associated with the data contributing to layer hypotheses of the grid-based road model (wherein the layers include a cell-category layer, lane-number layer, lane-marker-type layer, lane-marker-color layer, traffic-sign layer, pavement-marking layer, and lane-type layer). In paragraph [0033]-JIA discloses the evidence fusion module 208 can compare data from one or more previous time instants with data from a current time instant to update a mass value associated with the input data. Please also see paragraph [0073]).
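For illustration of the temporal correction concept discussed above (JIA, paragraphs [0032]-[0033] and [0073]), the following is a minimal sketch in Python of correcting a current road boundary candidate using the candidate determined at a previous time point. The function name, the blending weight, and the data layout are hypothetical assumptions offered for readability only; the sketch is neither the claimed method nor JIA's implementation.

    def correct_boundary(current_candidate, previous_candidate, previous_weight=0.3):
        """Blend current lateral boundary offsets (meters) with the previous frame's."""
        if previous_candidate is None:
            return list(current_candidate)
        corrected = []
        for cur, prev in zip(current_candidate, previous_candidate):
            # Weighted blend: the previous-time estimate damps frame-to-frame jitter.
            corrected.append((1.0 - previous_weight) * cur + previous_weight * prev)
        return corrected

    if __name__ == "__main__":
        previous = [3.6, 3.6, 3.7, 3.7]        # boundary offsets at the previous time point
        current = [3.6, 4.1, 3.7, 3.8]         # current candidate; second entry is an outlier
        print(correct_boundary(current, previous))  # roughly [3.6, 3.95, 3.7, 3.77]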
The applicant argues on page 12, “Based on all the foregoing, it is respectfully submitted that the cited references, whether taken alone or in combination, do not disclose, teach or suggest each and every feature in the particular combination embodied by claim 1.”
In response, the Office respectfully disagrees for the reasons stated above and below.
The applicant argues on page 12, “claims 11 and 12 and the claims dependent thereupon are allowable over the cited prior art for at least reasons similar to those discussed above with respect to claim 1.”
In response, the Office respectfully disagrees for the reasons stated above and below.
The applicant argues on page 12, “Applicant therefore submits that the rejections of the claims are overcome and respectfully requests that the rejections of the claims under 35 U.S.C. §§ 102/103 be withdrawn.”
In response, the Office respectfully disagrees for the reasons stated above and below.
Applicant is encouraged to amend the claims, based on the Allowable Subject Matter section further below, to overcome the current grounds of rejection and combination of references.
Claim Objections
Claim 18 is objected to because of the following informalities:
At Lines 4-7, the term “set the freespace grids by dividing the lane grid of the lane selected as the road boundary lane candidate by 3, measure a number of forespace point data belonging to each freespace grid of the set freespace grids” should be changed to “set the freespace grids by dividing the lane grid of the lane selected as the road boundary lane candidate by 3, measure a number of freespace point data belonging to each freespace grid of the set freespace grids” to correct typographical error(s). Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 10-12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over JIA et al. (US 20220274601 A1), hereinafter referenced as JIA, in view of SONNTAG et al. (US 20170021864 A1), hereinafter referenced as SONNTAG.
Regarding claim 1, JIA explicitly teaches an object detection method of a vehicle LiDAR system (Fig. 1. Paragraph [0016]-JIA discloses this document describes road-perception techniques to accurately model the roadway and elements thereof as a grid of cells. The described road-perception techniques fuse information regarding the roadway from multiple sources (e.g., camera systems, Lidar systems, maps, radar systems, ultrasonics) to develop a grid with multiple layers. The multiple layers represent different roadway attributes), comprising:
determining a road boundary candidate based on occupation percentages of objects for each of the plurality of lane grids calculated based on the object information and distributions of the freespace point data (Fig. 3. Paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein. A cell with low velocity and high statically-occupied probability is more likely to be a barrier. A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability); and
outputting road boundary information by correcting the calculated road boundary candidate based on information on a road boundary candidate determined at a previous time point (Fig. 3. Paragraph [0032]-JIA discloses the processor 202 can execute the input processing module 206 to extract mass values associated with different input data. The mass values indicate the confidence associated with the data contributing to layer hypotheses of the grid-based road model. In paragraph [0033]-JIA discloses the evidence fusion module 208 can compare data from one or more previous time instants with data from a current time instant to update a mass value associated with the input data. Please also see paragraph [0073]).
Although JIA explicitly teaches setting a plurality of lane grids each having a width of a predetermined lane width (Fig. 3. Paragraph [0066]-JIA discloses the input processing module 206 can assign W.sub.L as the default lane width for a specific region (e.g., 3.66 meters in the United States) or based on the vision data 306. For a dashed lane marker, the input processing module 206 can assign the cells both towards and away from the vehicle 102 within a certain distance (e.g., 0.5 W.sub.L) as more likely to be lane-center cells. Please also read paragraph [0058-0060]) on a grid map (Fig. 3. Paragraph [0040]-JIA discloses FIG. 3 illustrates an example architecture 300 of the road-perception system 108 to generate a grid-based road model 322 with multiple layers. In paragraph [0041]-JIA discloses the grid 302 provides a grid representation of the roadway 120 and can be a static occupancy grid and/or a dynamic grid. The grid 302 can include the grid size, the grid resolution, and cell-center coordinates. In paragraph [0042]-JIA discloses the grid-based road model 322 includes multiple layer hypotheses 324 for each layer or frame of discernment defined by the road-perception system 108 and cell values 326 that indicate the respective belief parameters, plausibility parameters, and probabilities associated with the layer hypotheses 324 (wherein frames of discernment may include a cell-category layer, lane-number layer, lane-marker-type layer, lane-marker-color layer, traffic-sign layer, pavement-marking layer, and lane-type layer)) which is generated based on freespace point data and object information (Fig. 3. Paragraph [0058]-JIA discloses the FOD layer for the grid 302 can be defined in Equation (13) as: Θ={F,SO,DO} (wherein F indicates a free space, SO indicates a cell that is statically occupied, and DO indicates a cell that is dynamically occupied). In paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein. A cell with low velocity and high statically-occupied probability is more likely to be a barrier. A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability)), wherein the plurality of lane grids includes a lane grid of a host vehicle lane (Fig. 1. Paragraph [0018]-JIA discloses FIG. 1 illustrates an example environment 100 in which a road-perception system 108 generates a grid-based road model with multiple layers. In the depicted environment 100, the road-perception system 108 is mounted to, or integrated within, a vehicle 102. The vehicle 102 can travel on a roadway 120, which includes lanes 122 (e.g., a first lane 122-1 and a second lane 122-2). The vehicle 102 is traveling in the first lane 122-1).
JIA is silent on setting a plurality of lane grids each including a cell having a width of a predetermined lane width on a grid map.
However, SONNTAG explicitly teaches setting a plurality of lane grids (Fig. 1. Paragraph [0050]-SONNTAG discloses according to a step 101, a surroundings of the vehicle is detected (wherein the detection uses surround sensors, including LIDAR). According to a step 103, the detected surroundings are subdivided into cells of an occupancy grid, the cells each having two opposite lateral boundaries relative to a longitudinal axis of the vehicle, the lateral boundaries being formed by lane markings. In paragraph [0057]-SONNTAG discloses FIG. 3 shows an occupancy grid 301. In paragraph [0058]-SONNTAG discloses occupancy grid 301 includes multiple cells 303, which are numbered consecutively from 1 through 15. In paragraph [0059]-SONNTAG discloses vehicle 309, which has detected its surroundings, is provided in cell “8”) each including a cell having a width of a predetermined lane width on a grid map (Fig. 1. Paragraph [0061]-SONNTAG discloses the Lateral boundaries 307 of cells 303 correspond to lane markings or line markings and thus advantageously define the individual lane widths. Please also see Fig. 2 and read paragraph [0028 and 0069-0070]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA of an object detection method of a vehicle LiDAR system with the teachings of SONNTAG of setting a plurality of lane grids each including a cell having a width of a predetermined lane width on a grid map.
In the resulting combination, JIA’s method would set a plurality of lane grids each including a cell having a width of a predetermined lane width on a grid map which is generated based on freespace point data and object information, wherein the plurality of lane grids includes a lane grid of a host vehicle lane.
The motivation behind the modification would have been to obtain a method that improves the computational efficiency and accuracy of object and road boundary detections, since both JIA and SONNTAG concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while SONNTAG’s systems and methods generate an occupancy grid that takes into account driving behavior and whose cells model the width of a lane and dynamically adjust longitudinally. Please see JIA et al. (US 20220274601 A1), Paragraphs [0016, 0022] and SONNTAG et al. (US 20170021864 A1), Abstract and Paragraphs [0031, 0035 and 0038].
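For illustration of the combined teaching mapped above (lane grids whose cells are one predetermined lane width wide, with occupation percentages computed from object information and freespace point data), the following is a minimal sketch in Python. The names, the 3.66 m default width (an example value drawn from JIA, paragraph [0066]), and the simple rounding scheme are illustrative assumptions only and do not represent the claimed method or the cited references' implementations.

    LANE_WIDTH_M = 3.66  # example default lane width (see JIA, paragraph [0066])

    def lane_index(lateral_offset_m, lane_width_m=LANE_WIDTH_M):
        """Map a lateral offset (meters, host lane centered at 0) to a lane-grid index."""
        return int(round(lateral_offset_m / lane_width_m))

    def occupation_percentages(object_points, freespace_points, lane_width_m=LANE_WIDTH_M):
        """Return, per lane-grid index, the fraction of its points that belong to objects."""
        counts = {}
        labeled = [(p, True) for p in object_points] + [(p, False) for p in freespace_points]
        for y, is_object in labeled:
            idx = lane_index(y, lane_width_m)
            occupied, total = counts.get(idx, (0, 0))
            counts[idx] = (occupied + (1 if is_object else 0), total + 1)
        return {idx: occ / tot for idx, (occ, tot) in counts.items() if tot}

    if __name__ == "__main__":
        objects = [7.4, 7.5, 7.6]          # lateral offsets of object points (e.g., a barrier)
        freespace = [0.1, -0.2, 3.5, 3.7]  # lateral offsets of freespace points
        pct = occupation_percentages(objects, freespace)
        # Lane grids with a high object-occupation percentage are road boundary candidates.
        print({idx: round(p, 2) for idx, p in sorted(pct.items())})  # {0: 0.0, 1: 0.0, 2: 1.0}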
Regarding claim 3, JIA in view of SONNTAG explicitly teaches the object detection method of claim 1. JIA further teaches wherein the determining of the road boundary candidate comprises:
selecting a road boundary lane candidate from among the plurality of lane grids (Fig. 3. Paragraph [0040]-JIA discloses FIG. 3 illustrates an example architecture 300 of the road-perception system 108 to generate a grid-based road model 322 with multiple layers. In paragraph [0041]-JIA discloses the grid 302 provides a grid representation of the roadway 120 and can be a static occupancy grid and/or a dynamic grid. In paragraph [0042]-JIA discloses the grid-based road model 322 includes multiple layer hypotheses 324 for each layer or frame of discernment defined by the road-perception system 108 and cell values 326 that indicate the respective belief parameters, plausibility parameters, and probabilities associated with the layer hypotheses 324 (wherein frames of discernment may include a cell-category layer, lane-number layer, lane-marker-type layer, lane-marker-color layer, traffic-sign layer, pavement-marking layer, and lane-type layer). Please also see Fig. 4); and
selecting the road boundary candidate by setting freespace grids which are obtained by dividing a lane selected as the road boundary lane candidate by 'n', wherein 'n' is a natural number (Fig. 4. Paragraph [0058]-JIA discloses each cell of the grid 302 indicates the mass value for each layer hypothesis 324 formed for the FOD layers. In paragraph [0059]-JIA discloses the input processing module 206 can convert the mass values provided by the grid 302 to probability values for each cell available (wherein probabilities are generated for the cell being either free space, statically occupied or dynamically occupied). In paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein: A cell with low velocity and high statically-occupied probability is more likely to be a barrier; A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center; A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability. Please also read paragraph [0066]).
Regarding claim 10, JIA in view of SONNTAG explicitly teaches the object detection method of claim 1, JIA further teaches further comprising obtaining, by a LiDAR sensor (Fig. 1, #104 called sensors. Paragraph [0021]-JIA discloses the vehicle 102 includes one or more sensors 104 to provide input data to one or more processors of the road-perception system 108. The sensors 104 can include a camera, a radar system and a lidar system. The radar system or a lidar system can use electromagnetic signals to detect objects in the roadway 120 or features of the roadway 120), the freespace point data and the object information before the setting of the lane grids (Fig. 3. Paragraph [0041]-JIA discloses the architecture 300 illustrates sources that are input to the fused-grid module 110. The input sources can include a grid 302, lidar data 304, vision data 306, vehicle-state data 308, and the map 106. The grid 302 provides a grid representation of the roadway 120 and can be a static occupancy grid and/or a dynamic grid. In paragraph [0053]-JIA discloses example operations of the fused-grid module 110 and the layer module 112 to generate the grid-based road model 322 from the input data are now described. At function 310, the input processing module 206 validates the input data, including the grid 302, the lidar data 304, the vision data 306, the vehicle-state data 308, and the map 106).
Regarding claim 11, JIA explicitly teaches a non-transitory computer-readable recording medium storing a program for executing an object detection method of a vehicle LiDAR system, wherein execution of the program causes a processor to:
determine a road boundary candidate based on occupation percentages of objects for each of the plurality of lane grids calculated based on the object information and distributions of the freespace point data (Fig. 3. Paragraph [0058]-JIA discloses the FOD layer for the grid 302 can be defined in Equation (13) as: Θ={F,SO,DO} (wherein F indicates a free space, SO indicates a cell that is statically occupied, and DO indicates a cell that is dynamically occupied). In paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein. A cell with low velocity and high statically-occupied probability is more likely to be a barrier. A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability); and
output road boundary information by correcting the calculated road boundary candidate based on information on a road boundary candidate determined at a previous time point (Fig. 3. Paragraph [0032]- JIA discloses the processor 202 can execute the input processing module 206 to extract mass values associated with different input data. The mass values indicate the confidence associated with the data contributing to layer hypotheses of the grid-based road model. In paragraph [0033]-JIA discloses the processor 202 can execute the evidence fusion module 208 to update mass values associated with the input data recursively. The evidence fusion module 208 can compare data from one or more previous time instants with data from a current time instant to update a mass value associated with the input data. Please also see paragraph [0073 and 0078]).
Although JIA explicitly teaches set a plurality of lane grids each having a width of a predetermined lane width on a grid map (Fig. 3. Paragraph [0040]-JIA discloses FIG. 3 illustrates an example architecture 300 of the road-perception system 108 to generate a grid-based road model 322 with multiple layers. In paragraph [0042]-JIA discloses the output of the layer module 112 is the grid-based road model 322. The grid-based road model 322 includes multiple layer hypotheses 324 for each layer or frame of discernment defined by the road-perception system 108 and cell values 326 that indicate the respective belief parameters, plausibility parameters, and probabilities associated with the layer hypotheses 324 (wherein frames of discernment may include a cell-category layer, lane-number layer, lane-marker-type layer, lane-marker-color layer, traffic-sign layer, pavement-marking layer, and lane-type layer). Further in paragraph [0066]-JIA discloses the input processing module 206 can assign W.sub.L as the default lane width for a specific region (e.g., 3.66 meters in the United States) or based on the vision data 306. For a dashed lane marker, the input processing module 206 can assign the cells both towards and away from the vehicle 102 within a certain distance (e.g., 0.5 W.sub.L) as more likely to be lane-center cells. Please also read paragraph [0078]) which is generated based on freespace point data and object information (Fig. 3. Paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability. Please also read paragraph [0056-0059]), wherein the plurality of lane grids includes a lane grid of a host vehicle lane (Fig. 3. Paragraph [0018]-JIA discloses FIG. 1 illustrates an example environment 100 in which a road-perception system 108 generates a grid-based road model with multiple layers. In the depicted environment 100, the road-perception system 108 is mounted to, or integrated within, a vehicle 102. The vehicle 102 can travel on a roadway 120, which includes lanes 122 (e.g., a first lane 122-1 and a second lane 122-2). In this implementation, the vehicle 102 is traveling in the first lane 122-1). In paragraph [0023]-JIA discloses the road-perception system 108 can use a fused-grid module 110 and a layer module 112 to represent the roadway 120 as a grid-based road model. The road-perception system 108 uses multiple frames of discernment (FOD) to define and describe the possible states or hypotheses for a specific attribute of the roadway 120. The FODs can include at least two of the following: cell category, lane number, lane-marker type, lane-marker color, traffic signage, pavement marking, and lane type. The road-perception system 108 can define additional FODs as required by vehicle-based systems 114. Please also read paragraph [0026]);
JIA is silent on set a plurality of lane grids each including a cell having a width of a predetermined lane width on a grid map.
However, SONNTAG explicitly teaches set a plurality of lane grids (Fig. 1. Paragraph [0050]-SONNTAG discloses according to a step 101, a surroundings of the vehicle is detected (wherein the detection uses surround sensors, including LIDAR). According to a step 103, the detected surroundings are subdivided into cells of an occupancy grid, the cells each having two opposite lateral boundaries relative to a longitudinal axis of the vehicle, the lateral boundaries being formed by lane markings. In paragraph [0057]-SONNTAG discloses FIG. 3 shows an occupancy grid 301. In paragraph [0058]-SONNTAG discloses occupancy grid 301 includes multiple cells 303, which are numbered consecutively from 1 through 15. In paragraph [0059]-SONNTAG discloses vehicle 309, which has detected its surroundings, is provided in cell “8”) each including a cell having a width of a predetermined lane width on a grid map (Fig. 1. Paragraph [0061]-SONNTAG discloses the Lateral boundaries 307 of cells 303 correspond to lane markings or line markings and thus advantageously define the individual lane widths. Please also see Fig. 2 and read paragraph [0028 and 0069-0070]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA of a non-transitory computer-readable recording medium storing a program for executing an object detection method of a vehicle LiDAR system with the teachings of SONNTAG of setting a plurality of lane grids each including a cell having a width of a predetermined lane width on a grid map.
In the resulting combination, execution of the program stored on JIA’s non-transitory computer-readable recording medium would cause the processor to set a plurality of lane grids each including a cell having a width of a predetermined lane width on a grid map which is generated based on freespace point data and object information, wherein the plurality of lane grids includes a lane grid of a host vehicle lane.
The motivation behind the modification would have been to obtain a non-transitory computer-readable recording medium that improves the computational efficiency and accuracy of object and road boundary detections, since both JIA and SONNTAG concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while SONNTAG’s systems and methods generate an occupancy grid that takes into account driving behavior and whose cells model the width of a lane and dynamically adjust longitudinally. Please see JIA et al. (US 20220274601 A1), Paragraphs [0016, 0022] and SONNTAG et al. (US 20170021864 A1), Abstract and Paragraphs [0031, 0035 and 0038].
Regarding claim 12, JIA explicitly teaches a vehicle LiDAR system (Fig. 2, #108 called a Road-Perception System. Paragraph [0028]) comprising:
a LiDAR sensor (Fig. 1, #104 called sensors. Paragraph [0021]) configured to obtain freespace point data and object information (Fig. 1. Paragraph [0021]-JIA discloses the vehicle 102 includes one or more sensors 104 to provide input data to one or more processors of the road-perception system 108. The sensors 104 can include a camera, a radar system and a lidar system. The radar system or a lidar system can use electromagnetic signals to detect objects in the roadway 120 or features of the roadway 120); and
Although JIA explicitly teaches a LiDAR signal processing device including a non-transitory memory storing computer instructions (Fig. 2, # 204 called computer-readable storage media (CRM). Paragraph [0028]) and one or more processors (Fig. 1, #202 called processors. Paragraph [0021]. Further in paragraph [0028]-JIA discloses the road-perception system 108 can include one or more processors 202 and computer-readable storage media (CRM) 204). Further in paragraph [0031]-JIA discloses the processor 202 executes computer-executable instructions stored within the CRM 204) configured to:
execute the computer instructions to cause the LiDAR signal processing device to set a plurality of lane grids (Fig. 3. Paragraph [0031]-JIA discloses the processor 202 can execute the fused-grid module 110 to generate a grid-based road model of the roadway 120. The fused-grid module 110 can generate the grid-based road model using data from the map 106 stored in the CRM 204 or obtained from the sensors 104) each including a cell having a width of a predetermined lane width on a grid map (Fig. 3. Paragraph [0040]-JIA discloses FIG. 3 illustrates an example architecture 300 of the road-perception system 108 to generate a grid-based road model 322 with multiple layers. In paragraph [0042]-JIA discloses the output of the layer module 112 is the grid-based road model 322. The grid-based road model 322 includes multiple layer hypotheses 324 for each layer or frame of discernment defined by the road-perception system 108 and cell values 326 that indicate the respective belief parameters, plausibility parameters, and probabilities associated with the layer hypotheses 324 (wherein frames of discernment may include a cell-category layer, lane-number layer, lane-marker-type layer, lane-marker-color layer, traffic-sign layer, pavement-marking layer, and lane-type layer). In paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability. Further in paragraph [0066]-JIA discloses the input processing module 206 can assign W.sub.L as the default lane width for a specific region (e.g., 3.66 meters in the United States) or based on the vision data 306. For a dashed lane marker, the input processing module 206 can assign the cells both towards and away from the vehicle 102 within a certain distance (e.g., 0.5 W.sub.L) as more likely to be lane-center cells. Please also read paragraph [0056-0059]), which is generated based on the freespace point data and the object information obtained through the LiDAR sensor (Fig. 1, #104 called sensors. Paragraph [0021]), the plurality of lane grids including a lane grid of a host vehicle lane (Fig. 3. Paragraph [0018]-JIA discloses FIG. 1 illustrates an example environment 100 in which a road-perception system 108 generates a grid-based road model with multiple layers. In the depicted environment 100, the road-perception system 108 is mounted to, or integrated within, a vehicle 102. The vehicle 102 can travel on a roadway 120, which includes lanes 122 (e.g., a first lane 122-1 and a second lane 122-2). In this implementation, the vehicle 102 is traveling in the first lane 122-1). In paragraph [0023]-JIA discloses the road-perception system 108 can use a fused-grid module 110 and a layer module 112 to represent the roadway 120 as a grid-based road model. The road-perception system 108 uses multiple frames of discernment (FOD) to define and describe the possible states or hypotheses for a specific attribute of the roadway 120. The FODs can include at least two of the following: cell category, lane number, lane-marker type, lane-marker color, traffic signage, pavement marking, and lane type. The road-perception system 108 can define additional FODs as required by vehicle-based systems 114. 
Please also read paragraph [0026]), determine a road boundary candidate based on occupation percentages of objects for each of the plurality of lane grids calculated based on the object information and distributions of the freespace point data (Fig. 3. Paragraph [0058]-JIA discloses the FOD layer for the grid 302 can be defined in Equation (13) as: Θ={F,SO,DO} (wherein F indicates a free space, SO indicates a cell that is statically occupied, and DO indicates a cell that is dynamically occupied). In paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein. A cell with low velocity and high statically-occupied probability is more likely to be a barrier. A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability), and output road boundary information by correcting the calculated road boundary candidate based on information on a road boundary candidate determined at a previous time point (Fig. 3. Paragraph [0032]-JIA discloses the processor 202 can execute the input processing module 206 to extract mass values associated with different input data. The mass values indicate the confidence associated with the data contributing to layer hypotheses of the grid-based road model. In paragraph [0033]-JIA discloses the processor 202 can execute the evidence fusion module 208 to update mass values associated with the input data recursively. For example, the evidence fusion module 208 can compare data from one or more previous time instants with data from a current time instant to update a mass value associated with the input data. Please also see paragraph [0073]).
JIA is silent on execute the computer instructions to cause the LiDAR signal processing device to set a plurality of lane grids each including a cell having a width of a predetermined lane width on a grid map.
However, SONNTAG explicitly teaches execute the computer instructions to cause the LiDAR signal processing device to set a plurality of lane grids (Fig. 1. Paragraph [0050]-SONNTAG discloses according to a step 101, a surroundings of the vehicle is detected (wherein the detection uses surround sensors, including LIDAR). According to a step 103, the detected surroundings are subdivided into cells of an occupancy grid, the cells each having two opposite lateral boundaries relative to a longitudinal axis of the vehicle, the lateral boundaries being formed by lane markings. In paragraph [0057]-SONNTAG discloses FIG. 3 shows an occupancy grid 301. In paragraph [0058]-SONNTAG discloses occupancy grid 301 includes multiple cells 303, which are numbered consecutively from 1 through 15. In paragraph [0059]-SONNTAG discloses vehicle 309, which has detected its surroundings, is provided in cell “8”. Please also see Fig. 2 and read paragraph [0028 and 0069-0070]) each including a cell having a width of a predetermined lane width on a grid map (Fig. 1. Paragraph [0061]-SONNTAG discloses the Lateral boundaries 307 of cells 303 correspond to lane markings or line markings and thus advantageously define the individual lane widths. Please also see Fig. 2 and read paragraph [0028 and 0069-0070]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA of a vehicle LiDAR system with the teachings of SONNTAG of executing the computer instructions to cause the LiDAR signal processing device to set a plurality of lane grids each including a cell having a width of a predetermined lane width on a grid map.
In the resulting combination, JIA’s system would execute the computer instructions to cause the LiDAR signal processing device to set a plurality of lane grids each including a cell having a width of a predetermined lane width on a grid map which is generated based on the freespace point data and the object information obtained through the LiDAR sensor, the plurality of lane grids including a lane grid of a host vehicle lane, determine a road boundary candidate based on occupation percentages of objects for each of the plurality of lane grids calculated based on the object information and distributions of the freespace point data, and output road boundary information by correcting the calculated road boundary candidate based on information on a road boundary candidate determined at a previous time point.
The motivation behind the modification would have been to obtain a system that improves the computational efficiency and accuracy of object and road boundary detections, since both JIA and SONNTAG concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while SONNTAG’s systems and methods generate an occupancy grid that takes into account driving behavior and whose cells model the width of a lane and dynamically adjust longitudinally. Please see JIA et al. (US 20220274601 A1), Paragraphs [0016, 0022] and SONNTAG et al. (US 20170021864 A1), Abstract and Paragraphs [0031, 0035 and 0038].
Regarding claim 14, JIA in view of SONNTAG explicitly teaches the vehicle LiDAR system of claim 12, JIA explicitly teaches wherein the one or more processors (Fig. 1, #202 called processors. Paragraph [0021]) are further configured to execute the computer instructions to cause LiDAR signal processing device (Fig. 1, #202 called processors. Paragraph [0021]. Further in paragraph [0028]-JIA discloses the road-perception system 108 can include one or more processors 202 and computer-readable storage media (CRM) 204). Further in paragraph [0031]-JIA discloses the processor 202 executes computer-executable instructions stored within the CRM 204) select a road boundary lane candidate from among the plurality of lane grids (Fig. 3. Paragraph [0040]-JIA discloses FIG. 3 illustrates an example architecture 300 of the road-perception system 108 to generate a grid-based road model 322 with multiple layers. In paragraph [0041]-JIA discloses the grid 302 provides a grid representation of the roadway 120 and can be a static occupancy grid and/or a dynamic grid. In paragraph [0042]-JIA discloses the grid-based road model 322 includes multiple layer hypotheses 324 for each layer or frame of discernment defined by the road-perception system 108 and cell values 326 that indicate the respective belief parameters, plausibility parameters, and probabilities associated with the layer hypotheses 324 (wherein frames of discernment may include a cell-category layer, lane-number layer, lane-marker-type layer, lane-marker-color layer, traffic-sign layer, pavement-marking layer, and lane-type layer)), and
select the road boundary candidate by setting freespace grids (Fig. 4. Paragraph [0058]-JIA discloses each cell of the grid 302 indicates the mass value for each layer hypothesis 324 formed for the FOD layers. In paragraph [0059]-JIA discloses the input processing module 206 can convert the mass values provided by the grid 302 to probability values for each cell available (wherein probabilities are generated for the cell being either free space, statically occupied or dynamically occupied)) which are obtained by dividing a lane grid selected as the road boundary lane candidate by 'n', wherein 'n' is a natural number (Fig. 4. Paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein: A cell with low velocity and high statically-occupied probability is more likely to be a barrier; A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center; A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability. Further in paragraph [0066]-JIA discloses the input from the vision data 306 or the lidar data 304 includes polylines, or other parametric or non-parametric models, indicating lane markers and edges of the roadway 120. The vision data 306 and the lidar data 304 can also indicate semantic information for the lanes 122. The input processing module 206 can use mapping functions to map the vision data 306 or the lidar data 304 to the confidence of a cell to be a specific category within the cell-category layer. Another mapping function provides that the cell towards the vehicle 102 with a distance 0.5 W.sub.L from solid lane-marker cells to the cells intersected by the lane marker are more likely to be lane-center cells, L.sub.c. The input processing module 206 can assign W.sub.L as the default lane width for a specific region (e.g., 3.66 meters in the United States) or based on the vision data 306. For a dashed lane marker, the input processing module 206 can assign the cells both towards and away from the vehicle 102 within a certain distance (e.g., 0.5 W.sub.L) as more likely to be lane-center cells).
Claims 2, 7, 13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over JIA et al. (US 20220274601 A1), hereinafter referenced as JIA, in view of SONNTAG et al. (US 20170021864 A1), hereinafter referenced as SONNTAG, and in further view of ABBOTT et al. (US 20210042535 A1), hereinafter referenced as ABBOTT.
Regarding claim 2, JIA in view of SONNTAG explicitly teaches the object detection method of claim 1. However, JIA in view of SONNTAG fails to explicitly teach wherein the determining of the road boundary candidate comprises: extracting point data of a region-of-interest from the freespace point data; deleting points which are not matched to an object, among the extracted point data in the region-of-interest; and generating the grid map based on freespace point data remaining after deleting the points which are not matched to the object.
However, ABBOTT explicitly teaches wherein the determining of the road boundary candidate comprises:
extracting point data of a region-of-interest from the freespace point data (Fig. 2C. Paragraph [0044]-ABBOTT discloses the output of freespace detection 106 may represent portions of the environment—represented within images captured by camera(s) of the vehicle 900—that correspond to drivable freespace and/or non-drivable space on a driving surface. In paragraph [0065]-ABBOTT discloses FIG. 2C illustrates defining an object fence using a drivable freespace determination. The drivable freespace 232 may be cropped out of the bounding shape 204 and/or the cropped bounding shape 222 to generate an object fence 110. In paragraph [0044]-ABBOTT discloses the output of freespace detection 106 may represent portions of the environment—represented within images captured by camera(s) of the vehicle 900—that correspond to drivable freespace and/or non-drivable space on a driving surface);
deleting points which are not matched to an object, among the extracted point data in the region-of-interest (Fig. 2D. Paragraph [0065]-ABBOTT discloses the points corresponding to drivable freespace may be removed from the points corresponding to the bounding shape (or cropped version thereof) to generate the object fence 110); and
generating the grid map based on freespace point data remaining after deleting the points which are not matched to the object (Fig. 9C. Paragraph [0105]-ABBOTT discloses the outputs may include information such as vehicle velocity, speed, time, map data (e.g., the HD map 922 of FIG. 9C), location data (e.g., the vehicle's 900 location, such as on a map), direction, location of other vehicles (e.g., an occupancy grid), information about objects and status of objects as perceived by the controller(s) 936, etc.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA in view of SONNTAG of an object detection method of a vehicle LiDAR system with the teachings of ABBOTT in which the determining of the road boundary candidate comprises: extracting point data of a region-of-interest from the freespace point data; deleting points which are not matched to an object, among the extracted point data in the region-of-interest; and generating the grid map based on freespace point data remaining after deleting the points which are not matched to the object.
In the resulting combination, the determining of the road boundary candidate in JIA’s method comprises: extracting point data of a region-of-interest from the freespace point data; deleting points which are not matched to an object, among the extracted point data in the region-of-interest; and generating the grid map based on freespace point data remaining after deleting the points which are not matched to the object.
The motivation behind the modification would have been to obtain a method that improves the computational efficiency and accuracy of object and road boundary detections, since both JIA and ABBOTT concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while ABBOTT’s systems and methods provide for leveraging object detections, freespace detections, object fence detections, and/or lane detections to efficiently and accurately assign objects to respective lanes or other defined portions of an environment. Please see JIA et al. (US 20220274601 A1), Paragraphs [0016, 0022] and ABBOTT et al. (US 20210042535 A1), Abstract and Paragraphs [0003, 0062, 0126, 0154].
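For illustration of the claim 2 limitation mapped above (extracting region-of-interest points, deleting points not matched to an object, and generating a grid map from the remaining freespace point data), the following is a minimal sketch in Python. The names, the box representation, and the cell size are hypothetical assumptions; the sketch is neither ABBOTT's implementation nor the claimed method.

    def build_grid_map(points, object_boxes, roi, cell_size=0.5):
        """points: (x, y) tuples; object_boxes and roi: (xmin, ymin, xmax, ymax) tuples.
        Returns a sparse grid map as {(ix, iy): point count}."""
        def inside(p, box):
            x, y = p
            return box[0] <= x <= box[2] and box[1] <= y <= box[3]

        grid = {}
        for p in points:
            if not inside(p, roi):
                continue                          # outside the region-of-interest
            if not any(inside(p, b) for b in object_boxes):
                continue                          # delete points not matched to an object
            ix, iy = int(p[0] // cell_size), int(p[1] // cell_size)
            grid[(ix, iy)] = grid.get((ix, iy), 0) + 1
        return grid

    if __name__ == "__main__":
        pts = [(1.0, 1.0), (1.1, 1.2), (9.0, 9.0), (2.0, 30.0)]
        boxes = [(0.5, 0.5, 2.0, 2.0)]            # one detected object
        print(build_grid_map(pts, boxes, roi=(0.0, 0.0, 20.0, 20.0)))  # {(2, 2): 2}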
Regarding claim 7, JIA in view of SONNTAG explicitly teaches the object detection method of claim 3. JIA further teaches wherein the selecting of the road boundary candidate comprises:
measuring a number of freespace point data belonging to each freespace grid of the set freespace grids (Fig. 3. Paragraph [0058]-JIA discloses each cell of the grid 302 indicates the mass value for each layer hypothesis 324 formed for the FOD layers. To distinguish from the FOD layers of the grid-based road model 322, the FOD layer for the grid 302 can be defined in Equation (13) as:Θ={F,SO,DO} where F indicates a free space, SO indicates a cell that is statically occupied, and DO indicates a cell that is dynamically occupied); and
selecting a freespace grid of which the measured number of freespace point data is equal to or greater than a threshold, as the road boundary candidate (Fig. 3. Paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein. A cell with low velocity and high statically-occupied probability is more likely to be a barrier. A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability).
However, JIA in view of SONNTAG fails to explicitly teach setting the freespace grids by dividing a lane grid of the lane selected as the road boundary lane candidate by 3.
However, ABBOTT explicitly teaches setting the freespace grids by dividing a lane grid of the lane selected as the road boundary lane candidate by 3 (Fig. 7A. Paragraph [0056]-ABBOTT discloses virtual lane generation 124 may be executed where one or more lanes (e.g., below a threshold number of lanes) are not detected. As such, virtual lane generation 124 may be used when there is limited lane data 126 and/or sensor data 102 for lane detection 112. In embodiments, a pre-determined number of lanes may be used to determine whether virtual lanes should be generated. For example, where a threshold is three lanes, when three lanes are not detected, virtual lanes may be generated to fill in the gaps. The three lanes, in such an example, may include at least an ego-lane of the vehicle 900, and an adjacent lane on either side of the ego-lane. Similar to lane extension 122, an algorithm or a machine learning model may be used to determine a number of virtual lanes to be generated and locations of the virtual lanes to be generated. By generating virtual lanes, even if not as accurate as actual detections, objects may be assigned to virtual lanes to provide a better understanding to the vehicle 900 of locations of objects relative to a path of the ego-vehicle).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA in view of SONNTAG of an object detection method of a vehicle LiDAR system with the teachings of ABBOTT of setting the freespace grids by dividing a lane grid of the lane selected as the road boundary lane candidate by 3.
In the resulting combination, the selecting of the road boundary candidate in JIA’s method comprises: setting the freespace grids by dividing a lane grid of the lane selected as the road boundary lane candidate by 3; measuring the number of freespace point data belonging to each freespace grid of the set freespace grids; and selecting a freespace grid of which the measured number of freespace point data is equal to or greater than a threshold, as the road boundary candidate.
The motivation behind the modification would have been to obtain a method that improves the computational efficiency and accuracy of object and road boundary detections, since both JIA and ABBOTT concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while ABBOTT’s systems and methods provide for leveraging object detections, freespace detections, object fence detections, and/or lane detections to efficiently and accurately assign objects to respective lanes or other defined portions of an environment. Please see JIA et al. (US 20220274601 A1), Paragraphs [0016, 0022] and ABBOTT et al. (US 20210042535 A1), Abstract and Paragraphs [0003, 0062, 0126, 0154].
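For illustration of the claim 7 limitation mapped above (dividing the lane grid selected as the road boundary lane candidate by 3, counting the freespace point data in each resulting freespace grid, and selecting the grids that meet a threshold), the following is a minimal sketch in Python. The function name, the example threshold, and the sample offsets are hypothetical assumptions and do not represent the claimed method or the cited references.

    def select_boundary_subgrids(freespace_offsets, lane_left_m, lane_width_m=3.66,
                                 n_divisions=3, threshold=5):
        """freespace_offsets: lateral offsets (meters) of freespace points.
        lane_left_m: left edge of the candidate lane grid. Returns (indices, counts)."""
        sub_width = lane_width_m / n_divisions
        counts = [0] * n_divisions
        for y in freespace_offsets:
            k = int((y - lane_left_m) // sub_width)
            if 0 <= k < n_divisions:
                counts[k] += 1
        selected = [k for k, c in enumerate(counts) if c >= threshold]
        return selected, counts

    if __name__ == "__main__":
        # Freespace points cluster in the outer third of the candidate lane (e.g., a curb).
        offsets = [6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 4.1]
        picked, counts = select_boundary_subgrids(offsets, lane_left_m=3.66, threshold=5)
        print(picked, counts)   # [2] [1, 0, 6]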
Regarding claim 13, JIA in view of SONNTAG explicitly teaches the vehicle LiDAR system of claim 12. However, JIA in view of SONNTAG fails to explicitly teach wherein the one or more processors are further configured to execute the computer instructions to cause the LiDAR signal processing device comprises: extract point data of a region-of-interest from the freespace point data, and to delete points which are not matched to an object, among the extracted point data in the region-of-interest; and generate the grid map based on freespace point data remaining after deleting the points which are not matched to the object.
However, ABBOTT explicitly teaches wherein the one or more processors (Fig. 9C, #906 called a CPU. Paragraph [0120]) are further configured to execute the computer instructions (Fig. 7B. Paragraph [0060]-ABBOTT discloses where an object may be in more than one lane, a set of points that crosses the lane boundary may be determined. A cross point 722A) to cause the LiDAR signal processing device (Fig. 9C. Paragraph [0116] FIG. 9C is a block diagram of an example system architecture for the example autonomous vehicle 900 of FIG. 9A. In paragraph [0104]-ABBOTT discloses the sensor data may be received from LIDAR sensor(s) 964) comprises:
extract point data of a region-of-interest from the freespace point data (Fig. 2C. Paragraph [0044]-ABBOTT discloses the output of freespace detection 106 may represent portions of the environment—represented within images captured by camera(s) of the vehicle 900—that correspond to drivable freespace and/or non-drivable space on a driving surface. The points corresponding to drivable freespace may be removed from the points corresponding to the bounding shape (or cropped version thereof) to generate the object fence 110), and to delete points which are not matched to an object, among the extracted point data in the region-of-interest (Fig. 2D. Paragraph [0044]-ABBOTT discloses the points corresponding to drivable freespace may be removed from the points corresponding to the bounding shape (or cropped version thereof) to generate the object fence 110); and
generate the grid map based on freespace point data remaining after deleting the points which are not matched to the object (Fig. 9C. Paragraph [0105]-ABBOTT discloses the outputs may include information such as vehicle velocity, speed, time, map data (e.g., the HD map 922 of FIG. 9C), location data (e.g., the vehicle's 900 location, such as on a map), direction, location of other vehicles (e.g., an occupancy grid), information about objects and status of objects as perceived by the controller(s) 936, etc.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA in view of SONNTAG of a vehicle LiDAR system with the teachings of ABBOTT in which the LiDAR signal processing device comprises: a point extraction unit configured to extract point data of a region-of-interest from the freespace point data, and to delete points which are not matched to an object, among the extracted point data in the region-of-interest; and a grid map generation unit configured to generate the grid map based on freespace point data remaining after deleting the points which are not matched to the object.
In the resulting combination, the LiDAR signal processing device of JIA’s system comprises: a point extraction unit configured to extract point data of a region-of-interest from the freespace point data, and to delete points which are not matched to an object, among the extracted point data in the region-of-interest; and a grid map generation unit configured to generate the grid map based on freespace point data remaining after deleting the points which are not matched to the object.
The motivation behind the modification would have been to obtain a system that improves the computational efficiency and accuracy of object and road boundary detections, since both JIA and ABBOTT concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while ABBOTT’s systems and methods provide for leveraging object detections, freespace detections, object fence detections, and/or lane detections to efficiently and accurately assign objects to respective lanes or other defined portions of an environment. Please see JIA et al. (US 20220274601 A1), Paragraphs [0016, 0022] and ABBOTT et al. (US 20210042535 A1), Abstract and Paragraphs [0003, 0062, 0126, 0154].
Regarding claim 18, JIA in view of SONNTAG explicitly teaches the vehicle LiDAR system of claim 14. JIA further teaches instructions to cause the LiDAR signal processing device to:
set the freespace grids by dividing the lane grid of the lane selected as the road boundary lane candidate (Fig. 3. Paragraph [0041]-JIA discloses the architecture 300 illustrates example information sources that are input to the fused-grid module 110. The input sources can include a grid 302, lidar data 304, vision data 306, vehicle-state data 308, and the map 106. The grid 302 provides a grid representation of the roadway 120 and can be a static occupancy grid and/or a dynamic grid. The grid 302 can include the grid size, the grid resolution, and cell-center coordinates. Each cell of the grid 302 can also indicate the velocity of the vehicle 102. The lidar data 304 provides information about objects on the roadway 120 and roadway features. The vision data 306 can include still images or video of the roadway 120 and provide information about lane boundaries, objects on the roadway 120, and other roadway features. The map 106 can provide information about the features of the roadway 120 and the lanes 122), measure a number of forespace point data belonging to each freespace grid of the set freespace grids (Fig. 3. Paragraph [0058]-JIA discloses the input from the grid 302 includes grid size, grid resolution, and cell center coordinates. Each cell of the grid 302 indicates the mass value for each layer hypothesis 324 formed for the FOD layers. To distinguish from the FOD layers of the grid-based road model 322, the FOD layer for the grid 302 can be defined in Equation (13) as: Θ={F,SO,DO}, where F indicates a free space, SO indicates a cell that is statically occupied, and DO indicates a cell that is dynamically occupied. Each cell of the grid 302 also indicates the velocity V, which the input processing module 206 can use for mass value extraction), and select a freespace grid of which the measured number of freespace point data is equal to or greater than a threshold, as the road boundary candidate (Fig. 3. Paragraph [0059]-JIA discloses the input processing module 206 can convert the mass values provided by the grid 302 to probability values using a pignistic transformation. The input processing module 206 then has the following probability value for each cell available: p.sub.F(.Math.), which indicates the probability of a cell to be a free space; p.sub.SO (.Math.), which indicates the probability of a cell to be statically occupied; and p.sub.DO(.Math.) which indicates the probability of a cell to be dynamically occupied. Further in paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein. A cell with low velocity and high statically-occupied probability is more likely to be a barrier. A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability).
JIA in view of SONNTAG fail to explicitly teach set the freespace grids by dividing the lane grid of the lane selected as the road boundary lane candidate by 3.
However, ABBOTT explicitly teaches set the freespace grids by dividing the lane grid of the lane selected as the road boundary lane candidate by 3 (Fig. 7A. Paragraph [0056]-ABBOTT discloses virtual lane generation 124 may be executed where one or more lanes (e.g., below a threshold number of lanes) are not detected. As such, virtual lane generation 124 may be used when there is limited lane data 126 and/or sensor data 102 for lane detection 112. In embodiments, a pre-determined number of lanes may be used to determine whether virtual lanes should be generated. For example, where a threshold is three lanes, when three lanes are not detected, virtual lanes may be generated to fill in the gaps. The three lanes, in such an example, may include at least an ego-lane of the vehicle 900, and an adjacent lane on either side of the ego-lane. Similar to lane extension 122, an algorithm or a machine learning model may be used to determine a number of virtual lanes to be generated and locations of the virtual lanes to be generated. By generating virtual lanes, even if not as accurate as actual detections, objects may be assigned to virtual lanes to provide a better understanding to the vehicle 900 of locations of objects relative to a path of the ego-vehicle).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA in view of SONNTAG of having a vehicle LiDAR system, with the teachings of ABBOTT of having set the freespace grids by dividing the lane grid of the lane selected as the road boundary lane candidate by 3, measure a number of freespace point data belonging to each freespace grid of the set freespace grids, and select a freespace grid of which the measured number of freespace point data is equal to or greater than a threshold, as the road boundary candidate.
In the combination, JIA’s system would set the freespace grids by dividing the lane grid of the lane selected as the road boundary lane candidate by 3, measure a number of freespace point data belonging to each freespace grid of the set freespace grids, and select a freespace grid of which the measured number of freespace point data is equal to or greater than a threshold, as the road boundary candidate.
The motivation behind the modification would have been to obtain a system that improves the computational efficiency and accuracy of object and road boundary detection, since both JIA and ABBOTT concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while ABBOTT’s systems and methods provide for leveraging object detections, freespace detections, object fence detections, and/or lane detections to efficiently and accurately assign objects to respective lanes or other defined portions of an environment. Please see JIA et al. (US 20220274601 A1), Paragraph [0016, 0022] and ABBOTT et al. (US 20210042535 A1), Abstract and Paragraph [0003, 0062, 0126, 0154].
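For clarity of the record only, the following is a minimal, hypothetical sketch (in Python) of one way the steps recited in claim 18 could be read: a lane grid is divided into three freespace grids along its lateral extent, the freespace points falling in each grid are counted, and any grid whose count meets a threshold is kept as a road boundary candidate. The identifiers, the point format, and the threshold value are illustrative assumptions and are not drawn from JIA, SONNTAG, or ABBOTT.

    from dataclasses import dataclass

    @dataclass
    class LaneGrid:
        y_min: float  # lateral (cross-road) extent of the lane-width cell, in meters
        y_max: float

    def select_boundary_freespace_grids(lane_grid, freespace_points, threshold=20, n_splits=3):
        """Divide the candidate lane grid into n_splits freespace grids along the
        lateral axis, count freespace points per sub-grid, and keep the sub-grids
        whose count is equal to or greater than the threshold."""
        width = (lane_grid.y_max - lane_grid.y_min) / n_splits
        edges = [lane_grid.y_min + i * width for i in range(n_splits + 1)]
        counts = [0] * n_splits
        for _, y in freespace_points:  # points as (x, y) tuples in the host-vehicle frame
            if lane_grid.y_min <= y < lane_grid.y_max:
                counts[min(int((y - lane_grid.y_min) / width), n_splits - 1)] += 1
        return [(edges[i], edges[i + 1]) for i in range(n_splits) if counts[i] >= threshold]

    # Example: a 3.5 m-wide candidate lane grid and a handful of freespace returns.
    grid = LaneGrid(y_min=0.0, y_max=3.5)
    points = [(5.0, 0.3), (6.1, 0.4), (7.2, 3.2), (8.0, 3.3), (9.5, 3.4)]
    print(select_boundary_freespace_grids(grid, points, threshold=2))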
Claims 8-9 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over JIA et al. (US 20220274601 A1), hereinafter referenced as JIA in view of SONNTAG et al. (US 20170021864 A1), hereinafter referenced as SONNTAG and in further view of ABBOTT et al. (US 20210042535 A1), hereinafter referenced as ABBOTT and in further view of SITHIRAVEL et al. (US 20210089791 A1), hereinafter referenced as SITHIRAVEL.
Regarding claim 8, JIA in view of SONNTAG and in further view of ABBOTT explicitly teach the object detection method of claim 7, and JIA further teaches wherein the outputting of the road boundary information comprises:
selecting left and right freespace grids adjacent to the host vehicle among road boundary candidates including the road boundary candidate (Fig. 4. Paragraph [0093]-JIA discloses by comparing the position of the vehicle 102 with edge information for the nearest row to the vehicle 102, the input processing module 206 can define the cell corresponding to the left lane boundary of the left adjacent lane C.sub.la,l(0), the left lane boundary of the ego lane C.sub.e,l(0), the right lane boundary of the ego lane C.sub.e,r(0), and the right lane boundary of the right adjacent lane C.sub.ra,r(0). In paragraph [0094]-JIA discloses by following the edge line, the input processing module 206 can iteratively expand the cell lists of the lane boundary 404 by iteratively appending row by row, as illustrated in FIG. 4 (wherein the cell lists can be shifted in the left and right directions). Please also read paragraph [0095]); and
outputting the road boundary information by correcting the selected left and right freespace grids according to the predicted value (Fig. 3. Paragraph [0060]-JIA discloses the input processing module 206 can use the following rules to design mapping functions based on the grid 302 and the cell values therein. A cell with low velocity and high statically-occupied probability is more likely to be a barrier. A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability. Further in paragraph [0093]-JIA discloses by comparing the position (v.sub.0) of the vehicle 102 with edge information for the nearest row to the vehicle 102, the input processing module 206 can define the cell corresponding to the left lane boundary of the left adjacent lane C.sub.la,l(0), the left lane boundary of the ego lane C.sub.e,l(0), the right lane boundary of the ego lane C.sub.e,r(0), and the right lane boundary of the right adjacent lane C.sub.ra,r(0)).
JIA in view of SONNTAG and in further view of ABBOTT fail to explicitly teach calculating a predicted value of a road boundary at a current time point by reflecting a lateral speed of a host vehicle on the information on the road boundary candidate determined at the previous time point.
However, SITHIRAVEL explicitly teaches calculating a predicted value of a road boundary at a current time point by reflecting a lateral speed of a host vehicle on the information on the road boundary candidate determined at the previous time point (Fig. 6. Paragraph [0042]-SITHIRAVEL discloses constraints on B-splines can make a vehicle path polynomial a steerable path polynomial by limiting the rates of longitudinal and lateral accelerations required to pilot a vehicle along the vehicle path polynomial, where braking torque and powertrain torque are applied as positive and negative longitudinal accelerations and clockwise and counter clockwise steering torque are applied as left and right lateral accelerations. By determining lateral and longitudinal accelerations to achieve predetermined target values within predetermined constraints within predetermined numbers of time periods, the vehicle path polynomial can be constrained to provide a vehicle path can be operated upon by vehicle 110 without exceeding limits on lateral and longitudinal accelerations while avoiding contact with an object, for example).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA in view of SONNTAG and in further view of ABBOTT of having an object detection method of a vehicle LiDAR system, with the teachings of SITHIRAVEL of calculating a predicted value of a road boundary at a current time point by reflecting a lateral speed of a host vehicle on the information on the road boundary candidate determined at the previous time point.
In the combination, JIA’s method would calculate a predicted value of a road boundary at a current time point by reflecting a lateral speed of a host vehicle on the information on the road boundary candidate determined at the previous time point.
The motivation behind the modification would have been to obtain a system that improves the computational efficiency and accuracy of object and road boundary detection, since both JIA and SITHIRAVEL concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while SITHIRAVEL’s systems and methods improve the operation of vehicles and the accuracy of free space mapping. Please see JIA et al. (US 20220274601 A1), Paragraph [0016, 0022] and SITHIRAVEL et al. (US 20210089791 A1), Abstract and Paragraph [0049, 0052].
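For clarity of the record only, the following is a minimal, hypothetical sketch of one way the recited prediction and correction could be read: the road boundary estimate from the previous time point is shifted by the host vehicle's lateral speed over one processing cycle, and the current measurement-based candidate is corrected toward that predicted value. The frame convention, cycle time, and blending weight are illustrative assumptions and are not drawn from the cited references.

    def predict_boundary(prev_boundary_y: float, host_lateral_speed: float, dt: float) -> float:
        """Shift the previous boundary estimate (lateral offset in the host frame)
        to the current time point by compensating for the host vehicle's lateral motion."""
        return prev_boundary_y - host_lateral_speed * dt

    def correct_boundary(candidate_y: float, predicted_y: float, weight: float = 0.7) -> float:
        """Blend the current measurement-based candidate with the predicted value."""
        return weight * candidate_y + (1.0 - weight) * predicted_y

    # Example: the left boundary was at +3.4 m at the previous cycle; the host vehicle
    # drifted left at 0.5 m/s over a 0.1 s cycle, and the new candidate sits at 3.3 m.
    predicted = predict_boundary(prev_boundary_y=3.4, host_lateral_speed=0.5, dt=0.1)
    print(correct_boundary(candidate_y=3.3, predicted_y=predicted))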
Regarding claim 9, JIA in view of SONNTAG and in further view of ABBOTT and in further view of SITHIRAVEL explicitly teach the object detection method of claim 8. JIA in view of SONNTAG and in further view of SITHIRAVEL fail to explicitly teach further comprising initializing the road boundary information when the corrected road boundary invades the host vehicle lane or there is no static object at a position of the corrected road boundary.
However, ABBOTT explicitly teaches further comprising initializing the road boundary information when the corrected road boundary invades the host vehicle lane or there is no static object at a position of the corrected road boundary (Fig. 7B. Paragraph [0060]-ABBOTT discloses where an object may be in more than one lane, a set of points that crosses the lane boundary may be determined. A cross point 722A (or new vertex) may be determined by finding a point between the vertices 702B and 702C that is on the lane line. Once determined, the cross point or new vertex may then be used to determine the distance (e.g., a pixel distance, along a straight line, along a boundary of the object fence 100, etc.) between the new vertex and each other perimeter pixel or vertex of the object fence 110 on either side of the crossing. A first sum of distances between the new vertex and a first set of perimeter pixels corresponding to the object fence 110 in a first lane may be calculated and a second sum of distances between the new vertex and a second set of perimeter pixels corresponding to the object fence 110 in a second lane may be calculated (wherein a ratio of intersection per lane may be determined based on the sum of distances to determine switching lanes, swerving, lane keeping and assign an object to lane(s))).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA in view of SONNTAG and in further view of ABBOTT and in further view of SITHIRAVEL of having an object detection method of a vehicle LiDAR system, with the teachings of ABBOTT of initializing the road boundary information when the corrected road boundary invades the host vehicle lane or there is no static object at a position of the corrected road boundary.
In the combination, JIA’s method would further comprise initializing the road boundary information when the corrected road boundary invades the host vehicle lane or there is no static object at a position of the corrected road boundary.
The motivation behind the modification would have been to obtain a method that improves the computational efficiency and accuracy of object and road boundary detection, since both JIA and ABBOTT concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while ABBOTT’s systems and methods provide for leveraging object detections, freespace detections, object fence detections, and/or lane detections to efficiently and accurately assign objects to respective lanes or other defined portions of an environment. Please see JIA et al. (US 20220274601 A1), Paragraph [0016, 0022] and ABBOTT et al. (US 20210042535 A1), Abstract and Paragraph [0003, 0062, 0126, 0154].
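For clarity of the record only, the following is a minimal, hypothetical sketch of the recited initialization condition: the road boundary information is reset when the corrected boundary falls inside the host vehicle lane, or when no static object lies near the corrected boundary position. The lane-extent representation and the static-object proximity test are illustrative assumptions and are not drawn from the cited references.

    def should_initialize(boundary_y, ego_lane_y_min, ego_lane_y_max,
                          static_object_ys, tolerance=0.5):
        """Reset the road boundary information when the corrected boundary falls
        inside the host-vehicle lane, or when no static object lies within the
        tolerance of the corrected boundary position (lateral offsets in meters)."""
        invades_ego_lane = ego_lane_y_min < boundary_y < ego_lane_y_max
        has_static_object = any(abs(y - boundary_y) <= tolerance for y in static_object_ys)
        return invades_ego_lane or not has_static_object

    # Example: boundary at 3.6 m, ego lane spanning -1.75 m to 1.75 m, guardrail points near 3.5 m.
    print(should_initialize(3.6, -1.75, 1.75, static_object_ys=[3.5, 3.55, 3.62]))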
Regarding claim 19, JIA in view of SONNTAG and in further view of ABBOTT explicitly teach the vehicle LiDAR system of claim 18, and JIA further teaches wherein the one or more processors (Fig. 2, #202 called processors. Paragraph [0029]. Further in paragraph [0021]-JIA discloses the vehicle 102 includes one or more sensors 104 to provide input data to one or more processors (not illustrated in FIG. 1) of the road-perception system 108) are further configured to execute the computer instructions (Fig. 3. Paragraph [0031]-JIA discloses the processor 202 executes computer-executable instructions stored within the CRM 204. As an example, the processor 202 can execute the fused-grid module 110 to generate a grid-based road model of the roadway 120. The fused-grid module 110 can generate the grid-based road model using data from the map 106 stored in the CRM 204 or obtained from the sensors 104) to cause the LiDAR signal processing device (Fig. 3. Paragraph [0040]-JIA discloses FIG. 3 illustrates an example architecture 300 of the road-perception system 108 to generate a grid-based road model 322 with multiple layers. In paragraph [0044]-JIA discloses the layer module 112 can generate multiple layer hypotheses 324 for multiple FOD layers. For example, the grid-based road model 322 can include seven FODs, including a cell-category layer, lane-number layer, lane-marker-type layer, lane-marker-color layer, traffic-sign layer, pavement-marking layer, and lane-type layer) to:
calculate a predicted value of a road boundary at a current time point by reflecting a speed (Fig. 3. Paragraph [0041]-JIA discloses the grid 302 provides a grid representation of the roadway 120 and can be a static occupancy grid and/or a dynamic grid. The grid 302 can include the grid size, the grid resolution, and cell-center coordinates. Each cell of the grid 302 can also indicate the velocity of the vehicle 102. The lidar data 304 provides information about objects on the roadway 120 and roadway features. The vision data 306 can include still images or video of the roadway 120 and provide information about lane boundaries, objects on the roadway 120, and other roadway features. The vehicle-state data 308 can provide information about the velocity, location, and heading of the vehicle 102) of a host vehicle on the information on the road boundary candidate determined at the previous time point (Fig. 3. Paragraph [0032]-JIA discloses the processor 202 can execute the input processing module 206 to extract mass values associated with different input data. The mass values indicate the confidence associated with the data contributing to layer hypotheses of the grid-based road model. In paragraph [0033]-JIA discloses the processor 202 can execute the evidence fusion module 208 to update mass values associated with the input data recursively. For example, the evidence fusion module 208 can compare data from one or more previous time instants with data from a current time instant to update a mass value associated with the input data. Further in paragraph [0058]-JIA discloses each cell of the grid 302 also indicates the velocity V, which the input processing module 206 can use for mass value extraction. In paragraph [0060]-JIA discloses a cell with low velocity and high statically-occupied probability is more likely to be a barrier. A cell with high velocity and high dynamically-occupied probability is more likely to be a lane center. A cell with high free-space probability is more likely to be a lane boundary if the nearby cell (e.g., 0.5 W.sub.L, where W.sub.L represents the lane width) has a high velocity and high dynamically-occupied probability. In paragraph [0073]-JIA discloses at function 314, the evidence fusion module 208 updates and fuses mass values for each cell of each layer. The data can include data from past measurements and data from current sensor measurements), select left and right freespace grids adjacent to the host vehicle among road boundary candidates including the selected road boundary candidate (Fig. 4. Paragraph [0092]-JIA discloses the input processing module 206 can then perform a morphological closing operation to fill small gaps between different cells for the lane boundary 404. In paragraph [0094]-JIA discloses by following the edge line, the input processing module 206 can iteratively expand the cell lists of the lane boundary 404 by iteratively appending row by row, as illustrated in FIG. 4. In paragraph [0095]-JIA discloses the input processing module 206 can then use the lane-boundary cell lists as evidence to assign mass values to different cells 402. In paragraph [0097]-JIA discloses the cell list of the left lane boundary of the ego lane C.sub.e,l can also be shifted to the left. In paragraph [0099]-JIA discloses the cell list of the right lane boundary of the ego lane Ce,r can be shifted to the right. Further in paragraph [0110]-JIA discloses FIGS. 5A and 5B illustrate example lane sets 500 and 550, respectively, that the input processing module 206 of a grid-based road model with multiple layers can use to shift input evidence for determining mass values for other layer hypotheses. The input processing module 206 can shift the lane marker around a half-width of the lane to provide evidence for the lane-center hypothesis (wherein the input processing module 206 obtains the coordinate of the shifted point 516 from the original point 514 using Equation (44): ({tilde over (x)},{tilde over (y)})=(x,y±Δ), where “+” is used when the shifting direction is left and “−” is used when the shifting direction is right). Please also read paragraph [0111-0112]), and output the road boundary information by correcting the selected left and right freespace grids according to the predicted value (Fig. 3. Paragraph [0034]-JIA discloses the layer module 112 can include an output processing module 210. The output processing module 210 can estimate belief parameters and plausibility parameters associated with the one or more layer hypotheses of each cell. In paragraph [0042]-JIA discloses the output of the layer module 112 is the grid-based road model 322).
JIA in view of SONNTAG fail to explicitly teach a lateral speed of a host vehicle.
However, SITHIRAVEL explicitly teaches a lateral speed of a host vehicle (Fig. 6. Paragraph [0042]-SITHIRAVEL discloses constraints on B-splines can make a vehicle path polynomial a steerable path polynomial by limiting the rates of longitudinal and lateral accelerations required to pilot a vehicle along the vehicle path polynomial, where braking torque and powertrain torque are applied as positive and negative longitudinal accelerations and clockwise and counter clockwise steering torque are applied as left and right lateral accelerations. By determining lateral and longitudinal accelerations to achieve predetermined target values within predetermined constraints within predetermined numbers of time periods, the vehicle path polynomial can be constrained to provide a vehicle path can be operated upon by vehicle 110 without exceeding limits on lateral and longitudinal accelerations while avoiding contact with an object, for example).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA in view of SONNTAG and in further view of ABBOTT of having a vehicle LiDAR system comprising: a LiDAR sensor configured to obtain freespace point data and object information; and a LiDAR signal processing device configured to set lane grids including a host vehicle lane according to a lane width on a grid map which is generated based on the freespace point data and the object information obtained through the LiDAR sensor, with the teachings of SITHIRAVEL of having a lateral speed of a host vehicle.
In the combination, JIA’s system would move a position of the lane grid in left and right directions, wherein the one or more processors are further configured to execute the computer instructions to cause the LiDAR signal processing device to: calculate a predicted value of a road boundary at a current time point by reflecting a lateral speed of a host vehicle on the information on the road boundary candidate determined at the previous time point, select left and right freespace grids adjacent to the host vehicle among road boundary candidates including the selected road boundary candidate, and output the road boundary information by correcting the selected left and right freespace grids according to the predicted value.
The motivation behind the modification would have been to obtain a system that improves the computational efficiency and accuracy of object and road boundary detection, since both JIA and SITHIRAVEL concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while SITHIRAVEL’s systems and methods improve the operation of vehicles and the accuracy of free space mapping. Please see JIA et al. (US 20220274601 A1), Paragraph [0016, 0022] and SITHIRAVEL et al. (US 20210089791 A1), Abstract and Paragraph [0049, 0052].
Regarding claim 20, JIA in view of SONNTAG and in further view of ABBOTT and in further view of SITHIRAVEL explicitly teach the vehicle LiDAR system of claim 19. JIA in view of SONNTAG fail to explicitly teach wherein the one or more processors are further configured to execute the computer instructions to cause the LiDAR signal processing device to: initialize the road boundary information when the corrected road boundary invades the host vehicle lane or there is no static object at a position of the corrected road boundary.
However, ABBOTT explicitly teaches wherein the one or more processors (Fig. 9C, #906 called a CPU. Paragraph [0120]) are further configured to execute the computer instructions (Fig. 7B. Paragraph [0060]-ABBOTT discloses where an object may be in more than one lane, a set of points that crosses the lane boundary may be determined. A cross point 722A) to cause the LiDAR signal processing device (Fig. 9C. Paragraph [0116] FIG. 9C is a block diagram of an example system architecture for the example autonomous vehicle 900 of FIG. 9A. In paragraph [0104]-ABBOTT discloses the sensor data may be received from LIDAR sensor(s) 964) to:
initialize the road boundary information when the corrected road boundary invades the host vehicle lane or there is no static object at a position of the corrected road boundary (Fig. 7B. Paragraph [0060]-ABBOTT discloses where an object may be in more than one lane, a set of points that crosses the lane boundary may be determined. A cross point 722A (or new vertex) may be determined by finding a point between the vertices 702B and 702C that is on the lane line. Once determined, the cross point or new vertex may then be used to determine the distance (e.g., a pixel distance, along a straight line, along a boundary of the object fence 100, etc.) between the new vertex and each other perimeter pixel or vertex of the object fence 110 on either side of the crossing. A first sum of distances between the new vertex and a first set of perimeter pixels corresponding to the object fence 110 in a first lane may be calculated and a second sum of distances between the new vertex and a second set of perimeter pixels corresponding to the object fence 110 in a second lane may be calculated (wherein a ratio of intersection per lane may be determined based on the sum of distances to determine switching lanes, swerving, lane keeping and assign an object to lane(s))).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of JIA in view of SONNTAG and in further view of ABBOTT and in further view of SITHIRAVEL of having a vehicle LiDAR system, with the teachings of ABBOTT of having wherein the one or more processors are further configured to execute the computer instructions to cause the LiDAR signal processing device to: initialize the road boundary information when the corrected road boundary invades the host vehicle lane or there is no static object at a position of the corrected road boundary.
In the combination, JIA’s system would include the one or more processors further configured to execute the computer instructions to cause the LiDAR signal processing device to: initialize the road boundary information when the corrected road boundary invades the host vehicle lane or there is no static object at a position of the corrected road boundary.
The motivation behind the modification would have been to obtain a system that improves the computational efficiency and accuracy of object and road boundary detection, since both JIA and ABBOTT concern systems and methods for road boundary analysis. JIA provides systems and methods that improve the accuracy of modeling a roadway and its elements, while ABBOTT’s systems and methods provide for leveraging object detections, freespace detections, object fence detections, and/or lane detections to efficiently and accurately assign objects to respective lanes or other defined portions of an environment. Please see JIA et al. (US 20220274601 A1), Paragraph [0016, 0022] and ABBOTT et al. (US 20210042535 A1), Abstract and Paragraph [0003, 0062, 0126, 0154].
Allowable Subject Matter
Claims 4 and 15, along with their dependent claims 5-6 and 16-17, are objected to as being dependent upon rejected base claims 1 and 12, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 4, the prior art of record fails to explicitly teach selecting a corresponding lane grid from the plurality of lane grids as the road boundary lane candidate when the ratio of the length occupied by the static objects to the length occupied by the objects is equal to or greater than a second reference (wherein the lane grid includes a cell having a width of a predetermined lane width), as claimed in claim 4.
Regarding claim 15, the prior art of record fails to explicitly teach select a corresponding lane grid from the plurality of lane grids as the road boundary lane candidate when the ratio of the length occupied by the static objects to the length occupied by the objects is equal to or greater than a second reference (wherein the lane grid includes a cell having a width of a predetermined lane width), as claimed in claim 15.
Conclusion
Listed below is the prior art made of record and not relied upon that is considered pertinent to applicant's disclosure.
MARCHETTI-BOWICK et al. (US 20210004012 A1)- An autonomous vehicle can obtain state data associated with an object in an environment, obtain map data including information associated with spatial relationships between at least a subset of lanes of a road network, and determine a set of candidate paths that the object may follow in the environment based at least in part on the spatial relationships between at least two lanes of the road network. Each candidate path can include a respective set of spatial cells. The autonomous vehicle can determine, for each candidate path, a predicted occupancy for each spatial cell of the respective set of spatial cells of such candidate path during at least a portion of a prediction time horizon. The autonomous vehicle can generate prediction data associated with the object based at least in part on the predicted occupancy for each spatial cell of the respective set of spatial cells for at least one candidate path...................... Please see Fig. 4-5. Abstract.
McGill et al. (US 20210302960 A1)- Systems and methods for generating driving recommendations are disclosed herein. One embodiment divides automatically a roadway into a plurality of lane-level cells; generates a graph network that represents the plurality of lane-level cells; gathers information pertaining to one or more detected road agents; projects onto the graph network the gathered information pertaining to the one or more detected road agents to update a current status of the plurality of lane-level cells; processes the graph network based on the updated current status of the plurality of lane-level cells to predict a future status of the plurality of lane-level cells, the predicted future status including at least occupancy, by a detected road agent, of the respective lane-level cells in the plurality of lane-level cells; and generates a driving recommendation based, at least in part, on the predicted future status of the plurality of lane-level cells..................... Please see Fig. 3 and 5. Abstract.
WANTANABE et al. (US 20220207883 A1)- An information processing apparatus according to an embodiment of the present technology includes a classification unit and a generation unit. The classification unit classifies an object detected in a space on a basis of a predetermined criterion. The generation unit sets a priority for the object on a basis of a classification result by the classification unit, and generates position-related information regarding a position in the space on a basis of the set priority. Use of the position-related information makes it possible to improve the accuracy of autonomous movement control. This makes it possible to improve the accuracy of autonomous movement control..................... Please see Fig. 3 and 9-11. Abstract.
COLGATE et al. (US 20190271554 A1)- The autonomous vehicle generates an overlapped image by overlaying HD map data over sensor data and rendering the overlaid images. The visualization process is repeated as the vehicle drives along the route. The visualization may be displayed on a screen within the vehicle or at a remote device. The system performs reverse rendering of a scene based on map data from a selected point. For each line of sight originating at the selected point, the system identifies the farthest object in the map data. Accordingly, the system eliminates objects obstructing the view of the farthest objects in the HD map as viewed from the selected point. The system further allows filtering of objects using filtering criteria based on semantic labels. The system generates a view from the selected point such that 3D objects matching the filtering criteria are eliminated from the view....................... Please see Fig. 5-9. Abstract.
LIN et al. (US 20200183011 A1)- The present disclosure provides a method and an apparatus for creating an occupancy grid map, as well as a processing apparatus. The method includes: creating a current occupancy grid map based on a location of the vehicle and a previous occupancy grid map; and determining a current probability that each grid in the current occupancy grid map belongs to each of occupancy categories based on last environment perception information received from the sensors and updating an occupancy category to which each grid in the current occupancy grid map belongs based on the current probability that the grid belongs to each of the occupancy categories, in accordance with an asynchronous updating policy.................... Please see Fig. 1-2 and 8-9. Abstract.
HUDACEK et al. (US 20200398894 A1)- Techniques for generating trajectories and drivable areas for navigating a vehicle in an environment are discussed herein. The techniques can include receiving a reference trajectory representing an initial trajectory for a vehicle, such as an autonomous vehicle, to traverse the environment in a first drivable area. An object within a distance threshold can be detected in the environment and a second drivable area can be determined. Further, the techniques can include determining a target trajectory based at least in part on the reference trajectory and/or the second drivable area which can provide a region for the object to traverse the environment, and controlling the autonomous vehicle to traverse the environment based at least in part on the target trajectory...................... Please see Fig. 3-6. Abstract.
TSAI et al. (US 20190178989 A1)- A dynamic road surface detecting method based on a three-dimensional sensor is provided. The three-dimensional sensor receives a plurality of laser-emitting points reflected from a road surface to generate a plurality of three-dimensional sensor scan point coordinates which is transmitted to a point cloud processing module. The point cloud processing module transforms the three-dimensional sensor scan point coordinates to a plurality of vehicle scan point coordinates according to a coordinate translation equation, and then transforms a plurality of vehicle coordinate height values of the vehicle scan point coordinates to a road surface height reference line according to a folding line fitting algorithm. An absolute difference of two scan point height values of any two adjacent scan points on each of the scan lines is analyzed to generate a discontinuous point. The point cloud processing module links the discontinuous points to form a road boundary...................... Please see Fig. 3 and 4B Abstract.
Ichinokawa et al. (US 20200168180 A1)- A display system includes a display configured to superimpose an image on a landscape in front of a vehicle and cause the image to be visually recognized by an occupant of the vehicle, a road shape acquirer configured to acquire information indicating a shape of a road around the vehicle, and a display controller configured to change a display aspect of an image regarding a road to be displayed on the display, on the basis of a form of a roundabout including one or both of a shape component and an outer diameter of the roundabout obtained from the information in a case that the roundabout has been determined to be included in the road shape by referring to the information acquired by the road shape acquirer...................... Please see Fig. 6-7. Abstract.
Sadeghi et al. (US 20230084578 A1)- Systems, methods and computer-readable media for selecting a trajectory for an autonomous vehicle are disclosed that include computing a current vehicle state for the autonomous vehicle based on observations by a sensing system; computing respective collision probability scores for a plurality of candidate trajectories based on the current vehicle state; computing respective information gain scores for the plurality of candidate trajectories based on the current vehicle state, the information gain score for each candidate trajectory indicating an respective information gain for a next planning horizon interval that is subsequent to the current planning horizon interval; and selecting a planned trajectory from the plurality of candidate trajectories based on the respective collision probability scores and respective information gain scores..................... Please see Fig. 3 and 5-6. Abstract.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga, whose telephone number is (703) 756-5380. The examiner can normally be reached on Monday-Friday, 9:00 a.m. - 6:00 p.m. ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached by phone at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON TIMOTHY BONANSINGA/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673