DETAILED ACTION
This is a response to Applicant’s submissions filed on 12/30/2025. Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
It is noted that Applicant’s amendments to the claims have overcome the previous rejections under 35 U.S.C. § 101.
In response to Applicant’s argument that the amendments to paragraph 38 and claims 4 and 15, along with the error values defined in paragraphs 29 and 31, clearly define the most probable associative relationship (Applicant’s Remarks; pp. 16 & 18), the Examiner respectfully disagrees. It is unclear how Applicant’s recitation of x-coordinate values of keypoints and keypoint labels relates to the squared errors; therefore, it remains unclear what the errors measure, how the average value of a line is calculated, and how the most probable associative relationship is determined. See objection and rejections below.
In response to Applicant’s argument that the amendments to claims 7 and 18 render the size of the line pairs clear (Applicant’s Remarks; p. 18), the Examiner respectfully disagrees. The disclosure does not appear to include an explicit definition for a line pair, and the Applicant appears to suggest a line pair can be any pair of lines relative to a vehicle (i.e., a line pair can be a line from a first vehicle and a line from a second vehicle; each line can be either a left line or a right line relative to a vehicle; a vehicle may detect lane lines from an adjacent lane relative to the current travel lane). Paragraph 33 appears to disclose a line pair includes a left and a right line enclosing a lane; however, it is unclear if multiple line pair embodiments are disclosed. See objections below.
Specification
The amendments to the abstract and specification were received on 12/30/2025.
The disclosure is objected to because of the following informalities:
In paragraph 16, lines 7-9, it is unclear what a line pair comprises. Paragraph 16 discloses a line pair can be formed with lines from multiple vehicles that overlap along similar paths; however, paragraph 17 further discloses left and right lines within a line pair that do not overlap. Paragraph 33 appears to disclose computing an overlap between left and right lines within a line pair; however, it is unclear if a line pair comprises a left and a right line in all embodiments.
In paragraphs 38 and 46, it is unclear what the errors measure that are used to output the probabilistic score. Although paragraph 4 discloses sensor errors, paragraph 15 discloses identification errors, and paragraph 32 discloses false negative and false positive errors, there does not appear to be disclosure of what the errors are in the model used to predict the probabilistic estimate. It is further unclear how these errors would be squared to calculate the probabilistic score.
Appropriate correction is required.
Claim Objections
Claims 1, 10 and 12 are objected to because of the following informalities:
In claims 1, 10 and 12, lines 5, 5 and 6, respectively, it is unclear whether the compared line pairs comprise single lines formed by connecting keypoints detected from different vehicles, a left and right lane line formed by connecting keypoints detected from a vehicle, or left and right lane lines formed by connecting keypoints detected from different vehicles. Paragraph 33 appears to disclose computing an overlap between left and right lines within a line pair; however, it is unclear if a line pair comprises a left and a right line in all embodiments.
In claims 1, 10 and 12, lines 7, 7 and 6, respectively, “the associative relationships identify groups” should read “the associative relationships identifying groups” to maintain consistency with the preceding phrase “the similarity metrics including associative relationships”.
In claims 1 and 10, lines 9-10, “generate a map … and controlling steering” should read “generate a map … and control steering”. This appears to be a typographical error.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 1, 10 and 12, lines 7-8, 7-8 and 6-7, respectively, the limitation “the associative relationships identify groups of the different vehicles co-occupying a lane and traveling on different lanes” appears to be new matter because the associative relationships appear to be a relationship between two vehicles that either travel in the same or different lanes, but there does not appear to be disclosure of the associative relationships identifying groups of vehicles that co-occupy and travel in different lanes. Paragraphs 32-33 disclose determining if vehicles 1001 and 1002 travel in the same or different lanes. Paragraph 37 similarly discloses an example of determining if a vehicle 100 and a pickup truck 430 travel in the same or different lanes. Although associative relationships between more than two vehicles could be used to infer groupings of vehicles traveling in the same and/or different lanes, there does not appear to be explicit disclosure of the associative relationships identifying the groups.
Regarding claims 2, 11 and 13, lines 4-6, the limitation “the different overlaps are associated with a line size, an area between the line pairs, and a probabilistic estimate for the line pairs” appears to be new matter because there does not appear to be disclosure of an overlap that is merely associated with a line size, an area between the line pairs, and a probabilistic estimate. Paragraph 25 appears to disclose detected overlaps are intersecting lines and corresponding sizes, an area between corresponding points of line pairs, or a probabilistic estimate for the line pairs. Additionally, if one were to assume that the claim is directed to different overlaps that are a line size, an area, and a probabilistic estimate, there does not appear to be explicit disclosure of an overlap comprising more than one of the line size, area, and probabilistic estimate.
Claims 2-9, 11 and 13-20 are rejected as being dependent on a rejected claim and for failing to cure the deficiencies listed above.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claims 1, 10 and 12, lines 3-4, 3-4 and 2-3, respectively, the limitation “keypoints detected from different vehicles using sensor data acquired by a vehicle system” renders each claim indefinite because it is unclear how the sensor data acquired by a single vehicle system is used to detect keypoints from other vehicles. Paragraphs 27 and 40 disclose keypoints generated by different vehicles using sensor data; therefore, for the purposes of examination, it will be assumed that the keypoints are detected by different vehicles using sensor data acquired by a vehicle system of each respective vehicle.
Regarding claims 1, 10 and 12, lines 7-8, 7-8 and 6-7, respectively, the limitation "the associative relationships identify groups of the different vehicles co-occupying a lane and traveling on different lanes” renders each claim indefinite because the associative relationships appear to identify if two vehicles travel in the same or different lanes; therefore, it is unclear how they identify groups of different vehicles co-occupying a lane and traveling on different lanes. Paragraphs 32-33 disclose determining if vehicles 1001 and 1002 travel in the same or different lanes. Paragraph 37 similarly discloses an example of determining if a vehicle 100 and a pickup truck 430 travel in the same or different lanes. For the purposes of examination, it will be assumed that the associative relationships identify different vehicles co-occupying a lane or traveling on different lanes.
Regarding claims 1 and 10, lines 10-11, the limitation “controlling steering by one of the different vehicles” renders each claim indefinite because the relationship between the processor and the different vehicles is unclear; therefore, it is further unclear how the steering control is performed by the processor. Figure 1 discloses the estimation system is on the vehicle; however, it is unclear how the vehicle controls the steering of a different vehicle. Although a server is disclosed, there does not appear to be disclosure of the server directly controlling the steering of the vehicles. Paragraphs 21 and 27 appear to disclose that at least some of the line and boundary line identification may be performed by the server to reduce the computational load on the vehicle’s processor. For the purposes of examination, it will be assumed that a vehicle controls its own steering according to the identified boundary line.
Regarding claims 2, 11 and 13, lines 3-6, the limitation “compute different overlaps … associated with a line size, an area between the line pairs, and a probabilistic estimate for the line pairs” renders each claim indefinite because it is unclear whether an overlap is associated with a line size, an area, and a probabilistic estimate, or a combination thereof. Paragraphs 25 and 37 appear to disclose the overlap is one of the line size, area, lateral gap, or probabilistic estimate. Paragraphs 36 and 47 appear to disclose the line size, area, and probabilistic estimate are combined to select the line pairs, not to compute an overlap. For the purposes of examination, it will be assumed that three different overlaps are computed, each overlap associated with one of: a line size, an area between the line pairs, and a probabilistic estimate for the line pairs.
Regarding claims 2, 5, 11, 13 and 16, lines 3-4, 4-5, 3-4, 3-4 and 4-5, respectively, the limitation “[a/the] first vehicle and [a/the] second vehicle that are associated with the different vehicles” renders each claim indefinite because it is unclear if the first and second vehicles are included in the different vehicles or are merely associated with them. For the purposes of examination, it will be assumed that the first and second vehicles are included in the different vehicles.
Regarding claims 2, 11 and 13, line 7, the limitation “estimat[e/ing] that a first vehicle and a second vehicle are co-occupying the lane” renders each claim indefinite because it is unclear if the first and second vehicles are the first and second vehicles recited in lines 2-3. For the purposes of examination, it will be assumed that each claim is directed to a single pair of vehicles.
Regarding claims 4 and 15, lines 2-3, the limitation “minimizes squared errors between the keypoints” renders each claim indefinite because it is unclear what the errors measure. Amended paragraph 38 discloses the most probable associative relationship is one that minimizes the sum of the squared errors between the actual keypoints, and paragraph 46 discloses outputting a probabilistic score by minimizing squared errors between the actual keypoints and averaged lines. Although paragraph 4 discloses sensor errors, paragraph 15 discloses identification errors, and paragraph 32 discloses false negative and false positive errors, there does not appear to be disclosure of what the errors are in the model used to predict the probabilistic estimate. For the purposes of examination, it will be assumed that the errors are distances between keypoints that are assumed to be at the same location.
Regarding claims 4 and 15, lines 2-3, the limitation “minimizes squared errors between average values for the lines” renders each claim indefinite because it is unclear what the errors measure, and how the average value of a line is calculated. Amended paragraph 38 discloses the most probable associative relationship is one that minimizes the squared errors of averaged lines, and paragraph 46 discloses outputting a probabilistic score by minimizing squared errors between the actual keypoints and averaged lines. Although paragraph 4 discloses sensor errors, paragraph 15 discloses identification errors, and paragraph 32 discloses false negative and false positive errors, there does not appear to be disclosure of what the errors measure in the model used to predict the probabilistic estimate. For the purposes of examination, it will be assumed that the errors are distances between lines that are assumed to be at the same location, and the lines are lines fit to keypoints.
Regarding claims 7 and 18, the limitation “according to the line size of the line pairs, the line size being associated with a length of a line segment that is included in the line pairs” renders each claim indefinite because it is unclear whether the line size is a length of a line segment included in the line pairs or is merely associated with a length of a line segment included in the line pairs. It is further unclear whether the line segment must be included in the line pair that is measured. Paragraphs 5, 16, 25 and 37 appear to disclose the line size is a length of an overlapping segment of a line pair; therefore, for the purposes of examination, it will be assumed that the line size of the line pairs is the length of a line segment that is included in both lines of the line pair.
Claims 2-9, 11 and 13-20 are rejected as being dependent on a rejected claim and for failing to cure the deficiencies listed above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-6, 8-14, 16-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over He et al. (WO 2023/131203) in view of Agarwal et al. (US 2024/0096111), hereinafter He and Agarwal, respectively, and Gu (CN 115422312).
Regarding claims 1, 10 and 12, as best understood, He discloses an estimation system comprising a processor, and a memory storing instructions (He; para. 256: this method can be implemented by the processor calling computer-readable instructions stored in memory) that, when executed by the processor, cause the processor to: form lines by connecting keypoints (He; para. 37: Object point cloud refers to the point cloud of objects (such as traffic signs) needed to construct a semantic map. The objects needed for a semantic map include … lane lines; paras. 218-219: a computer device acquires a set of three-dimensional sampling points representing lane lines in each frame of a target image. The individual 3D sampling points are collected and combined into a fused sampling point set. For example, a computer device combines multiple sets of three-dimensional sampling points into a fused set of sampling points.; para. 222: the computer device connects any two adjacent target sampling points with line segments, and the target sampling points and the line segments between adjacent target sampling points form the target lane line) detected from different vehicles using sensor data acquired by a vehicle system (He; para. 36: Users collect road data using the sensors of autonomous vehicles or other low-cost sensors, transmit the data to the cloud for data fusion, and improve data accuracy through this fusion method to complete the creation of crowdsourced high-precision maps or semantic maps. Crowdsourced maps or crowdsourced maps refer to point cloud data collected by LiDAR from other vehicles and uploaded to the cloud by the terminal.); compare similarity metrics for line pairs from the lines along a longitudinal path (He; para. 72: the semantic distance between each first point cloud object and the second point cloud object in the semantic map is obtained; para. 
74: If the semantic distance is less than a predetermined threshold, it indicates that there is a correlation between the corresponding first point cloud object and the second point cloud object.); and upon satisfying criteria for the similarity metrics, generate a map with a boundary line for the road (He; para. 52: The semantic map is then updated based on the semantic information of the new point cloud object. When the first point cloud object in the crowdsourced map is a newly added object, it is added to the semantic map.) identified with the line pairs using scores (He; para. 72: the semantic distance between each first point cloud object and the second point cloud object in the semantic map is obtained by weighting the center position coordinate distance difference score, the point cloud object orientation difference score, the calibration box size difference score, and the appearance feature difference score) and controlling steering by one of the different vehicles according to the boundary line (He; para. 262: Based on the original semantic map, traffic information such as roads, traffic signs, lane lines, and obstacles in the semantic map is obtained. Adjustments are made to the steering, speed, route planning, lane changing, etc. of the vehicles to achieve safe driving on the road. The original semantic map is obtained through the semantic map update method described above).
He does not explicitly disclose that the compared line pairs include both the left and right borders of a lane.
Gu, in the same field of endeavor (crowdsourced mapping), discloses comparing line pairs that include both left and right borders of a lane (Gu; para. 72: Each vector segment contains vehicle trajectory information and lane line information. During the matching process, the trajectory in vector segment 1 is matched with the trajectory in vector segment 2, and the lane line in vector segment 1 is matched with the lane line in vector segment 2.; fig. 2: vector segments 1 and 2 include left and right lane lines).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the line pairs compared in the processor of He to include a left and right lane boundary, as disclosed by Gu, to yield the predictable result of defining the lane width in the map.
He, as modified, does not explicitly disclose that the similarity metrics include associative relationships between the different vehicles and the line pairs on a road, and that the associative relationships identify groups of the different vehicles co-occupying a lane and traveling on different lanes on the road.
Agarwal, in the same field of endeavor (image-based vehicle tracking), discloses similarity metrics include associative relationships between different vehicles and line pairs on a road (Agarwal; para. 70: The systems and techniques may determine a lane association for target vehicle 804c based on lane boundary 814 and/or lane boundary 818 in instances in which lane boundary 816 (one of a pair of lane boundaries defining lane 822 and lane 824) is missing … the systems and techniques may determine whether target vehicle 804c is associated with lane 824 based on the distance between the two-dimensional bounding box based on three-dimensional bounding box 802c and lane boundary), and the associative relationships identify groups of the different vehicles co-occupying a lane and traveling on different lanes on the road (Agarwal; para. 79: Such techniques may determine when a target vehicle is in a lane of the tracking vehicle by determining whether the target vehicle is within a lane width from a lane boundary of the lane of the tracking vehicle.; para. 85: the systems and techniques may change a lane association of the target vehicle from lane 0 (the lane of the tracking vehicle) to +1/−1 depending upon distance d from the lane boundary; para. 96: tracking vehicle 102 may determine lane associations for multiple target vehicles, (e.g., target vehicle 104, target vehicle 106, and target vehicle 108)).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to have modified the semantic distance calculated in the processor of He, as modified, to include lane associations for the different vehicles, as disclosed by Agarwal, with the motivation of determining that different vehicles are in the same lane because it may be safety critical to determine when a target vehicle is in the lane of a tracking vehicle (Agarwal; para. 79) to avoid collisions (Agarwal; para. 95).
Regarding claims 2, 11 and 13, as best understood, He, as modified, discloses computing different overlaps of the line pairs associated with a first vehicle and a second vehicle that are associated with the different vehicles, wherein the different overlaps are associated with a line size (He; para. 140: Determine whether there is a bounding box collision between each unmatched point cloud object in the second unmatched point cloud object set and the corresponding point cloud object within the preset semantic distance range in the crowdsourced semantic map; para. 137: Bounding box collision, or OBB collision (Oriented Bounding Box), uses, but is not limited to, the separating axis theorem. It can be understood that if an axis can be found on which the projections of two convex shapes do not overlap, then the two shapes do not intersect. If the axis does not exist, and the shapes are convex, then it can be determined that the two shapes intersect (this does not apply to concave shapes, such as crescent shapes; even if the separating axis cannot be found, the two crescent shapes may not intersect).), an area between the line pairs (He; para. 150: the newly collected crowdsourced semantic map and the original semantic map of the same collection area are matched), and a probabilistic estimate for the line pairs (He; para. 153: The point cloud percentage refers to the percentage of points in the object point cloud corresponding to the first unmatched point cloud object within the convex hull of the first colliding point cloud object, relative to the total number of points in the object point cloud.; para. 
156: when the proportion of point cloud is greater than or equal to a set threshold, the first unmatched point cloud object and the first colliding point cloud object are determined to be the same object, and the original point cloud object in the original semantic map is retained; when the proportion of point cloud is less than the set threshold, the first unmatched point cloud object and the first colliding point cloud object are determined to be different objects, and the first unmatched point cloud object is added to the original semantic map); and estimating that a first vehicle and a second vehicle are co-occupying the lane using the different overlaps (Agarwal; para. 70: The systems and techniques may determine a lane association for target vehicle 804c based on lane boundary 814 and/or lane boundary 818 in instances in which lane boundary 816 (one of a pair of lane boundaries defining lane 822 and lane 824) is missing … the systems and techniques may determine whether target vehicle 804c is associated with lane 824 based on the distance between the two-dimensional bounding box based on three-dimensional bounding box 802c and lane boundary; para. 79: Such techniques may determine when a target vehicle is in a lane of the tracking vehicle by determining whether the target vehicle is within a lane width from a lane boundary of the lane of the tracking vehicle.; para. 85: the systems and techniques may change a lane association of the target vehicle from lane 0 (the lane of the tracking vehicle) to +1/−1 depending upon distance d from the lane boundary; para. 96: tracking vehicle 102 may determine lane associations for multiple target vehicles, (e.g., target vehicle 104, target vehicle 106, and target vehicle 108)).
Regarding claims 3 and 14, as best understood, He, as modified, discloses selecting the line pairs according to one of the scores being elevated for the line size (He; para. 146: when the bounding box collision result of the second unmatched point cloud object is that the first unmatched point cloud object does not have a bounding box collision, the first unmatched point cloud object is added to the semantic map) and diminished for one of the area, a lateral gap (He; paras. 233-234: Based on the target sampling point set and the reference sampling point set, the separation degree between the target lane line and the reference lane line is calculated, where the separation degree refers to the degree of separation between objects. Distancing can be represented by the distance between objects, or by the average distance between objects, etc. For example, the computer device calculates the phase separation between the target lane line and the reference lane line based on the target sampling point set and the reference sampling point set. Furthermore, the lane line processing method also includes: comparing the phase separation degree with the phase separation degree threshold; if the phase separation degree is less than the phase separation degree threshold, then performing curve fitting and sampling on the reference sampling point set and the target sampling point set to obtain an updated sampling point set, and generating an updated lane line based on the updated sampling point set; if the phase separation degree is equal to or greater than the phase separation degree threshold, then generating an updated lane line based on the target sampling point set.), and the probabilistic estimate; and predicting a lateral offset between the first vehicle and the second vehicle within the lane using the different overlaps (Agarwal; para. 
70: The systems and techniques may determine a lane association for target vehicle 804c based on lane boundary 814 and/or lane boundary 818 in instances in which lane boundary 816 (one of a pair of lane boundaries defining lane 822 and lane 824) is missing … the systems and techniques may determine whether target vehicle 804c is associated with lane 824 based on the distance between the two-dimensional bounding box based on three-dimensional bounding box 802c and lane boundary; para. 79: Such techniques may determine when a target vehicle is in a lane of the tracking vehicle by determining whether the target vehicle is within a lane width from a lane boundary of the lane of the tracking vehicle.; para. 85: the systems and techniques may change a lane association of the target vehicle from lane 0 (the lane of the tracking vehicle) to +1/−1 depending upon distance d from the lane boundary; para. 96: tracking vehicle 102 may determine lane associations for multiple target vehicles, (e.g., target vehicle 104, target vehicle 106, and target vehicle 108)).
Regarding claims 5 and 16, as best understood, He, as modified, discloses predicting that a first vehicle and a second vehicle are traveling in the different lanes from a first overlap being elevated and a second overlap being diminished for the line pairs using different ones of the associative relationships, wherein the first vehicle and the second vehicle are associated with the different vehicles (Agarwal; para. 70: The systems and techniques may determine a lane association for target vehicle 804c based on lane boundary 814 and/or lane boundary 818 in instances in which lane boundary 816 (one of a pair of lane boundaries defining lane 822 and lane 824) is missing … the systems and techniques may determine whether target vehicle 804c is associated with lane 824 based on the distance between the two-dimensional bounding box based on three-dimensional bounding box 802c and lane boundary; para. 79: Such techniques may determine when a target vehicle is in a lane of the tracking vehicle by determining whether the target vehicle is within a lane width from a lane boundary of the lane of the tracking vehicle.; para. 85: the systems and techniques may change a lane association of the target vehicle from lane 0 (the lane of the tracking vehicle) to +1/−1 depending upon distance d from the lane boundary; para. 96: tracking vehicle 102 may determine lane associations for multiple target vehicles, (e.g., target vehicle 104, target vehicle 106, and target vehicle 108)).
Regarding claims 6 and 17, as best understood, He, as modified, discloses forming the lines further includes: ordering the keypoints along a trajectory as a trace for one of the different vehicles; and connecting consecutive keypoints relative to the trace and one of the line pairs (Gu; para. 72: Each vector segment contains vehicle trajectory information and lane line information. During the matching process, the trajectory in vector segment 1 is matched with the trajectory in vector segment 2, and the lane line in vector segment 1 is matched with the lane line in vector segment 2.; fig. 2: vector segments are connected along trajectory).
Regarding claims 8 and 19, as best understood, He, as modified, discloses the line pairs include labels with instance identifiers (He; para. 39: The semantic information includes point cloud object identifiers) and the line pairs indicate estimated structure for one of a current lane and an adjacent lane (Agarwal; para. 47: Lane boundary 310 and lane boundary 312 may be a pair and may define a lane 318, which may be a lane of the tracking vehicle, and which therefore may be labeled lane 0. Lane boundary 308 and lane boundary 310 may be a pair and may define a lane 316, which may be to the left of the lane of the tracking vehicle and may therefore be labelled lane −1.).
Regarding claims 9 and 20, as best understood, He, as modified, discloses the criteria include meeting one of the associative relationships or the criteria meeting a minimum for the scores (He; para. 66: determining the semantic distance of point cloud objects further includes: weighting the difference values of center position coordinate distance, point cloud object orientation, calibration box size, and appearance feature to obtain the semantic distance between each first point cloud object and the second point cloud object in the semantic map; para. 74: If the semantic distance is less than a predetermined threshold, it indicates that there is a correlation between the corresponding first point cloud object and the second point cloud object.).
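For clarity of the record, the weighted semantic-distance test that He (paras. 66 and 74) describes may be illustrated by the following sketch. The weights, feature difference values, and threshold below are illustrative assumptions only; He does not disclose specific values.

```python
# Illustrative sketch of He's weighted semantic distance (paras. 66, 74).
# All numeric values are assumed for illustration, not taken from He.

def semantic_distance(diffs, weights):
    """Weighted sum of per-feature difference values between two objects."""
    return sum(w * d for w, d in zip(weights, diffs))

# Difference values for: center position distance, orientation,
# calibration box size, appearance feature (assumed example values).
diffs = [1.2, 0.3, 0.1, 0.4]
weights = [0.4, 0.2, 0.2, 0.2]

distance = semantic_distance(diffs, weights)
# Per He para. 74: a distance below a predetermined threshold indicates a
# correlation between the first and second point cloud objects.
correlated = distance < 1.0  # threshold value assumed
```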
Claims 4, 7, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over He in view of Agarwal and Gu as applied to claims 3 and 14 above, and further in view of Lingenfelter et al. (US 2025/0131740), hereinafter Lingenfelter.
Regarding claims 4 and 15, as best understood, He, as modified, discloses the invention substantially as claimed as described above.
He, as modified, does not explicitly disclose predicting the probabilistic estimate by a model that minimizes squared errors between the keypoints, and the model minimizes squared errors between average values for the lines.
Lingenfelter, in the same field of endeavor (lane mapping systems), discloses predicting a probabilistic estimate by a model that minimizes squared errors between keypoints, and the model minimizes squared errors between average values for lines (Lingenfelter; para. 17: For a data point corresponding to a map patch, the network server identifies locations within the map patch as measured lane marker positions and compares the measured lane marker positions for the data point to the closest reference lane marker positions in the reference lane marker geometry to determine an error metric based on the differences between the measured lane marker positions and the reference lane marker positions. For example, the error metric may be a root mean square error of the pointwise differences across each of the measured lane marker positions in a given map patch.).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, with a reasonable expectation of success, to match lane lines using a root mean square error metric between measured lane marker positions and reference lane marker geometry, as disclosed by Lingenfelter, in the processor of He, as modified, to yield the predictable result of accurately determining a vehicle's location on a road.
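For clarity of the record, the root mean square error metric that Lingenfelter (para. 17) describes may be illustrated by the following sketch. The measured and reference lane marker positions are assumed example values, not values from the reference.

```python
# Illustrative sketch of Lingenfelter's error metric (para. 17): the root
# mean square of pointwise differences between measured lane marker
# positions and the closest reference lane marker positions.
import math

def rmse(measured, reference):
    """Root mean square error of pointwise position differences."""
    squared = [(m - r) ** 2 for m, r in zip(measured, reference)]
    return math.sqrt(sum(squared) / len(squared))

measured = [1.0, 2.1, 2.9, 4.2]   # measured lane marker positions (assumed)
reference = [1.0, 2.0, 3.0, 4.0]  # closest reference positions (assumed)
error_metric = rmse(measured, reference)
```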
Regarding claims 7 and 18, as best understood, He, as modified, discloses the associative relationships include the different vehicles co-occupying the lane or traveling in the different lanes (Agarwal; para. 70: The systems and techniques may determine a lane association for target vehicle 804c based on lane boundary 814 and/or lane boundary 818 in instances in which lane boundary 816 (one of a pair of lane boundaries defining lane 822 and lane 824) is missing … the systems and techniques may determine whether target vehicle 804c is associated with lane 824 based on the distance between the two-dimensional bounding box based on three-dimensional bounding box 802c and lane boundary; para. 79: Such techniques may determine when a target vehicle is in a lane of the tracking vehicle by determining whether the target vehicle is within a lane width from a lane boundary of the lane of the tracking vehicle.; para. 85: the systems and techniques may change a lane association of the target vehicle from lane 0 (the lane of the tracking vehicle) to +1/−1 depending upon distance d from the lane boundary; para. 96: tracking vehicle 102 may determine lane associations for multiple target vehicles, (e.g., target vehicle 104, target vehicle 106, and target vehicle 108)) according to the line size of the line pairs, the line size being associated with a length of a line segment that is included in the line pairs (He; para. 140: Determine whether there is a bounding box collision between each unmatched point cloud object in the second unmatched point cloud object set and the corresponding point cloud object within the preset semantic distance range in the crowdsourced semantic map; para. 137: Bounding box collision, or OBB collision (Oriented Bounding Box), uses, but is not limited to, the separating axis theorem. It can be understood that if an axis can be found on which the projections of two convex shapes do not overlap, then the two shapes do not intersect. If the axis does not exist, and the shapes are convex, then it can be determined that the two shapes intersect (this does not apply to concave shapes, such as crescent shapes; even if the separating axis cannot be found, the two crescent shapes may not intersect).); and the line pairs indicate an estimated structure for both a current lane and an adjacent lane within the different lanes (Agarwal; para. 47: Lane boundary 310 and lane boundary 312 may be a pair and may define a lane 318, which may be a lane of the tracking vehicle, and which therefore may be labeled lane 0. Lane boundary 308 and lane boundary 310 may be a pair and may define a lane 316, which may be to the left of the lane of the tracking vehicle and may therefore be labelled lane −1.).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH THOMPSON whose telephone number is (571)272-3660. The examiner can normally be reached Mon-Thurs 9:00AM-3:00PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Bishop can be reached at (571)270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH THOMPSON/Examiner, Art Unit 3665
/Erin D Bishop/Supervisory Patent Examiner, Art Unit 3665