Prosecution Insights
Last updated: April 19, 2026
Application No. 18/124,150

LIDAR-BASED OBJECT DETECTION APPARATUS AND AUTONOMOUS DRIVING CONTROL APPARATUS HAVING THE SAME

Status: Non-Final OA (§103)
Filed: Mar 21, 2023
Examiner: UNDERWOOD, BAKARI
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 70% (above average); 137 granted / 196 resolved; +17.9% vs TC avg
Interview Lift: +19.1% (strong); allowance with vs. without an interview, among resolved cases with an interview
Typical Timeline: 3y 3m avg prosecution; 39 applications currently pending
Career History: 235 total applications across all art units
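The headline numbers above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic behind them (the exact cohort definitions, such as which cases count toward the interview lift, are assumptions inferred from the dashboard labels):

```python
# Career allow rate: granted applications over all resolved applications
# (the dashboard shows the rounded 70%).
granted, resolved = 137, 196
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")

# The +17.9% delta vs the Tech Center implies a TC-average allow rate near 52%.
tc_avg = allow_rate - 0.179
print(f"Implied TC average: {tc_avg:.1%}")

# Interview lift: 89% grant probability with an interview vs the 70% baseline,
# consistent with the reported +19.1% lift.
lift = 0.89 - 0.70
print(f"Interview lift: {lift:+.1%}")
```

137/196 comes out to 69.9%, which the dashboard rounds to 70%; the derived TC baseline is an inference from the stated delta, not a figure reported directly.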

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§103: 57.6% (+17.6% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 196 resolved cases.
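Each statute row pairs the examiner's rate with a delta against the Tech Center average, so the implied TC baselines can be recovered directly. A quick sketch (field meanings assumed from the table labels); notably, the implied baseline comes out to 40.0% for every statute, suggesting the dashboard compares against a single TC-wide estimate:

```python
# (examiner_rate, delta_vs_tc) per statute, in percent, from the table above.
stats = {
    "101": (14.0, -26.0),
    "102": (9.7, -30.3),
    "103": (57.6, +17.6),
    "112": (14.8, -25.2),
}

for statute, (rate, delta) in sorted(stats.items()):
    tc_avg = rate - delta  # delta is defined as examiner_rate - tc_average
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
```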

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/24/2025 has been entered.

Status of Claims

This is a Non-Final Rejection office action in response to application Serial No. 18/124,150. Claims 1-14 and 16-17 have been examined and fully considered. Claims 1, 14 and 17 have been amended. Claim 15 is cancelled. Claims 1-14 and 16-17 are pending in the instant application.

Response to Arguments/Rejections

Applicant's arguments with respect to claims 1-14 and 16-17 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's arguments, see remarks, filed 11/24/2025, with respect to the 35 USC § 101 rejection have been fully considered and are persuasive. The 35 USC § 101 rejection of claims 1-7 has been withdrawn.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim Rejections - 35 USC § 103

Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (Pub. No.: US 2022/0099838; previously recorded) in view of Xu et al. (Pub. No.: US 2019/0096086; previously recorded), hereinafter referred to as "Xu", and in view of Saranin et al. (Pub. 
No.: US 2022/0128700), hereinafter referred to as "Saranin".

Regarding [claim 1], Kim discloses a Light Detection And Ranging (LiDAR)-based object detection apparatus (see at least Abstract; Figure 2 "500" and "640" and [0054]: "The LiDAR sensor 500 may include a transmitter (not shown), which transmits a laser pulse, and a receiver (not shown), which receives the laser reflected from the surface of an object present within a detection range") comprising:

a LiDAR sensor ("LiDAR sensor 500") configured to obtain a point cloud (see at least Paragraphs [0056]: "The LiDAR sensor 500 outputs point cloud data (hereinafter referred to as "LiDAR data") composed of a plurality of points for a single object"; and [0062]: "the clustering unit 620 groups the point cloud data, which is the LiDAR data consisting of a plurality of points for the object obtained through the LiDAR sensor 500"); and

a processor (see at least Paragraph [0052]: "the object-tracking device 600 may further include a preprocessing unit 610.") configured to detect at least one object of interest from the point cloud (see at least Paragraph [0059]: "The preprocessing unit 610 may preprocess LiDAR data (step 100). To this end, the preprocessing unit 610 may perform calibration to match the coordinates between the LiDAR sensor 500 and the vehicle 1000. That is, the preprocessing unit 610 may convert LiDAR data into data suitable for the reference coordinate system according to the positional angle at which the LiDAR sensor 500 is mounted to the vehicle 1000. In addition, the preprocessing unit 610 may perform filtering to remove points having low intensity or reflectance using intensity or confidence information of the LiDAR data." and [0060]: "In addition, the preprocessing unit 610 may remove data reflected by the body of the host vehicle 1000. 
That is, since there is a region that is shielded by the body of the host vehicle 1000 according to the mounting position and the field of view of the LiDAR sensor 500, the preprocessing unit 610 may remove data reflected by the body of the host vehicle 1000 using the reference coordinate system.”, wherein the processor is configured to perform: determining representative points from LiDAR points corresponding to the object among the point cloud (see at least Paragraphs [0062]; [0077]: “FIGS. 6A and 6B are diagrams for explaining the concepts of the current representative point, the tracking representative point, and the previous representative point” and [0078]: “FIG. 6A shows each B' of a plurality of segment boxes at the current time t, and FIG. 6B shows an associated segment box B4-1 selected at a time t - 1 prior to the current time t . In addition, FIG. 6B shows a tracking box TB of the target object estimated using history information at the current time t. For example, a tracking box TB may be generated by estimating tracking information, such as the current position, shape, and speed of the target object that is being tracked, using history information” and [0083]: “The tracking representative point is a representative point of the tracking box TB at the current time t. For example, the tracking representative point may include a representative point located at the periphery (or the edge) of the tracking box TB (hereinafter referred to as a "second peripheral representative point ") and a representative point located at the center of the tracking box TB (hereinafter referred to as a “second central representative point”). As shown in FIG . 6B, reference numerals P0, P1, P2 and P3 are assigned to the second peripheral representative points of the tracking box TB at the current time t , in the clockwise direction from the lower left-hand corner thereof, and reference numeral P. 
is assigned to the second central representative point located at the center thereof”); determining outer points among the representative points, the outer points defining an outline of the object (see at least Paragraph [0063]: “As examples of the clustering unit 620, there are a 2D clustering unit and a 3D clustering unit. The 2D clustering unit is a unit that performs clustering in units of points or a specific structure by projecting data onto the X-Y plane without considering height information. The 3D clustering unit is a unit that performs clustering in the X-Y-Z plane in consideration of height information Z.” and [0064]: “After step 200, the shape analysis unit 630 generates information on a plurality of segment boxes for each channel using the result of clustering from the clustering unit 620 (step 300). Here, the segment box may be the result of converting the result of clustering into a geometric box shape. In addition , the information on the segment box may be at least one of the width, length, position , or direction ( or heading ) of the segment box” and [0065]: “the presence or absence of the preprocessing unit 610 or to any specific type of operation performed by the preprocessing unit 610 , the clustering unit 620 , or the shape analysis unit 630. That is , step 400 and the object tracking unit 640 according to the embodiments may also be applied when the preprocessing unit 610 is omitted ( i.e. 
when step 100 is omitted), when the preprocessing unit 610 performing step 100 processes LiDAR data in a manner different from that described above, when the clustering unit 620 performing step 200 clusters LiDAR data in a manner different from that described above, or when the shape analysis unit 630 performing step 300 generates segment box information in a manner different from that described above" and [0066]-[0068]); and

determining a…score for each of segments connecting at least two of the outer points (see at least Paragraph [0128]: "With regard to the correlation according to an embodiment, the correlation may be determined using the ratio of the area of each candidate segment box that overlaps the associated segment box selected previously to the entire area of each candidate segment box. Specifically, the third score may be assigned to the correlation in proportion to the ratio of the area of each candidate segment box that overlaps the previously selected associated segment box to the entire area of each candidate segment box. That is, the higher the ratio, the higher the third score that may be assigned." and [0129]: "The score calculation unit 920 may sum the first to third scores SCORE1, SCORE2 and SCORE3 assigned to each candidate segment box to calculate a final score TSCORE." [0130]: "The score comparison unit 930 may select, among the candidate segment boxes, the candidate segment box having the highest final score TSCORE as an associated segment box at the current time t, and may output the selected associated segment box through the output terminal OUT1." ***It is noted that Kim discloses a "final score TSCORE" for an associated segment box within the reference; however, it is being interpreted that the points of the segment box correspond to each of the segments connecting at least two of the outer points***). 
However, in addition and/or in the alternative, Xu teaches determining a confidence score for each of segments connecting at least two of the outer points (see at least Abstract; Figure 5; and Paragraphs [0049]: "At 514, the feature vectors are passed through a machine learning algorithm, which may be an artificial neural network, such as a convoluted neural network. Because of the inclusion of the per point feature vectors, the convoluted neural network will, for each point, output at 516 the prediction of displacements or offsets associated with corners of a three-dimensional bounding box and at 518, a confidence score. Thus, in this example implementation, for each point in the point cloud, the convoluted neural network will produce eight offset parameters for each point, with each of the eight offset parameters corresponding to a different corner of the three-dimensional bounding box. The confidence scores may be numbers between 0 and 1, and as described above in connection with FIG. 3, the neural network may be trained in either a supervised manner or an unsupervised manner to determine the confidence scores." [0050]: "Method 500 also includes, at 520, outputting a three-dimensional bounding box based on the confidence scores. In implementations of this disclosure, the three-dimensional bounding box may correspond to the offset parameters associated with the point in the point cloud having the highest confidence score"), and…

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the LiDAR-based object detection apparatus disclosed by Kim by combining it with Xu's teaching of determining a confidence value associated with the offsets for each point defining the three-dimensional bounding box, with a reasonable expectation of success. One would be motivated to make this modification in order to convey information about three-dimensional objects present in an environment. 
For example, various autonomous systems, such as autonomous vehicles and autonomous drones, utilize three-dimensional data of objects for collision and obstacle avoidance. In order to effectively navigate a three-dimensional environment, such autonomous systems need information about the obstacle, including information about the size and location of the obstacle, for example (see, Xu).

… However, Saranin teaches … … excluding segments having the confidence score lower than a predetermined threshold, wherein the excluding is performed before a map-matching process (see, Abstract; Paragraphs [0117]-[0118]: "One or more points of the LiDAR dataset may be identified for downsampling relative to the map. More specifically, downsampling is performed for LiDAR data points that are located below a minimum height threshold value on the map. For example, an assumption is made that most LiDAR points of interest to an AV correspond to objects that have heights that exceed a certain height measurement (e.g., two feet). Points are removed from the LiDAR dataset that are associated with heights less than the minimum height threshold value (e.g., two feet). An assumption may also be made that most LiDAR points of interest to an AV correspond to objects that have heights below a maximum height threshold value (e.g., 100 feet). Thus, points are removed from the LiDAR dataset that are associated with heights exceeding the maximum threshold value. The present solution is not limited to the particulars of this example"; [0121]; [0125] and [0127]: "confidence values for each cell, LiDAR point identifiers, LiDAR point coordinates, extrinsic LiDAR sensor and camera calibration parameters, and intrinsic camera calibration parameters. 
These inputs are used in subsequent operations 918-920 to: determine (for each point of the LiDAR dataset) a probability distribution of pixels to which a LiDAR data point may project taking into account a projection uncertainty in view of camera calibration uncertainties; and determine (for each point of the LiDAR dataset) a probability distribution over a set of object detections in which a LiDAR data point is likely to be, based on the confidence values”); performing the map-matching process that matches non-excluded segments with High Definition (HD) map data to determine a location of a vehicle (see, Paragraph [0103]: “The machine learned classification technique is trained to determine which segments should be merged with each other . The same image detection information that was used in segmentation is now aggregated over the constituent points of the segment in order to compute segment - level features . In addition to that the ground height and lane information features from HD map are also used to aid segment merging”; [0101]; and [0181]); and controlling autonomous driving operation of the vehicle based on the location of the vehicle (see, Paragraph [0085]: “Referring now to FIG. 6, there is provided a flow diagram of an illustrative method 600 for controlling a vehicle (e.g., vehicle 1021 of FIG. 1). At least a portion of method 600 is performed by a vehicle on-board computing device (e.g., vehicle on-board computing device 220 of FIG. 2). Method 600 is performed for each object (e.g., vehicle 1022 of FIG. 1, cyclist 104 of FIG. 1, and/or pedestrian 106 of FIG. 1) that has been detected to be within a distance range from the vehicle at any given time”). 
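Stripped of the legalese, the claim 1 combination mapped above amounts to a staged pipeline: score candidate segments, exclude low-confidence segments before map matching, localize the vehicle against HD map data, and feed the resulting pose to the driving controller. A minimal sketch of that flow, with every name, threshold, and data structure hypothetical (none of Kim, Xu, or Saranin discloses this exact code):

```python
from dataclasses import dataclass

@dataclass
class Segment:
    points: list          # outer points defining part of the object outline
    confidence: float     # per-segment confidence score, e.g. from a network head

def filter_segments(segments, threshold=0.5):
    """Exclude low-confidence segments BEFORE map matching (the Saranin-mapped step)."""
    return [s for s in segments if s.confidence >= threshold]

def match_to_hd_map(segments, hd_map):
    """Placeholder map matching: align the surviving segments with HD map
    features to estimate the vehicle pose (the claim's localization step)."""
    # A real implementation would run e.g. ICP or feature association here.
    return hd_map.localize(segments)

# Usage with toy data: only the high-confidence segment survives filtering.
segments = [Segment(points=[(0, 0), (1, 0)], confidence=0.9),
            Segment(points=[(1, 0), (1, 1)], confidence=0.2)]
kept = filter_segments(segments, threshold=0.5)
assert len(kept) == 1 and kept[0].confidence == 0.9
```

The ordering matters to the claim: filtering happens before the map-matching step, so low-confidence outline segments never participate in localization.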
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to further modify the combination of Kim and Xu, which determines a confidence value associated with the offsets for each point, by incorporating the exclusion of segments having a confidence score lower than a predetermined threshold as taught by Saranin. One would be motivated to make this modification in order to convey a framework for integrating additional information from the LiDAR sensors; defining the problem to ensure that the output is structured in a fashion which is more amenable to downstream processing; and improving performance by reducing under-segmentation and improving boundary recall.

As to [claim 2], the combination of Kim, Xu and Saranin teaches the LiDAR-based object detection apparatus of claim 1. Kim teaches wherein the determining of outer points includes: determining a number of the outer points as N; determining both end points among the representative points; determining one of the both end points as a 1st outer point and the other one of the both end points as an Nth outer point; and determining 2nd to N-1th outer points by selecting among the representative points in a sequential order starting from the 1st outer point toward the Nth outer point (see at least Paragraphs [0038]: "FIG. 9 is a diagram showing an example in which a segment box overlaps a tracking box at the current time"; and [0039]: "FIGS. 10A to 10I are diagrams showing various examples in which the segment box and the tracking box overlap each other" and [0069]-[0072]; [0076]; [0080]-[0081]; and [0083]-[0084] ***It is noted that these figures and paragraphs indicate the determining of outer points of the segment box***).

As to [claim 3], the combination of Kim, Xu and Saranin teaches the LiDAR-based object detection apparatus of claim 2. 
Kim discloses wherein the confidence score of a segment is determined to be lower than others, in response to one or more of a first determination that the segment is a segment connecting the N-1th and Nth outer points which is longer than a first predetermined length, a second determination that the segment is a segment other than the last segment which is longer than a second predetermined length, a third determination that an angle between the segment and an adjacent segment sharing one outer point with the segment is less than or equal to a first predetermined angle, a fourth determination that the segment has a predetermined middle region which does not have a representative point and a fifth determination that the segment includes at least one representative point which is not vertically overlapped in a region of the segment in a coordinate system representing the point cloud (see at least Paragraph [0110]: "Step 424 and step 428 may be performed by the box selection unit 840 and the overlap determination unit 710." [0111]: "The box selection unit 840 may generate a control signal CS in response to the results of the comparison by the first to third comparison units 810, 820 and 830. When it is determined that there is a segment box that is not selected as the candidate segment box as a result of detecting the plurality of segment boxes using the correlation indices in response to the control signal CS, the overlap determination unit 710 may determine whether this segment box B' overlaps the tracking box TB, and may output the result of the determination to the box selection unit 840 (step 424). For example, as shown in FIG. 
9, the segment box B' and the tracking box TB may overlap each other."; and [0112]-[0113]).

However, it would have been obvious to one of ordinary skill in the art at the effective filing date of the application to apply a fourth determination that the segment has a predetermined middle region which does not have a representative point and a fifth determination that the segment includes at least one representative point which is not vertically overlapped in a region of the segment in a coordinate system representing the point cloud, as a design choice ("Making Continuous"; see MPEP 2144.04, In re Dilnot, 319 F.2d 188, 138 USPQ 248 (CCPA 1963)). Motivation to continue the steps comes from the knowledge well known in the art that doing so would achieve the same end result, which is to provide the LiDAR-based object detection apparatus with a fourth determination that the segment has a predetermined middle region which does not have a representative point and a fifth determination that the segment includes at least one representative point which is not vertically overlapped in a region of the segment.

As to [claim 4], the combination of Kim, Xu and Saranin teaches the LiDAR-based object detection apparatus of claim 3. Kim discloses wherein the confidence score of the segment is determined to be 0 and the others are determined to be 1 (see at least Paragraphs [0121]: "The final selection unit 648A shown in FIG. 11 may include a score assignment unit 910, a score calculation unit 920, and a score comparison unit 930" and [0122]-[0124]).

As to [claim 5], the combination of Kim, Xu and Saranin teaches the LiDAR-based object detection apparatus of claim 3. 
Kim discloses wherein in response to the representative points located below or equal to a predetermined height from ground (see at least Paragraphs [0053]: "The 3D LiDAR sensor is capable of obtaining a plurality of 3D points and thus of predicting the height information of an obstacle, thus helping in accurate and precise detection and tracking of an object. The 3D LiDAR sensor may be composed of multiple 2D LiDAR sensor layers, and may generate LiDAR data including 3D information" and [0063]: "As examples of the clustering unit 620, there are a 2D clustering unit and a 3D clustering unit. The 2D clustering unit is a unit that performs clustering in units of points or a specific structure by projecting data onto the X-Y plane without considering height information. The 3D clustering unit is a unit that performs clustering in the X-Y-Z plane in consideration of height information Z"), the processor performs one or more of: the fourth determination and the fifth determination (see at least Paragraph [0110]: "Step 424 and step 428 may be performed by the box selection unit 840 and the overlap determination unit 710." [0111]: "The box selection unit 840 may generate a control signal CS in response to the results of the comparison by the first to third comparison units 810, 820 and 830. When it is determined that there is a segment box that is not selected as the candidate segment box as a result of detecting the plurality of segment boxes using the correlation indices in response to the control signal CS, the overlap determination unit 710 may determine whether this segment box B' overlaps the tracking box TB, and may output the result of the determination to the box selection unit 840 (step 424). For example, as shown in FIG. 9, the segment box B' and the tracking box TB may overlap each other."). 
However, it would have been obvious to one of ordinary skill in the art at the effective filing date of the application to have the processor perform one or more of the fourth determination and the fifth determination, as a design choice ("Making Continuous"; see MPEP 2144.04, In re Dilnot, 319 F.2d 188, 138 USPQ 248 (CCPA 1963)). Motivation to continue the steps comes from the knowledge well known in the art that doing so would achieve the same end result, which is to provide the LiDAR-based object detection apparatus with the processor performing one or more of the fourth determination and the fifth determination.

As to [claim 6], the combination of Kim, Xu and Saranin teaches the LiDAR-based object detection apparatus of claim 5. Kim teaches wherein only when a longitudinal length of the object is shorter than or equal to a first predetermined value (see at least Paragraph [0148]: "In the case of the second comparative example, the center of the rear side of the segment box is used as a representative point. For example, according to the second comparative example, the centers of the rear sides of the candidate segment boxes CB1 and CB2 for association are used as representative points RP1 and RP2. Since the density of the point cloud is high at the center of the rear side of the segment box with respect to the mounting position of the LiDAR sensor 500, the second comparative example is robust to a change in the size of the segment box according to the shape of a target object, thereby stably providing the position of the measured value in the longitudinal direction. However, since the second comparative example is incapable of accurately recognizing the heading when generating information on the segment box, there is a problem in that the reference of the rear side is changed (e.g. a problem in that the width of the segment box and the length thereof are switched to each other), thus incurring a large error in the position of the measured value. 
In this way, according to the second comparative example, in which "association" is performed on the basis of the rear side of the segment box, when the correlation is determined using the distance, tracking loss may occur due to incorrect association”), the processor performs one or more of the fourth determination and the fifth determination (see at least Paragraph [0110]: “Step 424 and step 428 may be performed by the box selection unit 840 and the overlap determination unit 710.” [0111]: “The box selection unit 840 may generate a control signal CS in response to the results of the comparison by the first to third comparison units 810, 820 and 830. When it is determined that there is a segment box that is not selected as the candidate segment box as a result of detecting the plurality of segment boxes using the correlation indices in response to the control signal CS, the overlap determination unit 710 may determine whether this segment box B' overlaps the tracking box TB, and may output the result of the determination to the box selection unit 840 (step 424). For example, as shown in FIG. 9, the segment box B' and the tracking box TB may overlap each other.”). As to [claim 7], the combination of Kim, Xu and Saranin teaches the LiDAR-based object detection apparatus of claim 1. Kim discloses wherein the processor is configured to determine the confidence score only when a longitudinal length of the object is longer than or equal to a second predetermined value (see at least Paragraph [0151]: “a candidate segment box is primarily selected using a factor related to distance. At this time, whether a segment box that has not been primarily selected as a candidate segment box overlaps the tracking box TB, i.e. is in surface contact with the tracking box TB, is determined in order to secondarily select a candidate segment box. 
Thus, it is possible to prevent an associated segment box from being incorrectly selected due to selection of a candidate segment box using only the distance factor").

Claims 8-14 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (Pub. No.: US 2022/0099838; previously recorded) in view of Chen et al. (Pub. No.: US 2018/0335307; previously recorded), hereinafter referred to as "Chen"; and in view of Wang et al. (Pub. No.: US 2018/0216942), hereinafter referred to as "Wang".

Regarding [claim 8], Kim discloses an autonomous driving control apparatus (see at least Paragraph [0139]: "The vehicle device 700 may control the vehicle 1000 based on the determined information on an object, received from the object-tracking device 600. For example, the vehicle device 700 may include a lane-keeping assist system for preventing a vehicle from deviating from a lane while maintaining the distance to a preceding vehicle, an obstacle detection system for detecting obstacles present around a vehicle, a collision prevention system for detecting the risk of a collision, an autonomous driving system for controlling a vehicle to travel autonomously while detecting obstacles present ahead of the vehicle") comprising:

a Light Detection And Ranging (LiDAR) sensor ("LiDAR sensor 500") configured to obtain a point cloud (see at least Paragraphs [0056]: "The LiDAR sensor 500 outputs point cloud data (hereinafter referred to as "LiDAR data") composed of a plurality of points for a single object"; and [0062]: "the clustering unit 620 groups the point cloud data, which is the LiDAR data consisting of a plurality of points for the object obtained through the LiDAR sensor 500");

a first processor ("a preprocessing unit 610") configured to detect at least one object of interest from the point cloud (see at least Paragraph [0059]: "The preprocessing unit 610 may preprocess LiDAR data (step 100). 
To this end, the preprocessing unit 610 may perform calibration to match the coordinates between the LiDAR sensor 500 and the vehicle 1000. That is, the preprocessing unit 610 may convert LiDAR data into data suitable for the reference coordinate system according to the positional angle at which the LiDAR sensor 500 is mounted to the vehicle 1000. In addition , the preprocessing unit 610 may perform filtering to remove points having low intensity or reflectance using intensity or confidence information of the LiDAR data.” and [0060]: “In addition, the preprocessing unit 610 may remove data reflected by the body of the host vehicle 1000. That is, since there is a region that is shielded by the body of the host vehicle 1000 according to the mounting position and the field of view of the LiDAR sensor 500, the preprocessing unit 610 may remove data reflected by the body of the host vehicle 1000 using the reference coordinate system.”)… wherein the first processor is configured to: determine representative points from LiDAR points corresponding to the object among the point cloud (see at least Paragraphs [0062]; [0077]: “FIGS. 6A and 6B are diagrams for explaining the concepts of the current representative point, the tracking representative point, and the previous representative point” and [0078]: “FIG. 6A shows each B' of a plurality of segment boxes at the current time t, and FIG. 6B shows an associated segment box B4-1 selected at a time t - 1 prior to the current time t . In addition, FIG. 6B shows a tracking box TB of the target object estimated using history information at the current time t. For example, a tracking box TB may be generated by estimating tracking information, such as the current position, shape, and speed of the target object that is being tracked, using history information” and [0083]: “The tracking representative point is a representative point of the tracking box TB at the current time t. 
For example, the tracking representative point may include a representative point located at the periphery (or the edge) of the tracking box TB (hereinafter referred to as a "second peripheral representative point") and a representative point located at the center of the tracking box TB (hereinafter referred to as a “second central representative point”). As shown in FIG. 6B, reference numerals P0, P1, P2 and P3 are assigned to the second peripheral representative points of the tracking box TB at the current time t, in the clockwise direction from the lower left-hand corner thereof, and reference numeral P. is assigned to the second central representative point located at the center thereof”); determine outer points among the representative points, the outer points defining an outline of the object (see at least Paragraph [0063]: “As examples of the clustering unit 620, there are a 2D clustering unit and a 3D clustering unit. The 2D clustering unit is a unit that performs clustering in units of points or a specific structure by projecting data onto the X-Y plane without considering height information. The 3D clustering unit is a unit that performs clustering in the X-Y-Z plane in consideration of height information Z.” and [0064]: “After step 200, the shape analysis unit 630 generates information on a plurality of segment boxes for each channel using the result of clustering from the clustering unit 620 (step 300). Here, the segment box may be the result of converting the result of clustering into a geometric box shape. In addition, the information on the segment box may be at least one of the width, length, position, or direction (or heading) of the segment box” and [0065]: “the presence or absence of the preprocessing unit 610 or to any specific type of operation performed by the preprocessing unit 610, the clustering unit 620, or the shape analysis unit 630.
That is, step 400 and the object tracking unit 640 according to the embodiments may also be applied when the preprocessing unit 610 is omitted (i.e., when step 100 is omitted), when the preprocessing unit 610 performing step 100 processes LiDAR data in a manner different from that described above, when the clustering unit 620 performing step 200 clusters LiDAR data in a manner different from that described above, or when the shape analysis unit 630 performing step 300 generates segment box information in a manner different from that described above” and [0066]-[0068]); and determine a…score for each of segments connecting at least two of the outer points (see at least Paragraph [0128]: “With regard to the correlation according to an embodiment, the correlation may be determined using the ratio of the area of each candidate segment box that overlaps the associated segment box selected previously to the entire area of each candidate segment box. Specifically, the third score may be assigned to the correlation in proportion to the ratio of the area of each candidate segment box that overlaps the previously selected associated segment box to the entire area of each candidate segment box.
That is, the higher the ratio, the higher the third score that may be assigned.” and [0129]: “The score calculation unit 920 may sum the first to third scores SCORE1, SCORE2 and SCORE3 assigned to each candidate segment box to calculate a final score TSCORE.” [0130]: “The score comparison unit 930 may select, among the candidate segment boxes, the candidate segment box having the highest final score TSCORE as an associated segment box at the current time t, and may output the selected associated segment box through the output terminal OUT1.” ***It is noted that Kim discloses a “final score TSCORE as an associated segment box” within the reference; however, it is being interpreted that the points of the segment box correspond to each of the segments connecting at least two of the outer points***)… wherein the confidence score is determined to be 0 or 1 (see at least Paragraphs [0121]: “The final selection unit 648A shown in FIG. 11 may include a score assignment unit 910, a score calculation unit 920, and a score comparison unit 930” and [0122]-[0124]). Kim does not explicitly disclose …a second processor configured to perform a map-matching process of matching data of the object received from the first processor with a high definition (HD) map data…and wherein the second processor is configured to perform the map-matching process using shape information of the segments and the confidence score. However, Chen teaches …a second processor configured to perform a map-matching process of matching data of the object received from the first processor with a high definition (HD) map data (see at least Paragraphs [0051]: “In an embodiment of localizing autonomous or other vehicles, nearby cartographic topology can also include distance to localized objects in a High Definition Map (HD Map) that record the locations of the localized objects with a high degree of accuracy and precision.
For instance, the closeness to nearby topology can include a distance from the position of the car to a stop sign, pole-like objects (e.g., telephone poles), or other similar objects represented in the HD Map. As another example, if the car (e.g., autonomous car) is using LIDAR, closeness to nearby topology can include sampled distance to other observed co-located cars (e.g., as determined from an intensity of the LIDAR points).” and [0103]: “the map matching platform 101 and/or any of the modules 501-507 of the map matching platform 101 as shown in FIG. 5 may perform one or more portions of the process 600 and may be implemented in, for instance, a chip set including a processor and a memory as shown in FIG. 12. As such, the map matching platform 101 and/or the modules 501-507 can provide means for accomplishing various parts of the process 600, as well as means for accomplishing embodiments of other processes described herein in conjunction with other components of the system 100. Although the process 600 is illustrated and described as a sequence of steps, it is contemplated that various embodiments of the process 600 may be performed in any order or combination and need not include all of the illustrated steps” and [0126]: “the matched set can be used to locate a vehicle 105 that generated the probe points in the set. For example, the probe point can be collected from a vehicle 105 (e.g., an autonomous vehicle) as it travels in a road network. The map matching results of the probe points collected from the vehicle 105 can then represent an estimation of the location of the vehicle 105.
In one embodiment, the machine learning classifier 505 can be trained on features or attributes related to sensor data from the vehicle 105 such as, but not limited to, distance from objects whose locations have been precisely mapped (e.g., in an HD Map).”)…and …wherein the second processor is configured to perform the map-matching process using shape information of the segments and the confidence score (see at least Paragraph [0034]: “a system 100 of FIG. 1 introduces the capability to apply machine learning to the point-based map matching problem based on attributes or features of each probe point, and attributes or features of the links to which the probe points are map matched to generate a matching probability or score” and [0054]: “the map matcher classifier of the map matching platform 101 reports the matching score (or matching probability) instead of the class label (e.g., matched or unmatched). This probability gives, for instance, some kind of confidence on the prediction. However, in one embodiment, because the map matching platform 101 can use any type of machine learning classifier or model (e.g., logistic regression, Random Forest, neural network, etc.) and because not all classifiers provide well-calibrated probabilities, the map matching platform 107 may perform a separate calibration step to calibrate the probabilities. This calibration step can be a post-processing depending on the classifier chosen. For example, logistic regression returns well calibrated predictions by default as it directly optimizes log-loss. However, Random Forest classifiers tend to average predictions which have difficulty making predictions near 0 and 1. 
In one embodiment, calibration methods such as Brier's score or equivalent process can be applied to obtain well calibrated probability prediction as confidence scores”). Accordingly, it would have been obvious to one of ordinary skill in the art before the filing of the invention to incorporate a map feature represented by a link of a geographic database, where the probe points are collected from sensors of devices traveling near the map feature as taught by Chen, by combining it with the LiDAR-based object detection apparatus as disclosed by Kim. One would be motivated to make this modification in order to convey a need for a machine learning approach for point-based map matchers that, for instance, can be used for map data analysis, map data creation, map data update, and/or localization of a device/vehicle (see at least Chen Paragraph [0003]). … Additionally, Wang teaches … wherein the confidence score is determined to be 0 or 1, wherein the second processor is configured to exclude the shape information of segments having the confidence score of 0 before performing the map-matching process (see Paragraphs [0037]: “the segmentation types (e.g., first type and second type mentioned above) are based on map matching confidences of the respective trajectory segments. For example, the first type can be trajectory segments with map matching confidences that are above a certain upper threshold value, the second type can be trajectory segments with map matching confidences that are below a certain lower threshold, and the unknown type can be those trajectory segments that are between the upper and lower thresholds or have undetermined map matching confidences.
In other words, the segmentation of probe trajectories is performed using map matching to correlate individual probe points or segments of the probe trajectory to road segments or links of a digital map (e.g., the geographic database 109) using any type of map matcher (e.g., publicly available map matchers or map matchers that are proprietary to a map service provider). Map matchers typically include a numerical confidence of match value (i.e., map matching confidence) to represent the confidence that the map matchers have accurately matched a probe point or segment to a road segment or link of the digital map.”; [0038]: “In one embodiment, trajectory segmentation results in partitioning of a probe trajectory into a (typically small) number of pieces, which are called trajectory segments. The segmentation module 201, for instance, can use the map matching confidence of the probe points of the probe trajectory to group continuous sequences of probe points with the similar map matching confidence values together. In one embodiment, the segmentation module 201 can define an upper threshold value for the map matching confidence above which a probe point or segment would be classified as “Good” (e.g., indicating a good match to a road segment or link, and therefore is likely to be an on-road probe point or segment), and a lower threshold value for the map matching below which a probe or segment would be classified as “Bad” (e.g., indicating a bad match to any road segment or link, and therefore is likely to be an off-road probe point or segment). The upper threshold and the lower threshold can be different values with the upper threshold value being higher than the lower threshold value. In this case, probe points or segments with map matching confidences between the upper and lower thresholds or with map matching confidences not calculated or available from the map matcher can be classified as “Unknown”.
To generalize, in one embodiment, the segmenting of the probe trajectory is based on a map-matching confidence of the probe trajectory to a digital map storing map features of the first type (e.g., on-road type), the second type (e.g., off-road type), or an unknown type.”; and [0039]: “FIG. 4A illustrates an example segmentation of a probe trajectory 401 into trajectory segments 403a-403j (also collectively referred to as trajectory segments 403) based on map matching confidence. In this example, the categories or segment types of “Good” (e.g., a first type), “Unknown”, and “Bad” (e.g., a second type) map matching are respectively represented and labeled as “1”, “0”, and “−1”. As discussed above, here, a label of “Good(1)” denotes a segment 403 whose points have been map-matched with a higher confidence (e.g., map matching confidence above an upper threshold value) and signifies an on-road segment, while a label of “Bad(−1)” denotes a segment whose points have been map-matched with a low confidence (e.g., map matching confidence below a lower threshold value) and signifies an off-road segment. Similarly, a label of “Unknown(0)” denotes a segment whose points have been map-matched with a confidence that lies between the lower and upper thresholds.”). Accordingly, it would have been obvious to one of ordinary skill in the art before the filing of the invention to further modify the combination to determine the actual high-accuracy location of the ADV, by combining the autonomous driving control apparatus as taught by Kim in view of Chen with the teachings of Wang. One would be motivated to make this modification in order to determine with high accuracy a location of the ADV. ADV location can be approximated by a location coordinate, e.g., a GPS coordinate. The GPS coordinate can be used to retrieve a local version of a localization map (i.e., a high definition (HD) map) (see Paragraph [0043]).
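The thresholding scheme quoted above from Wang's paragraphs [0037]-[0039] reduces to a small labeling rule: confidences above an upper threshold label a trajectory segment "Good" (1), confidences below a lower threshold label it "Bad" (−1), and everything in between, or unavailable, is "Unknown" (0). A minimal sketch follows; the threshold values and function names are illustrative assumptions, not taken from the reference.

```python
def label_segment(confidence, lower=0.3, upper=0.7):
    """Label one trajectory segment from its map-matching confidence.

    Thresholds are hypothetical; Wang only requires upper > lower.
    """
    if confidence is None:
        return 0      # confidence not calculated/available -> Unknown
    if confidence > upper:
        return 1      # Good: likely an on-road segment
    if confidence < lower:
        return -1     # Bad: likely an off-road segment
    return 0          # between the thresholds -> Unknown


def label_trajectory(confidences, lower=0.3, upper=0.7):
    """Label every segment of a probe trajectory."""
    return [label_segment(c, lower, upper) for c in confidences]
```

Under this sketch, a trajectory with confidences `[0.9, 0.5, 0.1, None]` would be labeled `[1, 0, -1, 0]`, mirroring the Good/Unknown/Bad partitioning Wang illustrates in FIG. 4A.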
As to [claim 9], the combination of Kim, Chen and Wang teaches the autonomous driving control apparatus of claim 8. Kim discloses wherein the determining of outer points includes: determining a number of the outer points as N; determining both end points among the representative points; determining one of the both end points as a 1st outer point and the other one of the both end points as an Nth outer point; and determining 2nd to (N-1)th outer points by selecting among the representative points in a sequential order starting from the 1st outer point toward the Nth outer point (see at least Paragraphs [0038]: “FIG. 9 is a diagram showing an example in which a segment box overlaps a tracking box at the current time”; and [0039]: “FIGS. 10A to 10I are diagrams showing various examples in which the segment box and the tracking box overlap each other” and [0069]-[0072]; [0076]; [0080]-[0081]; and [0083]-[0084] ***It is noted that these figures and paragraphs indicate determining of outer points of the segment box***). As to [claim 10], the combination of Kim, Chen and Wang teaches the autonomous driving control apparatus of claim 9.
Kim discloses wherein the confidence score of a segment is determined to be lower than others, in response to one or more of a first determination that the segment is a segment connecting the (N-1)th and Nth outer points which is longer than a first predetermined length, a second determination that the segment is a segment other than the last segment which is longer than a second predetermined length, a third determination that an angle between the segment and an adjacent segment sharing one outer point with the segment is less than or equal to a first predetermined angle, a fourth determination that the segment has a predetermined middle region which does not have a representative point, and a fifth determination that the segment includes at least one representative point which is not vertically overlapped in a region of the segment in a coordinate system representing the point cloud (see at least Paragraphs [0110]: “Step 424 and step 428 may be performed by the box selection unit 840 and the overlap determination unit 710.” [0111]: “The box selection unit 840 may generate a control signal CS in response to the results of the comparison by the first to third comparison units 810, 820 and 830. When it is determined that there is a segment box that is not selected as the candidate segment box as a result of detecting the plurality of segment boxes using the correlation indices in response to the control signal CS, the overlap determination unit 710 may determine whether this segment box B' overlaps the tracking box TB, and may output the result of the determination to the box selection unit 840 (step 424). For example, as shown in FIG. 9, the segment box B' and the tracking box TB may overlap each other.” [0112]-[0113]). As to [claim 11], the combination of Kim, Chen and Wang teaches the autonomous driving control apparatus of claim 10.
Kim discloses wherein the confidence score of the segment is determined to be 0 and the others are determined to be 1 (see at least Paragraphs [0121]: “The final selection unit 648A shown in FIG. 11 may include a score assignment unit 910, a score calculation unit 920, and a score comparison unit 930” and [0122]-[0124]). As to [claim 12], the combination of Kim, Chen and Wang teaches the autonomous driving control apparatus of claim 10. Kim discloses wherein in response to the representative points located below or equal to a predetermined height from ground (see at least Paragraphs [0053]: “The 3D LiDAR sensor is capable of obtaining a plurality of 3D points and thus of predicting the height information of an obstacle, thus helping in accurate and precise detection and tracking of an object. The 3D LiDAR sensor may be composed of multiple 2D LiDAR sensor layers, and may generate LiDAR data including 3D information” and [0063]: “As examples of the clustering unit 620, there are a 2D clustering unit and a 3D clustering unit. The 2D clustering unit is a unit that performs clustering in units of points or a specific structure by projecting data onto the X-Y plane without considering height information. The 3D clustering unit is a unit that performs clustering in the X-Y-Z plane in consideration of height information Z”), the first processor performs one or more of the fourth determination and the fifth determination (see at least Paragraph [0110]: “Step 424 and step 428 may be performed by the box selection unit 840 and the overlap determination unit 710.” [0111]: “The box selection unit 840 may generate a control signal CS in response to the results of the comparison by the first to third comparison units 810, 820 and 830.
When it is determined that there is a segment box that is not selected as the candidate segment box as a result of detecting the plurality of segment boxes using the correlation indices in response to the control signal CS, the overlap determination unit 710 may determine whether this segment box B' overlaps the tracking box TB, and may output the result of the determination to the box selection unit 840 (step 424). For example, as shown in FIG. 9, the segment box B' and the tracking box TB may overlap each other.”). As to [claim 13], the combination of Kim, Chen and Wang teaches the autonomous driving control apparatus of claim 12. Kim discloses wherein only when a longitudinal length of the object is shorter than or equal to a first predetermined value (see at least Paragraph [0148]: “In the case of the second comparative example, the center of the rear side of the segment box is used as a representative point. For example, according to the second comparative example, the centers of the rear sides of the candidate segment boxes CB1 and CB2 for association are used as representative points RP1 and RP2. Since the density of the point cloud is high at the center of the rear side of the segment box with respect to the mounting position of the LiDAR sensor 500, the second comparative example is robust to a change in the size of the segment box according to the shape of a target object, thereby stably providing the position of the measured value in the longitudinal direction. However, since the second comparative example is incapable of accurately recognizing the heading when generating information on the segment box, there is a problem in that the reference of the rear side is changed (e.g. a problem in that the width of the segment box and the length thereof are switched to each other), thus incurring a large error in the position of the measured value.
In this way, according to the second comparative example, in which "association" is performed on the basis of the rear side of the segment box, when the correlation is determined using the distance, tracking loss may occur due to incorrect association”), the first processor performs one or more of the fourth determination and the fifth determination (see at least Paragraph [0110]: “Step 424 and step 428 may be performed by the box selection unit 840 and the overlap determination unit 710.” [0111]: “The box selection unit 840 may generate a control signal CS in response to the results of the comparison by the first to third comparison units 810, 820 and 830. When it is determined that there is a segment box that is not selected as the candidate segment box as a result of detecting the plurality of segment boxes using the correlation indices in response to the control signal CS, the overlap determination unit 710 may determine whether this segment box B' overlaps the tracking box TB, and may output the result of the determination to the box selection unit 840 (step 424). For example, as shown in FIG. 9, the segment box B' and the tracking box TB may overlap each other.”). As to [claim 14], the combination of Kim, Chen and Wang teaches the autonomous driving control apparatus of claim 8. Kim discloses wherein the first processor is configured to determine the confidence score only when a longitudinal length of the object is greater than or equal to a second predetermined value (see at least Paragraph [0151]: “a candidate segment box is primarily selected using a factor related to distance. At this time, whether a segment box that has not been primarily selected as a candidate segment box overlaps the tracking box TB, i.e. is in surface contact with the tracking box TB, is determined in order to secondarily select a candidate segment box.
Thus, it is possible to prevent an associated segment box from being incorrectly selected due to selection of a candidate segment box using only the distance factor”). As to [claim 16], the combination of Kim, Chen and Wang teaches the autonomous driving control apparatus of claim 8. Kim does not explicitly disclose wherein the second processor is configured to perform a localization using a result of the map-matching. However, Chen teaches wherein the second processor is configured to perform a localization using a result of the map-matching (see at least Paragraph [0051]: “localizing autonomous or other vehicles, nearby cartographic topology can also include distance to localized objects in a High Definition Map (HD Map) that record the locations of the localized objects with a high degree of accuracy and precision. For instance, the closeness to nearby topology can include a distance from the position of the car to a stop sign, pole-like objects (e.g., telephone poles), or other similar objects represented in the HD Map. As another example, if the car (e.g., autonomous car) is using LIDAR, closeness to nearby topology can include sampled distance to other observed co-located cars (e.g., as determined from an intensity of the LIDAR points).”). Accordingly, it would have been obvious to one of ordinary skill in the art before the filing of the invention to incorporate a map feature represented by a link of a geographic database, where the probe points are collected from sensors of devices traveling near the map feature as taught by Chen, by combining it with the LiDAR-based object detection apparatus as disclosed by Kim. One would be motivated to make this modification in order to convey a need for a machine learning approach for point-based map matchers that, for instance, can be used for map data analysis, map data creation, map data update, and/or localization of a device/vehicle (see at least Chen Paragraph [0003]).
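The claim 16 limitation addressed above (localization using a result of the map-matching) can be illustrated with a toy fix: given HD-map positions of a few matched landmarks and the vehicle-relative offsets at which the sensor observed them, each match votes for a vehicle position, and the estimate is the mean of the votes. This is purely an illustrative sketch of the concept; neither Kim nor Chen discloses this particular arithmetic, and the function name and data layout are assumptions.

```python
def localize(matches):
    """Estimate a 2-D vehicle position from map-matched landmarks.

    matches: list of ((map_x, map_y), (rel_x, rel_y)) pairs, where
    (map_x, map_y) is a landmark's HD-map position and (rel_x, rel_y)
    is the landmark's measured position relative to the vehicle.
    Each match implies vehicle = map_position - relative_offset; the
    estimate averages those implied positions.
    """
    if not matches:
        raise ValueError("need at least one map-matched landmark")
    xs = [mx - rx for (mx, my), (rx, ry) in matches]
    ys = [my - ry for (mx, my), (rx, ry) in matches]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

For example, two consistent matches `((10, 10), (2, 0))` and `((5, 7), (-3, -3))` both imply a vehicle position of `(8, 10)`, so the estimate is `(8.0, 10.0)`.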
As to [claim 17], the combination of Kim, Chen and Wang teaches the LiDAR-based object detection apparatus. Kim discloses wherein the processor is configured to transmit shape information (see at least Paragraph [0064]: “the shape analysis unit 630 generates information on a plurality of segment boxes for each channel using the result of clustering from the clustering unit 620 (step 300). Here, the segment box may be the result of converting the result of clustering into a geometric box shape. In addition, the information on the segment box may be at least one of the width, length, position, or direction (or heading) of the segment box”…). Kim does not expressly disclose, nor does Wang disclose, …the confidence score of each segment to a second processor for a localization of a vehicle based on the shape information and the confidence score of each segment and controlling the vehicle based on the localization. However, Chen discloses …the confidence score of each segment to a second processor for a localization of a vehicle based on the shape information (see at least Paragraph [0051]: “In an embodiment of localizing autonomous or other vehicles, nearby cartographic topology can also include distance to localized objects in a High Definition Map (HD Map) that record the locations of the localized objects with a high degree of accuracy and precision. For instance, the closeness to nearby topology can include a distance from the position of the car to a stop sign, pole-like objects (e.g., telephone poles), or other similar objects represented in the HD Map.
As another example, if the car (e.g., autonomous car) is using LIDAR, closeness to nearby topology can include sampled distance to other observed co-located cars (e.g., as determined from an intensity of the LIDAR points).”) and the confidence score of each segment (see at least Paragraph [0054]: “the map matcher classifier of the map matching platform 101 reports the matching score (or matching probability) instead of the class label (e.g., matched or unmatched). This probability gives, for instance, some kind of confidence on the prediction. However, in one embodiment, because the map matching platform 101 can use any type of machine learning classifier or model (e.g., logistic regression, Random Forest, neural network, etc.) and because not all classifiers provide well-calibrated probabilities, the map matching platform 107 may perform a separate calibration step to calibrate the probabilities.”) and controlling the vehicle based on the localization (see at least Paragraph [0051]: “For instance, the closeness to nearby topology can include a distance from the position of the car to a stop sign, pole-like objects (e.g., telephone poles), or other similar objects represented in the HD Map. As another example, if the car (e.g., autonomous car) is using LIDAR, closeness to nearby topology can include sampled distance to other observed co-located cars (e.g., as determined from an intensity of the LIDAR points).”). Accordingly, it would have been obvious to one of ordinary skill in the art before the filing of the invention to incorporate a map feature represented by a link of a geographic database, where the probe points are collected from sensors of devices traveling near the map feature to control the vehicle as taught by Chen, by combining it with the LiDAR-based object detection apparatus as disclosed by Kim.
One would be motivated to make this modification in order to convey a need for a machine learning approach for point-based map matchers that, for instance, can be used for map data analysis, map data creation, map data update, and/or localization of a device/vehicle (see at least Chen Paragraph [0003]). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to BAKARI UNDERWOOD whose telephone number is (571) 272-8462. The examiner can normally be reached M-F, 8:00 to 4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Flynn, can be reached at (571) 272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /B.U./ Examiner, Art Unit 3663 /JAMES M MCPHERSON/ Examiner, Art Unit 3663

Prosecution Timeline

Mar 21, 2023 — Application Filed
Feb 06, 2025 — Non-Final Rejection (§103)
May 09, 2025 — Response Filed
Aug 14, 2025 — Final Rejection (§103)
Nov 24, 2025 — Request for Continued Examination
Dec 06, 2025 — Response after Non-Final Action
Feb 12, 2026 — Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594987 — ELECTRONIC POWER STEERING SYSTEM RACK FORCE OBSERVER VEHICLE DIAGNOSTICS — 2y 5m to grant, granted Apr 07, 2026
Patent 12576690 — REEFER POWER CONTROL — 2y 5m to grant, granted Mar 17, 2026
Patent 12575493 — SYSTEM AND METHOD FOR CONTROLLING MACHINE BASED ON COST OF HARVEST — 2y 5m to grant, granted Mar 17, 2026
Patent 12576876 — Method for Implementing Autonomous Driving, Medium, Vehicle-Mounted Computer, and Control System — 2y 5m to grant, granted Mar 17, 2026
Patent 12546626 — METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR PROBE DATA-BASED GEOMETRY GENERATION — 2y 5m to grant, granted Feb 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 89% (+19.1%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 196 resolved cases by this examiner. Grant probability derived from career allow rate.
