Prosecution Insights
Last updated: April 19, 2026
Application No. 18/224,807

METHOD AND SYSTEM FOR SENSOR FUSION FOR VEHICLE

Status: Non-Final OA (§103)
Filed: Jul 21, 2023
Examiner: DOROS, KAYLA RENEE
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 76%

Examiner Intelligence

Grants 73% — above average.

Career Allow Rate: 73% (19 granted / 26 resolved; +21.1% vs TC avg)
Interview Lift: +2.8% (a minimal lift of roughly 3%, measured across resolved cases with interview)
Typical Timeline: 2y 6m average prosecution; 30 applications currently pending
Career History: 56 total applications across all art units
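
The headline figures above reduce to simple arithmetic over the examiner's resolved cases. A minimal sketch in Python; the counts (19 granted / 26 resolved) and the +2.8% lift come from this page, while rounding to whole percents and adding the lift directly to the career rate are assumptions about how the tool derives its numbers:

```python
# Hypothetical reproduction of the dashboard arithmetic shown above.
granted, resolved = 19, 26
career_allow_rate = granted / resolved               # 0.7308 -> shown as 73%
interview_lift = 0.028                               # "+2.8% Interview Lift"
with_interview = career_allow_rate + interview_lift  # 0.7588 -> shown as 76%

print(f"career allow rate: {career_allow_rate:.0%}")  # 73%
print(f"with interview:    {with_interview:.0%}")     # 76%
```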

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 26 resolved cases.
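
Since each row pairs the examiner's rate with its delta against the Tech Center average, the implied TC baseline can be recovered by subtraction. A quick sketch (values read from the rows above; treating the delta as examiner rate minus TC average is an assumption about the dashboard's convention):

```python
# Recover the implied Tech Center baseline from each statute row above.
examiner_rate = {"101": 7.7, "103": 53.7, "102": 16.7, "112": 19.6}
delta_vs_tc = {"101": -32.3, "103": 13.7, "102": -23.3, "112": -20.4}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
# Every row implies a TC average of 40.0%, i.e. a single baseline estimate.
```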

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This non-final office action is a response to the RCE received on 10/14/2025. Claims 1-20 are pending. Claims 1, 8, 12-13, and 15-16 have been amended.

Response to Arguments

The amendments to claims 15-16 overcome the previous claim objections. The amendments to claims 8 and 12 overcome the previous 112(b) rejections.

Regarding the 101 rejection, the examiner is interpreting the disclosure of autonomous control as support for the limitation "controlling, by the processor, the vehicle based on the updated sensor fusion track" as being disclosed in specification ¶0047 via "In the present situation, when the vehicle is performing autonomous driving control, fast longitudinal control of the vehicle is required, but longitudinal control of the vehicle is delayed due to a mismatch between collision prediction points of the sensor fusion track and the LiDAR track, and thus a collision accident may occur". Thus, the 101 rejection is withdrawn.

The arguments with respect to the 103 rejection have been considered, but are not persuasive. Applicant asserts on Page 11 of the remarks that Zeng discloses closest point a*, which represents the closest point to the vehicle when the vehicle reaches point P*, and thus represents future positional information, not current. The applicant states that because of this, the point a* cannot be considered as the closest point based on a LiDAR track used to determine a fusion track of the vehicle at the current time. However, the amended claim states "determining, by the processor, a first point corresponding to a closest point of the target object from the vehicle with respect to a potential collision based on the LiDAR track of the first time frame". The amended claim does not require as narrow an interpretation as argued, considering the claims are directed towards a 'potential' (future) collision point. Furthermore, avoiding a potential collision includes considering the future positions of the vehicle/target object. If the applicant intends for the closest point of the target object to be the current closest point to a vehicle at a current pose/time, then the claim language as written does not reflect that. The claim as written does not exclude predicted vehicle movement/future positions, or a closest point evaluated at a future position. Under the broadest reasonable interpretation, "time frame" includes a single point in time as well as a plurality of points in time (such as a range or period of time). The time frame can include the future evasive steering path of Zeng, as a time frame can include multiple time points. Additionally, a time frame does not need to include the entire evasive maneuver and can merely include the time of a future point of the closest point relative to a potential collision. Therefore, the prior art still reads on the amended claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-2, 5, 7, 10-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cao et al. (US 20220262129 A1) in view of Zeng et al. (US 20150120138 A1).

Regarding Claim 1, Cao discloses:

A method for controlling a vehicle, the method comprising: (See at least Figure 5 via an example method for multiple hypothesis-based fusion of sensor data, and ¶0015 via "The sensor-fusion system 104 can capture the FOV 114 from any exterior surface of the vehicle 102".)

generating, by a processor of the vehicle, a sensor fusion track of a first time frame for a target object based on data received from at least two of a Light Detection and Ranging (LiDAR), a radar, or a camera mounted on the vehicle; (See at least ¶0003 via "The method follows with outputting an indication of the bounding box as a match between one or more object-tracks from the plurality of first object-tracks with at least one object-track from the set of second object-tracks." and ¶0016 via "the fusion module 108 executes on a processor or other hardware. During execution, the fusion module 108 can track objects based on sensor data obtained at the radar interface 106-1 and the vision camera interface 106-2." *Wherein the generation of multiple object tracks from the at least two sensors by a processor corresponds to the generation of a sensor fusion track, and the time in which the sensor fusion track is processed corresponds to a first time frame.)

determining, by the processor, a second point corresponding to a closest point of the target object from the vehicle with respect to a potential collision based on the sensor fusion track of the first time frame; (See at least Figure 2 via Processor 204-1, and see ¶0034 via "The fusion module 108 reports out information, relative to a reference point, which corresponds to an estimated point of collision between the vehicle 102 and the other vehicle 110 given their current trajectories." and "Note that for some fusion trackers, such as the fusion module 108-1, the reference point associated with each low-level track may be synchronized during fusion.")

updating, by the processor, the sensor fusion track based on the first point and the second point; and (See at least Figure 2 via Processor 204-1, and see ¶0048-¶0057, which explain that the sensor data from Combination A, Combination B, and Combination C is compared based on a probability metric in order to determine which sensor(s)/combination should be relied on, and thus is interpreted as updating the sensor fusion track. See ¶0057 via "With the probability values for each hypothesis well defined using Equations 1-12, the fusion module 108-1 can fuse sensor data from the multiple interfaces 106 using the most accurate of the three combinations A, B, and C, of pseudo measurement types, for a particular situation."
Furthermore, see at least Figures 4-1 and 4-2, which illustrate points corresponding to the three combinations A, B, and C and are thus being interpreted as having a first and second point that are used when updating/determining which combination should be relied on.)

controlling, by the processor, the vehicle based on the updated sensor fusion track (See at least ¶0023 via "The sensor-fusion system 104-1 and the controller 202 communicate over a link 212. The link 212 may be a wired or wireless link and in some cases includes a communication bus. The controller 202 performs operations based on information received over the link 212, such as an indication of a bounding box output from the sensor-fusion system 104-1 as objects in the FOV are identified from processing and merging object-tracks." and ¶0024 via "The controller 202 includes a processor 204-1 and a computer-readable storage medium (CRM) 206-1 (e.g., a memory, long-term storage, short-term storage), which stores instructions for an automotive module 208".)

Furthermore, although Cao discloses that a LiDAR sensor track can be utilized in the sensor fusion method [¶0016; ¶0027], Cao does not explicitly disclose, but Zeng--who is in the same field of endeavor--discloses:

generating, by the processor, a LiDAR track of (See at least ¶0007 via "Radar and LiDAR sensors that are sometimes employed on vehicles to detect objects around the vehicle and provide a range to and orientation of those objects provide reflections from the objects as multiple scan points that combine as a point cluster range map, where a separate scan point is provided for every 1/2 degree across the field-of-view of the sensor.")

determining, by a processor, a first point corresponding to a closest point of the target object from the vehicle (See at least Figure 2 and ¶0036 via processors. See at least Figures 12-13 and ¶0081 via "the algorithm determines a curve point p* that is a predefined safe distance dt from the closest scan point 26 on the target vehicle 14") with respect to a potential collision (See at least Figures 12-13 and ¶0081 via "The algorithm then determines a target line 92 representing a possibility for the virtual target path 24 that is parallel to the heading of the subject vehicle 12 before starting the evasive steering maneuver") based on the LiDAR track of (See at least Figures 12-13 via the plurality of scan points, which together are being interpreted as the LiDAR track of the target object. Also see ¶0036 via "A LiDAR sensor included in the processor 34 provides the data scan point map to a perception module processor 42 that processes the data and provides sensor data fusion, object detection, object tracking, etc.") and a heading of (See at least Figures 12-13 and ¶0081 via "The algorithm then determines a target line 92 representing a possibility for the virtual target path 24 that is parallel to the heading of the subject vehicle 12 before starting the evasive steering maneuver");

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify Cao's sensor fusion method in view of Zeng's LiDAR track with a closest point corresponding to a point of potential collision in order to provide an additional sensor track using LiDAR data to be evaluated/compared against the other utilized sensor combinations in Cao.

One of ordinary skill would be motivated to include a LiDAR track within the evaluation during Cao's processing/time frame because LiDAR sensors can detect arbitrary shapes and distances, which is important in determining a point of potential collision: "By providing a cluster of scan return points, objects having various and arbitrary shapes, such as trucks, trailers, bicycle, pedestrian, guard rail, K-barrier, etc., can be more readily detected, where the bigger and/or closer the object to the subject vehicle the more scan points are provided." [Zeng ¶0007]. Furthermore, by incorporating the LiDAR track into Cao's method, data from multiple types of sensors can be analyzed and compared (such as radar, visual camera, and LiDAR) in order to select/update the most accurate track/combination to rely on when determining how to further maneuver.
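
[Editor's note] For readers outside the art, the selection step the rejection reads onto Cao ¶0048-¶0057 amounts to scoring candidate sensor combinations (A, B, C) by probability and updating the fused track from the winner. A minimal Python sketch under that reading; the data shapes and names are hypothetical, not Cao's implementation:

```python
# Hypothetical sketch: pick the most probable sensor combination and
# update the fused track from its pseudo measurement.
def update_fusion_track(hypotheses: dict) -> dict:
    """Return the pseudo measurement of the most probable combination."""
    best = max(hypotheses, key=lambda k: hypotheses[k]["probability"])
    return hypotheses[best]["pseudo_measurement"]

hypotheses = {
    "A": {"probability": 0.22, "pseudo_measurement": {"x": 10.1, "y": 2.0}},
    "B": {"probability": 0.61, "pseudo_measurement": {"x": 10.3, "y": 1.9}},
    "C": {"probability": 0.17, "pseudo_measurement": {"x": 9.8, "y": 2.2}},
}
print(update_fusion_track(hypotheses))  # combination B wins here
```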
Regarding Claim 2, Cao in view of Zeng discloses the method of Claim 1. Furthermore, Zeng discloses:

wherein the determining of the first point includes: determining a first heading of the LiDAR track based on the heading of the vehicle; (See at least Figures 12-13, which illustrate the vehicle's heading. *The positive X direction as illustrated is the vehicle moving forward, and the negative X direction, which is not illustrated, is where the vehicle has already traveled. As illustrated, the vehicle is performing collision avoidance, thus showing that the vehicle is determining where the target object's LiDAR track is--including its headings--which causes it to maneuver to avoid collision based on the respective headings of the vehicle and the target object/LiDAR track.)

determining a first midpoint of a first track side corresponding to an opposite side of the first heading in the LiDAR track; and determining, in the LiDAR track, whether the first point is located at a left corner or a right corner of the first track side based on the first midpoint (See at least Figures 12-13, which illustrate the LiDAR track of the target obstacle, wherein the data acquired from LiDAR includes a left point, right point, and midpoint of the target object, i.e., of the detected edge of the target object. Furthermore, based on the readings from a LiDAR point cloud, the vehicle determines which points of the LiDAR track/target obstacle are closest to the vehicle and determines what kind of evasive maneuver should be made if one is necessary--see safe distance dt in Figures 12-13; ¶0081 via "closest scan point on target vehicle 14 is designated by a*"; and ¶0065 via "Divide the scan points into left and right points based on their relative position to the shortest path, i.e., those above the path are among the left points and those below the path are among the right points".)

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify Cao's sensor fusion method in view of Zeng's LiDAR track with a closest point corresponding to a point of potential collision on the left or right side in order to determine if the target object is a threat and whether or not an evasive maneuver needs to be taken.

Furthermore, by using LiDAR to gather data points representing the location of a target object, such as a target vehicle, the controlled vehicle is able to safely maneuver while avoiding collision: "Based on the determination of which data points represent what objects and their location relative to the subject vehicle 12, the algorithm determines in the pre-processing operation what is the best or safest side of the target vehicle 14 for the virtual target path 24 so that the subject vehicle 12 can more safely avoid the target vehicle 14." [Zeng ¶0037]. Furthermore, by incorporating the LiDAR track into Cao's method, data from multiple types of sensors can be analyzed and compared (such as radar, visual camera, and LiDAR) in order to select/update the most accurate track/combination to rely on when determining how to further maneuver.
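
[Editor's note] The Claim 2 limitation, as paraphrased in this rejection, is a small geometry problem: find the side of the track opposite its heading, take that side's midpoint, and label the closest point as that side's left or right corner. A sketch under the assumption that the LiDAR track is modeled as an oriented rectangle; the rectangle model, the left-hand-normal labeling, and all names are illustrative:

```python
import math

def rear_side_points(cx, cy, length, width, heading_rad):
    """Midpoint, left corner, and right corner of the side opposite the heading."""
    fx, fy = math.cos(heading_rad), math.sin(heading_rad)  # heading unit vector
    lx, ly = -fy, fx                                       # left-hand normal
    mx, my = cx - fx * length / 2, cy - fy * length / 2    # rear-side midpoint
    left = (mx + lx * width / 2, my + ly * width / 2)
    right = (mx - lx * width / 2, my - ly * width / 2)
    return (mx, my), left, right

def closest_rear_corner(vehicle_xy, left, right):
    """Label which rear corner lies nearer the vehicle."""
    return "left" if math.dist(vehicle_xy, left) < math.dist(vehicle_xy, right) else "right"

mid, left, right = rear_side_points(cx=10.0, cy=3.0, length=4.5, width=1.8,
                                    heading_rad=math.radians(15))
print(closest_rear_corner((0.0, 0.0), left, right))  # "right" for this pose
```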
Regarding Claim 5, Cao in view of Zeng disclose the method of Claim 2. Furthermore, Zeng discloses:

wherein the determining of whether the first point is located at the left corner or the right corner is based on an equation of a straight line connecting a center point of the LiDAR track and the first midpoint and coordinate values of the first point, wherein the coordinate values are determined based on a first coordinate system (Wherein Zeng determines which direction to evade the target obstacle by considering other obstacles and safe distance dt from potential collision point a', such as in Figure 12. See at least ¶0056 via "The algorithm to find the best left or right direction for the virtual target path 24 around the target vehicle 14 is described below in the following ten-step algorithm, numbered 1-10. The algorithm uses the scan points 26 from the LiDAR sensor 18 as inputs" and ¶0055 via "Define vertices 54, shown in FIG. 9, as the mid-point of edges 56 of the Delaunay Triangles". Additionally, see Figure 9, which shows scan points and defined mid-points of the various straight lines. Furthermore, see ¶0066 via "If the target vehicle scan points are among the right points, the safe direction of the lane change is 'left' and the non-target-vehicle scan points in the left group are considered as the object points, otherwise, the safe direction is 'right' and the non-target-vehicle scan points in the right group are considered as the object points." Furthermore, the 2D plane as illustrated in Figure 9 is interpreted as having straight lines from one node to the next. Additionally, see at least ¶0039 via "global coordinates", which illustrates a coordinate system.)

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to include the determination of whether the collision would occur on the right or left side of the target vehicle, using scanned LiDAR points from Zeng, in the method previously disclosed by Cao in view of Zeng in order to properly determine the evasive action necessary to avoid collision while utilizing the shortest path.

Regarding Claim 7, Cao in view of Zeng disclose the method of Claim 5. Furthermore, Zeng discloses:

wherein the determining of the first point is based on a linear distance of the LiDAR track from the origin of a first coordinate system, a lateral distance of the LiDAR track from the origin, or a longitudinal distance of the LiDAR track from the origin, wherein the origin of the first coordinate system is located at a point of the vehicle (See at least Figure 12, which illustrates safe distance dt and the origin; this point is located at a future point of the vehicle. When vehicle 12 reaches spot p' on evasive path 20, the distance dt represents the lateral distance from the origin to the first point.)

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify the method previously disclosed by Cao in view of Zeng with the lateral distance from the origin to the first point in order to maintain a safety distance between the host vehicle and the target object when an evasive maneuver is necessary to avoid collision.
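
[Editor's note] Claim 7 names three alternative metrics for locating the first point: linear, lateral, and longitudinal distance from an origin fixed on the vehicle. A trivial sketch; taking x as longitudinal and y as lateral is an assumed convention, not something the claim specifies:

```python
import math

def track_distances(x, y):
    return {
        "linear": math.hypot(x, y),  # straight-line range to the point
        "longitudinal": abs(x),      # along the vehicle axis
        "lateral": abs(y),           # across the vehicle axis
    }

print(track_distances(x=12.0, y=-3.5))
# {'linear': 12.5, 'longitudinal': 12.0, 'lateral': 3.5}
```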
Regarding Claim 10, Cao in view of Zeng disclose the method of Claim 2. Furthermore, Cao discloses:

wherein the determining of the second point includes: (See ¶0034 via "The fusion module 108 reports out information, relative to a reference point, which corresponds to an estimated point of collision between the vehicle 102 and the other vehicle 110 given their current trajectories." and "Note that for some fusion trackers, such as the fusion module 108-1, the reference point associated with each low-level track may be synchronized during fusion." and see fusion module 108 in Figure 1.)

However, Cao does not explicitly disclose, but Zeng discloses:

determining a second heading; determining that the second point is located at a left corner of a second track side corresponding to an opposite side of the second heading when the first point in the LiDAR track is located at the left corner of the first track side; and determining that the second point is located at a right corner of the second track side when the first point in the LiDAR track is located at the right corner of the first track side (See at least Figure 13, which illustrates the second heading based off of the first initial heading as the vehicle needs to move to avoid the obstacles 14 and 22, as well as the left and right points, which are determined on either side of path 20 with safety distances dt, Dbm, etc. in order to ensure the vehicle can avoid collision.)

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify the method disclosed by Cao in view of Zeng to account for the locations of the first and second points, as in Zeng, in order to safely maneuver around the obstacles: "Based on the determination of which data points represent what objects and their location relative to the subject vehicle 12, the algorithm determines in the pre-processing operation what is the best or safest side of the target vehicle 14 for the virtual target path 24 so that the subject vehicle 12 can more safely avoid the target vehicle 14" [Zeng ¶0034].
Regarding Claim 11, Cao in view of Zeng disclose the method of Claim 10. Furthermore, Cao discloses:

further including: determining a position of the left corner of the second track side and a position of the right corner of the second track side based on a center coordinate value of the sensor fusion track, a length of the sensor fusion track, a width of the sensor fusion track, and an angle of the second heading (See at least ¶0018 via "The fusion module 108 is ultimately concerned with correlating the bounding box 112-1 with the bounding box 112-2 so they appear similarly size, shaped, and positioned to correspond to the same part of the same vehicle 110, rather than track and follow different parts of one or two different vehicles." *Wherein the size includes a length and width. Also see ¶0029 via "Similar to the radar interface 106-1, the vision camera interface 106-2 provides a list of vision-camera-based object-tracks. The vision camera interface 106-2 outputs sensor data, which can be provided in various forms, such as a list of candidate objects being tracked, along with estimates for each of the objects' position, velocity, object class, and reference angles (e.g., an azimuth angle to a 'centroid' reference point on the object, such as a center of a rear face of the moving vehicle 110, other 'extent angles' to near corners of the rear face of the moving vehicle 110)" and ¶0039 via "However, this time a radar-based bounding box 406-2 is offset to the right and above a camera-based bounding box 408-2", which illustrates that which side (right or left) is able to be determined from the collected data.)

Regarding Claim 12, Cao in view of Zeng discloses the method of Claim 10. Furthermore, Cao discloses:

wherein the updating of the sensor fusion track includes: adjusting a first longitudinal coordinate value of a (See at least Figure 4-1 via A(Xpseudo, Ypseudo); B(Xpseudo, Ypseudo); C(Xpseudo, Ypseudo), which indicate the lateral and longitudinal coordinates of the corner points. Also see ¶0057 via "With the probability values for each hypothesis well defined using Equations 1-12, the fusion module 108-1 can fuse sensor data from the multiple interfaces 106 using the most accurate of the three combinations A, B, and C, of pseudo measurement types, for a particular situation", which shows the updating/adjusting, and ¶0061 via "At 506, a pseudo measurement type that has a greater chance of being accurate than each other pseudo measurement type is selected from a plurality of pseudo measurement types".)

However, Cao does not explicitly disclose the updating/adjusting based on the midpoint. Nevertheless, it would have been obvious to one of ordinary skill in the art to choose any corresponding point between the tracks in order to compare them.

Regarding Claim 13, Cao discloses:

A system for controlling a vehicle, the system comprising: (See at least Figure 2 via Sensor Fusion System 104-1.)

a memory configured to store a sensor fusion track generated with respect to a target object; and (See at least Figure 2 via the computer-readable storage medium as the memory, which is linked to the sensor fusion system. Also see ¶0024 via "The sensor-fusion system 104-1 may include processing hardware that includes a processor 204-2 (e.g., a hardware processor, a processing unit) and a computer-readable storage medium (CRM) 206-2, which stores instructions associated with a fusion module 108-1.")
a processor electrically or communicatively connected to the memory, wherein the memory stores instructions which are executable by the processor and the processor is configured, by executing the instructions, to: (See at least Figure 2 via Processor 204-1 and Computer-Readable Storage Medium 206-1, and see ¶0024 via "The controller 202 includes a processor 204-1 and a computer-readable storage medium (CRM) 206-1 (e.g., a memory, long-term storage, short-term storage), which stores instructions for an automotive module 208.") (Regarding the instructions, see the Claim 1 rejection, because the steps are the same.)

Regarding Claim 14, Cao in view of Zeng disclose the system of Claim 13. Furthermore, Zeng discloses: wherein the processor, to determine the first point, is further configured to: determine a first heading of the LiDAR track based on the heading of the vehicle; determine a first midpoint of a first track side corresponding to an opposite side of the first heading in the LiDAR track; and determine, in the LiDAR track, whether the first point is located at a left corner or a right corner of the first track side based on the first midpoint (See at least Figures 12-13, ¶0065, and ¶0081, mapped as in the Claim 2 rejection above; the steps are the same.)

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify Cao's sensor fusion system in view of Zeng's LiDAR track with a closest point corresponding to a point of potential collision on the left or right side in order to determine if the target object is a threat and whether or not an evasive maneuver needs to be taken, for the same reasons given for Claim 2 [Zeng ¶0037].
Regarding Claim 16, Cao in view of Zeng disclose the system of Claim 14. Furthermore, Zeng discloses: wherein the processor is further configured to determine whether the first point is located at the left corner or the right corner based on an equation of a straight line connecting a center point of the LiDAR track and the first midpoint and coordinate values of the first point, wherein the coordinate values are determined based on a first coordinate system (See at least ¶0055-¶0056, ¶0066, and Figure 9, mapped as in the Claim 5 rejection above.)

Therefore, it would have been obvious, for the reasons given for Claim 5, to include the determination of whether the collision would occur on the right or left side of the target vehicle using scanned LiDAR points from Zeng, in order to properly determine the evasive action necessary to avoid collision while utilizing the shortest path.
Regarding Claim 17, Cao in view of Zeng disclose the system of Claim 16. Furthermore, Zeng discloses: wherein the processor is further configured to determine the first point based on a linear distance of the LiDAR track from the origin of the first coordinate system, a lateral distance of the LiDAR track from the origin, or a longitudinal distance of the LiDAR track from the origin (See at least Figure 2 via Processor 34 and ¶0036 via "A LiDAR sensor included in the processor 34 provides the data scan point map to a perception module processor 42 that processes the data and provides sensor data fusion, object detection, object tracking, etc."; additionally, see Figure 12 and the safe distance dt mapping set out for Claim 7 above.) Therefore, it would have been obvious, for the reasons given for Claim 7, to modify the system previously disclosed by Cao in view of Zeng with the lateral distance from the origin to the first point in order to maintain a safety distance between the host vehicle and the target object when an evasive maneuver is necessary to avoid collision.

Regarding Claim 18, Cao in view of Zeng disclose the system of Claim 14. Furthermore, Cao discloses: wherein the processor, to determine the second point, is further configured to: (See the ¶0034 and Figure 1 mapping set out for Claim 10 above, and additionally Figure 2 via Processor 204-1.) However, Cao does not explicitly disclose, but Zeng discloses: determine a second heading of the (See at least Figure 13, as set out for Claim 10 above, which illustrates the second heading based off of the first initial heading, as well as the left and right points determined on either side of path 20 with safety distances dt, Dbm, etc.) Therefore, it would have been obvious, for the reasons given for Claim 10 [Zeng ¶0034], to modify the system disclosed by Cao in view of Zeng to account for the locations of the first and second points in order to safely maneuver around the obstacles.
Regarding Claim 19, Cao in view of Zeng disclose the system of Claim 18. Furthermore, Cao discloses: wherein the processor is further configured to: determine a position of the left corner of the second track side and a position of the right corner of the second track side based on a center point of the sensor fusion track, a length of the sensor fusion track, a width of the sensor fusion track, and an angle of the second heading (See the ¶0018, ¶0029, and ¶0039 mapping set out for Claim 11 above, and additionally Figure 2 via Processor 204-1.)

Regarding Claim 20, Cao in view of Zeng disclose the system of Claim 18. Furthermore, Cao discloses: wherein the processor is further configured to: adjust a first longitudinal coordinate value of a (See the Figure 4-1, ¶0057, and ¶0061 mapping set out for Claim 12 above.) However, Cao does not explicitly disclose the updating/adjusting based on the midpoint. Nevertheless, it would have been obvious to one of ordinary skill in the art to choose any corresponding point between the tracks in order to compare them.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Cao et al. (US 20220262129 A1) and Zeng et al. (US 20150120138 A1) in view of Sawada (US 20210124052 A1).

Regarding Claim 3, Cao in view of Zeng discloses the method of Claim 2. However, Cao in view of Zeng do not explicitly disclose four potential headings of the LiDAR track.
Nevertheless, Sawada discloses:

wherein the determining of the first heading includes: determining four potential headings of the LiDAR track based on a shape of the LiDAR track; and (See at least Figure 12, which illustrates four potential headings of the contour of an oncoming vehicle detected using LiDAR--see at least ¶0037 via "contour data 4 composed of a plurality of pieces of detection point data 3 indicating the contour of the oncoming vehicle 2 is detected by the LIDAR 20.")

determining, as the first heading, a potential heading including a heading angle with a smallest difference from the heading angle of the vehicle among the four potential headings (See at least ¶0087-¶0088, which describe D0 as the actual advancing direction of Fig. 12 during a situation where the rectangular frame/track deviates from the posture of the target vehicle. Also see Figure 10 and advancing direction Di of Fig. 10, as well as host vehicle 1, which has two illustrated headings x' and y' that can have negative counterparts in the opposite directions as shown.)

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify the method disclosed by Cao and Zeng, in view of Sawada, in order to account for a deviation between the track and the actual posture/orientation of the target object, thus allowing the vehicle to decide whether to perform an evasive maneuver based on more accurate data: "even in a case where the specified advancing direction of the oncoming vehicle 2 is likely to deviate from an actual advancing direction, the advancing direction of the oncoming vehicle 2 can be specified with accuracy by the following process carried out by the specifying unit 123 with focus given to characteristics in which the posture of the rectangular frame 7 deviates from the posture of the oncoming vehicle 2 due to a variation in the specified advancing direction of the oncoming vehicle 2" [Sawada ¶0088].
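
[Editor's note] The Claim 3 limitation is a small selection rule: form four candidate headings from the track's shape and keep the one closest in angle to the vehicle's heading. An illustrative sketch; spacing the candidates 90 degrees apart (the four faces of a rectangular frame) is an inference from the discussion, and the names are hypothetical:

```python
def pick_first_heading(track_heading_deg, vehicle_heading_deg):
    candidates = [(track_heading_deg + k * 90) % 360 for k in range(4)]

    def angular_diff(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)  # wrap-around difference

    return min(candidates, key=lambda h: angular_diff(h, vehicle_heading_deg))

print(pick_first_heading(track_heading_deg=100, vehicle_heading_deg=5))
# 10, the candidate closest in angle to the vehicle's 5-degree heading
```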
Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Cao et al. (US 20220262129 A1) and Zeng et al. (US 20150120138 A1) in view of Yang et al. (US 20190025433 A1).

Regarding Claim 4, Cao in view of Zeng disclose the method of Claim 2. Furthermore, Zeng discloses: wherein determining of the first point includes: determining a position of a left corner of the first track side and a position of a right corner of the first track side (See at least ¶0066 via "If the target vehicle scan points are among the right points, the safe direction of the lane change is 'left' and the non-target-vehicle scan points in the left group are considered as the object points, otherwise, the safe direction is 'right' and the non-target-vehicle scan points in the right group are considered as the object points") wherein the coordinate values are determined based on a first coordinate system (See at least ¶0039 via "global coordinates", which illustrates that there is a coordinate system relating to the LiDAR track).

However, Zeng does not explicitly disclose that the left and right corner positions are based on the specified measurements of coordinate values of a center point, length, width, and heading angle. Nevertheless, Yang--who is directed towards a LiDAR tracking system for a vehicle--discloses:

based on coordinate values of a center point of the LiDAR track, a length of the LiDAR track, a width of the LiDAR track, and an angle of the first heading (See at least ¶0014 via "In at least the example of the tracking system 22 being a LiDAR tracking system, the system 22 is adapted to generally recognize the shape and size of at least a portion of the object or vehicle 28 within the unobstructed view of the tracking system 22. As is generally known in the art of LiDAR tracking systems, the system 22 is further configured to recognize the direction of motion 30 and speed of the moving object 28." *Wherein the size of the target object includes the length and width, and the direction of motion includes the heading angle. Additionally, see ¶0013 via "For tracking purposes, the vehicle 28 may further include a reference point 58 that may be a center point. In the illustrated example, the center point 58 is generally the center of a 'footprint' of the vehicle 28." *Wherein reference point 58/center point is being interpreted as the center coordinate value. Furthermore, see at least Figure 1, which also shows points 54 and 52, thus showing that the vehicle is able to determine corner points/positions.)

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify the method disclosed by Cao and Zeng, in view of Zeng's determination of left and right points among the LiDAR track and Yang's specified measurements of a center point, size, and direction of the target object, in order to determine which side a point of potential collision would be on based on accurate data regarding the target object.

Regarding Claim 15, Cao in view of Zeng disclose the system of Claim 14. Furthermore, Zeng discloses: wherein the processor, to determine the first point, is further configured to determine a position of a left corner of the first track side and a position of a right corner of the first track side (See at least Figure 2 via Processor 34 and ¶0036 and ¶0066, as set out for Claim 4 above). However, Zeng does not explicitly disclose that the left and right corner positions are based on the specified measurements of a center coordinate value, length, width, and heading angle. Nevertheless, Yang--who is directed towards a LiDAR tracking system for a vehicle--discloses: based on coordinate values of a center point of the LiDAR track, a length of the LiDAR track, a width of the LiDAR track, and an angle of the first heading (See the ¶0013-¶0014 and Figure 1 mapping set out for Claim 4 above). Therefore, it would have been obvious, for the reasons given for Claim 4, to modify the system disclosed by Cao and Zeng accordingly.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Cao et al. (US 20220262129 A1) and Zeng et al. (US 20150120138 A1) in view of Math-Only-Math (08/22/2018 - NPL).

Regarding Claim 6, Cao in view of Zeng disclose the method of Claim 5. Furthermore, Zeng discloses: the first coordinate system, and wherein the first coordinate system includes a longitudinal axis and a lateral axis which are perpendicular to each other (See at least ¶0039 via "global coordinates", which illustrates a coordinate system). Additionally, Zeng discloses an algorithm and equations for determining whether to evade collision by maneuvering to the left or right depending on the location of the closest point in at least ¶0056-¶0066. However, Zeng does not explicitly disclose applying the mathematical formula Ax + By + C = 0 to determine where the point is located, as in Claim 6, which recites: wherein the equation is Ax + By + C = 0, A, B, and C being real numbers and x and y being coordinate values.

Nevertheless, applying any mathematical formula, including that of the claimed invention, would have been an obvious design choice for one of ordinary skill in the art because it facilitates known mathematical means for deriving the position of points, as shown by the NPL reference Math-Only-Math. Since the invention failed to provide novel or unexpected results from the usage of the claimed formula, use of any mathematical means, including that of the claimed invention, would be an obvious matter of design choice within the skill of the art. In addition, because both Zeng and NPL Math-Only-Math are directed to determining the position of a point, it would have been obvious for a person with ordinary skill in the art, at the time the invention was filed, with a reasonable expectation of success, to have substituted Zeng's algorithmic method of determining whether to evade a collision on the right side or the left side based on a potential collision point with the straight line equation Ax + By + C = 0, to achieve the predictable result of determining the location/position of a point, which can then be used to determine whether the point is on the left or right side.

Additionally, see the NPL Math-Only-Math Remarks section: "Let Ax + By + C = 0 be a given straight line and P (x1, y1) be a given point. If Ax1 + By1 + C is positive, then the side of the straight line on which the point P lies is called the positive side of the line and the other side is called its negative side."
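
[Editor's note] The quoted NPL passage is the standard sign test on a line equation: the sign of A*x1 + B*y1 + C tells which side of Ax + By + C = 0 the point (x1, y1) falls on. A minimal sketch; whether "positive" maps to the left or the right of the track side depends on the line's orientation, so only the sign is reported here:

```python
def side_of_line(A, B, C, x1, y1):
    value = A * x1 + B * y1 + C
    if value > 0:
        return "positive side"
    if value < 0:
        return "negative side"
    return "on the line"

# Example line x - y + 1 = 0 (A=1, B=-1, C=1) and point (3, 1):
print(side_of_line(1, -1, 1, 3, 1))  # 3 - 1 + 1 = 3 > 0 -> "positive side"
```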
Allowable Subject Matter

Claims 8-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAYLA RENEE DOROS, whose telephone number is (703) 756-1415. The examiner can normally be reached M-F (8-5) EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Lin, can be reached at (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.R.D./ Examiner, Art Unit 3657
/ABBY LIN/ Supervisory Patent Examiner, Art Unit 3657

Prosecution Timeline

Jul 21, 2023: Application Filed
Feb 20, 2025: Non-Final Rejection — §103
Jun 03, 2025: Response Filed
Jul 09, 2025: Final Rejection — §103
Oct 14, 2025: Request for Continued Examination
Oct 22, 2025: Response after Non-Final Action
Jan 06, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12602048: TRAVEL ROUTE GENERATION METHOD FOR AUTONOMOUS VEHICLE AND CONTROL APPARATUS FOR AUTONOMOUS VEHICLE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12576840: VEHICLE CONTROL DEVICE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12570012: ROBOT SYSTEM AND METHOD FOR CREATING VISUAL RECORD OF TASK PERFORMED IN WORKING AREA (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566451: Interactive Detection of Obstacle Status in Mobile Robots (granted Mar 03, 2026; 2y 5m to grant)
Patent 12544925: ROBOT CONTROL SYSTEM, ROBOT CONTROL METHOD AND PROGRAM (granted Feb 10, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 76% (+2.8%)
Median Time to Grant: 2y 6m
PTA Risk: High

Based on 26 resolved cases by this examiner. Grant probability is derived from the career allow rate.
