Prosecution Insights
Last updated: April 19, 2026
Application No. 18/526,665

VEHICLE EQUIPPED WITH A SENSOR INFORMATION FUSION DEVICE AND A MERGING METHOD USING THE SAME

Final Rejection — §101, §102, §103
Filed: Dec 01, 2023
Examiner: ALAM, NAEEM TASLIM
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 2 (Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 8m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 84% — above average (223 granted / 266 resolved; +31.8% vs TC avg)
Interview Lift: +11.2% — moderate, based on resolved cases with interview
Typical Timeline: 2y 8m avg prosecution; 18 applications currently pending
Career History: 284 total applications across all art units

Statute-Specific Performance

§101: 21.1% (-18.9% vs TC avg)
§102: 22.1% (-17.9% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 266 resolved cases

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-20 of US application 18/526,665, filed 12/1/23, were examined. Examiner filed a non-final rejection on 8/12/25. Applicant filed remarks and amendments on 11/12/25. Claims 1, 7, 11, and 17 were amended. Claims 1-20 are presently pending and presented for examination.

Response to Arguments

Regarding the claim objections: applicant's amendments have resolved the objections to claims 7 and 17. The previously given objections to these claims are therefore withdrawn.

Regarding the claim rejections under 35 USC 101: Applicant's arguments filed 11/12/25 (hereinafter referred to as the "Remarks") have been fully considered but they are not persuasive.

Regarding claims 1 and 11, Applicant argues that the claim does not recite a judicial exception because "the complex nature of the amended claims 1 (and similarly claim 11) indicates that the subject matter of the claim 1 cannot practically be performed in a human mind or by a human using a pen and paper and, thus, the subject matter of the claim 1 does not fall under the grouping of mental processes. For example, a human mind is not equipped to generate two or more tracks using the information on the target object provided by the one or more sensors, determine a similarity between the generated two or more tracks, and generate the sensor fusion track by merging at least two tracks among the two or more tracks based on the similarity, as now recited in claim 1." (See at least Page 9 in the Remarks). However, this argument is not persuasive because a human can track an object using two different data sets. For example, a human can mentally or manually look at radar or lidar data representing a position of an object over time, and a human can also mentally or manually look at camera data representing a position of an object over time. The human can then match the two tracks to each other, if they are of the same object, by comparing these two data sets.

Applicant further argues that the claims do not recite a judicial exception because "the human senses can't discern a similarity between two or more tracks generated using information on a same target object provided by one or more sensors" (See at least Page 9 in the Remarks). However, this argument is not persuasive either. Nobody is arguing that a human has eyes that contain built-in radar, lidar, or camera sensors. It is, however, the case that a human can look at datasets or images collected by these kinds of sensors, as discussed earlier, and match up corresponding data points between such sets. Mere data gathering and processing is not enough to amount to more than the judicial exception.

Applicant further argues that the judicial exception is integrated into a practical application because "the claimed vehicle and method generate a sensor fusion track by merging at least two tracks based on similarity and manage the generated sensor fusion track, the claimed vehicle and method may track the object (e.g., the large vehicle) as a single object even when two or more tracks are generated for the object (e.g., the large vehicle). Thus, the independent claims 1 and 11 provide an improvement in accuracy of tracking a target object around a vehicle" (See at least Page 10 in the Remarks). However, this argument is not persuasive because nowhere in the claim does applicant recite "provide an improvement in accuracy of tracking a target object around a vehicle". Instead, applicant recites the much more generic language "manage the generated sensor fusion track" (See at least claim 1). The limitation "manage the generated sensor fusion track" can mean anything, including merely gathering and processing data without any improvement in accuracy. Examiner will not at this moment evaluate whether including such a claim limitation would overcome the 101 rejection, because the claim limitation currently does not exist.

Applicant further argues that the judicial exception is integrated into a practical application because "the subject matter of independent claims 1 and 11 may thus accurately recognize and track the target object around the vehicle even when two or more tracks are generated using sensor information on the single target object around the vehicle." (See at least Page 11 in the Remarks). However, this argument is not persuasive because nowhere in the claim does applicant recite "accurately recognize and track the target object around the vehicle even when two or more tracks are generated using sensor information on the single target object around the vehicle." Examiner will not at this moment evaluate whether including such a claim limitation would overcome the 101 rejection, because the claim limitation currently does not exist.

Applicant further argues eligibility under Step 2B because "In claims 1 and 11, the above additional elements amount to significantly more than the judicial exception itself. As discussed above, the above limitations of claims 1 and 11 show a technical improvement in existing technology." (See at least Page 11 in the Remarks). This is not true. The claims merely recite gathering and comparing data, followed by generic "managing" of that data, which can mean anything, including further generic processing of the data. It is not at all clear how this makes a computer or computing more efficient under Step 2B.

For at least the above stated reasons, claims 1 and 11 and their dependents are not eligible under 101, and the previously given 101 rejections are all maintained.

Examiner's suggestion to help applicant overcome the 101 rejections: it appears that the limitations of claim 4 may be intended to improve computational efficiency of the system by deleting old tracks. If this is the case, then in order to overcome the 101 rejections applicant may argue that claim 4 is eligible under Step 2B of the 101 analysis because the deletion of old tracks improves the computational efficiency of the system. Applicant may then amend the independent claims to include the limitations of claim 4 and thereby overcome the 101 rejections. However, the Office will maintain the 101 rejections until applicant affirmatively makes this argument and performs these amendments.
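For concreteness, the "determine a similarity" and "merge ... based on the similarity" limitations disputed above can be sketched in a few lines. This is only an illustration of a simple per-point position comparison, not the applicant's disclosed algorithm and not Ramakrishnan's method; the sample tracks, the 1.0 m threshold, and the function names are hypothetical.

```python
from statistics import mean

Track = list[tuple[float, float]]  # (x, y) positions of one object over time

def track_similarity(a: Track, b: Track) -> float:
    """Mean Euclidean distance between time-aligned points (lower = more similar)."""
    return mean(
        ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        for (ax, ay), (bx, by) in zip(a, b)
    )

def merge_tracks(a: Track, b: Track, threshold_m: float = 1.0) -> Track | None:
    """Fuse two tracks into a single averaged track if they are similar enough."""
    if track_similarity(a, b) <= threshold_m:  # hypothetical 1 m gate
        return [((ax + bx) / 2, (ay + by) / 2) for (ax, ay), (bx, by) in zip(a, b)]
    return None  # dissimilar: keep as two distinct objects

radar_track = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)]   # e.g., positions from radar
camera_track = [(0.1, 0.0), (1.1, 0.1), (2.1, 0.2)]  # e.g., positions from a camera
fused = merge_tracks(radar_track, camera_track)      # -> one merged track
```

Whether arithmetic of this kind is practically performable in the human mind is precisely the Prong One dispute summarized above.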
Regarding the claim rejections under 35 USC 102 and 103: Applicant's arguments filed 11/12/25 (hereinafter referred to as the "Remarks") have been fully considered but they are not persuasive.

Regarding claims 1 and 11, applicant argues that "Ramakrishnan fails to disclose or render obvious anything about merging two or more tracks generated using information on a target object provided by one or more sensors, much less merging two or more tracks [generated using information on a target object provided by one or more sensors] based on a similarity between the two or more tracks, in the manner now recited in claim 1. Instead, as discussed above, Ramakrishnan merely describes fusing sensor data from different sensors, and identifying and tracking objects based on the fused sensor data from the different sensors. See also Ramakrishnan at par. [0068], for example." (See at least Page 13 in the Remarks). However, this argument is not persuasive in light of the exact portions of Ramakrishnan that applicant has cited: claims 1 and 11 are so broad that the sensor fusion of [Ramakrishnan, 0068] reads perfectly on the claimed "merging two or more tracks generated using information on a target object provided by one or more sensors" (See at least [Ramakrishnan, 0068]). Accordingly, Ramakrishnan does disclose:

A vehicle (See at least Fig. 1 in Rama: Rama discloses a system architecture diagram for an object detection and tracking framework 100 for ensuring reliable operation of autonomously-operated vehicles 102 [See at least Rama, 0018]), comprising:

a sensing device comprising one or more sensors configured to obtain information on a target object present around the vehicle (See at least Fig. 1 in Rama: Rama discloses that the input data 110, as noted above, includes information obtained from one or more detection systems configured either on-board, or proximate to, an autonomously-operated vehicle 102, and that these detection systems comprise one or more sensors that collect different types of data [See at least Rama, 0021]); and

a sensor information fusion device comprising a processor configured to generate or maintain a sensor fusion track using the information on the target object provided by the sensing device (See at least Fig. 1 in Rama: Rama discloses that the object tracking module 144 initially is configured to perform sensor fusion 153 [See at least Rama, 0038]. Rama further discloses that this fusion 154 of sensor data may be performed by calculating an orientation of each object 104 in the current camera frame with respect to radar 114, and correlating objects 104 using those orientations to match objects 104, thereby fusing detections across both fields of view 103 of the camera 112 and the radar 114 [See at least Rama, 0038]. Rama further discloses that these fused detections are used to create and assign tracks 105 for objects 104 to monitor an object's movement across a field of view 103 [See at least Rama, 0038]),

wherein the processor is configured to (See at least Fig. 2 in Rama: Rama discloses a flowchart illustrating a process 200 for performing the object detection and tracking framework 100 [See at least Rama, 0065]):

generate two or more tracks using the information on the target object provided by the one or more sensors (See at least Fig. 2 in Rama: Rama discloses that at step 250, the process 200 uses the pre-processed outputs of the plurality of sensors to fuse 153 the sensor input data 110 relative to the fields of view 103 to initiate other sub-processes of multi-object tracking 154 and filtering 155 to create, match, predict and filter tracks 105 as described above [See at least Rama, 0068]. Also see at least Fig. 1 in Rama and the sensor fusion 153 and orientation-correlation disclosures of [Rama, 0038] quoted above. The orientations used to perform the correlation may be regarded as the one or more tracks),

determine a similarity between the generated two or more tracks (See at least Fig. 2 in Rama at step 250 [Rama, 0068] and the orientation-correlation disclosure of [Rama, 0038], both quoted above. This correlation reads on the claim limitation),

generate the sensor fusion track by merging at least two tracks among the two or more tracks based on the similarity (See at least Fig. 2 in Rama at step 250 [Rama, 0068] and the orientation-correlation disclosure of [Rama, 0038], both quoted above. The fusion therefore occurs after, and based on, the correlation), and

manage the generated sensor fusion track (See at least Fig. 1 in Rama: Rama discloses that the values of the created tracks 105 are then updated by measurements from any of the sensors in the order they arrive into the subsystem of the present disclosure for object tracking in module 144 [See at least Rama, 0038]).

For at least the above stated reasons, claims 1 and 11 and their dependents are not allowable over the prior art of record.

Examiner's suggestion to help applicant overcome the prior art of record: claims 6 and 16 contain allowable subject matter. This is discussed in more detail in the section of this office action titled "Allowable Subject Matter".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claimed invention is directed to the concept of detecting an object, generating multiple possible tracks (trajectories and/or positions) describing the object, determining a level of similarity between the possible tracks, determining a track that is believed to be accurate, and keeping the stored knowledge of that determined track updated. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception and do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Examiner will address each of the claims below. Where appropriate, claims are grouped together for applicant's reading convenience, but it will be appreciated that all claims in each group are rejected.

Regarding claims 1 and 11, applicant recites, mutatis mutandis: A vehicle, comprising: a sensing device comprising one or more sensors configured to obtain information on a target object present around the vehicle; and a sensor information fusion device comprising a processor configured to generate or maintain a sensor fusion track using the information on the target object provided by the sensing device, wherein the processor is configured to: generate two or more tracks using the information on the target object provided by the one or more sensors, determine a similarity between the generated two or more tracks, generate the sensor fusion track by merging at least two tracks among the two or more tracks based on the similarity, and manage the generated sensor fusion track.

Claim 1 recites a vehicle, which is an apparatus. Claim 11 recites a series of steps and therefore is directed to a process. Both of these satisfy Step 1 of the Section 101 analysis. Under the two-prong inquiry, the claim is eligible at revised Step 2A unless: Prong One: the claim recites a judicial exception; and Prong Two: the exception is not integrated into a practical application of the exception.

The above claim steps are directed to the concept of detecting an object, generating multiple possible tracks (trajectories and/or positions) describing the object, determining a level of similarity between the possible tracks, determining a track that is believed to be accurate, and keeping the stored knowledge of that determined track updated, which is an abstract idea that can be performed by a user mentally or manually and falls within the Mental Processes grouping. (Prong One: YES, recites an abstract idea.)

Other than reciting the use of a vehicle, one or more sensors, and a processor, nothing in the claim elements precludes the steps from being performed entirely by a human. The use of one or more computing devices is insufficient to amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. (Prong Two: NO, does not recite additional elements that integrate the abstract idea into a practical application similar to those shown in MPEP 2106.05.)

Under Step 2B, the claimed invention does not recite additional elements that are indicative of an inventive concept. The additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. The vehicle is described simply as a computing environment in paragraph [0012] of the specification. The one or more sensors are described in paragraphs [0064]-[0065] of applicant's specification as merely general purpose sensors. The processor is described as a generic processor in at least paragraph [0053] of the specification. Therefore, these additional limitations are no more than mere instructions to apply the exception using generic computer components. The recitation of generic processors/computers does not take the above limitations out of the mental processes grouping. Moreover, the implementation of the abstract idea on generic computers and/or generic computer components does not add significantly more, similar to how the recitation of the computer in Alice amounted to mere instructions to apply the abstract idea on a generic computer. The claims merely invoke the additional elements as tools that are being used in their ordinary capacity. Further, the courts have found that simply limiting the use of the abstract idea to a particular environment does not add significantly more. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide generic computer implementation.

Examiner's suggestion to help applicant overcome the 101 rejections: it appears that the limitations of claim 4 may be intended to improve computational efficiency of the system by deleting old tracks. If this is the case, then in order to overcome the 101 rejections applicant may argue that claim 4 is eligible under Step 2B of the 101 analysis because the deletion of old tracks improves the computational efficiency of the system. Applicant may then amend the independent claims to include the limitations of claim 4 and thereby overcome the 101 rejections. However, the Office will maintain the 101 rejections until applicant affirmatively makes this argument and performs these amendments.

Regarding claims 2 and 12, applicant recites: The vehicle of claim 1, wherein the processor is further configured to: assign an identifier (ID) and an age of a dynamic object fusion (DOF) track to identify the DOF track in the sensor fusion track; update the age of the DOF track to maintain the DOF track; and when the DOF track satisfies a coasting age, control the DOF track to disappear. However, a human can mentally or manually track the age of a representation of an object.

Regarding claims 3 and 13, applicant recites: The vehicle of claim 2, wherein the age of the DOF track increases as the updating is performed. However, a human can mentally or manually track the age of a representation of an object.
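The track-age bookkeeping recited in claims 2 and 3, together with the coasting/deletion step of claim 4 (addressed next), can likewise be sketched. This is a minimal reading of the claim language, not Kia's actual implementation; the COASTING_AGE and DELETE_AGE values are hypothetical.

```python
from dataclasses import dataclass

COASTING_AGE = 5   # cycles without a matched measurement (hypothetical value)
DELETE_AGE = 10    # preset value at which a coasting track is deleted (hypothetical)

@dataclass
class DofTrack:
    track_id: int          # claim 2: assign an identifier (ID) ...
    age: int = 0           # ... and an age to the DOF track
    coasting: bool = False

def update_tracks(tracks: list[DofTrack]) -> list[DofTrack]:
    """One maintenance cycle: age each track, coast stale ones, drop expired ones."""
    kept = []
    for t in tracks:
        t.age += 1                 # claim 3: age increases as updating is performed
        if t.age >= COASTING_AGE:
            t.coasting = True      # claim 4: convert to a coasting DOF track
        if t.coasting and t.age >= DELETE_AGE:
            continue               # claim 4: delete when the preset value is reached
        kept.append(t)
    return kept
```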
Regarding claims 4 and 14, applicant recites: The vehicle of claim 3, wherein the processor is configured to: convert the DOF track to a coasting DOF track when the coasting age is satisfied, and delete the coasting DOF track when the coasting age reaches a preset value. However, a human can mentally or manually choose to disregard a particular representation of a track after some time.

Regarding claims 5 and 15, applicant recites: The vehicle of claim 4, wherein the processor is configured to: in response to two different DOF tracks overlapping with each other, use a longitudinal position of a smaller DOF track of the two different DOF tracks that are different in a longitudinal direction when a specific condition is satisfied. However, a human can mentally observe that two differently sized representations of objects overlap.

Regarding claims 6 and 16, applicant recites: The vehicle of claim 5, wherein the processor is configured to: in response to the two different DOF tracks overlapping with each other, add a size of a length of the other remaining DOF track, among the two different DOF tracks, to the longitudinal position of the smaller DOF track and output a result therefrom as a single DOF track. However, a human can mentally or manually choose to add together and group two object tracks.

Regarding claims 7 and 17, applicant recites: The vehicle of claim 6, wherein a condition for entry into absorbing and merging the two different DOF tracks is satisfied when: indices of the two different DOF tracks are different, respective IDs of the two different DOF tracks are valid, a status of the two different DOF tracks is not an initial value, and the two different DOF tracks correspond to the vehicle. However, a human can mentally or manually check these characteristics of two tracks to decide if they should instead be regarded as one track.

Regarding claims 8 and 18, applicant recites: The vehicle of claim 7, wherein a condition for application of absorbing and merging the two different DOF tracks is satisfied when: an overlapping area of the two different DOF tracks is greater than 0.1 square meters, a longitudinal position of each of the two different DOF tracks is less than 6 meters (m), respective horizontal positions of the two different DOF tracks are left/right side lanes, a difference in horizontal position between the two different DOF tracks is less than 0.7 m, a longitudinal position of a DOF track to be absorbed of the two different DOF tracks is greater than a longitudinal position of an absorbing DOF track of the two different DOF tracks, a difference between a longitudinal position of a center point of a front bumper of a corner radar (CR) track of one of the two different DOF tracks and a longitudinal position of a center point of a rear bumper of the CR track of the other one of the two different DOF tracks is within 2 m, a difference between the longitudinal position of the center point of the front bumper of the CR track of one of the two different DOF tracks and a longitudinal position of a center point of the rear bumper of a front corner lidar (FCL) track of the other one of the two different DOF tracks is within 1.3 m, and a CR is comprised in each of the two different DOF tracks, an FCL is comprised in each of the two different DOF tracks, or an FCL is only in the DOF track to be absorbed. However, a human can mentally or manually perform the above mathematical calculations to determine if two tracks should be joined into one.
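The "above mathematical calculations" of claims 8 and 18 are threshold comparisons, which a sketch makes explicit. Only a representative subset of the recited conditions is shown; the field names are hypothetical, while the numeric thresholds (0.1 m², 6 m, 0.7 m, 2 m, 1.3 m) come from the claim.

```python
from dataclasses import dataclass

@dataclass
class DofGeom:
    longitudinal_m: float   # longitudinal position of the track
    lateral_m: float        # horizontal position of the track

def merge_applies(absorbing: DofGeom, absorbed: DofGeom,
                  overlap_area_m2: float,
                  cr_bumper_gap_m: float,
                  fcl_bumper_gap_m: float) -> bool:
    """Representative subset of the claim-8 'condition for application' checks."""
    return (
        overlap_area_m2 > 0.1                                    # overlap > 0.1 m^2
        and absorbing.longitudinal_m < 6.0
        and absorbed.longitudinal_m < 6.0                        # each position < 6 m
        and abs(absorbing.lateral_m - absorbed.lateral_m) < 0.7  # lateral gap < 0.7 m
        and absorbed.longitudinal_m > absorbing.longitudinal_m   # absorbed track is ahead
        and abs(cr_bumper_gap_m) <= 2.0                          # CR bumper gap within 2 m
        and abs(fcl_bumper_gap_m) <= 1.3                         # FCL bumper gap within 1.3 m
    )
```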
Regarding claims 9 and 19, applicant recites: The vehicle of claim 8, wherein a condition for execution of absorbing and merging the two different DOF tracks is satisfied when: the absorbing and merging have been completed on the two different DOF tracks, a length of the DOF track to be absorbed is added to an absolute value of a difference in longitudinal position between the two different DOF tracks, the FCL is not present in the absorbing DOF track of the two different DOF tracks, and the FCL is present in the DOF track to be absorbed of the two different DOF tracks, in the presence of the FCL in both the two different DOF tracks, a length of the FCL in the absorbing DOF track is smaller than a length of the FCL in the DOF track to be absorbed, and a longitudinal position of the DOF track on which the absorbing and merging have been completed is set to a longitudinal position of the absorbing DOF track. However, a human can mentally or manually perform the above mathematical calculations to determine if two tracks should be joined into one.

Regarding claims 10 and 20, applicant recites: The vehicle of claim 9, wherein the processor is configured to: when the absorbing and merging have been completed on the two different DOF tracks, delete the DOF track, among the two different DOF tracks, that has been absorbed and merged. However, a human can mentally or manually disregard a previous track after merging two tracks together into a new track.

Examiner's suggestion to help applicant overcome the 101 rejections: it appears that the limitations of claim 4 may be intended to improve computational efficiency of the system by deleting old tracks. If this is the case, then in order to overcome the 101 rejections applicant may argue that claim 4 is eligible under Step 2B of the 101 analysis because the deletion of old tracks improves the computational efficiency of the system. Applicant may then amend the independent claims to include the limitations of claim 4 and thereby overcome the 101 rejections. However, the Office will maintain the 101 rejections until applicant affirmatively makes this argument and performs these amendments.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4 and 11-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ramakrishnan et al. (US 20240409127 A1), hereinafter referred to as Rama. It will be appreciated that, where appropriate, similar claims are grouped together and the narrowest claim in the group is quoted in the rejection, with the understanding that all claims in the group are rejected.
Regarding claims 1 and 11, Rama discloses:

A vehicle (See at least Fig. 1 in Rama: Rama discloses a system architecture diagram for an object detection and tracking framework 100 for ensuring reliable operation of autonomously-operated vehicles 102 [See at least Rama, 0018]), comprising:

a sensing device comprising one or more sensors configured to obtain information on a target object present around the vehicle (See at least Fig. 1 in Rama: Rama discloses that the input data 110, as noted above, includes information obtained from one or more detection systems configured either on-board, or proximate to, an autonomously-operated vehicle 102, and that these detection systems comprise one or more sensors that collect different types of data [See at least Rama, 0021]); and

a sensor information fusion device comprising a processor configured to generate or maintain a sensor fusion track using the information on the target object provided by the sensing device (See at least Fig. 1 in Rama: Rama discloses that the object tracking module 144 initially is configured to perform sensor fusion 153 [See at least Rama, 0038]. Rama further discloses that this fusion 154 of sensor data may be performed by calculating an orientation of each object 104 in the current camera frame with respect to radar 114, and correlating objects 104 using those orientations to match objects 104, thereby fusing detections across both fields of view 103 of the camera 112 and the radar 114 [See at least Rama, 0038]. Rama further discloses that these fused detections are used to create and assign tracks 105 for objects 104 to monitor an object's movement across a field of view 103 [See at least Rama, 0038]),

wherein the processor is configured to (See at least Fig. 2 in Rama: Rama discloses a flowchart illustrating a process 200 for performing the object detection and tracking framework 100 [See at least Rama, 0065]):

generate two or more tracks using the information on the target object provided by the one or more sensors (See at least Fig. 2 in Rama: Rama discloses that at step 250, the process 200 uses the pre-processed outputs of the plurality of sensors to fuse 153 the sensor input data 110 relative to the fields of view 103 to initiate other sub-processes of multi-object tracking 154 and filtering 155 to create, match, predict and filter tracks 105 as described above [See at least Rama, 0068]. Also see at least Fig. 1 in Rama and the sensor fusion 153 and orientation-correlation disclosures of [Rama, 0038] quoted above. The orientations used to perform the correlation may be regarded as the one or more tracks),

determine a similarity between the generated two or more tracks (See at least Fig. 2 in Rama at step 250 [Rama, 0068] and the orientation-correlation disclosure of [Rama, 0038], both quoted above. This correlation reads on the claim limitation),

generate the sensor fusion track by merging at least two tracks among the two or more tracks based on the similarity (See at least Fig. 2 in Rama at step 250 [Rama, 0068] and the orientation-correlation disclosure of [Rama, 0038], both quoted above. The fusion therefore occurs after, and based on, the correlation), and

manage the generated sensor fusion track (See at least Fig. 1 in Rama: Rama discloses that the values of the created tracks 105 are then updated by measurements from any of the sensors in the order they arrive into the subsystem of the present disclosure for object tracking in module 144 [See at least Rama, 0038]).

Regarding claims 2 and 12, Rama discloses: The vehicle of claim 1, wherein the processor is further configured to: assign an identifier (ID) and an age of a dynamic object fusion (DOF) track to identify the DOF track in the sensor fusion track (Rama discloses that unmatched tracks 105 are tracks 105 that do not have any matched detections/measurements during assignment [See at least Rama, 0043]. Rama further discloses that these tracks 105 are still tracked for a limited period of time before they are deleted from the list of tracks 105, unless they get a matched measurement within that period of time [See at least Rama, 0043]. It will therefore be appreciated that each newly generated track 105 is identifiable and has an age that is monitored by the system); update the age of the DOF track to maintain the DOF track (See the unmatched-track disclosures of [Rama, 0043] quoted above. It will be appreciated that the age is updated in order to determine how long a track 105 has been unmatched); and when the DOF track satisfies a coasting age, control the DOF track to disappear (See the unmatched-track disclosures of [Rama, 0043] quoted above: unmatched tracks 105 are deleted from the list of tracks 105 unless they get a matched measurement within the limited period of time).

Regarding claims 3 and 13, Rama discloses: The vehicle of claim 2, wherein the age of the DOF track increases (See the unmatched-track disclosures of [Rama, 0043] quoted above. It will be appreciated that the age is updated (i.e., increased) in order to determine how long a track 105 has been unmatched) as the updating is performed (Rama discloses that matched detections are detections/measurements that match an existing track 105 during assignment [See at least Rama, 0043]. Rama further discloses that these are the detections that are used by the object tracking module 144 to update existing tracks 105 with the latest values [See at least Rama, 0043]).

Regarding claims 4 and 14, Rama discloses: The vehicle of claim 3, wherein the processor is configured to: convert the DOF track to a coasting DOF track when the coasting age is satisfied (See the unmatched-track disclosures of [Rama, 0043] quoted above. The age of the track may be regarded as applicant's "coasting age", and it may be satisfied when Rama's "period of time" elapses, thus turning the track into a track labeled for deletion; a track so labeled may be regarded as applicant's "coasting DOF track"), and delete the coasting DOF track when the coasting age reaches a preset value (See the unmatched-track disclosures of [Rama, 0043] quoted above. The age of the track may be regarded as reaching applicant's "preset value" when Rama's "period of time" elapses, thus turning the track into a deleted track).

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ramakrishnan et al. (US 20240409127 A1) in view of Hiramatsu et al. (US 20210261162 A1), hereinafter referred to as Rama and Hiramatsu, respectively.

Regarding claims 5 and 15, Rama discloses: The vehicle of claim 4, wherein the processor is configured to: detect a scenario where two different DOF tracks overlap with each other (Rama discloses that if the measurement is from the camera 112, the assignment metric is an Intersection over Union (IoU) mathematical model, which allows for an evaluation of how similar the measurement's bounding box is to the track's bounding box [See at least Rama, 0041]. Rama further discloses that this is performed to compare the ratio of the area where the two bounding boxes overlap to the total combined area of the two bounding boxes [See at least Rama, 0041]. Rama further discloses that, for such camera-provided measurements, an embodiment of the present disclosure loops through each object 104 in the measurement and calculates the IoU over existing tracks 105, and assigns them to corresponding tracks 105 if their IoU is, for example, greater than 0.5 [See at least Rama, 0041]).

However, Rama does not explicitly teach the vehicle wherein, in response to two different DOF tracks overlapping with each other, the processor is configured to use a longitudinal position of a smaller DOF track of the two different DOF tracks that are different in a longitudinal direction when a specific condition is satisfied.

However, Hiramatsu does teach a vehicle wherein, in response to two different DOF tracks (See at least Fig. 1 and Fig. 4A in Hiramatsu: Hiramatsu teaches that the surrounding environment detecting unit 12 acquires data indicating a relative position to the host-vehicle 20, a vehicle body length in a traveling direction, and a vehicle body length in a vehicle width direction of a first other vehicle 21 and a second other vehicle 22 that travel on other roads that intersect the road on which the host-vehicle 20 travels [See at least Hiramatsu, 0029]) overlapping with each other, the processor is configured to use a longitudinal position of a smaller DOF track of the two different DOF tracks that are different in a longitudinal direction when a specific condition is satisfied (See at least Fig. 4A in Hiramatsu: Hiramatsu teaches that the shielding time estimating unit 14 estimates the shielding time by using a vehicle body length (L1) of the second other vehicle 22 in the traveling direction, a lane width (D1) of a lane on which the first other vehicle 21 travels, and a lane width (D2) of a lane on which the second other vehicle 22 travels [See at least Hiramatsu, 0039]. It will be appreciated from Fig. 4A that the length of the other vehicle 22, whose track overlaps the track of the other vehicle 21, is bigger than the width of the track of the oncoming vehicle 21, so the smaller width of the track of vehicle 21 and the bigger length of the intersecting track of vehicle 22 are both "used").

Both Hiramatsu and Rama teach methods for tracking surrounding objects. However, only Hiramatsu explicitly teaches where, in a situation where one object has a track that is larger in a particular dimension and another object has a track which is smaller in the particular dimension, both tracks may be used to determine how long an ego vehicle may be shielded from an oncoming vehicle by another vehicle's position. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the object tracking method of Rama to also include where, in a situation where one object has a track that is larger in a particular dimension and another object has a track which is smaller in the particular dimension, both tracks may be used to determine how long an ego vehicle may be shielded from an oncoming vehicle by another vehicle's position. Doing so improves safety for the ego vehicle.

Allowable Subject Matter

Claims 6-10 and 16-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The closest prior art of record is Ramakrishnan et al. (US 20240409127 A1) in view of Hiramatsu et al. (US 20210261162 A1), hereinafter referred to as Ramakrishnan and Hiramatsu, respectively. The following is a statement of reasons for the indication of allowable subject matter:

Regarding claims 6 and 16, Ramakrishnan in view of Hiramatsu teaches the vehicle of claim 5 and the method of claim 15. However, none of the prior art of record, taken either alone or in combination, teaches or suggests the vehicle or method wherein the processor is configured to: in response to the two different DOF tracks overlapping with each other, add a size of a length of the other remaining DOF track, among the two different DOF tracks, to the longitudinal position of the smaller DOF track and output a result therefrom as a single DOF track. Ramakrishnan is silent as to any tracks overlapping other tracks. While Hiramatsu does teach where one track may overlap another track (See at least Fig. 4A in Hiramatsu and [Hiramatsu, 0039]), Hiramatsu does not teach or suggest adding a length of a bigger overlapping track to a longitudinal position of a smaller track in order to merge the two tracks into a single track. None of the prior art of record remedies this deficiency in Ramakrishnan and Hiramatsu. For at least the above stated reasons, claims 6 and 16 contain allowable subject matter.

Regarding claims 7-10 and 17-20, these claims also contain allowable subject matter at least by virtue of their dependence from claims 6 and 16, respectively.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAEEM T ALAM whose telephone number is (571) 272-5901. The examiner can normally be reached M-F, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, FADEY JABR, can be reached at (571) 272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NAEEM TASLIM ALAM/
Examiner, Art Unit 3668

Prosecution Timeline

Dec 01, 2023
Application Filed
Aug 08, 2025
Non-Final Rejection — §101, §102, §103
Nov 12, 2025
Response Filed
Dec 27, 2025
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594833
DISPLAY CONTROL DEVICE
2y 5m to grant — Granted Apr 07, 2026
Patent 12589889
SIMULATION ARCHITECTURE FOR SAFETY TESTING OF AIRCRAFT MONITORING SOFTWARE
2y 5m to grant — Granted Mar 31, 2026
Patent 12583562
MARINE MOUNT ANGLE CALIBRATION SYSTEM AND METHOD
2y 5m to grant — Granted Mar 24, 2026
Patent 12573300
VEHICLE DETECTION SYSTEM AND METHOD FOR DETECTING A TARGET VEHICLE IN A DETECTION AREA LOCATED BEHIND A SUBJECT VEHICLE
2y 5m to grant — Granted Mar 10, 2026
Patent 12570322
VEHICLE DETECTION SYSTEM AND METHOD FOR DETECTING A TARGET VEHICLE IN A DETECTION AREA LOCATED BEHIND A SUBJECT VEHICLE
2y 5m to grant — Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 95% (+11.2%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 266 resolved cases by this examiner. Grant probability derived from career allow rate.
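A quick check of how these figures appear to be derived, assuming (the page does not state the exact formula) that grant probability is the raw career allow ratio and that the interview figure is an additive lift:

```python
granted, resolved = 223, 266                 # examiner's career numbers above
allow_rate = granted / resolved              # 0.838... -> displayed as 84%
with_interview = allow_rate + 0.112          # assumed additive +11.2% lift -> 0.950
print(f"{allow_rate:.0%} / {with_interview:.0%}")  # 84% / 95%
```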
