Prosecution Insights
Last updated: April 19, 2026
Application No. 18/155,735

DATA FUSION FOR ENVIRONMENTAL MODEL GENERATION

Non-Final OA §103
Filed: Jan 18, 2023
Examiner: PERVIN, NUZHAT
Art Unit: 3648
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Autobrains Technologies Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 95%

Examiner Intelligence

Grants 80% of resolved cases (above average).
Career Allow Rate: 80% (394 granted / 490 resolved; +28.4% vs TC avg)
Interview Lift: +14.3% (moderate lift; resolved cases with interview)
Typical Timeline: 3y 0m average prosecution; 34 currently pending
Career History: 524 total applications across all art units
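
The counts above allow a quick sanity check of the headline figures. The sketch below is a minimal illustration in Python, assuming the Career Allow Rate is simply granted divided by resolved and that "vs TC avg" is a percentage-point difference; the dashboard's exact methodology is not stated here.

# Minimal sanity-check sketch. Assumptions (not stated by the dashboard):
# the allow rate is granted / resolved, and "vs TC avg" is a simple
# percentage-point difference.

granted = 394
resolved = 490

allow_rate = granted / resolved                        # ~0.804, displayed as 80%
vs_tc_avg_points = 28.4                                # as displayed

implied_tc_avg = allow_rate * 100 - vs_tc_avg_points   # ~52.0% under this assumption

print(f"Career allow rate: {allow_rate:.1%}")          # Career allow rate: 80.4%
print(f"Implied TC average: {implied_tc_avg:.1f}%")    # Implied TC average: 52.0%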

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 490 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Examiner acknowledges no foreign priority is claimed.

Information Disclosure Statement
The information disclosure statement(s) (IDS) submitted on 2/8/2023 and 2/8/2023 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) is/are being considered if signed and initialed by the Examiner.

Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/25/2025 has been entered.

Response to Arguments
Applicant's arguments filed 11/25/2025 have been fully considered but they are not persuasive.

Argument: Regarding independent claims 1, 11 and 20, the applicant argues that the claimed method specifically addresses the problem of distance ambiguity in visual detection by using radar information to solve that distance ambiguity through searching for radar-detection-based objects within regions of interest that are defined by both the estimated visual-detection-based locations and the distance ambiguity of those locations. This represents a specific technical approach to improving the accuracy of sensor fusion by compensating for the inherent distance inaccuracy of visual detection systems through targeted radar searching within defined regions of interest. While Zhong describes general camera radar fusion processing, Zhong does not teach or suggest the specific approach of solving distance ambiguity by searching for radar-detection-based objects within regions of interest that are specifically defined by estimated visual-detection-based locations and the distance ambiguity of those locations. Similarly, while Minemura describes position error regions for radar and image data, Minemura does not teach the claimed method of solving distance ambiguity through targeted radar searching within regions of interest defined by visual-detection-based locations and their associated distance ambiguity. Bills describes using infrared cameras and radar systems for range resolution, but Bills does not teach the specific claimed approach of solving distance ambiguity of estimated visual-detection-based locations by searching for radar-detection-based objects within regions of interest defined by the estimated visual-detection-based locations and the distance ambiguity of the estimated visual-detection-based locations.

Response: The examiner disagrees. Claims 1, 11 and 20 are rejected with Zhong et al. (US 2021/0041555 A1), in view of Bills et al. (US 10,445,896 B1).
Zhong (’55) describes that system-on-a-chip (SoC) includes one or more processors coupled to a video camera and to a radar sensor…the one or more processors are configured to receive, from the video camera, video data receive, from the radar sensor, radar data…the one or more processors are also configured to perform image processing, filtering, and object classification and tracking based on the video data, to generate visual object classification data and perform visual motion estimation on the video data, to generate a vision motion estimation vector…the one or more processors are configured to perform radar signal processing on the radar data, to generate processed radar object detection data and perform camera radar alignment on the radar data, to generate aligned radar object detection data… the one or more processors are configured to perform camera radar fusion on the aligned radar object detection data, the vision motion estimation vector, and the visual object classification data, to generate camera radar fusion data (paragraph 6); in the block 414, the camera radar fusion system performs radar data processing on the radar data acquired in the block 412…radar signal processing, for example, range fast Fourier transform (FFT), Doppler FFT, constant false alarm rate (CFAR) detection, direction of arrival (DOA) estimation, region of interest (ROI) clustering, and object tracking is performed…radar object detection data is produced…the camera radar fusion system performs camera radar alignment, for example using image plane mapping…a timestamp may be associated with the processed radar data (paragraph 38). Bills et al. (‘896) describes that the sensing system 100 and the method 600, infrared cameras 104 may be employed not only to determine the lateral or spatial location of objects relative to some location, but to determine within some level of uncertainty the distance of that location from the infrared cameras 104 (column 16 lines 32-37); the infrared camera 704, which typically may facilitate high spatial resolution but less distance or depth resolution, is used to generate an ROI for each detected object in a particular scene…the distance to the object in the ROI with less range uncertainty (column 20 line 67-column 21 line 9); FIG. 9 is a flow diagram of an example method 900 of employing an infrared camera (e.g., the infrared camera 704 of Figures 7A and 7B) and a lidar system or radar system (e.g., the lidar system 706 of FIG. 7A or the radar system 707 of FIG. 7B) for fine range resolution….an ROI and a first range of distance to the ROI is identified using the infrared camera (operation 902)…this may be accomplished using image recognition algorithms or DNN that have been trained to detect and identify the objects of interest…the ROI is then probed using the lidar system or the radar system to refine the first range to a second range of distance to the ROI having a lower measurement uncertainty (operation 904)… consequently…the sensing system 700 of FIG. 7A or the sensing system of FIG. 
7B may employ the infrared camera 704 and the steerable lidar system 706 and/or the radar system 707 in combination to provide significant resolution regarding the location of objects both radially (e.g., in a z direction) and spatially, or laterally and vertically (e.g., in an x, y plane orthogonal to the z direction), beyond the individual capabilities of either the infrared camera 704, the lidar system 706, or the radar system 707…the infrared camera 704, which typically may facilitate high spatial resolution but less distance or depth resolution, is used to generate an ROI for each detected object in a particular scene…the steerable lidar system 706 or the radar system 707, which typically provides superior distance or radial resolution but less spatial resolution, may then probe each of these ROIs individually, as opposed to probing the entire scene in detail, to more accurately determine the distance to the object in the ROI with less range uncertainty (Column 20 line 46- column 21 line 9). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. For applicant’s benefit portions of the cited reference(s) have been cited to aid in the review of the rejection(s). While every attempt has been made to be thorough and consistent within the rejection it is noted that the PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS. See MPEP 2141.02 VI. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1, 3-9 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhong et al. (US 2021/0041555 A1), in view of Bills et al. (US 10,445,896 B1). Regarding claim 1, Zhong et al. 
(‘555) discloses “a method for fusion of radar and visual information (paragraph 2: system and method for camera and radar sensing, and in particular, to a system and method for camera radar fusion; paragraph 38: the camera radar fusion system), the method comprises: obtaining visual information and radar information about a three dimensional (3D) space located within a field of view of a camera that acquired the visual information and within a field of view of a radar that acquired the radar information (paragraph 38: radar data processing on the radar data acquired in the block 412…radar signal processing, for example, range fast Fourier transform (FFT), Doppler FFT, constant false alarm rate (CFAR) detection, direction of arrival (DOA) estimation, region of interest (ROI) clustering, and object tracking is performed…radar object detection data is produce; paragraph 81: the DSP simultaneously tracks objects in 2D space and in 3D space…the objects are modeled in 3D space using a six-parameter kinematic model and the objects are modeled on the 2D image plane as a bounding area); finding, based on the visual information, estimated visual-detection-based (VDB) objects and estimated VDB locations of the estimated VDB objects within the 3D space (paragraph 5: method for camera radar fusion includes receiving, by a processor, a vision motion estimation vector for an object and receiving, by the processor, visual object classification data for the object, the visual object classification data obtained by a video camera); determining hybrid-detection-based (HDB) objects and HDB locations of the HDB objects, based on (i) the radar information, (ii) the estimated VDB objects, and (iii) the estimated VDB locations of the VDB objects (paragraph 6: system-on-a-chip (SoC) includes one or more processors coupled to a video camera and to a radar sensor…the one or more processors are configured to receive, from the video camera, video data receive, from the radar sensor, radar data…the one or more processors are also configured to perform image processing, filtering, and object classification and tracking based on the video data, to generate visual object classification data and perform visual motion estimation on the video data, to generate a vision motion estimation vector…the one or more processors are configured to perform radar signal processing on the radar data, to generate processed radar object detection data and perform camera radar alignment on the radar data, to generate aligned radar object detection data… the one or more processors are configured to perform camera radar fusion on the aligned radar object detection data, the vision motion estimation vector, and the visual object classification data, to generate camera radar fusion data; paragraph 38: in the block 414, the camera radar fusion system performs radar data processing on the radar data acquired in the block 412…radar signal processing, for example, range fast Fourier transform (FFT), Doppler FFT, constant false alarm rate (CFAR) detection, direction of arrival (DOA) estimation, region of interest (ROI) clustering, and object tracking is performed…radar object detection data is produced…the camera radar fusion system performs camera radar alignment, for example using image plane mapping…a timestamp may be associated with the processed radar data).” Zhong et al. 
(‘555) does not explicitly disclose “the finding of the estimated VDB locations exhibits a distance inaccuracy that are represented by a distance ambiguity that is either a certain percent of the distance inaccuracy or certain amount of distance; determining the HDB objects and HDB locations comprises solving the distance ambiguity of the estimated VDB locations using the radar information by searching for radar-detection-based (RDB) objects within regions of interest defined by the estimated VDB locations and the distance ambiguity of the estimated VDB locations; wherein the distance ambiguity is a range inaccuracy associated with estimation of locations that are based solely estimated VDB locations.” Bills et al. (‘896) relates to radar and camera sensing. Bills et al. (‘896) teaches “the finding of the estimated VDB locations exhibits a distance inaccuracy that are represented by a distance ambiguity that is either a certain percent of the distance inaccuracy or certain amount of distance (column 16 lines 32-37: the sensing system 100 and the method 600, infrared cameras 104 may be employed not only to determine the lateral or spatial location of objects relative to some location, but to determine within some level of uncertainty the distance of that location from the infrared cameras 104; column 20 line 67-column 21 line 9: the infrared camera 704, which typically may facilitate high spatial resolution but less distance or depth resolution, is used to generate an ROI for each detected object in a particular scene…the distance to the object in the ROI with less range uncertainty); determining the HDB objects and HDB locations comprises solving the distance ambiguity of the estimated VDB locations using the radar information by searching for radar-detection-based (RDB) objects within regions of interest defined by the estimated VDB locations and the distance ambiguity of the estimated VDB locations; wherein the distance ambiguity is a range inaccuracy associated with estimation of locations that are based solely estimated VDB locations (Column 20 line 46- column 21 line 9: FIG. 9 is a flow diagram of an example method 900 of employing an infrared camera (e.g., the infrared camera 704 of Figures 7A and 7B) and a lidar system or radar system (e.g., the lidar system 706 of FIG. 7A or the radar system 707 of FIG. 7B) for fine range resolution….an ROI and a first range of distance to the ROI is identified using the infrared camera (operation 902)…this may be accomplished using image recognition algorithms or DNN that have been trained to detect and identify the objects of interest…the ROI is then probed using the lidar system or the radar system to refine the first range to a second range of distance to the ROI having a lower measurement uncertainty (operation 904)…consequently…the sensing system 700 of FIG. 7A or the sensing system of FIG. 
7B may employ the infrared camera 704 and the steerable lidar system 706 and/or the radar system 707 in combination to provide significant resolution regarding the location of objects both radially (e.g., in a z direction) and spatially, or laterally and vertically (e.g., in an x, y plane orthogonal to the z direction), beyond the individual capabilities of either the infrared camera 704, the lidar system 706, or the radar system 707…the infrared camera 704, which typically may facilitate high spatial resolution but less distance or depth resolution, is used to generate an ROI for each detected object in a particular scene…the steerable lidar system 706 or the radar system 707, which typically provides superior distance or radial resolution but less spatial resolution, may then probe each of these ROIs individually, as opposed to probing the entire scene in detail, to more accurately determine the distance to the object in the ROI with less range uncertainty).” It would have been obvious to one of ordinary skill-in-the-art before the effective filing date of the claimed invention to modify the method of Zhong et al. (‘555) with the teaching of Bills et al. (‘896) for enhanced target detection (Bills et al. (‘896) – column 4 lines 15-29). In addition, both of the prior art references, (Zhong et al. (‘555) and Bills et al. (‘896)) teach features that are directed to analogous art and they are directed to the same field of endeavor, such as, fusing radar and camera detection for object detection. Regarding claim 3, which is dependent on independent claim 1, Zhong et al. (‘555)/Bills et al. (‘896) discloses the method of claim 1. Zhong et al. (‘555) further discloses “displaying data (paragraph 42: the camera radar fusion system outputs the fusion results obtained in the block 420. Fusion visualization, such as radar drawing, fusion drawing, and real-time system-on-a-chip (SoC) loading may be performed…the results may be output on a visual screen using a display driver; paragraph 61: in the block 234, the display driver displays the data from the 214, from the block 224, and from the block 230…a user may view the radar data).” Zhong et al. (‘555) does not explicitly disclose displaying “the regions of interest as spatial regions in the 3D space that extend radially from the camera along directions corresponding to the estimated VDB locations, with radial extents determined by the distance ambiguity.” Bills et al. (‘896) teaches displaying “the regions of interest as spatial regions in the 3D space that extend radially from the camera along directions corresponding to the estimated VDB locations, with radial extents determined by the distance ambiguity (column 2 lines 54-57: identifying, by a processor, a region of interest corresponding to an object and a first range of distance using a camera and a light source, and probing, by the processor, the region of interest; column 20 lines 50-55: an ROI and a first range of distance to the ROI is identified using the infrared camera (operation 902)…this may be accomplished using image recognition algorithms or DNN that have been trained to detect and identify the objects of interest; column 20 lines 59-66: FIG. 7A or the sensing system of FIG. 
7B may employ the infrared camera 704…regarding the location of objects both radially (e.g., in a z direction) and spatially, or laterally and vertically (e.g., in an x, y plane orthogonal to the z direction), beyond the individual capabilities of either the infrared camera 704).” It would have been obvious to one of ordinary skill-in-the-art before the effective filing date of the claimed invention to modify the method of Zhong et al. (‘555)/Minemura et al. (‘171) with the teaching of Bills et al. (‘896) for enhanced target detection (Bills et al. (‘896) – column 4 lines 15-29). In addition, both of the prior art references, (Zhong et al. (‘555) and Bills et al. (‘896)) teach features that are directed to analogous art and they are directed to the same field of endeavor, such as, fusing radar and camera detection for object detection. Regarding claim 4, which is dependent on independent claim 1, Zhong et al. (‘555)/Bills et al. (‘896) discloses the method of claim 1. Zhong et al. (‘555) further discloses “the determining of the HDB objects and the HDB locations of the HDB objects comprises finding radar-detection-based (RDB) objects (paragraph 38: in the block 414, the camera radar fusion system performs radar data processing on the radar data acquired in the block 412…radar signal processing, for example, range fast Fourier transform (FFT), Doppler FFT, constant false alarm rate (CFAR) detection, direction of arrival (DOA) estimation, region of interest (ROI) clustering, and object tracking is performed…radar object detection data is produced…the camera radar fusion system performs camera radar alignment, for example using image plane mapping…a timestamp may be associated with the processed radar data).” Regarding claim 5, which is dependent on claim 4, Zhong et al. (‘555)/Bills et al. (‘896) discloses the method of claim 4. Zhong et al. (‘555) further discloses “pairing RDB objects with VDB objects to provide the HDB objects and the HDB locations of the HDB objects (paragraph 38: in the block 414, the camera radar fusion system performs radar data processing on the radar data acquired in the block 412… radar signal processing, for example, range fast Fourier transform (FFT), Doppler FFT, constant false alarm rate (CFAR) detection, direction of arrival (DOA) estimation, region of interest (ROI) clustering, and object tracking is performed… radar object detection data is produced…the camera radar fusion system performs camera radar alignment, for example using image plane mapping…a timestamp may be associated with the processed radar data).” Regarding claim 6, which is dependent on claim 4, Zhong et al. (‘555)/Bills et al. (‘896) discloses the method of claim 4. Zhong et al. (‘555) further discloses “the finding of the RDB objects is preceded by aggregating radar points from different points of time within a time window (paragraph 79: In the block 212, the DSP performs object tracking, for example Kalman tracking, on recognized objects across consecutive frames…the DSP determines visual object classification data, for example bounding 214es for the objects being tracked…in Kalman tracking, an optimal recursive Bayesian filter is used, for linear functions subjected to Gaussian noise…the DSP uses a series of measurements observed over time, containing noise and other inaccuracies, to produce estimates of unknown variables that are more precise than estimates based on a single measurement. 
Bayesian interference is used, and a joint probability distribution over the variables for each timeframe is estimated by the DSP…in the prediction step, the Kalman filter produces estimates of the current state of variables and the uncertainties of these variables…after the outcome of the next measurement is observed, the estimates are updated using a weighted average, with more weight given to estimates with higher certainty).” Regarding claim 7, which is dependent on claim 4, Zhong et al. (‘555)/Bills et al. (‘896) discloses the method of claim 4. Zhong et al. (‘555) further discloses “the finding of the RDB objects comprises aggregating radar points from different points of time within a time window (paragraph 79: In the block 212, the DSP performs object tracking, for example Kalman tracking, on recognized objects across consecutive frames…the DSP determines visual object classification data, for example bounding 214es for the objects being tracked…in Kalman tracking, an optimal recursive Bayesian filter is used, for linear functions subjected to Gaussian noise…the DSP uses a series of measurements observed over time, containing noise and other inaccuracies, to produce estimates of unknown variables that are more precise than estimates based on a single measurement. Bayesian interference is used, and a joint probability distribution over the variables for each timeframe is estimated by the DSP…in the prediction step, the Kalman filter produces estimates of the current state of variables and the uncertainties of these variables…after the outcome of the next measurement is observed, the estimates are updated using a weighted average, with more weight given to estimates with higher certainty).” Regarding claim 8, which is dependent on independent claim 1, Zhong et al. (‘555)/Bills et al. (‘896) discloses the method of claim 1. Zhong et al. (‘555) further discloses “responding to the determining of the HDB objects (paragraph 42: In the block 422, the camera radar fusion system outputs the fusion results obtained in the block 420…fusion visualization, such as radar drawing, fusion drawing, and real-time system-on-a-chip (SoC) loading may be performed…the results may be output on a visual screen using a display driver…the results may be directly used by another function in an advanced driving assistant system (ADAS).” Regarding claim 9, which is dependent on claim 8, Zhong et al. (‘555)/Bills et al. (‘896) discloses the method of claim 4. Zhong et al. (‘555) further discloses “the responding comprises at least one of autonomously driving the vehicle based on the determining or performing an advanced driver assistance system (ADAS) operation (paragraph 42: In the block 422, the camera radar fusion system outputs the fusion results obtained in the block 420…fusion visualization, such as radar drawing, fusion drawing, and real-time system-on-a-chip (SoC) loading may be performed…the results may be output on a visual screen using a display driver…the results may be directly used by another function in an advanced driving assistant system (ADAS).” Regarding claim 11, which is a corresponding non-transitory computer readable medium of independent claim 1, Zhong et al. (‘555)/Bills et al. (‘896) discloses all the claimed invention as shown above for claim 1. Zhong et al. 
(‘555) further discloses “a non-transitory computer readable medium for fusion of radar and visual information, the non-transitory computer readable medium stores instructions (paragraph 4: method for camera radar fusion includes receiving, by the processor, radar object detection data for an object and modeling, by a processor, a three dimensional (3D) physical space kinematic model, including updating 3D coordinates of the object, to generate updated 3D coordinates of the object, in response to receiving the radar object detection data for the object…the method also includes transforming, by the processor, the updated 3D coordinates of the object to updated two dimensional (2D) coordinates of the object, based on a 2D-3D calibrated mapping table and modeling, by the processor, a two dimensional (2D) image plane kinematic model, while modeling the 3D physical space kinematic model, where modeling the 2D image plane kinematic model includes updating coordinates of the object based on the updated 2D coordinates of the object).” Regarding claim 12, which is dependent on independent claim 11, Zhong et al. (‘555)/Bills et al. (‘896) discloses all non-transitory computer readable medium of claim 11. Zhong et al. (‘555) further discloses “displaying, within an enlarged RAR display (paragraph 42: the camera radar fusion system outputs the fusion results obtained in the block 420. Fusion visualization, such as radar drawing, fusion drawing, and real-time system-on-a-chip (SoC) loading may be performed…the results may be output on a visual screen using a display driver; paragraph 61: in the block 234, the display driver displays the data from the 214, from the block 224, and from the block 230…a user may view the radar data).” Zhong et al. (‘555) does not explicitly disclose “the regions of interest as spatial regions in the 3D space that extend radially from the camera along directions corresponding to the estimated VDB locations, with radial extents determined by the distance.” Bills et al. (‘896) relates to radar and camera sensing. Bills et al. (‘896) teaches “the regions of interest as spatial regions in the 3D space that extend radially from the camera along directions corresponding to the estimated VDB locations, with radial extents determined by the distance (column 2 lines 54-57: identifying, by a processor, a region of interest corresponding to an object and a first range of distance using a camera and a light source, and probing, by the processor, the region of interest; column 20 lines 50-55: an ROI and a first range of distance to the ROI is identified using the infrared camera (operation 902)…this may be accomplished using image recognition algorithms or DNN that have been trained to detect and identify the objects of interest; column 20 lines 59-66: FIG. 7A or the sensing system of FIG. 7B may employ the infrared camera 704…regarding the location of objects both radially (e.g., in a z direction) and spatially, or laterally and vertically (e.g., in an x, y plane orthogonal to the z direction), beyond the individual capabilities of either the infrared camera 704).” It would have been obvious to one of ordinary skill-in-the-art before the effective filing date of the claimed invention to modify the method of Zhong et al. (‘555) with the teaching of Bills et al. (‘896) for enhanced target detection (Bills et al. (‘896) – column 4 lines 15-29). In addition, both of the prior art references, (Zhong et al. (‘555) and Bills et al. 
(‘896)) teach features that are directed to analogous art and they are directed to the same field of endeavor, such as, fusing radar and camera detection for object detection. Regarding claim 13, which is dependent on independent claim 11, Zhong et al. (‘555)/Bills et al. (‘896) discloses the non-transitory computer readable medium of claim 11. Zhong et al. (‘555) further discloses “the determining of the HDB objects and the HDB locations of the HDB objects comprises searching for radar-detection-based (RDB) objects within regions of interest defined by the estimated VDB locations of the estimated VDB objects and the distance ambiguity of the estimated VDB locations of the VDB objects (paragraph 38: in the block 414, the camera radar fusion system performs radar data processing on the radar data acquired in the block 412… radar signal processing, for example, range fast Fourier transform (FFT), Doppler FFT, constant false alarm rate (CFAR) detection, direction of arrival (DOA) estimation, region of interest (ROI) clustering, and object tracking is performed …radar object detection data is produced…the camera radar fusion system performs camera radar alignment, for example using image plane mapping…a timestamp may be associated with the processed radar data).” Regarding claim 14, which is dependent on independent claim 11, and which is a corresponding non-transitory computer readable medium claim of method claim 4, Zhong et al. (‘555)/Bills et al. (‘896) discloses all the claimed invention as shown above for claim 4. Regarding claim 15, which is dependent on claim 14, and which is a corresponding non-transitory computer readable medium claim of method claim 5, Zhong et al. (‘555)/Bills et al. (‘896) discloses all the claimed invention as shown above for claim 5. Regarding claim 16, which is dependent on claim 14, and which is a corresponding non-transitory computer readable medium claim of method claim 6, Zhong et al. (‘555)/Bills et al. (‘896) discloses all the claimed invention as shown above for claim 6. Regarding claim 17, which is dependent on claim 14, and which is a corresponding non-transitory computer readable medium claim of method claim 7, Zhong et al. (‘555)/Bills et al. (‘896) discloses all the claimed invention as shown above for claim 7. Regarding claim 18, which is dependent on independent claim 11, and which is a corresponding non-transitory computer readable medium claim of method claim 8, Zhong et al. (‘555)/Bills et al. (‘896) discloses all the claimed invention as shown above for claim 8. Regarding claim 19, which is dependent on claim 17, and which is a corresponding non-transitory computer readable medium claim of method claim 9, Zhong et al. (‘555)/Bills et al. (‘896) discloses all the claimed invention as shown above for claim 9. Regarding claim 20, which is dependent on claim 17, and which is a corresponding non-transitory computer readable medium claim of method claim 10, Zhong et al. (‘555)/Bills et al. (‘896) discloses all the claimed invention as shown above for claim 10. Regarding independent claim 21, which is a corresponding system claim of independent method claim 1, Zhong et al. (‘555)/Bills et al. (‘896) discloses all the claimed invention as shown above for claim 1. Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Zhong et al. (US 2021/0041555 A1)/Bills et al. (US 10,445,896 B1), in view of Ahmed et al. (US 2020/0370920 A1). Regarding claim 2, which is dependent on independent claim 1, Zhong et al. (‘555)/Bills et al. 
(‘896) discloses the method of claim 1. Zhong et al. (‘555) further discloses “displaying” data (paragraph 42: the camera radar fusion system outputs the fusion results obtained in the block 420. Fusion visualization, such as radar drawing, fusion drawing, and real-time system-on-a-chip (SoC) loading may be performed…the results may be output on a visual screen using a display driver; paragraph 61: in the block 234, the display driver displays the data from the 214, from the block 224, and from the block 230…a user may view the radar data).” Zhong et al. (‘555)/Bills et al. (‘896) does not explicitly disclose “a radar angle to range (RAR) display and an image overlaid with radar points aggregated from multiple radar scans.” Ahmed et al. (‘920) relates to positioning method using sensor fusion. Ahmed et al. (‘920) teaches “a radar angle to range (RAR) display and an image overlaid with radar points aggregated from multiple radar scans (paragraph 64: the radar signal processing unit estimates the range and Doppler from the received reflections…the radar range at a specific azimuth and elevation angle can be denoted by r.sub.α,β, where α is the azimuth/bearing angle relative to the radar coordinates and β is the elevation angle…another useful derived information can be the estimated size of the target…this might require the processing of several scans).” It would have been obvious to one of ordinary skill-in-the-art before the effective filing date of the claimed invention to modify the method of Zhong et al. (‘555)/Bills et al. (‘896) with the teaching of Ahmed et al. (‘920) to derive a more accurate measurement model (Ahmed et al. (‘920) – paragraph 64). In addition, both of the prior art references, (Zhong et al. (‘555), Bills et al. (‘896)) and Ahmed et al. (‘920) teach features that are directed to analogous art and they are directed to the same field of endeavor, such as, fusing radar and camera detection for object detection. Regarding claim 10, which is dependent on claim 2, Zhong et al. (‘555)/Bills et al. (‘896)/Ahmed et al. (‘920) discloses the method of claim 2. Zhong et al. (‘555) further discloses “displaying, within an enlarged RAR display (paragraph 42: the camera radar fusion system outputs the fusion results obtained in the block 420. Fusion visualization, such as radar drawing, fusion drawing, and real-time system-on-a-chip (SoC) loading may be performed…the results may be output on a visual screen using a display driver; paragraph 61: in the block 234, the display driver displays the data from the 214, from the block 224, and from the block 230…a user may view the radar data).” Zhong et al. (‘555) does not explicitly disclose “the regions of interest as spatial regions in the 3D space that extend radially from the camera along directions corresponding to the estimated VDB locations, with radial extents determined by the distance ambiguity.” Bills et al. (‘896) relates to radar and camera sensing. Bills et al. 
(‘896) teaches “the regions of interest as spatial regions in the 3D space that extend radially from the camera along directions corresponding to the estimated VDB locations, with radial extents determined by the distance ambiguity (column 2 lines 54-57: identifying, by a processor, a region of interest corresponding to an object and a first range of distance using a camera and a light source, and probing, by the processor, the region of interest; column 20 lines 50-55: an ROI and a first range of distance to the ROI is identified using the infrared camera (operation 902)…this may be accomplished using image recognition algorithms or DNN that have been trained to detect and identify the objects of interest; column 20 lines 59-66: FIG. 7A or the sensing system of FIG. 7B may employ the infrared camera 704…regarding the location of objects both radially (e.g., in a z direction) and spatially, or laterally and vertically (e.g., in an x, y plane orthogonal to the z direction), beyond the individual capabilities of either the infrared camera 704).” It would have been obvious to one of ordinary skill-in-the-art before the effective filing date of the claimed invention to modify the method of Zhong et al. (‘555)/Ahmed et al. (‘920) with the teaching of Bills et al. (‘896) for enhanced target detection (Bills et al. (‘896) – column 4 lines 15-29). In addition, both of the prior art references, (Zhong et al. (‘555), Bills et al. (‘896) and Ahmed et al. (‘920)) teach features that are directed to analogous art and they are directed to the same field of endeavor, such as, fusing radar and camera detection for object detection. Citation of Pertinent Prior Art The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Choi et al. (US 11,170,201 B2) describes method, apparatus, and a recording medium for recognizing an object by combining sensor data of an object obtained using a plurality of different types of sensors, thereby increasing the accuracy of object recognition (column 1 lines 58-62); a method of recognizing an object includes obtaining a plurality of pieces of sensor data about the object from a plurality of different types of sensors; converting at least some of the plurality of pieces of sensor data into two-dimensional (2D) sensor data; and recognizing the object by using a previously generated learning network model based on 2D image data obtained from an image sensor which is one of the plurality of sensors and the 2D sensor data (column 3 lines 6-14). Becker (US 2022/0308200 A1) describes a distance and radial velocity of a radar with pixel-wise information of a camera including angular position, semantic labels, color, and/or object bounding boxes may be fused, such that further information may be added to an image based on a fusion with radar data (paragraph 30); predicting radar reflections based on a camera frame, such that the above-mentioned ambiguity may be reduced or even resolved, thereby improving an accuracy and a reliability of data fusion (paragraph 31); the actual radar detections may be transferred into the image plane, wherein it may be assumed that a relative position and orientation of the camera and the radar as well as their internal characteristics may be known (e.g. 
determined based on a calibration) (paragraph 69); the actual radar positions and uncertainties (taking into account error propagation(s), as it is generally known) may be transferred into a camera coordinate system and thus projected to the image plane, such that the actual radar detections and the radar predictions are present in the same domain (paragraph 70); a list of detections may be obtained which may be indicative of information fused from camera and radar being more precise and richer than detections of only one data source (paragraph 89); he microcomputer 7610 may perform cooperative control intended to implement functions of an advanced driver assistance system (ADAS) which functions include collision avoidance or shock mitigation for the vehicle, following driving based on a following distance, vehicle speed maintaining driving, a warning of collision of the vehicle, a warning of deviation of the vehicle from a lane, or the like. In addition, the microcomputer 7610 may perform cooperative control intended for automatic driving, which makes the vehicle to travel autonomously without depending on the operation of the driver, or the like, by controlling the driving force generating device, the steering mechanism, the braking device, or the like on the basis of the obtained information about the surroundings of the vehicle (paragraph 161). Bank et al. (US 2020/0219316 A1) describes Autonomous and semi-autonomous driving safety technologies use a combination of hardware (sensors, cameras, and radar) and software to help vehicles identify certain safety risks so they can warn the driver to act (in the case of an ADAS), or act themselves (in the case of an ADS), to avoid a crash…a vehicle outfitted with an ADAS or ADS includes one or more camera sensors mounted on the vehicle that capture images of the scene in front of the vehicle, and also possibly behind and to the sides of the vehicle…Radar systems may also be used to detect objects along the road of travel, and also possibly behind and to the sides of the vehicle. Radar systems utilize radio frequency (RF) waves to determine the range, direction, speed, and/or altitude of the objects along the road…a transmitter transmits pulses of RF waves that bounce off any object(s) in their path…the pulses reflected off the object(s) return a small part of the RF waves' energy to a receiver, which is typically located at the same location as the transmitter. The camera and radar are typically oriented to capture their respective versions of the same scene (paragraph 34). EP 3466070 B1 relates to a device and method of obtaining an image, and a recording medium having recorded thereon a program for executing the method of obtaining an image. Izzat et al. (US 2017/0242117 A1) describes an object-detection system 900 configured to detect an object 902 proximate to a vehicle 924…the system 900 includes a radar-sensor 904 that is used to detect a radar-signal 926 reflected by an object in a radar-field-of-view 906…the system 900 also includes a camera 908 used to capture an image 402 (Figure 4) of the object 902 in a camera-field-of-view 910 that overlaps the radar-field-of-view 906…the system 900 is generally configured to combine information from the radar-sensor 904 and the camera 908 in a manner that takes advantage of the strengths of these two devices and thereby compensating for the weaknesses of the radar-sensor 904 and the camera 908 (paragraph 52; Figure 9). Minemura et al. (US 2014/0297171 A1) relates to radar and camera apparatus. 
Minemura et al. (‘171) describes fusion information generating processing is executed, is which combines the radar target object information generated in step S250 with the image target object information generated in step S260…this will be described referring to FIG. 5, in which distance values relative to the host vehicle are plotted along the vertical axis and lateral position values along the horizontal axis…a radar position error region is defined by adding predetermined assumed amounts of position error (each expressed by a distance error and a lateral position error amount) to a position that has been obtained for a target object based on the radar target object information….an image position error region is defined by adding predetermined assumed amounts of position error (each expressed by a distance error and a direction angle error amount) to a position calculated for a target object based on the image target object information…each direction angle value is obtained by dividing a distance value by a corresponding lateral position value (paragraph 74). Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to NUZHAT PERVIN whose telephone number is (571)272-9795. The examiner can normally be reached M-F 9:00AM-5:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William J Kelleher can be reached at 571-272-7753. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /NUZHAT PERVIN/Primary Examiner, Art Unit 3648
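
For readers mapping the dispute to the claim language, the following is a minimal Python sketch of the kind of processing described in the Response to Arguments: a visual detection yields an estimated location with a known distance ambiguity, a region of interest is defined from that location and its ambiguity, and radar detections are searched within that region to resolve the range. All data structures, thresholds, and values are illustrative assumptions; this is not the applicant's implementation or code from any cited reference.

from dataclasses import dataclass
from math import atan2, hypot

# Illustrative sketch only: names, thresholds, and numbers are assumptions,
# not the claimed implementation or any cited reference's code.

@dataclass
class Detection:
    x: float  # lateral position (m) relative to the sensor
    z: float  # range / depth (m) relative to the sensor

def roi_for_visual_detection(vdb, ambiguity_pct=0.10):
    # ROI along the camera ray: same bearing as the visual detection, with a
    # radial extent set by the distance ambiguity (here a fixed percentage of
    # the estimated range, per the claim's "certain percent" alternative).
    rng = hypot(vdb.x, vdb.z)
    bearing = atan2(vdb.x, vdb.z)
    delta = ambiguity_pct * rng
    return bearing, (rng - delta, rng + delta)

def match_radar_in_roi(vdb, radar_detections, ambiguity_pct=0.10, bearing_tol=0.05):
    # Search for radar-detection-based (RDB) objects inside the ROI; a match
    # yields a hybrid detection that keeps the camera's lateral estimate and
    # adopts the radar range, resolving the visual distance ambiguity.
    bearing, (r_min, r_max) = roi_for_visual_detection(vdb, ambiguity_pct)
    hybrid = []
    for rdb in radar_detections:
        r = hypot(rdb.x, rdb.z)
        b = atan2(rdb.x, rdb.z)
        if r_min <= r <= r_max and abs(b - bearing) <= bearing_tol:
            hybrid.append(Detection(x=vdb.x, z=r))
    return hybrid

# Example: camera estimates an object at ~40 m with 10% range ambiguity;
# radar reports a return at ~42.5 m on roughly the same bearing.
vdb = Detection(x=2.0, z=40.0)
radar = [Detection(x=2.1, z=42.4), Detection(x=-8.0, z=15.0)]
print(match_radar_in_roi(vdb, radar))

The design point the applicant's argument emphasizes is that the ROI's radial extent is driven by the visual distance ambiguity itself, rather than by a fixed, scene-wide search.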

Prosecution Timeline

Jan 18, 2023: Application Filed
Apr 23, 2025: Non-Final Rejection (§103)
Jun 24, 2025: Interview Requested
Jul 08, 2025: Applicant Interview (Telephonic)
Jul 09, 2025: Examiner Interview Summary
Jul 10, 2025: Response Filed
Aug 22, 2025: Final Rejection (§103)
Nov 25, 2025: Request for Continued Examination
Dec 05, 2025: Response after Non-Final Action
Dec 23, 2025: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by the same examiner in similar technology areas

Patent 12591036
RADAR SENSOR FOR A VEHICLE AND METHOD FOR INTEGRATING A RADAR SENSOR IN A VEHICLE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585011
RADAR DETECTION USING PRIOR TRACKED OBJECT INFORMATION
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12578426
CHANNEL OFFSET CORRECTION FOR RADAR DATA
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12571903
ELECTRONIC DEVICE FOR TRANSMITTING DATA THROUGH UWB COMMUNICATION, AND ELECTRONIC DEVICE OPERATING METHOD
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12570314
RADAR Sensor System for Vehicles
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 95% (+14.3%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 490 resolved cases by this examiner. Grant probability derived from career allow rate.
