Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is responsive to applicant’s amendments and remarks received on 08/26/2025.
Response to Arguments
Applicant's arguments filed on 08/26/2025 have been fully considered but they are not persuasive.
Applicant argues:
1. Kazuki does not take into account a case where multiple objects detected by the radar sensor overlap one object detected by the camera.
2. The technique disclosed in Nabati does not determine whether or not the object detected by the radar sensor overlaps the object detected by the camera sensor. Further, the technique of Kazuki only allows for one overlap.
3. Kazuki and Nabati (alone or in combination) do not teach the limitation of "calculate match probability indicating possibility of the imaged target matching each of the rangefinding targets by using a dimension of an overlap between each of the tentative areas and a target area, the tentative areas being areas where the rangefinding targets are projected onto the image, the target area being an area in the image capturing the imaged target" as recited in claim 1.
Examiner’s response:
1. See Kazuki, Para. 60: "The image fusion presence probability calculation unit 34 calculates an image fusion presence probability for each piece of target information L." Target information L corresponds to radar detections, and target information C corresponds to camera detections (Para. 41). Each radar detection is compared with each camera detection as described in Para. 60 and further exemplified in Para. 61. Para. 51 discloses the probability increasing when L and C overlap.
2. See Kazuki, Para. 50 and Fig. 6B, which disclose an overlapping state variable indicating whether the point of the radar-detected object falls within the width of the camera-detected object. See Nabati, Page 3, Col. 2, Para. 2 and Fig. 3, which disclose determining whether the image area and the radar area overlap using Intersection-over-Union. Both Kazuki and Nabati therefore clearly disclose determining overlapping radar and camera detections. Additionally, upon consideration of response 1, it is clear that multiple overlaps are determined.
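For illustration only (this sketch is the examiner's, not code from either reference), the Intersection-over-Union measure relied upon from Nabati reduces to a few lines, assuming axis-aligned boxes (x1, y1, x2, y2) in image coordinates:

def iou(box_a, box_b):
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0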
3. See Kazuki, Para. 47: "The image fusion presence probability is the presence probability of the target calculated based on how much the target information L is similar to the target information C. That is, it is estimated that the higher the image fusion existence probability, the higher the possibility of fusion." Para. 50 and Fig. 6B disclose an overlapping state variable indicating whether the point of the radar-detected object falls within the width of the camera-detected object. Paras. 60-61 disclose performing the probability calculation for each piece of radar target information. See Nabati, Abstract: "Our radar object proposal network uses radar point clouds to generate 3D proposals from a set of 3D prior boxes. These proposals are mapped to the image." It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati to propose areas in the image and to calculate match probabilities between the radar-projected area and the imaged area. Doing so would make rangefinding and camera fusion predictably more precise, accurate, and reliable by reducing errors in identifying whether detected objects to be fused correspond to the same physical object.
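To illustrate the per-target calculation relied upon from Kazuki, Paras. 60-61 (one probability per radar target L against a camera target C, so several radar detections can each be scored against one camera detection), a minimal sketch reusing the iou helper above; the IoU-based score is an illustrative stand-in for the references' actual similarity measures:

def match_probabilities(radar_boxes, camera_box):
    # One score per radar target; multiple radar targets overlapping
    # the same camera target each receive their own probability.
    return [iou(r, camera_box) for r in radar_boxes]

# Hypothetical example: two radar detections overlapping one camera box.
radar_boxes = [(100, 80, 160, 200), (140, 85, 210, 205)]
camera_box = (110, 82, 205, 202)
scores = match_probabilities(radar_boxes, camera_box)  # both scores nonzero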
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-7, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Kazuki et al. (JP 2014006123 A), hereinafter Kazuki, in view of Nabati et al., "Radar-Camera Sensor Fusion for Joint Object Detection and Distance Estimation in Autonomous Vehicles," arXiv.org, Cornell University Library, submitted 17 Sep 2020 [retrieved on 6/17/2025], <https://arxiv.org/abs/2009.08428>, hereinafter Nabati.
Regarding claim 1, Kazuki teaches An information processing system comprising: a sensor to detect distance to each of a plurality of rangefinding targets, the rangefinding targets being targets present in a detection range; (Paras. 22-23 disclose an object detection device 100 with a radar sensor 11 to detect the distance to an object. Para. 26 discloses determining the distances, speeds, and directions of all targets with the radar ECU 22 (Electronic Control Unit).). an imaging device to capture an image with at least a portion of an imaging range overlapping the detection range and generate image data indicating the image; (Paras. 22 and 30 disclose a camera sensor 12 to output image data. Para. 3 and Fig. 1 disclose the radar 11 and camera 12 detection ranges partially overlapping.). and processing circuitry to generate rangefinding information indicating the distance and direction to each of the rangefinding targets based on a result of detection by the sensor; (Para. 26 discloses determining the distances, speeds, and directions of all targets with the radar ECU 22.). to specify distance and direction of an imaged target, and generate imaging information indicating the distance and direction of the imaged target, the imaged target being a target included in the image; (Para. 30 discloses capturing images and a camera ECU 24 to process image data. Para. 34 discloses the camera obtaining the distance and direction (azimuth) of an object.). and calculate match probability indicating possibility of the imaged target matching each of the rangefinding targets by using a dimension of an overlap between (Para. 41 defines "L" as information of a radar-detected target and defines "C" as information of a camera-detected target. Para. 47: "The image fusion presence probability is the presence probability of the target calculated based on how much the target information L is similar to the target information C. That is, it is estimated that the higher the image fusion existence probability, the higher the possibility of fusion." Para. 50 and Fig. 6B disclose an overlapping state variable which indicates if the point of the radar-detected object is within the width of the camera-detected object. Paras. 60-61 disclose performing the probability calculation for each piece of radar target information.). the target area being an area in the image capturing the imaged target. (Para. 50 discloses C having a width (area) in the image.).
Kazuki does not teach to specify a type of an imaged target, and generate imaging information indicating the type of the imaged target, the imaged target being a target included in the image; and to specify tentative values indicating sizes of the rangefinding targets by using the imaging information, specify a plurality of tentative areas in accordance with the tentative values and the rangefinding information, each of the tentative areas and a target area, the tentative areas being areas where the rangefinding targets are projected onto the image.
However, Nabati teaches to specify a type of an imaged target, and generate imaging information indicating the type of the imaged target, the imaged target being a target included in the image; (Page 2, Col. 2, Para. 1: "Fast R-CNN [13] also uses an external proposal generator, but eliminates redundant feature extraction by utilizing the global features extracted from the entire image to classify each proposal in the second stage. Faster RCNN [14] unifies the proposal generation and classification by introducing the Region Proposal Network (RPN), which uses the global features extracted from the image to generate object proposals."). and to specify tentative values indicating sizes of the rangefinding targets by using the imaging information, (Page 3, Col. 2, Para. 2: "RPR uses the features extracted from the image by the backbone network to adjust the size and location of the radar proposals on the image."). specify a plurality of tentative areas in accordance with the tentative values and the rangefinding information, (Abstract: "Our radar object proposal network uses radar point clouds to generate 3D proposals from a set of 3D prior boxes. These proposals are mapped to the image." Page 3, Col. 1, Para. 3: "3D anchors with the same size as objects of interest are used to generate the 2D object proposals on the image, the resulting proposals capture the true size of the objects as they appear in the image."). each of the tentative areas and a target area, the tentative areas being areas where the rangefinding targets are projected onto the image, (Abstract: "Our radar object proposal network uses radar point clouds to generate 3D proposals from a set of 3D prior boxes. These proposals are mapped to the image." Page 3, Col. 2, Para. 2 and Fig. 3 disclose determining if the image area and the radar area overlap using Intersection-over-Union.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati to specify, for a radar-detected object, a size of the classified object based on distance by using image data, and to propose areas in the image corresponding to the determined size in order to calculate match probabilities between the radar-projected area and the imaged area. Doing so would make rangefinding and camera fusion predictably more precise, accurate, and reliable by reducing errors in identifying whether detected objects to be fused correspond to the same physical object. Additionally, performing object classification using image data would predictably result in more accurate classifications because image data contains rich information that radar/lidar data does not have.
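For illustration of the combined teaching articulated above (a class-dependent physical size, placed at the radar-reported distance, projected to a tentative area in the image), a minimal sketch under a pinhole-camera assumption; the focal length, the class-size table, and the coordinate conventions are the examiner's illustrative assumptions, not values from either reference:

# Illustrative class-to-physical-size table (width, height in meters);
# these classes and values are assumptions for this sketch only.
CLASS_SIZES = {"car": (1.8, 1.5), "pedestrian": (0.6, 1.7)}

def tentative_area(obj_class, distance_m, center_px, focal_px=1000.0):
    # Pinhole projection: apparent size scales as focal length / distance.
    w_m, h_m = CLASS_SIZES[obj_class]
    w_px = focal_px * w_m / distance_m
    h_px = focal_px * h_m / distance_m
    cx, cy = center_px
    return (cx - w_px / 2, cy - h_px / 2, cx + w_px / 2, cy + h_px / 2)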
Regarding claim 2, Kazuki in view of Nabati teaches The information processing system according to claim 1.
In addition, Kazuki teaches wherein the processing circuitry calculates the match probability in such a manner that the match probability increases as a dimension of an overlap between (Para. 51 discloses the probability increasing if L and C are overlapping.).
Kazuki does not teach each of the tentative areas and the target area increases.
However, Nabati teaches each of the tentative areas and the target area increases. (Page 3, Col. 2, Para. 2 and Fig. 3 disclose determining if the image area and the radar area overlap using Intersection-over-Union.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati to specify tentative areas for rangefinding detections using camera data and to calculate a match probability between the rangefinding area and the imaged area. Doing so would make rangefinding and camera fusion predictably more precise, accurate, and reliable by reducing errors in identifying whether detected objects to be fused correspond to the same physical object.
Regarding claim 3, Kazuki in view of Nabati teaches The information processing system according to claim 2.
In addition, Kazuki teaches wherein the processing circuitry calculates the match probability in such a manner that the match probability increases as an area of the overlap between (Para. 51 discloses the probability increasing if L and C are overlapping. Para. 50 discloses C having a width (area) in the image.).
Kazuki does not teach each of the tentative areas and the target area increases.
However, Nabati teaches each of the tentative areas and the target area increases. (Page 3, Col. 2, Para. 2 and Fig. 3 disclose determining if the image area and the radar area overlap using Intersection-over-Union.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati to specify tentative areas for rangefinding detections using camera data and to calculate a match probability between the rangefinding area and the imaged area. Doing so would make rangefinding and camera fusion predictably more precise, accurate, and reliable by reducing errors in identifying whether detected objects to be fused correspond to the same physical object.
Regarding claim 4, Kazuki in view of Nabati teaches The information processing system according to claim 2.
In addition, Kazuki teaches wherein the processing circuitry calculates the match probability in such a manner that the match probability increases as a width of the overlap between (Para. 51 discloses the probability increasing if L and C are overlapping. Para. 50 discloses C having a width (area) in the image.).
Kazuki does not teach each of the tentative areas and the target area increases.
However, Nabati teaches each of the tentative areas and the target area increases. (Page 3, Col. 2, Para. 2 and Fig. 3 disclose determining if the image area and the radar area overlap using Intersection-over-Union.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati to specify tentative areas for rangefinding detections using camera data and to calculate a match probability between the rangefinding area and the imaged area. Doing so would make rangefinding and camera fusion predictably more precise, accurate, and reliable by reducing errors in identifying whether detected objects to be fused correspond to the same physical object.
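Claims 2-4 differ only in which measure of the overlap drives the probability (a dimension generally, the area, or the width). For illustration only, the two specific measures reduce to the following sketch; any monotone nondecreasing map of either quantity (the normalized iou above being one example) yields a probability that increases with the overlap:

def overlap_width(box_a, box_b):
    # Horizontal extent of the overlap in pixels (cf. claim 4).
    return max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))

def overlap_area(box_a, box_b):
    # Area of the overlap in square pixels (cf. claim 3).
    h = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    return overlap_width(box_a, box_b) * h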
Regarding claim 6, Kazuki in view of Nabati teaches The information processing system according to claim 1.
In addition, Kazuki teaches wherein the processing circuitry calculates the match probability in such a manner that the match probability increases as a dimension of an overlap between (Para. 51 discloses the probability increasing if L and C are overlapping. Para. 50 and Fig. 6B disclose an overlapping state variable which indicates if the point of the radar-detected object is within the width of the camera-detected object.).
Kazuki does not teach each of the tentative areas and the target area increases and as a distance between the imaged target and each of the tentative areas decreases when the rangefinding targets are projected onto the image.
However, Nabati teaches each of the tentative areas and the target area increases and as a distance between the imaged target and each of the tentative areas decreases when the rangefinding targets are projected onto the image. (Page 3, Col. 2, Para. 2 and Fig. 3 disclose determining if the image area and the radar area overlap using Intersection-over-Union and define positive proposals as those with an overlap higher than 0.7 with any ground truth bounding box, and negative proposals as those with an IoU below 0.3 for all ground truth boxes.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati to specify tentative areas for rangefinding detections using camera data and to calculate a match probability between the rangefinding area and the imaged area, where the probability increases as the distance between the imaged target and the tentative area decreases. Doing so would make rangefinding and camera fusion predictably more precise, accurate, and reliable by reducing errors in identifying whether detected objects to be fused correspond to the same physical object.
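For illustration of the claim 6 mechanism as mapped above (a score that grows with the overlap and shrinks with the image-plane distance between the tentative area and the imaged target), a minimal sketch reusing the iou helper above; the exponential combination rule and the scale are the examiner's illustrative assumptions, not a formula from the references:

import math

def center(box):
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def match_score(tentative_box, target_box, scale_px=100.0):
    # Grows with overlap, decays with distance between box centers.
    (ax, ay), (bx, by) = center(tentative_box), center(target_box)
    dist = math.hypot(ax - bx, ay - by)
    return iou(tentative_box, target_box) * math.exp(-dist / scale_px)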
Regarding claim 7, Kazuki in view of Nabati teaches The information processing system according to claim 1.
In addition, Kazuki teaches wherein the processing circuitry calculates confidence levels of distance and direction of the imaged target indicated by the imaging information and distance and direction of each of the rangefinding targets indicated by the rangefinding information, and calculates the match probability when the confidence levels of all of the rangefinding targets are lower than a predetermined threshold. (Para. 18 discloses determining if fusion is performed based on whether the two sensors detected a target with "good" position accuracy. Para. 28 discloses that the position of the radar object is derived from distance and azimuth. Para. 33 discloses determining the azimuth of the camera object based on distance and position. Para. 49 discloses using a threshold to determine a "good" state or a "normal" state of the radar. The distances and directions are positional information, and a confidence level is calculated for a position.).
Regarding claim 13, Kazuki teaches An information processing device comprising: a communication interface to acquire rangefinding information, image data, and imaging information, (Para. 22 discloses a radar sensor device 11 and a camera sensor device 12 and ECUs that are communicably connected.). the rangefinding information indicating distance and direction of each of a plurality of rangefinding targets, the rangefinding targets being targets present in a detection range, (Para. 26 discloses determining the distances, speeds, and directions of all targets with the radar ECU 22.). the image data indicating an image captured by overlapping at least a portion of an imaging range with the detection range, the imaging information indicating distance, direction, (Paras. 22 and 30 disclose a camera sensor 12 to output image data. Para. 3 and Fig. 1 disclose the radar 11 and camera 12 detection ranges partially overlapping. Para. 34 discloses the camera obtaining the distance and direction (azimuth) of an object.). and calculate match probability indicating possibility of the imaged target matching each of the rangefinding targets by using a dimension of an overlap between (Para. 41 defines "L" as information of a radar-detected target and defines "C" as information of a camera-detected target. Para. 47: "The image fusion presence probability is the presence probability of the target calculated based on how much the target information L is similar to the target information C. That is, it is estimated that the higher the image fusion existence probability, the higher the possibility of fusion." Para. 50 and Fig. 6B disclose an overlapping state variable which indicates if the point of the radar-detected object is within the width of the camera-detected object. Paras. 60-61 disclose performing the probability calculation for each piece of radar target information.). the target area being an area in the image capturing the imaged target. (Para. 50 discloses C having a width (area) in the image.).
Kazuki does not teach and type of an imaged target, the imaged target being a target included in the image; and processing circuitry to specify tentative values indicating sizes of the rangefinding targets by using the imaging information, specify a plurality of tentative areas in accordance with the tentative values and the rangefinding information, each of the tentative areas and a target area, the tentative areas being areas where the rangefinding targets are projected onto the image.
However, Nabati teaches and type of an imaged target, the imaged target being a target included in the image; (Page 2, Col. 2, Para. 1: "Fast R-CNN [13] also uses an external proposal generator, but eliminates redundant feature extraction by utilizing the global features extracted from the entire image to classify each proposal in the second stage. Faster RCNN [14] unifies the proposal generation and classification by introducing the Region Proposal Network (RPN), which uses the global features extracted from the image to generate object proposals."). and processing circuitry to specify tentative values indicating sizes of the rangefinding targets by using the imaging information, (Page 3, Col. 2, Para. 2: "RPR uses the features extracted from the image by the backbone network to adjust the size and location of the radar proposals on the image."). specify a plurality of tentative areas in accordance with the tentative values and the rangefinding information, (Abstract: "Our radar object proposal network uses radar point clouds to generate 3D proposals from a set of 3D prior boxes. These proposals are mapped to the image." Page 3, Col. 1, Para. 3: "3D anchors with the same size as objects of interest are used to generate the 2D object proposals on the image, the resulting proposals capture the true size of the objects as they appear in the image."). each of the tentative areas and a target area, the tentative areas being areas where the rangefinding targets are projected onto the image (Abstract: "Our radar object proposal network uses radar point clouds to generate 3D proposals from a set of 3D prior boxes. These proposals are mapped to the image." Page 3, Col. 2, Para. 2 and Fig. 3 disclose determining if the image area and the radar area overlap using Intersection-over-Union.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati to specify, for a radar-detected object, a size of the classified object based on distance by using image data, and to propose areas in the image corresponding to the determined size in order to calculate match probabilities between the radar-projected area and the imaged area. Doing so would make rangefinding and camera fusion predictably more precise, accurate, and reliable by reducing errors in identifying whether detected objects to be fused correspond to the same physical object. Additionally, performing object classification using image data would predictably result in more accurate classifications because image data contains rich information that radar/lidar data does not have.
Regarding claim 14, Kazuki teaches A non-transitory computer-readable storage medium storing a program that causes a computer to execute processing comprising: (Para. 40: "The system ECU 14 includes a sensor fusion unit 31, an image fusion presence probability calculation unit 34, a stationary object presence probability calculation unit 35, a close distance correction determination unit 32, and a correction process, which are realized by the CPU executing a program and cooperating with hardware." Para. 27 discloses a CPU, ROM, and RAM to execute a program for all ECUs.). acquiring rangefinding information, image data, and imaging information, (Para. 22 discloses a radar sensor device 11 and a camera sensor device 12 and ECUs that are communicably connected.). the rangefinding information indicating distance and direction of each of a plurality of rangefinding targets, the rangefinding targets being targets present in a detection range, (Para. 26 discloses determining the distances, speeds, and directions of all targets with the radar ECU 22.). the image data indicating an image captured by overlapping at least a portion of an imaging range with the detection range, the imaging information indicating distance, direction, (Paras. 22 and 30 disclose a camera sensor 12 to output image data. Para. 3 and Fig. 1 disclose the radar 11 and camera 12 detection ranges partially overlapping. Para. 34 discloses the camera obtaining the distance and direction (azimuth) of an object.). and calculating match probability indicating possibility of the imaged target matching each of the rangefinding targets by using a dimension of an overlap between (Para. 41 defines "L" as information of a radar-detected target and defines "C" as information of a camera-detected target. Para. 47: "The image fusion presence probability is the presence probability of the target calculated based on how much the target information L is similar to the target information C. That is, it is estimated that the higher the image fusion existence probability, the higher the possibility of fusion." Para. 50 and Fig. 6B disclose an overlapping state variable which indicates if the point of the radar-detected object is within the width of the camera-detected object. Paras. 60-61 disclose performing the probability calculation for each piece of radar target information.). the target area being an area in the image capturing the imaged target. (Para. 50 discloses C having a width (area) in the image.).
Kazuki does not teach and type of an imaged target, the imaged target being a target included in the image; specifying tentative values indicating sizes of the rangefinding targets by using the imaging information; specifying a plurality of tentative areas in accordance with the tentative values and the rangefinding information, the tentative areas being areas where the rangefinding targets are projected onto the image; each of the tentative areas and a target area.
However, Nabati teaches and type of an imaged target, the imaged target being a target included in the image; (Page 2, Col. 2, Para. 1: "Fast R-CNN [13] also uses an external proposal generator, but eliminates redundant feature extraction by utilizing the global features extracted from the entire image to classify each proposal in the second stage. Faster RCNN [14] unifies the proposal generation and classification by introducing the Region Proposal Network (RPN), which uses the global features extracted from the image to generate object proposals."). specifying tentative values indicating sizes of the rangefinding targets by using the imaging information; (Page 3, Col. 2, Para. 2: "RPR uses the features extracted from the image by the backbone network to adjust the size and location of the radar proposals on the image."). specifying a plurality of tentative areas in accordance with the tentative values and the rangefinding information, the tentative areas being areas where the rangefinding targets are projected onto the image; (Abstract: "Our radar object proposal network uses radar point clouds to generate 3D proposals from a set of 3D prior boxes. These proposals are mapped to the image." Page 3, Col. 1, Para. 3: "3D anchors with the same size as objects of interest are used to generate the 2D object proposals on the image, the resulting proposals capture the true size of the objects as they appear in the image."). each of the tentative areas and a target area, (Abstract: "Our radar object proposal network uses radar point clouds to generate 3D proposals from a set of 3D prior boxes. These proposals are mapped to the image." Page 3, Col. 2, Para. 2 and Fig. 3 disclose determining if the image area and the radar area overlap using Intersection-over-Union.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati to specify, for a radar-detected object, a size of the classified object based on distance by using image data, and to propose areas in the image corresponding to the determined size in order to calculate match probabilities between the radar-projected area and the imaged area. Doing so would make rangefinding and camera fusion predictably more precise, accurate, and reliable by reducing errors in identifying whether detected objects to be fused correspond to the same physical object. Additionally, performing object classification using image data would predictably result in more accurate classifications because image data contains rich information that radar/lidar data does not have.
Claim 15 is rejected under the same analysis as claim 1 above.
Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kazuki et al. (JP 2014006123 A), hereinafter Kazuki, in view of Nabati et al., "Radar-Camera Sensor Fusion for Joint Object Detection and Distance Estimation in Autonomous Vehicles," arXiv.org, Cornell University Library, submitted 17 Sep 2020 [retrieved on 6/17/2025], <https://arxiv.org/abs/2009.08428>, hereinafter Nabati, and Schiffmann et al. (US 20170206436 A1), hereinafter Schiffmann.
Regarding claim 5, Kazuki in view of Nabati teaches The information processing system according to claim 1.
In addition, Kazuki teaches wherein the processing circuitry calculates the match probability in such a manner that the match probability increases as a dimension of an overlap between (Para. 51 discloses the probability increasing if L and C are overlapping.).
Kazuki does not teach each of the tentative areas and the target area increases and as a distance to each of the rangefinding targets approximates a distance to the imaged target.
However, Nabati teaches each of the tentative areas and the target area increases (Page 3, Col. 2, Para. 2 and Fig. 3 disclose determining if the image area and the radar area overlap using Intersection-over-Union.).
Furthermore, Schiffmann teaches and as a distance to each of the rangefinding targets approximates a distance to the imaged target. (Para. 36 discloses the match probability increasing as the longitudinal positions measured by the radar and camera agree with each other. Para. 19 demonstrates the longitudinal position as a distance.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati and Schiffmann to specify tentative areas for rangefinding detections using camera data and to calculate a match probability between the rangefinding area and the imaged area. Doing so would make rangefinding and camera fusion predictably more precise, accurate, and reliable by reducing errors in identifying whether detected objects to be fused correspond to the same physical object. Additionally, calculating the match probability in such a way that the probability increases as a distance to each of the rangefinding targets approximates a distance to the imaged target would predictably increase the accuracy of matching the detected objects from each sensor.
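For illustration of the claim 5 mechanism as mapped from Schiffmann (the probability rising as the radar-measured distance approximates the camera-estimated distance), a minimal sketch reusing the iou and math helpers above; the Gaussian agreement term and its width are the examiner's illustrative assumptions:

def match_score_with_range(tentative_box, target_box,
                           radar_range_m, camera_range_m, sigma_m=5.0):
    # Agreement term peaks when the two range estimates coincide.
    agreement = math.exp(-((radar_range_m - camera_range_m) ** 2)
                         / (2 * sigma_m ** 2))
    return iou(tentative_box, target_box) * agreement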
Regarding claim 12, Kazuki in view of Nabati teaches The information processing system according to claim 1.
In addition, Kazuki teaches wherein when a distance to at least one of the rangefinding targets is smaller than a predetermined threshold distance, (Para. 68 discloses setting a close distance condition to ON when the distance to an object is within 2.5 m.).
Kazuki does not teach the processing circuitry prevents the match probability from being calculated between the imaged target and the at least one of the rangefinding targets.
However, Schiffmann teaches the processing circuitry prevents the match probability from being calculated between the imaged target and the at least one of the rangefinding targets. (Para. 4 discloses determining a match-feasibility matrix based on a distance between objects. Paras. 27-29 disclose using a pre-gating technique with the match-feasibility matrix to filter out infeasible matches before matching, which reduces the combinatorial complexity of possible matches.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati and Schiffmann to prevent the match probability calculation from being performed when the object is too close to the vehicle. Doing so would predictably save on computational resources such as CPU processing, memory, and energy costs by reducing the number of computations to be carried out.
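For illustration of pre-gating as mapped above (skipping the match probability calculation for rangefinding targets closer than a threshold), a minimal sketch reusing the iou helper; the criterion shown is the examiner's illustrative stand-in for Schiffmann's match-feasibility matrix, with the 2.5 m value following Kazuki, Para. 68:

def gated_scores(radar_boxes, radar_ranges_m, camera_box, min_range_m=2.5):
    # None marks pairs whose probability is intentionally never computed.
    return [iou(box, camera_box) if rng >= min_range_m else None
            for box, rng in zip(radar_boxes, radar_ranges_m)]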
Claims 8 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Kazuki et al. (JP 2014006123 A), hereinafter Kazuki, in view of Nabati et al., "Radar-Camera Sensor Fusion for Joint Object Detection and Distance Estimation in Autonomous Vehicles," arXiv.org, Cornell University Library, submitted 17 Sep 2020 [retrieved on 6/17/2025], <https://arxiv.org/abs/2009.08428>, hereinafter Nabati, and Koivisto et al. (US 20190258878 A1), hereinafter Koivisto.
Regarding claim 8, Kazuki in view of Nabati teaches The information processing system according to claim 1.
In addition, Kazuki teaches of a vehicle on which the information processing system is mounted, (Paras. 23 and 30 disclose the radar and camera being mounted on the front of a vehicle, which contains the ECUs.). and calculates the match probability between the imaged target and each of the rangefinding targets when the imaged target affects the running trajectory. (Para. 41 defines "L" as information of a radar-detected target and defines "C" as information of a camera-detected target. Para. 47: "The image fusion presence probability is the presence probability of the target calculated based on how much the target information L is similar to the target information C. That is, it is estimated that the higher the image fusion existence probability, the higher the possibility of fusion." Para. 50 and Fig. 6B disclose an overlapping state variable which indicates if the point of the radar-detected object is within the width of the camera-detected object. Paras. 60-61 disclose performing the probability calculation for each piece of radar target information.).
Kazuki does not teach wherein the processing circuitry specifies a running trajectory.
However, Koivisto teaches wherein the processing circuitry specifies a running trajectory (Para. 222 discloses identifying and determining paths and obstacles in front of the vehicle.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati and Koivisto to specify a path the vehicle is traveling and calculate match probabilities for objects along that path. Doing so would predictably save on computational resources such as CPU processing, memory, and energy costs by calculating only match probabilities relevant to the vehicle's path, as opposed to objects far behind the vehicle where it has already traveled.
Regarding claim 10, Kazuki in view of Nabati teaches The information processing system according to claim 1.
Kazuki does not teach wherein the processing circuitry specifies the tentative value corresponding to a type included in the imaging information.
However, Koivisto teaches wherein the processing circuitry specifies the tentative value corresponding to a type included in the imaging information. (Para. 153-154 discloses determining regions and shapes in the image and specifying the size of the shape based on object class.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati and Koivisto to specify a tentative value (size) of an object based on classification using image data. Doing so would predictably result in more accurate size representations in the image because image data contains rich information for classifying objects that radar/lidar data lacks.
Regarding claim 11, Kazuki in view of Nabati teaches The information processing system according to claim 1.
Kazuki does not teach wherein when two or more rangefinding targets out of the plurality of rangefinding targets are adjacent to each other, the processing circuitry specifies an aggregated rangefinding target obtained by aggregating the two or more rangefinding targets into one, and calculates the match probability between the imaged target and the aggregated rangefinding target.
However, Koivisto teaches wherein when two or more rangefinding targets out of the plurality of rangefinding targets are adjacent to each other, the processing circuitry specifies an aggregated rangefinding target obtained by aggregating the two or more rangefinding targets into one, and calculates the match probability between the imaged target and the aggregated rangefinding target (Para. 37 discloses detecting and combining multiple bounding boxes of the same object into an aggregated object and assigning a confidence value to the aggregated detection that indicates the object represented in the image data.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati and Koivisto to process the match probability of targets adjacent to one another as an aggregated target. Doing so would predictably result in a more robust and accurate matching because large objects sensed by a radar/lidar sensor can often return more than one detection. Additionally, camera sensors might detect and classify separate objects when in reality they are one object (e.g. a person on a bicycle).
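For illustration of aggregating adjacent rangefinding targets into one target before matching (cf. claim 11, e.g. several radar returns from one large object, or a person on a bicycle), a minimal greedy sketch reusing the center and math helpers above; the adjacency threshold and the union-box merge rule are illustrative assumptions:

def aggregate_adjacent(boxes, max_gap_px=20.0):
    # Merge each box into the first cluster whose center is close enough;
    # otherwise start a new cluster. Each cluster is kept as a union box.
    clusters = []
    for box in boxes:
        cx, cy = center(box)
        for i, c in enumerate(clusters):
            ccx, ccy = center(c)
            if math.hypot(cx - ccx, cy - ccy) <= max_gap_px:
                clusters[i] = (min(c[0], box[0]), min(c[1], box[1]),
                               max(c[2], box[2]), max(c[3], box[3]))
                break
        else:
            clusters.append(box)
    return clusters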
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kazuki et al. (JP 2014006123 A), hereinafter Kazuki, in view of Nabati et al., "Radar-Camera Sensor Fusion for Joint Object Detection and Distance Estimation in Autonomous Vehicles," arXiv.org, Cornell University Library, submitted 17 Sep 2020 [retrieved on 6/17/2025], <https://arxiv.org/abs/2009.08428>, hereinafter Nabati, and Sekiguchi (US 20040066285 A1), hereinafter Sekiguchi.
Regarding claim 9, Kazuki in view of Nabati teaches The information processing system according to claim 1.
In addition, Kazuki teaches and combines distance and direction of the rangefinding target and distance and direction of the combination target. (Para. 41 discloses generating one target information F with target information L and C with the sensor fusion unit 31.).
Kazuki does not teach wherein the processing circuitry determines a rangefinding target having the highest match probability out of the plurality of rangefinding targets, as a combination target.
However, Sekiguchi teaches wherein the processing circuitry determines a rangefinding target having the highest match probability out of the plurality of rangefinding targets, as a combination target, (Para. 26 discloses determining the object with the highest match probability when fusing camera and radar data.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Kazuki to incorporate the teachings of Nabati and Sekiguchi to determine the object with the highest match probability when fusing camera and radar data. Doing so would predictably increase accuracy of matching the detected objects from each sensor and prevent matching with multiple objects.
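For illustration of selecting the combination target as mapped from Sekiguchi (fusing the camera target with the single rangefinding target having the highest match probability), a minimal sketch reusing the iou helper; the IoU score again stands in for the references' actual probability:

def combination_target(radar_boxes, camera_box):
    # Pick exactly one radar target, preventing matches with multiple objects.
    scores = [iou(box, camera_box) for box in radar_boxes]
    best = max(range(len(scores)), key=scores.__getitem__)
    return radar_boxes[best], scores[best]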
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Izzat et al. (US 20170242117 A1) discloses a method and system to fuse radar or lidar data with camera data where the fields of view overlap.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER J VAUGHN whose telephone number is (571) 272-5253. The examiner can normally be reached M-F 8:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW MOYER can be reached on (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER JOSEPH VAUGHN/Examiner, Art Unit 2675
/EDWARD PARK/Primary Examiner, Art Unit 2675