DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/23/2025 has been entered.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Arguments
Applicant's arguments filed 12/23/2025 have been fully considered, but they are moot because they are directed to newly added limitations that have not been previously considered and that have necessitated new grounds of rejection, as outlined below.
Response to Amendment
Regarding the rejections under 35 U.S.C. § 103, the amendments made to the claims have necessitated new grounds of rejection, as outlined below.
Claim Objections
Claims 1, 15, and 21 are objected to because of the following informalities:
Claim 1, line 12, “wherein the controller is configured to control” should read “wherein the controller is further configured to control”
Claim 1, line 17, “wherein the controller is configured to:” should read “wherein the controller is further configured to:”
Claim 15, line 8, “and the object in front” should read “an object in front” in order to avoid a lack of antecedent basis
Claim 21, lines 7-8, “and the object in front” should read “an object in front” in order to avoid a lack of antecedent basis
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 7, 13, 15, 17, 19, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Hartung et al. (U.S. Patent Application Publication No. 2017/0139411 A1; hereinafter Hartung) in view of Timmons et al. (U.S. Patent Application Publication No. 2011/0190972 A1; hereinafter Timmons) and further in view of Karasudani (U.S. Patent No. 5,369,590) and Nehmadi et al. (U.S. Patent Application Publication No. 2022/0335729 A1; hereinafter Nehmadi).
Regarding claim 1, Hartung discloses:
A vehicle (autonomous vehicle 202, see at least [0037]) comprising:
a camera provided to obtain image data (front-facing camera for images, see at least [0050]-[0051]);
a radar provided to obtain radar data (radar to detect object, see at least [0051]);
a Lidar provided to obtain Lidar data (LiDAR for laser measured distances, see at least [0050]-[0051]); and
a controller (device includes processor system of one or more processors such as controllers, see at least [0183]) configured to process the image data, the radar data, and the Lidar data to generate a first sensor fusion (employ sensor fusion algorithm to improve accuracy of data combined from different components, the different components being the front-facing camera, lidar, and radar, see at least [0053] and [0070]),
wherein the controller calculates reliability of at least one sensor in which an event does not occur among a plurality of sensors comprising the camera, the radar and the Lidar when the event for the at least one sensor is detected (determine a component failure if the information from the components are not in agreement such as two sensors detecting an object but one does not, see at least [0053]), and changes from the first sensor fusion to a second sensor fusion (when a component is in a failed state, the information from the failed component is not considered, see at least [0053]-[0054]) based on the at least one sensor when the reliability is greater than or equal to a predetermined threshold (failure of component may be determined with some threshold amount of certainty and that outputs may be similar within some threshold tolerance, see at least [0096]), and
wherein the controller is configured to control a braking amount or a deceleration amount of the vehicle based on the first sensor fusion or the second sensor fusion (plan a route such as stopping based on the detected objects using the components that are not in a failed state, see at least [0054])
Hartung does not explicitly disclose:
sensor fusion track
wherein the event comprises a situation in which a preceding vehicle traveling in a field of front view of the vehicle disappears and an object in front of the preceding vehicle is detected
wherein the controller is configured to: determine whether a size of the object is larger than a predetermined size based on the Lidar data when generating the second sensor fusion track based on the Lidar sensor, and
reflect the object on the second sensor fusion track when the size of the object is larger than the predetermined size
However, Timmons teaches:
a controller configured to process the image data, the radar data, and the Lidar data to generate a first sensor fusion track (data from various input sensors is fused into a single useful track of an object, see at least [0063]; CPS monitors environment using radars, lidars, and cameras, see at least [0066])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung by adding the fused track taught by Timmons with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification “for a variety of purposes including adaptive cruise control, wherein the vehicle adjusts speed to maintain a minimum distance from vehicles in the current path” and “to identify a likely impending or imminent collision based upon the track motion relative to the vehicle” (see [0064]). Furthermore, “numerous data fusion methods are known in the art” (see [0063]).
Additionally, Karasudani teaches:
wherein the event comprises a situation in which a preceding vehicle traveling in a field of front view of the vehicle disappears and an object in front of the preceding vehicle is detected (preceding vehicle changes driving lane to another lane and a vehicle in front of the former preceding vehicle becomes a new preceding vehicle, see at least col. 1 line 60 – col. 2 line 9)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera and component failure information disclosed by Hartung and the fused track taught by Timmons by adding the new preceding vehicle taught by Karasudani with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification so that “the distance detection or the like is continuously performed” (see col. 4 lines 13-14).
Furthermore, Nehmadi teaches:
wherein the controller is configured to: determine whether a size of the object is larger than a predetermined size based on the Lidar data when generating the second sensor fusion track based on the Lidar sensor (detect objects that meet certain criteria such as object having height greater than a predetermined minimum height, see at least [0078]; sensors include Lidar, see at least [0072]), and
reflect the object on the second sensor fusion track when the size of the object is larger than the predetermined size (detect objects that meet certain criteria such as object having height greater than a predetermined minimum height, see at least [0078])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera and component failure information disclosed by Hartung, the fused track taught by Timmons, and the new preceding vehicle taught by Karasudani by adding the object meeting criteria taught by Nehmadi with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification such that “detection threshold on the height map identifies objects that may affect the driving path” (see [0078]).
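By way of illustration only, the fallback logic as mapped to claim 1 above may be sketched as follows. All identifiers, data structures, and threshold values in this sketch are hypothetical assumptions for illustration and are not drawn from Hartung, Timmons, Karasudani, Nehmadi, or the claims as filed:

from dataclasses import dataclass, field

RELIABILITY_THRESHOLD = 0.9  # hypothetical "predetermined threshold"
MIN_OBJECT_SIZE = 0.5        # hypothetical "predetermined size" (meters)

@dataclass
class Sensor:
    name: str                 # "camera", "radar", or "lidar"
    event_detected: bool      # whether an event occurred for this sensor
    reliability: float        # calculated reliability in [0, 1]
    detections: list = field(default_factory=list)

def generate_fusion_track(sensors, lidar_objects):
    """Return (track, is_second_track) following the mapped claim 1 logic."""
    healthy = [s for s in sensors if not s.event_detected]
    first_track = [d for s in sensors for d in s.detections]
    if len(healthy) == len(sensors) or not healthy:
        # Keep the first fusion track (no event, or no healthy sensor remains).
        return first_track, False
    # Reliability of the at least one sensor in which the event did not occur.
    if min(s.reliability for s in healthy) < RELIABILITY_THRESHOLD:
        return first_track, False  # reliability below the predetermined threshold
    # Second sensor fusion track generated from the healthy sensors only.
    track = [d for s in healthy for d in s.detections]
    # When the second track is generated based on the Lidar, reflect an object
    # on the track only if its size exceeds the predetermined size.
    if any(s.name == "lidar" for s in healthy):
        track += [obj for obj in lidar_objects if obj["size"] > MIN_OBJECT_SIZE]
    return track, True

The braking amount or deceleration amount of the vehicle would then be controlled based on whichever track the function returns.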
Regarding claim 5, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above and Hartung teaches:
the controller detects the event for the radar and generates the second sensor fusion track based on the image data and the Lidar data (if radar has failed, stop considering information from radar and only use information from the camera and lidar, see at least [0054]).
Regarding claim 7, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above and Hartung teaches:
the controller detects the event for the Lidar and generates the second sensor fusion track based on the image data and the radar data (camera, lidar, and radar are used for object detection and the component in a failed state is not used, see at least [0053]-[0054]) *Examiner sets forth that although the example given by Hartung describes using the lidar and camera when the radar has failed, the same logic would apply when the lidar has failed, in which case the camera and radar would be used
Regarding claim 13, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above and Hartung further discloses:
the controller performs avoidance control for the object based on the second sensor fusion track (when radar has failed use camera and lidar to indicate presence of object and when object is present, vehicle control plans a route based on the detected object such as to stop and engage brakes, see at least [0054])
Regarding claim 15, Hartung discloses:
A control method of a vehicle (autonomous vehicle 202, see at least [0037]) which comprises a camera provided to obtain image data (front-facing camera for images, see at least [0050]-[0051]), a radar provided to obtain radar data (radar to detect object, see at least [0051]), and a Lidar provided to obtain Lidar data (LiDAR for laser measured distances, see at least [0050]-[0051]), the control method comprising:
processing, by a controller (device includes processor system of one or more processors such as controllers, see at least [0183]), the image data, the radar data, and the Lidar data to generate a first sensor fusion (employ sensor fusion algorithm to improve accuracy of data combined from different components, the different components being the front-facing camera, lidar, and radar, see at least [0053] and [0070]);
detecting, by the controller, an event for at least one of a plurality of sensors comprising the camera, the radar and the Lidar (determine a component failure if the information from the components are not in agreement such as two sensors detecting an object but one does not, see at least [0053]);
calculating, by the controller, reliability of at least one sensor among the plurality of sensors in which the event does not occur (failure of component may be determined with some threshold amount of certainty, see at least [0096]); and
changing, by the controller, from the first sensor fusion to the second sensor fusion (when a component is in a failed state, the information from the failed component is not considered, see at least [0053]-[0054]) based on the at least one sensor when the reliability is greater than or equal to a predetermined threshold (failure of component may be determined with some threshold amount of certainty and that outputs may be similar within some threshold tolerance, see at least [0096]); and
controlling, by the controller, a braking amount or a deceleration amount of the vehicle based on the first sensor fusion or the second sensor fusion (plan a route such as stopping based on the detected objects using the components that are not in a failed state, see at least [0054])
Hartung does not explicitly disclose:
sensor fusion track
wherein the event comprises a situation in which a preceding vehicle traveling in a field of front view of the vehicle disappears and the object in front of the preceding vehicle is detected
determining, by the controller, whether a size of the object is larger than a predetermined size based on the Lidar data when generating a second sensor fusion track based on the Lidar sensor;
reflecting, by the controller, the object on the second sensor fusion track when the size of the object is larger than the predetermined size
However, Timmons teaches:
a controller configured to process the image data, the radar data, and the Lidar data to generate a first sensor fusion track (data from various input sensors is fused into a single useful track of an object, see at least [0063]; CPS monitors environment using radars, lidars, and cameras, see at least [0066])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung by adding the fused track taught by Timmons with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification “for a variety of purposes including adaptive cruise control, wherein the vehicle adjusts speed to maintain a minimum distance from vehicles in the current path” and “to identify a likely impending or imminent collision based upon the track motion relative to the vehicle” (see [0064]). Furthermore, “numerous data fusion methods are known in the art” (see [0063]).
Additionally, Karasudani teaches:
wherein the event comprises a situation in which a preceding vehicle traveling in a field of front view of the vehicle disappears and the object in front of the preceding vehicle is detected (preceding vehicle changes driving lane to another lane and a vehicle in front of the former preceding vehicle becomes a new preceding vehicle, see at least col. 1 line 60 – col. 2 line 9)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera and component failure information disclosed by Hartung and the fused track taught by Timmons by adding the new preceding vehicle taught by Karasudani with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification so that “the distance detection or the like is continuously performed” (see col. 4 lines 13-14).
Furthermore, Nehmadi teaches:
determining, by the controller, whether a size of the object is larger than a predetermined size based on the Lidar data when generating a second sensor fusion track based on the Lidar sensor (detect objects that meet certain criteria such as object having height greater than a predetermined minimum height, see at least [0078]; sensors include Lidar, see at least [0072]), and
reflecting, by the controller, the object on the second sensor fusion track when the size of the object is larger than the predetermined size (detect objects that meet certain criteria such as object having height greater than a predetermined minimum height, see at least [0078])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera and component failure information disclosed by Hartung, the fused track taught by Timmons, and the new preceding vehicle taught by Karasudani by adding the object meeting criteria taught by Nehmadi with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification such that “detection threshold on the height map identifies objects that may affect the driving path” (see [0078]).
Regarding claim 17, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above and Hartung teaches:
changing from the first sensor fusion track to the second sensor fusion track comprises detecting the event for the camera and generating the second sensor fusion track based on the radar data and the Lidar data (camera, lidar, and radar are used for object detection and the component in a failed state is not used, see at least [0053]-[0054]) *Examiner sets forth that although the example given by Hartung describes using the lidar and camera when the radar has failed, the same logic would apply when the camera has failed, in which case the lidar and radar would be used
Regarding claim 19, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above and Hartung teaches:
changing from the first sensor fusion track to the second sensor fusion track comprises detecting the event for the radar and generating the second sensor fusion track based on the image data and the Lidar data (if radar has failed, stop considering information from radar and only use information from the camera and lidar, see at least [0054])
Regarding claim 21, Hartung discloses:
A non-transitory computer readable medium containing program instructions executed by a processor (device includes computer-readable storage memory accessed by computing device and executable instructions, see at least [0185]), the computer readable medium comprising:
program instructions that process image data, radar data, and Lidar data to generate a first sensor fusion (employ sensor fusion algorithm to improve accuracy of data combined from different components, the different components being the front-facing camera, lidar, and radar, see at least [0053] and [0070]);
program instructions that detect an event for at least one of a plurality of sensors comprising a camera, a radar and a Lidar (determine a component failure if the information from the components are not in agreement such as two sensors detecting an object but one does not, see at least [0053]);
program instructions that calculate reliability of at least one sensor among the plurality of sensors in which the event does not occur (failure of component may be determined with some threshold amount of certainty, see at least [0096]);
program instructions that change from the first sensor fusion to a second sensor fusion (when a component is in a failed state, the information from the failed component is not considered, see at least [0053]-[0054]) based on the at least one sensor when the reliability is greater than or equal to a predetermined threshold (failure of component may be determined with some threshold amount of certainty and that outputs may be similar within some threshold tolerance, see at least [0096]); and
program instructions that control a braking amount or a deceleration amount of the vehicle based on the first sensor fusion or the second sensor fusion (plan a route such as stopping based on the detected objects using the components that are not in a failed state, see at least [0054]).
Hartung does not explicitly disclose:
sensor fusion track
wherein the event comprises a situation in which a preceding vehicle traveling in a field of front view of the vehicle disappears and the object in front of the preceding vehicle is detected;
program instructions that determine whether a size of the object is larger than a predetermined size based on the Lidar data when generating a second sensor fusion track based on the Lidar sensor;
program instructions that reflect the object on the second sensor fusion track when the size of the object is larger than the predetermined size
However, Timmons teaches:
process the image data, the radar data, and the Lidar data to generate a first sensor fusion track (data from various input sensors is fused into a single useful track of an object, see at least [0063]; CPS monitors environment using radars, lidars, and cameras, see at least [0066])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung by adding the fused track taught by Timmons with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification “for a variety of purposes including adaptive cruise control, wherein the vehicle adjusts speed to maintain a minimum distance from vehicles in the current path” and “to identify a likely impending or imminent collision based upon the track motion relative to the vehicle” (see [0064]). Furthermore, “numerous data fusion methods are known in the art” (see [0063]).
Additionally, Karasudani teaches:
wherein the event comprises a situation in which a preceding vehicle traveling in a field of front view of the vehicle disappears and the object in front of the preceding vehicle is detected (preceding vehicle changes driving lane to another lane and a vehicle in front of the former preceding vehicle becomes a new preceding vehicle, see at least col. 1 line 60 – col. 2 line 9)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera and component failure information disclosed by Hartung and the fused track taught by Timmons by adding the new preceding vehicle taught by Karasudani with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification so that “the distance detection or the like is continuously performed” (see col. 4 lines 13-14).
Furthermore, Nehmadi teaches:
program instructions that determine whether a size of the object is larger than a predetermined size based on the Lidar data when generating a second sensor fusion track based on the Lidar sensor (detect objects that meet certain criteria such as object having height greater than a predetermined minimum height, see at least [0078]; sensors include Lidar, see at least [0072]), and
program instructions that reflect the object on the second sensor fusion track when the size of the object is larger than the predetermined size (detect objects that meet certain criteria such as object having height greater than a predetermined minimum height, see at least [0078])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera and component failure information disclosed by Hartung, the fused track taught by Timmons, and the new preceding vehicle taught by Karasudani by adding the object meeting criteria taught by Nehmadi with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification such that “detection threshold on the height map identifies objects that may affect the driving path” (see [0078]).
Regarding claim 22, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above and Hartung teaches:
the controller detects the event for the camera and generates the second sensor fusion track based on the radar data and the lidar data (camera, lidar, and radar are used for object detection and the component in a failed state is not used, see at least [0053]-[0054]) *Examiner sets forth that although the example given by Hartung describes using the lidar and camera when the radar has failed, the same logic would apply when the camera has failed, in which case the lidar and radar would be used
Claims 2 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Hartung in view of Timmons, Karasudani, and Nehmadi as applied to claims 1 and 15 above and further in view of Pasch et al. (U.S. Patent Application Publication No. 2022/0308577 A1; hereinafter Pasch).
Regarding claim 2, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above but does not teach:
the controller limits at least one of the braking amount or the deceleration amount of the vehicle to a predetermined ratio when the second sensor fusion track is generated
However, Pasch teaches:
the controller limits at least one of the braking amount or the deceleration amount of the vehicle to a predetermined ratio when the second sensor fusion track is generated (in a degraded mode, the automated driving system may use remaining sensors of the vehicle but with limited execution ranges such as reduced maximum speed and reduced deceleration, see at least [0053]).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the limited execution range in a degraded mode taught by Pasch with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification “in order to remain safe” in the event of a failure (see [0053]).
Regarding claim 16, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above but does not teach:
limiting at least one of the braking amount or the deceleration amount of the vehicle to a predetermined ratio when the second sensor fusion track is generated.
However, Pasch teaches:
limiting at least one of the braking amount or the deceleration amount of the vehicle to a predetermined ratio when the second sensor fusion track is generated (in a degraded mode, the automated driving system may use remaining sensors of the vehicle but with limited execution ranges such as reduced maximum speed and reduced deceleration, see at least [0053]).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the limited execution range in a degraded mode taught by Pasch with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification “in order to remain safe” in the event of a failure (see [0053]).
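By way of illustration only, the degraded-mode limit as mapped to Pasch above may be sketched as follows; the ratio value and function name are hypothetical assumptions, not drawn from Pasch:

DEGRADED_RATIO = 0.6  # hypothetical "predetermined ratio"

def limit_deceleration(requested_decel, max_decel, second_track_active):
    """Cap the deceleration command while the second sensor fusion track is in use."""
    ceiling = max_decel * (DEGRADED_RATIO if second_track_active else 1.0)
    return min(requested_decel, ceiling)

For example, with a maximum deceleration of 8.0 m/s² and a requested 7.0 m/s², the command would be capped at 4.8 m/s² while operating on the second track.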
Claims 6, 8, 18, 20, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Hartung in view of Timmons, Karasudani, and Nehmadi as applied to claims 1, 15, and 21 above and further in view of Nishida et al. (U.S. Patent Application Publication No. 2020/0207362 A1; hereinafter Nishida).
Regarding claim 6, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above but does not teach:
the controller occurs the event based on a connection state between the radar and the controller
However, Nishida teaches:
the controller occurs the event based on a connection state between the radar and the controller (radar may produce false positives due to interference with other millimeter-wave radars, resulting in a low recognition accuracy rank “C”, see at least [0032]-[0033])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the sensor environment performance taught by Nishida with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification to determine or detect “a failure in each of the external sensors based on the failure likelihood of each of the plurality of external sensors” (see [0008]) which makes it possible “to extend the cruisable distance of automatic driving without driver intervention” (see [0010]).
Regarding claim 8, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above but does not teach:
the controller occurs the event based on a connection state between the radar and the controller.
However, Nishida teaches:
the controller occurs the event based on a connection state between the radar and the controller (radar may produce false positives due to interference with other millimeter-wave radars, resulting in a low recognition accuracy rank “C”, see at least [0032]-[0033])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the sensor environment performance taught by Nishida with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification to determine or detect “a failure in each of the external sensors based on the failure likelihood of each of the plurality of external sensors” (see [0008]) which makes it possible “to extend the cruisable distance of automatic driving without driver intervention” (see [0010]).
Regarding claim 18, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above but does not teach:
detecting the event for the camera comprises generating an event for the camera based on illuminance or external weather conditions.
However, Nishida teaches:
detecting the event for the camera comprises generating an event for the camera based on illuminance or external weather conditions (camera has unfavorable conditions which cause recognition accuracy to drop, such as intense glare or fog, see at least [0032]-[0033] and Fig. 3)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the sensor environment performance taught by Nishida with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification to determine or detect “a failure in each of the external sensors based on the failure likelihood of each of the plurality of external sensors” (see [0008]) which makes it possible “to extend the cruisable distance of automatic driving without driver intervention” (see [0010]).
Regarding claim 20, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above but does not teach:
detecting the event for the radar comprises detecting the event based on a connection state between the radar and a controller
However, Nishida teaches:
detecting the event for the radar comprises detecting the event based on a connection state between the radar and a controller (radar may produce false positives due to interference with other millimeter-wave radars causing a “C”, see at least [0032]-[0033])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the sensor environment performance taught by Nishida with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification to determine or detect “a failure in each of the external sensors based on the failure likelihood of each of the plurality of external sensors” (see [0008]) which makes it possible “to extend the cruisable distance of automatic driving without driver intervention” (see [0010]).
Regarding claim 23, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above but does not teach:
the controller generates the event for the camera based on illuminance or external weather conditions
However, Nishida teaches:
the controller generates the event for the camera based on illuminance or external weather conditions (camera has unfavorable conditions which cause recognition accuracy to drop, such as intense glare or fog, see at least [0032]-[0033] and Fig. 3)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the sensor environment performance taught by Nishida with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification to determine or detect “a failure in each of the external sensors based on the failure likelihood of each of the plurality of external sensors” (see [0008]) which makes it possible “to extend the cruisable distance of automatic driving without driver intervention” (see [0010]).
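By way of illustration only, the environment-based event generation as mapped to Nishida above may be sketched as follows; the illuminance floor and the set of degrading weather conditions are hypothetical assumptions, not drawn from Nishida:

ILLUMINANCE_FLOOR_LUX = 10.0               # hypothetical minimum usable illuminance
DEGRADING_WEATHER = {"fog", "heavy_rain"}  # hypothetical degrading conditions

def camera_event(illuminance_lux, weather):
    """Generate the event for the camera from illuminance or external weather."""
    return illuminance_lux < ILLUMINANCE_FLOOR_LUX or weather in DEGRADING_WEATHER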
Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Hartung in view of Timmons, Karasudani, and Nehmadi as applied to claim 1 above and further in view of Yoon et al. (U.S. Patent Application Publication No. 2021/0362733 A1; hereinafter Yoon).
Regarding claim 9, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above but does not teach:
the controller detects an event for the camera and the radar, and generates the second sensor fusion track based on the Lidar data
However, Yoon teaches:
the controller detects an event for the camera and the radar, and generates the second sensor fusion track based on the Lidar data (use only lidar when camera and radar fail, see at least [0158] and Fig. 8b)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the use of lidar in event of camera and radar failure taught by Yoon with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification to allow a vehicle to travel because it still “is capable of determining the presence or absence of neighboring objects” (see [0158]).
Regarding claim 10, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above but does not teach:
the controller detects an event for the radar and the Lidar, and generates the second sensor fusion track based on the image data
However, Yoon teaches:
the controller detects an event for the radar and the Lidar, and generates the second sensor fusion track based on the image data (use camera when radar and lidar fail, see at least [0156] and Fig. 8b)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the use of camera in event of radar and lidar failure taught by Yoon with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification to allow a vehicle to travel because it still “is capable of determining the presence or absence of neighboring objects” (see [0158]).
Regarding claim 11, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above but does not teach:
the controller detects an event for the camera and the Lidar, and generates the second sensor fusion track based on the radar data.
However, Yoon teaches:
the controller detects an event for the camera and the Lidar, and generates the second sensor fusion track based on the radar data (use only radar when camera and lidar fail, see at least [0158] and Fig. 8b)
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the use of radar in event of camera and lidar failure taught by Yoon with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification to allow a vehicle to travel because it still “is capable of determining the presence or absence of neighboring objects” (see [0158]).
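By way of illustration only, the two-sensor-failure fallback as mapped to Yoon above may be sketched as follows; the function name and the event representation are hypothetical assumptions, not drawn from Yoon:

def select_fallback_source(events):
    """Return the sole remaining sensor when events are detected for the other two."""
    remaining = [s for s in ("camera", "radar", "lidar") if not events.get(s, False)]
    return remaining[0] if len(remaining) == 1 else None

For instance, select_fallback_source({"camera": True, "radar": True, "lidar": False}) returns "lidar", corresponding to claim 9; claims 10 and 11 follow by symmetry.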
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Hartung in view of Timmons, Karasudani, and Nehmadi as applied to claim 1 above and further in view of Church et al. (U.S. Patent Application Publication No. 2019/0135216 A1; hereinafter Church).
Regarding claim 12, the combination of Hartung, Timmons, Karasudani, and Nehmadi teaches the elements above and Hartung further discloses:
the controller obtains an object in front of the vehicle based on at least one of the image data and the Lidar data when the camera or the Lidar is included in the at least one sensor in which the event does not occur (when radar has failed, use camera and lidar to indicate an object is present, see at least [0054])
Hartung, Timmons, Karasudani, and Nehmadi do not teach:
the controller obtains the size of the object in front of the vehicle
limits at least one of a braking amount and a deceleration amount of the vehicle to a predetermined ratio when the size of the object is greater than or equal to the predetermined size
However, Church teaches:
the controller obtains the size of the object (detect objects using cameras, see at least [0014])
limits at least one of a braking amount and a deceleration amount of the vehicle to a predetermined ratio when the size of the object is greater than or equal to the predetermined size (detect a large object greater than a threshold size and response to the detection of large object, control the vehicle such as braking of the vehicle, see at least [0014])
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the sensor fusion of components such as lidar, radar, and camera disclosed by Hartung, the fused track taught by Timmons, the new preceding vehicle taught by Karasudani, and the object meeting criteria taught by Nehmadi by adding the large object detection taught by Church with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification “to avoid or mitigate impact with the detected object” (see [0014]).
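By way of illustration only, the size-conditioned limit as mapped to Church above may be sketched as follows; the size threshold and limiting ratio are hypothetical assumptions, not drawn from Church:

PREDETERMINED_SIZE = 2.0   # hypothetical size threshold (meters)
PREDETERMINED_RATIO = 0.7  # hypothetical limiting ratio

def limit_braking_for_object(object_size, requested_braking, max_braking):
    """Limit the braking amount when the object meets or exceeds the predetermined size."""
    if object_size >= PREDETERMINED_SIZE:
        return min(requested_braking, max_braking * PREDETERMINED_RATIO)
    return requested_braking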
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Shokonji (U.S. Patent Application Publication No. 2023/0230368 A1) teaches determining whether an object region exists higher than a predetermined height and extracting the sensor data corresponding to the object region as an extraction target.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HANA LEE whose telephone number is (571)272-5277. The examiner can normally be reached Monday-Friday: 7:30AM-4:30PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jelani Smith, can be reached at (571) 270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.L./Examiner, Art Unit 3662
/DALE W HILGENDORF/Primary Examiner, Art Unit 3662