Prosecution Insights
Last updated: April 19, 2026
Application No. 18/137,073

Multi-Sensor Advanced Driver Assistance System and Method for Generating a Conditional Stationary Object Alert

Final Rejection — §102, §103, §112
Filed: Apr 20, 2023
Examiner: ROBERT, DANIEL M
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Bendix Commercial Vehicle Systems LLC
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 79% (188 granted / 239 resolved; +26.7% vs TC avg), above average
Interview Lift: +10.2% among resolved cases with interview (moderate, roughly +10%)
Typical Timeline: 2y 7m average prosecution; 35 applications currently pending
Career History: 274 total applications across all art units

Statute-Specific Performance

§101: 1.6% (-38.4% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 25.0% (-15.0% vs TC avg)
§112: 29.3% (-10.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 239 resolved cases.

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The amendment filed August 15, 2025 has been entered. Claims 1, 9, 11, 14, 17, and 19 have been amended. Claims 2, 12, 13, and 18 are presently canceled. Claims 21-24 are new. The remaining claims are in original or previously presented form. Therefore, claims 1, 3-11, 14-17, and 19-24 are pending in the application. Claims 1, 11, and 17 are the independent claims.

The Remarks filed August 15, 2025 have been fully considered. The applicant states that during the interview on August 12, 2025 the examiner indicated that the proposed amendments overcame the rejection made in the last detailed action, which was the Non-Final Rejection dated April 29, 2025.

The examiner notes that the present published disclosure, Miller et al. (US2024/0351578 A1), teaches in Fig. 2, step 210, a system that can detect a stationary object in the path of the host vehicle using a first sensor. This first sensor can be radar, according to paragraph 0013. The stationary object could be road debris, for example, or it could be another vehicle. The system then moves to step 220, in which the system tries to determine whether the stationary object is a vehicle. This requires a second sensor. This second sensor could be a camera, according to paragraph 0014. If the object is a vehicle, the host vehicle will automatically react, such as by automatically braking or steering to avoid a collision, according to paragraph 0015. But if the object is not another vehicle, "it is up the driver to assess the stationary object and react accordingly (although in other embodiments, an action can be taken)." That is a quote from paragraph 0016.

In other words, at least in the disclosure, if the second sensor (the camera) is working and classifies the detected object as not being a vehicle, automatic collision avoidance will not be executed, at least in one embodiment. Yet the phrase in parentheses in paragraph 0016 is important, because it seems somewhat odd that a system would automatically brake to avoid a collision with another vehicle but not do so to avoid a collision with road debris. If a rock were in the road, would the vehicle not want to automatically avoid that, too? But the disclosure answers this by stating that it is possible that the stationary object is just an overpass and not a collision hazard, and that while an alert might help a driver maintain awareness, too many alerts can annoy the driver. So the system really tries to verify whether the object is another vehicle or not, and only provides an alert when the camera (second sensor) is broken.

In what cases would the system not be able to verify what type of object the stationary object is? When the second sensor is broken, such classification cannot occur. Thus, as noted in paragraph 0017, "the second sensor 102 plays an important role". Paragraph 0018 can be reasonably interpreted as teaching that, if the camera sensor breaks (Yes out of Fig. 3, block 312), the system just gives an alert to the driver about an object (314) and does not further generate an avoidance action using the brakes or steering. But if the camera is working (No out of 312) and the camera classifies the object as another vehicle (Yes out of 320), then the system generates an avoidance action (330). This is a good guide for a broad reasonable interpretation of claim 1. Is such a teaching in the prior art?
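Read as control flow, the paraphrase of Fig. 3 above reduces to a three-way branch. The following minimal Python sketch restates that paraphrase; every identifier is an illustrative assumption and none is drawn from the disclosure itself:

```python
# Minimal sketch of the Fig. 3 flow as paraphrased above (blocks 312/314/320/330/340).
# All names are illustrative assumptions, not identifiers from the disclosure.

def handle_stationary_object(camera_in_error_state: bool,
                             classified_as_vehicle: bool) -> str:
    """Return the system's response to a detected stationary object."""
    if camera_in_error_state:          # Yes out of block 312 (classification is moot)
        return "alert_driver"          # block 314: alert only, no braking or steering
    if classified_as_vehicle:          # No out of 312, then Yes out of 320
        return "collision_avoidance"   # block 330: automatic braking or steering
    return "no_action"                 # block 340: left to the driver

# Each input maps to exactly one outcome, so the alert and the
# avoidance action never co-occur.
assert handle_stationary_object(True, True) == "alert_driver"
assert handle_stationary_object(False, True) == "collision_avoidance"
assert handle_stationary_object(False, False) == "no_action"
```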
It seems to the examiner that Komatsu in view of Igarashi does not explicitly teach a system in which: when a radar and camera are working, no alert is given when a vehicle is detected and instead an avoidance action to avoid the vehicle is executed; but when the radar is working and the camera is not working, an alert about an object is given but no avoidance maneuver is executed. That is a reasonable interpretation of claim 1. Komatsu does not teach this because Komatsu can be interpreted to display an alert either way, as in Fig. 10 and paragraph 0152. (Note that detecting the vehicle in this paragraph can really mean detecting an object, since the system is not able to classify the object. It might turn out that the object is a vehicle, but that does not have to be the case. The example in the paragraph is just that, an example.)

What about Igarashi (US2022/0020272 A1), cited in the last detailed action? In Igarashi, Fig. 8, S34, the camera of the vehicle is not working (No out of S12) yet the radar of the vehicle is working (Yes out of S31). In that case, when the radar detects an object in front of the host vehicle, an alert is generated for the driver in S34. The alert in S34 is a "collision is not determinable" alert, and the other alert is a "possibility of collision" alert in S17. See paragraph 0187 for this alert in S34 being generated when "the object is not successfully recognized". The system identifies an object, the camera is not working properly, and therefore an alert is generated. Then the system moves to S19, which is to terminate the process. Thus, no avoidance action is taken. The alert and the avoidance action are mutually exclusive. See also paragraph 0228 for outputting information regarding the indeterminate object ahead. See paragraph 0079 for the output section 106 outputting visual or audio information to the person on board the host vehicle. The output could be a heads-up display. See paragraph 0046 for the vehicle which sometimes "has no option but to notify a driver that there exists a certain object, although what kind of object the certain object is, is not determinable." This is the case of S34. These citations teach that in S34 the alert is an alert to the driver about a stationary object when the camera system is not working.

In Igarashi, Fig. 8, S17, the system generates a notification "that there is a possibility of collision". Then in S18 an avoidance maneuver is executed. What kind of notification is S17 according to the disclosure of Igarashi? Paragraph 0152 teaches that in S17 the system "notifies the emergency event avoiding section 171 of the movement controller 135 that there is a possibility of the collision." Paragraph 0153 teaches that "In Step S18, the emergency event avoiding section 171 controls the acceleration/deceleration controller 172 and the direction controller 173 on the basis of the notification". In other words, the notification in S17 is not a visual notification to the driver of the vehicle. Rather, it is an internal notification to the controller of the vehicle. Nothing as in paragraph 0046 about notifying the driver is taught. Rather, in the disclosure of Igarashi, there is also the teaching that the system can "notify them [the controllers] of a state on controlling the drivetrain system."

Therefore, Igarashi teaches that when a radar is working but a camera is not working, and an object exists in front of the host vehicle, an alert is issued about the object to the driver; but when both the radar and camera are working, and there is an object in front of the host vehicle, no alert is issued about the object to the driver, and an avoidance maneuver is executed. It seems to the examiner that this meets the limitations of claim 1 as amended.
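The distinction drawn above, a driver-facing alert in S34 versus an internal controller notification in S17, can be sketched the same way. In this hedged Python illustration the step numbers are Igarashi's, but every identifier and data structure is an assumption:

```python
# Sketch of Igarashi's Fig. 8 flow as read above (S12/S31/S34 vs. S13/S17/S18).
# Identifiers are illustrative assumptions, not names from Igarashi.
from dataclasses import dataclass, field

@dataclass
class HostVehicle:
    driver_messages: list = field(default_factory=list)      # e.g., HUD output (para. 0079)
    controller_commands: list = field(default_factory=list)  # internal notifications (para. 0152)

def igarashi_cycle(v: HostVehicle, camera_ok: bool, radar_ok: bool,
                   object_ahead: bool, is_vehicle: bool) -> None:
    if not object_ahead:
        return                                   # S19: terminate
    if not camera_ok and radar_ok:
        # No out of S12, Yes out of S31 -> S34: driver-facing alert, then terminate.
        v.driver_messages.append("collision is not determinable")
        return                                   # S19: no avoidance action follows S34
    if camera_ok and is_vehicle:
        # S13 classifies the object; S17 is an internal notification to the
        # movement controller, not a message to the driver.
        v.controller_commands.append("possibility of collision")    # S17
        v.controller_commands.append("execute avoidance maneuver")  # S18

v = HostVehicle()
igarashi_cycle(v, camera_ok=False, radar_ok=True, object_ahead=True, is_vehicle=False)
assert v.driver_messages and not v.controller_commands    # alert only, no avoidance

v = HostVehicle()
igarashi_cycle(v, camera_ok=True, radar_ok=True, object_ahead=True, is_vehicle=True)
assert v.controller_commands and not v.driver_messages    # avoidance only, no driver alert
```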
But even if Igarashi did teach an alert to a driver when both the radar and camera were working, that might not preclude an obviousness rejection. Consider Deng et al. (U.S. 11,113,584 B2). Deng teaches in col. 6, lines 20-34 a system in which radar 116B and camera 116A are used to detect an object in front of the host vehicle 100. Then, "when the vehicle 100 is driving autonomously (e.g., Level 3, Level 4, or Level 5) and detects other vehicles stopped in a travel path, the sensor detection information may be sent to the vehicle control system of the vehicle 100 to control a driving operation (e.g., braking, decelerating, etc.) associated with the vehicle 100 (in this example, slowing the vehicle 100 as to avoid colliding with the stopped other vehicles)." No alert or notification is made. This seems quite like the present disclosure, at least in this teaching. See Deng, Fig. 11 for using lidar and radar in 1104 to classify an object in 1120. See Deng, col. 17, lines 25-30 for a display such as a heads-up display (HUD).

Yet Deng does not consider what happens when the camera malfunctions. Could the teachings of Igarashi be added here? In other words, could the teachings of Deng be used when the system is working fine, as in Deng, and then Igarashi used to teach that an alert is generated when the camera sensor is not working? It seems that this would make sense, in part because Igarashi teaches that an alert is given when an avoidance action is going to be made, and a faulty-sensor alert is given when there is a faulty camera sensor. But if other art teaches that an alert about an avoidance action is not given when an avoidance action is going to be made, isn't that readily combinable? Both systems are very close.

Would such a combination have to rely on hindsight? The examiner does not think so. Prior art often seeks to determine what to do in the event of a sensor failure, Igarashi included. Deng merely teaches that when sensors are working fine, no alert is necessary. Another way to look at it is that avoidance actions are common in the vehicle control art. Sometimes alerts are given, as in Igarashi, but sometimes not, as in Deng. When the radar and camera sensors are working fine, Deng and Igarashi do the same thing, except that Deng does not provide a notification. Why remove the notification using Deng? Another way to ask that is: why add the notification in Igarashi? The motivation to do this does not have to be suggested by the prior art. In the KSR case, Justice Kennedy's opinion stated, "A person of ordinary skill is also a person of ordinary creativity, not an automaton." The court also found in that case that "Common sense teaches, however, that familiar items may have obvious uses beyond their primary purposes, and in many cases a person of ordinary skill will be able to fit the teachings of multiple patents together like pieces of a puzzle" (KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007)).

Fitting Deng in would be a matter of simple substitution: simply substitute the no-alert-yet-avoidance-action behavior of Deng for the alert-and-avoidance-action behavior of Igarashi. Would such a small difference not have been obvious? The examiner thinks that it would have been. This conclusion is strengthened by the fact that many other pieces of prior art teach systems that only provide a notification in case of a camera fault, but do not provide one in the case of no fault, including when an avoidance action is to be taken. For example, Lei et al. (US2020/0160626 A1). See paragraph 0054 and Fig. 4, block 475, for generating a "notification to the vehicle" if the camera 325 is "faulty." Note that according to Fig. 3, the host vehicle has environmental sensors 320 and camera 325. See paragraph 0035 for the system having radar and/or lidar as part of its sensors 320. The notification to the vehicle, according to paragraphs 0054 and 0056, "notifies the driver/occupant of the vehicle 120A" using a display or audible alerts. This can be similar to a "'check engine' light," according to the paragraph. See Lei Fig. 1 and paragraph 0017 for the general context of the invention and components of the host vehicle. The camera "may have difficulty recognizing the object," which could be a pot hole. Lei states in paragraph 0002 that a working vehicle system is necessary for "driver-assist technology," such as a vehicle with "autonomous driving."

Lei does not explicitly teach that an alert is not given when a sensor is working fine, but Lei also does not teach that an alert is given when a sensor is working fine. A reasonable interpretation of Lei is that when the camera sensor is faulty, an alert is given, yet when the sensors of the host vehicle are working fine, no alert is given. Furthermore, since the host vehicle is designed for autonomous driving, the system would generate an avoidance action when detecting a vehicle in front of it. In summary, a broad reasonable interpretation of Lei is that when there is a camera fault, an alert is generated; when the camera is working fine, the autonomous driving system works as normal and no alert is generated. It seems to the examiner that one could add Deng, col. 6, lines 20-34, to Lei and let Deng teach that an autonomous driving system works as normal when the camera and radar are working. Deng does not teach that an alert is generated in that case. Yet such additions of Deng and Lei are not necessary, since, as just described, Igarashi teaches claim 1.

Due to the applicant's amendments, the grounds for rejection have changed. Please see the rejections below.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Claims 17 and 19-22 in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim 17 recites in part "means for:". In some cases the claim goes on to state what the means are, such as "using the first sensor". But in other parts of the claim, a controller is implied but not stated explicitly. This is the case where the claim recites a "determining" or a "response". Because Fig. 1 of the present disclosure uses a processor and memory to execute the determining and responding, as shown in Fig. 2, the disclosure has sufficient structure and material to execute the claimed functions. Therefore, for the purposes of examination, the means will be interpreted as the processors and memory, items 103 and 104.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 3-11, 14-17, and 19-24 are NOT rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The examiner will explain why here.

Claim 1 recites in part: "wherein: the stationary object alert is generated only in response to the second forward-facing sensor of the vehicle being in the error state; and generating the stationary object alert and performing the collision avoidance action are mutually exclusive." In a broad reasonable interpretation, this clause means that when a collision avoidance action is performed, no stationary object alert is generated; and when a stationary object alert is generated, no collision avoidance action is performed. The alert and the action are therefore "mutually exclusive," which is a phrase added on amendment and not part of the original disclosure.

Yet the specification teaches that "In one embodiment, the stationary object alert is generated only in response to determining that the second sensor 102 is prevented from making such a determination with respect to the detected stationary object. In other embodiments, the stationary object alert can be generated under other conditions." What are the "other conditions"? It seems like another condition could be one in which the second sensor 102 is not prevented from making such a determination with respect to the detected stationary object; in other words, when the second sensor is able to classify an object. Perhaps in that case, an alert is provided at Fig. 3, step 340. But that embodiment is not what is claimed, due to the "mutually exclusive" phrase.
Paragraph 0024 of the present published disclosure teaches that "By generating stationary object alerts only under specific circumstances (e.g., a sensor failure, fog or other poor visibility scenarios, lack of sensor redundancy, verification, etc.), these embodiments can reduce the number of stationary object alerts generated and, hence, the number of false alerts." The present published disclosure therefore teaches that an alert, at least in some embodiments, is "only" generated when the second sensor is in an error state. But does it also teach that a collision avoidance action is "only" generated when the second sensor is not in an error state? That would be required to support the amended claim language about these two features being "mutually exclusive."

Fig. 2 teaches that in block 220 the system can classify a stationary object as "another vehicle". That requires a working camera, not just lidar or radar, according to the disclosure. Therefore, a Yes out of 220, which generates a collision avoidance action, is essentially synonymous with a working camera. But a No out of 220, in which the detected stationary object is not another vehicle, could still be produced by a working camera. The camera could work and classify the object as debris on the road, as the present specification teaches. Therefore, Fig. 2 does not support the teaching of the alert (i.e., faulty camera) and the avoidance action being "mutually exclusive".

How about Fig. 3? Does it provide support? Fig. 3, block 312 teaches determining whether the second sensor (the camera) is prevented from working. If it is, the system will generate an alert in step 314. If, alternatively, the camera is working (No out of 312), the system can generate an avoidance action (330) if the object is another vehicle, but it may not generate an avoidance action (340) if the object is not another vehicle. In this figure, the alert is mutually exclusive from the avoidance action: when there is an alert, there is no avoidance action, and when there is an avoidance action, there is no alert.

Just because the claim states that the alert and avoidance action are "mutually exclusive" does not mean that when an alert is not given, an avoidance action must occur. Rather, the claim is about what is excluded; it is framed in the negative. When there is an alert, an avoidance action is excluded. That is true according to Fig. 3. When there is an avoidance action, an alert is excluded. That is also true in Fig. 3. It is further true that when there is no avoidance action (340), an alert is excluded. But that does not change the fact that the language of "mutual exclusion" with respect to the alert and collision avoidance action is supported and has written description. Therefore, no 35 U.S.C. 112(a) rejection will be made.
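The logical point here, that mutual exclusivity forbids co-occurrence without forcing one of the two outcomes to occur, can be checked mechanically. A minimal Python sketch over the three Fig. 3 outcomes (the labels are illustrative):

```python
# The three outcomes read out of Fig. 3 above (blocks 314, 330, 340), encoded
# as (alert_generated, avoidance_performed) pairs. Labels are illustrative.
outcomes = {
    "camera faulty -> alert (314)":          (True, False),
    "camera ok, vehicle -> avoidance (330)": (False, True),
    "camera ok, not vehicle -> none (340)":  (False, False),
}

# Mutual exclusivity: the alert and the avoidance action never co-occur.
assert all(not (alert and avoid) for alert, avoid in outcomes.values())

# But exclusivity is not exhaustiveness: block 340 yields neither an alert
# nor an avoidance action, and the claim language still reads on it.
assert (False, False) in outcomes.values()
```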
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3-11, 14-17, 19-22, and 24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Igarashi (US2022/0020272 A1).

Regarding claim 1, Igarashi discloses:

A non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by one or more processors in a vehicle, cause the one or more processors to (see Fig. 4 for controller 112, including storage 111):

detect a stationary object in front of the vehicle using a first forward-facing sensor of the vehicle (see Fig. 5 for the controller 112 receiving sensor data from a radar 202);

determine whether a second forward-facing sensor of the vehicle is in an error state that prevents the second forward-facing sensor from determining whether or not the detected stationary object is another vehicle (see Fig. 8 and paragraph 0152 for a camera 51. For the camera it is "difficult to detect an object" that is preceding the host vehicle. See Fig. 8 for a No out of S12);

in response to determining that the second forward-facing sensor of the vehicle is in the error state, cause a stationary object alert to be generated (see Fig. 8, S34, in which the camera of the vehicle is not working (No out of S12) yet the radar of the vehicle is working (Yes out of S31). In that case, when the radar detects an object in front of the host vehicle, an alert is generated for the driver in S34. The alert in S34 is a "collision is not determinable" alert);

and in response to determining that the second forward-facing sensor of the vehicle is not in the error state (see Fig. 8 for a Yes out of S12): determine whether the detected stationary object is another vehicle (see Fig. 8, S13); and in response to determining that the detected stationary object is another vehicle, cause a collision avoidance action to be performed (see Fig. 8, S18);
wherein: the stationary object alert is generated only in response to the second forward-facing sensor of the vehicle being in the error state (see Fig. 8, S34. Note that in Igarashi, Fig. 8, S34, the camera of the vehicle is not working (No out of S12) yet the radar of the vehicle is working (Yes out of S31). In that case, when the radar detects an object in front of the host vehicle, an alert is generated for the driver in S34. The alert in S34 is a "collision is not determinable" alert, and the other alert is a "possibility of collision" alert in S17. See paragraph 0187 for this alert in S34 being generated when "the object is not successfully recognized". The system identifies an object, the camera is not working properly, and therefore an alert is generated. Then the system moves to S19, which is to terminate the process. Thus, no avoidance action is taken. The alert and the avoidance action are mutually exclusive. See also paragraph 0228 for outputting information regarding the indeterminate object ahead. See paragraph 0079 for the output section 106 outputting visual or audio information to the person on board the host vehicle. The output could be a heads-up display. See paragraph 0046 for the vehicle which sometimes "has no option but to notify a driver that there exists a certain object, although what kind of object the certain object is, is not determinable." This is the case of S34. These citations teach that in S34 the alert is an alert to the driver about a stationary object, provided when the camera system is not working. This contrasts with Igarashi, Fig. 8, S17, in which the system generates a notification "that there is a possibility of collision". Then in S18 an avoidance maneuver is executed. But what kind of notification is S17 according to the disclosure of Igarashi? Paragraph 0152 teaches that in S17 the system "notifies the emergency event avoiding section 171 of the movement controller 135 that there is a possibility of the collision." Paragraph 0153 teaches that "In Step S18, the emergency event avoiding section 171 controls the acceleration/deceleration controller 172 and the direction controller 173 on the basis of the notification". In other words, the notification in S17 is not a visual notification to the driver of the vehicle. Rather, it is an internal notification to the controller of the vehicle. Nothing as in paragraph 0046 about notifying the driver is taught. Rather, in the disclosure of Igarashi, there is also the teaching that the system can "notify them [the controllers] of a state on controlling the drivetrain system." Therefore, Igarashi teaches that when a radar is working but a camera is not working, and an object exists in front of the host vehicle, an alert is issued about the object to the driver; but when both the radar and camera are working, and there is an object in front of the host vehicle, no alert is issued about the object to the driver, and an avoidance maneuver is executed.);

and generating the stationary object alert and performing the collision avoidance action are mutually exclusive (see the above bullet. After S34, the process terminates, whereas after S17, an avoidance action is generated. Thus the alert in S34 and the avoidance action in S18 are mutually exclusive.).

Regarding claim 3, Igarashi discloses the non-transitory computer-readable storage medium of Claim 1. Igarashi further discloses: The non-transitory computer-readable storage medium of Claim 1, wherein the first forward-facing sensor and the second forward-facing sensor are different types of sensors (see Fig. 8 for S12 being a camera, and S31 being radar).

Regarding claim 4, Igarashi discloses the non-transitory computer-readable storage medium of Claim 3. Igarashi further discloses: The non-transitory computer-readable storage medium of Claim 3, wherein the first forward-facing sensor and the second forward-facing sensor are different types of sensors (see Fig. 8 for S12 being a camera, and S31 being radar).

Regarding claim 5, Igarashi discloses the non-transitory computer-readable storage medium of Claim 1. Igarashi further discloses: The non-transitory computer-readable storage medium of Claim 1, wherein the first forward-facing sensor is configured to operate using radar (see Fig. 8 for S31 being radar).
Regarding claim 6, Igarashi discloses the non-transitory computer-readable storage medium of Claim 1. Igarashi further discloses: The non-transitory computer-readable storage medium of Claim 1, wherein the first forward-facing sensor is configured to operate using lidar (see Fig. 8 for S20 being lidar. In the case of a collision, the result would be the same: the process would route to S17 then S18).

Regarding claim 7, Igarashi discloses the non-transitory computer-readable storage medium of Claim 1. Igarashi further discloses: The non-transitory computer-readable storage medium of Claim 1, wherein the first forward-facing sensor is configured to operate using ultrasound (see paragraph 0071).

Regarding claim 8, Igarashi discloses the non-transitory computer-readable storage medium of Claim 1. Igarashi further discloses: The non-transitory computer-readable storage medium of Claim 1, wherein the second forward-facing sensor comprises a camera (see Fig. 8, S11).

Regarding claim 9, Igarashi discloses the non-transitory computer-readable storage medium of Claim 1. Igarashi further discloses: The non-transitory computer-readable storage medium of Claim 1, wherein the error state is caused by a hardware problem (see Fig. 8, No out of S12; the camera is not working).

Regarding claim 10, Igarashi discloses the non-transitory computer-readable storage medium of Claim 1. Igarashi further discloses: The non-transitory computer-readable storage medium of Claim 1, wherein the error state is caused by impaired visibility of the second forward-facing sensor (see paragraph 0144).

Regarding claim 11, Igarashi discloses: A method comprising (see Fig. 8): performing in a vehicle comprising [[a]] first and second sensors (see the image sensor in S11 and the radar in S31): detecting a stationary object in front of the vehicle using the first sensor (the remaining bullets are substantially similar to those in claim 1; see the rejection of the analogous bullet in claim 1 for the rejection of these bullets); determining whether the second sensor of the vehicle is in an error state that prevents the second sensor from determining whether or not the detected stationary object is another vehicle; in response to determining that the second forward-facing sensor of the vehicle is in the error state, causing a stationary object alert to be generated; and in response to determining that the second sensor of the vehicle is not in the error state: determining whether the detected stationary object is another vehicle; and in response to determining that the detected stationary object is another vehicle, causing a collision avoidance action to be performed; wherein: the stationary object alert is generated only in response to the second forward-facing sensor of the vehicle being in the error state; and generating the stationary object alert and performing the collision avoidance action are mutually exclusive.

Regarding claim 14, Igarashi discloses the method of Claim 11. Igarashi further discloses: The method of Claim 11, wherein the collision avoidance action comprises automatically braking the vehicle (see S18).

Regarding claim 15, Igarashi discloses the method of Claim 11. Igarashi further discloses: The method of Claim 11, wherein the first sensor is configured to use radar, lidar, or ultrasound (see S11 and paragraph 0071).

Regarding claim 16, Igarashi discloses the method of Claim 11. Igarashi further discloses: The method of Claim 11, wherein the second sensor comprises a camera (see S11 and S12).
Regarding claim 17, Igarashi discloses: A multi-sensor advanced driver assistance system for use in a vehicle, the system comprising (see Fig. 4 and Fig. 8): a first sensor (see S31); a second sensor (see S11 and S12); and means for: detecting a stationary object in front of the vehicle using the first sensor (the remaining bullets are substantially similar to those in claim 1; see the rejection of the analogous bullet in claim 1 for the rejection of these bullets); determining whether the second sensor of the vehicle is in an error state that prevents the second sensor from determining whether or not the detected stationary object is another vehicle; in response to determining that the second forward-facing sensor of the vehicle is in the error state, causing a stationary object alert to be generated; and in response to determining that the second sensor of the vehicle is not in the error state: determining whether the detected stationary object is another vehicle; and in response to determining that the detected stationary object is another vehicle, causing a collision avoidance action to be performed; wherein: the stationary object alert is generated only in response to the second forward-facing sensor of the vehicle being in the error state; and generating the stationary object alert and performing the collision avoidance action are mutually exclusive.

Regarding claim 19, Igarashi discloses the system of claim 17. Igarashi further discloses: The system of claim 17, wherein the first sensor is configured to use radar (see Fig. 8, S31).

Regarding claim 20, Igarashi discloses the system of claim 17. Igarashi further discloses: The system of claim 17, wherein the second sensor comprises a camera (see S11 and S12).

Regarding claim 21, Igarashi discloses the system of claim 17. Igarashi further discloses: The system of claim 17, wherein the first sensor is configured to use lidar (see S20).

Regarding claim 22, Igarashi discloses the system of claim 17. Igarashi further discloses: The system of claim 17, wherein the first sensor is configured to use ultrasound (see paragraph 0071).

Regarding claim 24, Igarashi discloses the method of claim 11. Igarashi further discloses: The method of claim 11, wherein the collision avoidance action comprises automatically steering the vehicle to attempt to avoid the vehicle colliding with the detected stationary object (see Fig. 8, step S18. Paragraph 0153 teaches that "In Step S18, the emergency event avoiding section 171 controls the acceleration/deceleration controller 172 and the direction controller 173 on the basis of the notification".).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Igarashi (US2022/0020272 A1) in view of Komatsu (US2024/0290108 A1).

Regarding claim 23, Igarashi discloses the system of claim 17. Yet Igarashi does not further teach: The system of claim 17, wherein the error state is caused by a software problem in the second forward-facing sensor. However, Komatsu further teaches: A system wherein the error state is caused by a software problem in the second forward-facing sensor (see the system of Komatsu outputting the wrong label as compared to the correct label; this is a software problem). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system, as taught by Igarashi, to add the additional feature of: the error state is caused by a software problem in the second forward-facing sensor, as taught by Komatsu. The motivation for doing so would be to prevent a collision (see paragraph 0002). This conclusion of obviousness corresponds to KSR rationale "A": it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined prior art elements according to known methods to yield predictable results. See MPEP § 2141, subsection III.

Additional Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Oba (US2020/0183411 A1). Oba teaches in Fig. 7 that a camera can be determined to malfunction (Yes out of S1102). If the remaining sensors work, the automated traveling system of the vehicle can continue in S1105, yet an alert will be generated in S1108. See paragraph 0165 for such an alert (or "report regarding the failure") being displayed in the vehicle. See paragraph 0163 for the report being an audible alert. Oba does not appear to teach that the vehicle has radar or lidar.

Oba et al. (US2020/0317213 A1), hereinafter Oba et al. Oba et al. teaches a system in Fig. 5 and paragraphs 0097-0106 in which a camera abnormality can be detected in St102. In that case, "alert information" will be generated in St104. But if no abnormality is detected, the process will go to St103 and no alert will be generated. In St103, to "display information" is part of the normal process of the vehicle, not a display related to an alert about a malfunction. So Oba et al. teaches a system in which, when a camera has a fault, the system will generate an alert, but otherwise no alert will be generated. Oba et al. does not appear to teach that the vehicle has radar or lidar.

Dworakowski (US2021/0295560 A1). Dworakowski teaches in paragraph 0040 sending an alert to an ECU when a camera 102 fails.

Park (US2021/0385432 A1). See paragraph 0147 for the teaching that: "If the evaluation result of the image is determined to be poor, malfunction information of the camera 460 may be displayed on the display 180 (S640). The malfunction information displayed on the display 180 may include text or an image that provides notification of a malfunction cause of the camera 460 or text or an image that shows a handling method to the driver 900. The malfunction information may be output in a sound form through the audio output unit 485." Yet the camera appears to be internally facing in Park.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL M. ROBERT whose telephone number is (571) 270-5841. The examiner can normally be reached M-F 7:30-4:30 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hunter Lonsberry, can be reached at 571-272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL M. ROBERT/
Primary Examiner, Art Unit 3665

Prosecution Timeline

Apr 20, 2023
Application Filed
Apr 24, 2025
Non-Final Rejection — §102, §103, §112
Aug 12, 2025
Applicant Interview (Telephonic)
Aug 12, 2025
Examiner Interview Summary
Aug 15, 2025
Response Filed
Aug 27, 2025
Final Rejection — §102, §103, §112
Sep 11, 2025
Applicant Interview (Telephonic)
Sep 24, 2025
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600351: VEHICLE CONTROL DEVICE AND STORAGE MEDIUM; granted Apr 14, 2026 (2y 5m to grant)
Patent 12552397: VEHICLE CONTROL APPARATUS AND METHOD FOR PERFORMING TORQUE CONTROL OF VEHICLE; granted Feb 17, 2026 (2y 5m to grant)
Patent 12545245: VEHICLE OPERATION AROUND OBSTACLES; granted Feb 10, 2026 (2y 5m to grant)
Patent 12545247: SYSTEM AND METHOD FOR REDUCING DAMAGE FROM VEHICLE COLLISION; granted Feb 10, 2026 (2y 5m to grant)
Patent 12515645: APPARATUS AND METHOD FOR COLLISION AVOIDANCE ASSISTANCE; granted Jan 06, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 89% (+10.2%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate

Based on 239 resolved cases by this examiner. Grant probability derived from career allow rate.
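How these figures are computed is not published, but the career numbers above reproduce them directly. A hedged sketch of one plausible derivation:

```python
# Hypothetical reconstruction of the dashboard figures from the career data
# shown above; the actual methodology is an assumption, not documented.
granted, resolved = 188, 239             # from "Career Allow Rate"
interview_lift = 0.102                   # from "Interview Lift: +10.2%"

base = granted / resolved                # 0.7866... -> displayed as 79%
with_interview = base + interview_lift   # 0.8886... -> displayed as 89%

print(f"Grant probability: {base:.0%}")         # Grant probability: 79%
print(f"With interview: {with_interview:.0%}")  # With interview: 89%
```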
