Prosecution Insights
Last updated: April 19, 2026
Application No. 18/112,974

VEHICLE SAFETY SYSTEM IMPLEMENTING DYNAMIC SEVERITY AND CONTROLLABILITY DETERMINATIONS

Status: Non-Final OA (§103)
Filed: Feb 22, 2023
Examiner: SHARMA, SHIVAM
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mercedes-Benz Group AG
OA Round: 3 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 43%

Examiner Intelligence

Career Allow Rate: 44% (15 granted / 34 resolved; -7.9% vs TC avg)
Interview Lift: -1.3% (minimal)
Typical Timeline: 3y 1m avg prosecution
Career History: 83 total applications across all art units; 49 currently pending

Statute-Specific Performance

§101: 11.8% (-28.2% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 24.0% (-16.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 34 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the submission in Application Number 18/112,974 filed on 08/26/2025. Claims 1, 3, 5-11, 13 and 15-20 are currently pending and have been examined. Claims 1, 7, 10, 11 and 20 have been amended. This action is made NON-FINAL.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/26/2025 has been entered.

Information Disclosure Statement

The information disclosure statements filed 08/27/2025 and 09/16/2025 have been received and considered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5-11, 13 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nilsson et al. (US 20220297706 A1), further in view of Yamaoka et al. (US 20250042428 A1).

Regarding claim 1, Nilsson teaches a computing system for a vehicle, the computing system comprising: (Nilsson: Paragraph 0002: “For autonomous or semi-autonomous machines—e.g., land, sea, and air based vehicles, dynamic or static robots, etc.—being able to accurately perceive the environment in order to determine actions for controlling the machine within the environment is critical.”)

one or more processors; a memory storing instructions that, when executed by the one or more processors, cause the computing system to: (Nilsson: Paragraph 0027: “various functions may be carried out by a processor executing instructions stored in memory.”)

receive sensor data from a sensor system of the vehicle; (Nilsson: Paragraph 0072: “The controller(s) 1536 may provide the signals for controlling one or more components and/or systems of the vehicle 1500 in response to sensor data received from one or more sensors (e.g., sensor inputs).
The sensor data may be received from, for example and without limitation, global navigation satellite systems sensor(s) 1558 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1560, ultrasonic sensor(s) 1562, LIDAR sensor(s) 1564, inertial measurement unit (IMU) sensor(s) 1566 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 1596, stereo camera(s) 1568, wide-view camera(s) 1570 (e.g., fisheye cameras), infrared camera(s) 1572, surround camera(s) 1574 (e.g., 360 degree cameras), long-range and/or mid-range camera(s) 1598, speed sensor(s) 1544 (e.g., for measuring the speed of the vehicle 1500), vibration sensor(s) 1542, steering sensor(s) 1540, brake sensor(s) (e.g., as part of the brake sensor system 1546), and/or other sensor types.”)

dynamically determine, in real-time and based on the sensor data, (Nilsson: Paragraph 0088: “The vehicle 1500 may include a system(s) on a chip (SoC) 1504. The SoC 1504 may include CPU(s) 1506, GPU(s) 1508, processor(s) 1510, cache(s) 1512, accelerator(s) 1514, data store(s) 1516, and/or other components and features not illustrated. The SoC(s) 1504 may be used to control the vehicle 1500 in a variety of platforms and systems. For example, the SoC(s) 1504 may be combined in a system (e.g., the system of the vehicle 1500) with an HD map 1522 which may obtain map refreshes and/or updates via a network interface 1524 from one or more servers (e.g., server(s) 1578 of FIG. 15D).”; Paragraph 0103: “The RISC cores may interact with image sensors (e.g., the image sensors of any of the cameras described herein), image signal processor(s), and/or the like. Each of the RISC cores may include any amount of memory. The RISC cores may use any of a number of protocols, depending on the embodiment. In some examples, the RISC cores may execute a real-time operating system (RTOS).”; Paragraph 0109: “In some examples, the SoC(s) 1504 may include a real-time ray-tracing hardware accelerator, such as described in U.S. patent application Ser. No. 16/101,232, filed on Aug. 10, 2018. The real-time ray-tracing hardware accelerator may be used to quickly and efficiently determine the positions and extents of objects (e.g., within a world model), to generate real-time visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation, for comparison to LIDAR data for purposes of localization and/or other functions, and/or for other uses. In some embodiments, one or more tree traversal units (TTUs) may be used for executing one or more ray-tracing related operations.”)

a severity value and a controllability value for each of a plurality of vehicle safety systems of the vehicle, each of the severity value and the controllability value fluctuating based on a set of dynamic factors that are present and indicative of risk when the vehicle is operating, (Nilsson: Paragraphs 0038-0039: “With respect to FIG. 4, FIG. 4 may correspond to early rule-based sensor fusion (ERSF), where sensor 202A may generate sensor data at an ASIL level of B(D), sensor 202B may generate sensor data at ASIL B(D), and rule-based sensor fusion (RSF) 206B may be executed on both the input sensor signals to generate an ASIL D fused output. Because sensor 202A and sensor 202B are separate—e.g., would not share common cause failures—and the fusion mechanism is rule-based, the output of the rule-based sensor fusion processing may be at ASIL D.
As another example, FIG. 5 may correspond to early learned sensor fusion (ELSF), where sensor 202A may generate sensor data at an ASIL level of B(D), sensor 202B may generate sensor data at an ASIL level of B(D), and learned sensor fusion (LSF) 208B may be executed to generate an ASIL B fused output. In such an example, because learned sensor fusion may not by itself be capable of achieving a higher or maximum integrity level, learned sensor fusion may be used to achieve, for example, at most ASIL B compliance—e.g., corresponding to a lowest ASIL level of the input sensor signals.”; Paragraph 0041: “In some embodiments, safety goals may be established after use cases and potential hazards of the system are analyzed. For example, safety goals may be assigned safety integrity levels that derive from the exposure (E) to, severity (S) of, and controllability (C) of the hazardous event that begat the Safety Goal. The exposure may correspond to a measure of the probability the system will encounter the relevant situation, the severity may correspond to an assessment of the consequences that could result from the hazardous event if it were encountered and missed and/or not mitigated, and the controllability may correspond to an assessment of the difficulty for humans or other systems to handle the hazard if it were encountered and were missed and/or not mitigated. The aggregation of these factors may generally be referred to as SEC. Generally, the lower the SEC score, the lower the safety integrity level required. For example, some hazards may have lower safety integrity level requirements because they are uncommon (e.g., road hazards such as refrigerators, old tires, etc.), the severity is low (e.g., a fender bender at low speed), and/or because the hazard is easy to control (e.g., a human driver is actively supervising and can take control if adaptive cruise control brakes too late for a traffic jam in front of the ego-vehicle 1500).”; Paragraph 0055: “includes another example of a disjoint safety goals architecture that is executed, e.g., for road hazard detection using learned sensor fusion in parallel with rule-based sensor fusion, in accordance with one or more embodiments of the present disclosure. For advanced autonomous driving, the ASIL requirement for detection of vehicles and pedestrians may be derived to be ASIL D, while the ASIL requirement for detection of road hazards may be derived to be ASIL B (e.g., since road hazards such as refrigerators and old tires may be assessed as uncommon on major highways, thus having a lower SEC score). Since a learned sensor fusion architecture may achieve ASIL B, standalone learned sensor fusion may be employed to cover the safety goal of road hazard detection.”, Supplemental Note: SEC scores for different scenarios are stored and, according to the real-time sensor data, the vehicle is able to traverse the roadway accordingly while also sensing road hazards)

the severity value corresponding to a severity of potential injuries a hazardous event is likely to cause, the controllability value corresponding to a relative likelihood that an operator of the vehicle will not be able to act to prevent the potential injuries as identified by the severity value; (Nilsson: Abstract: “For example, in-parallel and/or in-serial combinations of early rule-based sensor fusion, late rule-based sensor fusion, early learned sensor fusion, or late learned sensor fusion may be used to solve various safety goals associated with various required safety levels at a high level of accuracy and precision. In embodiments, learned sensor fusion may be used to make more conservative decisions than the rule-based sensor fusion (as determined using, e.g., severity (S), exposure (E), and controllability (C) (SEC) associated with a current safety goal),”; Paragraph 0041: “For example, safety goals may be assigned safety integrity levels that derive from the exposure (E) to, severity (S) of, and controllability (C) of the hazardous event that begat the Safety Goal. The exposure may correspond to a measure of the probability the system will encounter the relevant situation, the severity may correspond to an assessment of the consequences that could result from the hazardous event if it were encountered and missed and/or not mitigated, and the controllability may correspond to an assessment of the difficulty for humans or other systems to handle the hazard if it were encountered and were missed and/or not mitigated.”)

based on the dynamically determined severity value and controllability value for each respective vehicle safety system of the plurality of vehicle safety systems, dynamically determine, in real-time, whether any triggering condition of a plurality of triggering conditions associated with the respective vehicle safety system is met; and (Nilsson: Paragraph 0041: “For example, some hazards may have lower safety integrity level requirements because they are uncommon (e.g., road hazards such as refrigerators, old tires, etc.), the severity is low (e.g., a fender bender at low speed), and/or because the hazard is easy to control (e.g., a human driver is actively supervising and can take control if adaptive cruise control brakes too late for a traffic jam in front of the ego-vehicle 1500).”; Paragraph 0055: “includes another example of a disjoint safety goals architecture that is executed, e.g., for road hazard detection using learned sensor fusion in parallel with rule-based sensor fusion, in accordance with one or more embodiments of the present disclosure. For advanced autonomous driving, the ASIL requirement for detection of vehicles and pedestrians may be derived to be ASIL D, while the ASIL requirement for detection of road hazards may be derived to be ASIL B (e.g., since road hazards such as refrigerators and old tires may be assessed as uncommon on major highways, thus having a lower SEC score).
Since a learned sensor fusion architecture may achieve ASIL B, standalone learned sensor fusion may be employed to cover the safety goal of road hazard detection.”, Supplemental Note: the low-level trigger conditions are cited above, which are identified by the vehicle and its sensors)

… by automatically operating at least one of a braking system, a steering system, or an acceleration system of the vehicle. (Nilsson: Paragraph 0048: “Where a monitor architecture is implemented, rule-based sensor fusion may execute in parallel with learned sensor fusion, and an arbiter or decision component may use the rule-based sensor fusion output to monitor the learned sensor fusion output. In some embodiments, the arbiter may limit the learned sensor fusion results, or may completely override learned sensor fusion results by triggering shut-down of autonomous engagement, for example (which may be referred to as “a safety monitor”). In one or more monitor architecture variants, decomposition between learned sensor fusion and rule-based sensor fusion may be executed by the arbiter, which may choose a valid safety integrity decomposition, and may implicate either learned sensor fusion or rule-based sensor fusion in any or all of the safety goals assigned to it.”; Paragraph 0113: “Such a confidence value may be interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. This confidence value enables the system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. For example, the system may set a threshold value for the confidence and consider only the detections exceeding the threshold value as true positive detections. In an automatic emergency braking (AEB) system, false positive detections would cause the vehicle to automatically perform emergency braking, which is obviously undesirable. Therefore, only the most confident detections should be considered as triggers for AEB. The DLA may run a neural network for regressing the confidence value.”, Supplemental Note: based on data received from the vehicle sensors, the system is able to determine the confidence value of these detections and to identify trigger conditions relating to the confidence values. An example is cited in which automatic emergency braking is to be applied based on the confidence level of the obstacles around the vehicle)

In sum, Nilsson teaches a computing system for a vehicle, the computing system comprising: one or more processors; a memory storing instructions that, when executed by the one or more processors, cause the computing system to: receive sensor data from a sensor system of the vehicle; dynamically determine, in real-time and based on the sensor data, a severity value and a controllability value for each of a plurality of vehicle safety systems of the vehicle, each of the severity value and the controllability value fluctuating based on a set of dynamic factors that are present and indicative of risk when the vehicle is operating, the severity value corresponding to a severity of potential injuries a hazardous event is likely to cause, the controllability value corresponding to a relative likelihood that an operator of the vehicle will not be able to act to prevent the potential injuries as identified by the severity value; based on the dynamically determined severity value and controllability value for each respective vehicle safety system of the plurality of vehicle safety systems, dynamically determine, in real-time, whether any triggering condition of a plurality of triggering conditions associated with the respective vehicle safety system is met; and by automatically operating at least one of a braking system, a steering system, or an acceleration system of the vehicle.

Nilsson however does not teach in response to determining that a triggering condition of the plurality of triggering conditions for the respective vehicle safety system is met, implement, in real-time and based at least in part on the determined severity value and the controllability value, a safety of intended functionality (SOTIF) mitigation measure of multiple SOTIF mitigation measures associated with the respective vehicle safety systems, whereas Yamaoka does.

Yamaoka teaches in response to determining that a triggering condition of the plurality of triggering conditions for the respective vehicle safety system is met, (Yamaoka: Paragraph 0101: “The behavior planning function may include a function of generating a condition related to a state transition of the vehicle 1. The condition related to the state transition of the vehicle 1 may correspond to a triggering condition.”; Paragraph 0261: “A triggering condition may be a specific condition of a scenario that serves as an initiator for a subsequent system reaction contributing to either a hazardous behavior and reasonably foreseeable indirect misuse, which is a subsequent reaction of the system.”)

implement, in real-time and based at least in part on the determined severity value and the controllability value, a safety of intended functionality (SOTIF) mitigation measure of multiple SOTIF mitigation measures associated with the respective vehicle safety systems, (Yamaoka: Paragraphs 0044-0045: “Architecture of the driving system 2 that enables implementation of an efficient safety of the intended functionality (SOTIF) process is selected. For example, the architecture of the driving system 2 may be configured based on a sense-plan-act model. The sense-plan-act model includes a sense (perception) element, a plan (planning) element, and an act (action) element as main system elements. The sense element, the plan element, and the act element interact with each other.
Sense can be replaced with perception, plan can be replaced with judgement, and act can be replaced with control. As illustrated in FIG. 1, at a vehicle level in the driving system 2, a vehicle level function 3 is implemented based on a vehicle level safety strategy (VSLL). At a functional level (in other words, a functional perspective), a perception function, a judgement function, and a control function are implemented. At a technical level (in other words, a technical perspective), at least multiple sensors 40 corresponding to the perception function, at least one processing system 50 corresponding to the judgement function, and multiple motion actuators 60 corresponding to the control function are implemented.”)

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Nilsson with the teachings of Yamaoka with a reasonable expectation of success. Nilsson and Yamaoka both describe safety systems that can be implemented within a vehicle for hazard event detection and avoidance. Nilsson utilizes a sensor fusion system with a rule-based system (severity, exposure and controllability (SEC)) to make vehicle situational determinations for controlling the vehicle. Yamaoka teaches the ability to capture the vehicle's surroundings for trigger conditions, based on implementations of a SOTIF process, which then determines what vehicle instructions to display to the driver to avoid the hazardous trigger condition. One with knowledge in the art would find the SOTIF implementation of Yamaoka a simple substitution for the SEC of Nilsson to improve the controllability of the vehicle. For example, both of these systems are capable of identifying the vehicle environment for any hazards and dictating the severity level of any surrounding hazards. Both of these rule-based systems perform the same hazard event determination for a vehicle and implement mitigation methods in response.

Regarding claim 3, Nilsson, as modified, teaches wherein the severity value and the controllability value correspond to automotive safety integrity levels (ASILs). (Nilsson: Claim 11: “The processor of claim 1, wherein the first threshold level corresponds to a first ASIL level, and the second threshold level corresponds to a second ASIL level that is higher than the first ASIL level.”)

Regarding claim 5, Nilsson, as modified, teaches wherein the computing system is included as a component of an advanced driver assistance system (ADAS) of the vehicle. (Nilsson: Paragraph 0024: “Systems and methods are disclosed related to combining rule-based and learned sensor fusion for autonomous machine applications. Although the present disclosure may be described with respect to an example autonomous vehicle 1500 (alternatively referred to herein as “vehicle 1500,” “ego-machine 1500,” or “ego-vehicle 1500,” an example of which is described with respect to FIGS. 15A-15D), this is not intended to be limiting. For example, the systems and methods described herein may be used by, without limitation, non-autonomous vehicles, semi-autonomous vehicles (e.g., in one or more adaptive driver assistance systems (ADAS)), piloted and un-piloted robots or robotic platforms, warehouse vehicles, off-road vehicles, vehicles coupled to one or more trailers, flying vessels, boats, shuttles, emergency response vehicles, motorcycles, manufacturing, construction, or warehouse equipment or robots, electric or motorized bicycles, aircraft, construction vehicles, underwater craft, rail transport, drones, space vehicles, and/or other vehicle types.”)

Regarding claim 6, Nilsson, as modified, teaches wherein the computing system is included as a component of an autonomous vehicle control system that autonomously operates the vehicle along a travel route. (Nilsson: Paragraph 0069: “A steering system 1554, which may include a steering wheel, may be used to steer the vehicle 1500 (e.g., along a desired path or route) when the propulsion system 1550 is operating (e.g., when the vehicle is in motion). The steering system 1554 may receive signals from a steering actuator 1556. The steering wheel may be optional for full automation (Level 5) functionality.”)

Regarding claim 7, Nilsson, as modified, teaches wherein the sensor system of the vehicle includes multiple sensor systems selected from a group consisting of: (i) one or more image sensors, (ii) one or more LIDAR sensors, (iii) one or more radar sensors, or (iv) one or more ultrasonic sensors. (Nilsson: Paragraph 0002: “In popular implementations, machines may employ various sensors—e.g., camera sensors, LiDAR sensors, RADAR sensors, touch sensors, temperature sensors, pressure sensors, microphones, ultrasonic sensors, etc.—to generate sensor data that may be processed to make sense of the perceived information.”)

Regarding claim 8, Nilsson, as modified, teaches wherein the plurality of vehicle safety systems include a plurality of: a brake assist system, a forward collision warning system, an automatic emergency braking system, a pedestrian detection system, an adaptive cruise control system, a blind spot warning system, a rear cross-traffic alert system, a lane departure warning system, a lane-keeping assistance system, an active head restraint system, an active lane steering assist system, a backup camera system, a parking assistance system, an airbag system, a seat-belt locking system, a traction control system, an electronic stability control system, or a collision intervention system. (Nilsson: Paragraph 0151: “The vehicle 1500 may include an ADAS system 1538. The ADAS system 1538 may include a SoC, in some examples. The ADAS system 1538 may include autonomous/adaptive/automatic cruise control (ACC), cooperative adaptive cruise control (CACC), forward crash warning (FCW), automatic emergency braking (AEB), lane departure warnings (LDW), lane keep assist (LKA), blind spot warning (BSW), rear cross-traffic warning (RCTW), collision warning systems (CWS), lane centering (LC), and/or other features and functionality”)

Regarding claim 9, Nilsson, as modified, teaches wherein the triggering condition for the respective vehicle safety system corresponds to one of a plurality of hazardous events classified for implementing the particular vehicle safety system.
(Nilsson: Paragraph 0162: “The supervisory MCU may be configured to run a neural network(s) that is trained and configured to determine, based on outputs from the primary computer and the secondary computer, conditions under which the secondary computer provides false alarms. Thus, the neural network(s) in the supervisory MCU may learn when the secondary computer's output may be trusted, and when it cannot. For example, when the secondary computer is a RADAR-based FCW system, a neural network(s) in the supervisory MCU may learn when the FCW system is identifying metallic objects that are not, in fact, hazards, such as a drainage grate or manhole cover that triggers an alarm. Similarly, when the secondary computer is a camera-based LDW system, a neural network in the supervisory MCU may learn to override the LDW when bicyclists or pedestrians are present and a lane departure is, in fact, the safest maneuver. In embodiments that include a neural network(s) running on the supervisory MCU, the supervisory MCU may include at least one of a DLA or GPU suitable for running the neural network(s) with associated memory. In preferred embodiments, the supervisory MCU may comprise and/or be included as a component of the SoC(s) 1504.”, Supplemental Note: the trigger condition in this example is the ability to identify a cyclist or pedestrian and make the safest maneuver around them)

Regarding claim 10, Nilsson, as modified, teaches wherein the set of dynamic factors include at least one factor selected from a group consisting of: (Nilsson: Abstract: “learned sensor fusion may be used to make more conservative decisions than the rule-based sensor fusion (as determined using, e.g., severity (S), exposure (E), and controllability (C) (SEC) associated with a current safety goal), but the rule-based sensor fusion may be relied upon where the learned sensor fusion decision may be less conservative than the corresponding rule-based sensor fusion.”) (i) weather conditions, (ii) road infrastructure, (iii) driver behavior, (iv) a behavior of one or more other vehicles, (v) a driving scenario, known planning algorithm limitations, (vi) known insufficiencies of sensor measurement data for machine learning, (vii) an operational design domain (ODD) of the vehicle, (viii) a mechanical disturbance of one or more sensors, (ix) a dirty or occluded sensor, (x) electromagnetic interference of one or more sensors, (xi) acoustic disturbance of one or more sensors, (xii) glare in one or more sensors, (xiii) accuracy and range of the sensor system, and (xiv) performance impact of one or more sensors due to durability, wear, or aging. (Nilsson: Paragraphs 0129-0131: “As another example, multiple neural networks may be run simultaneously, as is required for Level 3, 4, or 5 driving. For example, a warning sign consisting of “Caution: flashing lights indicate icy conditions,” along with an electric light, may be independently or collectively interpreted by several neural networks. The sign itself may be identified as a traffic sign by a first deployed neural network (e.g., a neural network that has been trained), the text “Flashing lights indicate icy conditions” may be interpreted by a second deployed neural network, which informs the vehicle's path planning software (preferably executing on the CPU Complex) that when flashing lights are detected, icy conditions exist. The flashing light may be identified by operating a third deployed neural network over multiple frames, informing the vehicle's path-planning software of the presence (or absence) of flashing lights. All three neural networks may run simultaneously, such as within the DLA and/or on the GPU(s) 1508. In some examples, a CNN for facial recognition and vehicle owner identification may use data from camera sensors to identify the presence of an authorized driver and/or owner of the vehicle 1500. The always on sensor processing engine may be used to unlock the vehicle when the owner approaches the driver door and turn on the lights, and, in security mode, to disable the vehicle when the owner leaves the vehicle. In this way, the SoC(s) 1504 provide for security against theft and/or carjacking. In another example, a CNN for emergency vehicle detection and identification may use data from microphones 1596 to detect and identify emergency vehicle sirens. In contrast to conventional systems, that use general classifiers to detect sirens and manually extract features, the SoC(s) 1504 use the CNN for classifying environmental and urban sounds, as well as classifying visual data. In a preferred embodiment, the CNN running on the DLA is trained to identify the relative closing speed of the emergency vehicle (e.g., by using the Doppler Effect). The CNN may also be trained to identify emergency vehicles specific to the local area in which the vehicle is operating, as identified by GNSS sensor(s) 1558. Thus, for example, when operating in Europe the CNN will seek to detect European sirens, and when in the United States the CNN will seek to identify only North American sirens. Once an emergency vehicle is detected, a control program may be used to execute an emergency vehicle safety routine, slowing the vehicle, pulling over to the side of the road, parking the vehicle, and/or idling the vehicle, with the assistance of ultrasonic sensors 1562, until the emergency vehicle(s) passes.”)

Regarding claim 11, Nilsson teaches a non-transitory computer readable medium storing instructions that, when executed by one or more processors of a computing system for a vehicle, cause the computing system to: receive sensor data from a sensor system of the vehicle; (Nilsson: Paragraph 0002: “For autonomous or semi-autonomous machines—e.g., land, sea, and air based vehicles, dynamic or static robots, etc.—being able to accurately perceive the environment in order to determine actions for controlling the machine within the environment is critical.”; Paragraph 0027: “various functions may be carried out by a processor executing instructions stored in memory.”; Paragraph 0072: “The controller(s) 1536 may provide the signals for controlling one or more components and/or systems of the vehicle 1500 in response to sensor data received from one or more sensors (e.g., sensor inputs).
The sensor data may be received from, for example and without limitation, global navigation satellite systems sensor(s) 1558 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 1560, ultrasonic sensor(s) 1562, LIDAR sensor(s) 1564, inertial measurement unit (IMU) sensor(s) 1566 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 1596, stereo camera(s) 1568, wide-view camera(s) 1570 (e.g., fisheye cameras), infrared camera(s) 1572, surround camera(s) 1574 (e.g., 360 degree cameras), long-range and/or mid-range camera(s) 1598, speed sensor(s) 1544 (e.g., for measuring the speed of the vehicle 1500), vibration sensor(s) 1542, steering sensor(s) 1540, brake sensor(s) (e.g., as part of the brake sensor system 1546), and/or other sensor types.”)

dynamically determine, in real-time and based on the sensor data, (Nilsson: Paragraph 0088: “The vehicle 1500 may include a system(s) on a chip (SoC) 1504. The SoC 1504 may include CPU(s) 1506, GPU(s) 1508, processor(s) 1510, cache(s) 1512, accelerator(s) 1514, data store(s) 1516, and/or other components and features not illustrated. The SoC(s) 1504 may be used to control the vehicle 1500 in a variety of platforms and systems. For example, the SoC(s) 1504 may be combined in a system (e.g., the system of the vehicle 1500) with an HD map 1522 which may obtain map refreshes and/or updates via a network interface 1524 from one or more servers (e.g., server(s) 1578 of FIG. 15D).”; Paragraph 0103: “The RISC cores may interact with image sensors (e.g., the image sensors of any of the cameras described herein), image signal processor(s), and/or the like. Each of the RISC cores may include any amount of memory. The RISC cores may use any of a number of protocols, depending on the embodiment. In some examples, the RISC cores may execute a real-time operating system (RTOS).”; Paragraph 0109: “In some examples, the SoC(s) 1504 may include a real-time ray-tracing hardware accelerator, such as described in U.S. patent application Ser. No. 16/101,232, filed on Aug. 10, 2018. The real-time ray-tracing hardware accelerator may be used to quickly and efficiently determine the positions and extents of objects (e.g., within a world model), to generate real-time visualization simulations, for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of SONAR systems, for general wave propagation simulation, for comparison to LIDAR data for purposes of localization and/or other functions, and/or for other uses. In some embodiments, one or more tree traversal units (TTUs) may be used for executing one or more ray-tracing related operations.”)

a severity value and a controllability value for each of a plurality of vehicle safety systems of the vehicle, each of the severity value and the controllability value fluctuating based on a set of dynamic factors that are present and indicative of risk when the vehicle is operating, (Nilsson: Paragraphs 0038-0039: “With respect to FIG. 4, FIG. 4 may correspond to early rule-based sensor fusion (ERSF), where sensor 202A may generate sensor data at an ASIL level of B(D), sensor 202B may generate sensor data at ASIL B(D), and rule-based sensor fusion (RSF) 206B may be executed on both the input sensor signals to generate an ASIL D fused output. Because sensor 202A and sensor 202B are separate—e.g., would not share common cause failures—and the fusion mechanism is rule-based, the output of the rule-based sensor fusion processing may be at ASIL D. As another example, FIG. 5 may correspond to early learned sensor fusion (ELSF), where sensor 202A may generate sensor data at an ASIL level of B(D), sensor 202B may generate sensor data at an ASIL level of B(D), and learned sensor fusion (LSF) 208B may be executed to generate an ASIL B fused output. In such an example, because learned sensor fusion may not by itself be capable of achieving a higher or maximum integrity level, learned sensor fusion may be used to achieve, for example, at most ASIL B compliance—e.g., corresponding to a lowest ASIL level of the input sensor signals.”; Paragraph 0041: “In some embodiments, safety goals may be established after use cases and potential hazards of the system are analyzed. For example, safety goals may be assigned safety integrity levels that derive from the exposure (E) to, severity (S) of, and controllability (C) of the hazardous event that begat the Safety Goal. The exposure may correspond to a measure of the probability the system will encounter the relevant situation, the severity may correspond to an assessment of the consequences that could result from the hazardous event if it were encountered and missed and/or not mitigated, and the controllability may correspond to an assessment of the difficulty for humans or other systems to handle the hazard if it were encountered and were missed and/or not mitigated. The aggregation of these factors may generally be referred to as SEC. Generally, the lower the SEC score, the lower the safety integrity level required. For example, some hazards may have lower safety integrity level requirements because they are uncommon (e.g., road hazards such as refrigerators, old tires, etc.), the severity is low (e.g., a fender bender at low speed), and/or because the hazard is easy to control (e.g., a human driver is actively supervising and can take control if adaptive cruise control brakes too late for a traffic jam in front of the ego-vehicle 1500).”; Paragraph 0055: “includes another example of a disjoint safety goals architecture that is executed, e.g., for road hazard detection using learned sensor fusion in parallel with rule-based sensor fusion, in accordance with one or more embodiments of the present disclosure. For advanced autonomous driving, the ASIL requirement for detection of vehicles and pedestrians may be derived to be ASIL D, while the ASIL requirement for detection of road hazards may be derived to be ASIL B (e.g., since road hazards such as refrigerators and old tires may be assessed as uncommon on major highways, thus having a lower SEC score). Since a learned sensor fusion architecture may achieve ASIL B, standalone learned sensor fusion may be employed to cover the safety goal of road hazard detection.”, Supplemental Note: SEC scores for different scenarios are stored and, according to the real-time sensor data, the vehicle is able to traverse the roadway accordingly while also sensing road hazards)

the severity value corresponding to a severity of potential injuries a hazardous event is likely to cause, the controllability value corresponding to a relative likelihood that an operator of the vehicle will not be able to act to prevent the potential injuries as identified by the severity value; (Nilsson: Abstract: “For example, in-parallel and/or in-serial combinations of early rule-based sensor fusion, late rule-based sensor fusion, early learned sensor fusion, or late learned sensor fusion may be used to solve various safety goals associated with various required safety levels at a high level of accuracy and precision. In embodiments, learned sensor fusion may be used to make more conservative decisions than the rule-based sensor fusion (as determined using, e.g., severity (S), exposure (E), and controllability (C) (SEC) associated with a current safety goal),”; Paragraph 0041: “For example, safety goals may be assigned safety integrity levels that derive from the exposure (E) to, severity (S) of, and controllability (C) of the hazardous event that begat the Safety Goal. The exposure may correspond to a measure of the probability the system will encounter the relevant situation, the severity may correspond to an assessment of the consequences that could result from the hazardous event if it were encountered and missed and/or not mitigated, and the controllability may correspond to an assessment of the difficulty for humans or other systems to handle the hazard if it were encountered and were missed and/or not mitigated.”)

based on the dynamically determined severity value and controllability value for each respective vehicle safety system of the plurality of vehicle safety systems, dynamically determine, in real-time, whether any triggering condition of a plurality of triggering conditions associated with the respective vehicle safety system is met; and (Nilsson: Paragraph 0041: “For example, some hazards may have lower safety integrity level requirements because they are uncommon (e.g., road hazards such as refrigerators, old tires, etc.), the severity is low (e.g., a fender bender at low speed), and/or because the hazard is easy to control (e.g., a human driver is actively supervising and can take control if adaptive cruise control brakes too late for a traffic jam in front of the ego-vehicle 1500).”; Paragraph 0055: “includes another example of a disjoint safety goals architecture that is executed, e.g., for road hazard detection using learned sensor fusion in parallel with rule-based sensor fusion, in accordance with one or more embodiments of the present disclosure. For advanced autonomous driving, the ASIL requirement for detection of vehicles and pedestrians may be derived to be ASIL D, while the ASIL requirement for detection of road hazards may be derived to be ASIL B (e.g., since road hazards such as refrigerators and old tires may be assessed as uncommon on major highways, thus having a lower SEC score).
Since a learned sensor fusion architecture may achieve ASIL B, standalone learned sensor fusion may be employed to cover the safety goal of road hazard detection.”, Supplemental Note: the low-level trigger conditions are cited above, which are identified by the vehicle and its sensors)

… by automatically operating at least one of a braking system, a steering system, or an acceleration system of the vehicle. (Nilsson: Paragraph 0048: “Where a monitor architecture is implemented, rule-based sensor fusion may execute in parallel with learned sensor fusion, and an arbiter or decision component may use the rule-based sensor fusion output to monitor the learned sensor fusion output. In some embodiments, the arbiter may limit the learned sensor fusion results, or may completely override learned sensor fusion results by triggering shut-down of autonomous engagement, for example (which may be referred to as “a safety monitor”). In one or more monitor architecture variants, decomposition between learned sensor fusion and rule-based sensor fusion may be executed by the arbiter, which may choose a valid safety integrity decomposition, and may implicate either learned sensor fusion or rule-based sensor fusion in any or all of the safety goals assigned to it.”; Paragraph 0113: “Such a confidence value may be interpreted as a probability, or as providing a relative “weight” of each detection compared to other detections. This confidence value enables the system to make further decisions regarding which detections should be considered as true positive detections rather than false positive detections. For example, the system may set a threshold value for the confidence and consider only the detections exceeding the threshold value as true positive detections. In an automatic emergency braking (AEB) system, false positive detections would cause the vehicle to automatically perform emergency braking, which is obviously undesirable. Therefore, only the most confident detections should be considered as triggers for AEB. The DLA may run a neural network for regressing the confidence value.”, Supplemental Note: based on data received from the vehicle sensors, the system is able to determine the confidence value of these detections and to identify trigger conditions relating to the confidence values. An example is cited in which automatic emergency braking is to be applied based on the confidence level of the obstacles around the vehicle)

In sum, Nilsson teaches a non-transitory computer readable medium storing instructions that, when executed by one or more processors of a computing system for a vehicle, cause the computing system to: receive sensor data from a sensor system of the vehicle; dynamically determine, in real-time and based on the sensor data, a severity value and a controllability value for each of a plurality of vehicle safety systems of the vehicle, each of the severity value and the controllability value fluctuating based on a set of dynamic factors that are present and indicative of risk when the vehicle is operating, the severity value corresponding to a severity of potential injuries a hazardous event is likely to cause, the controllability value corresponding to a relative likelihood that an operator of the vehicle will not be able to act to prevent the potential injuries as identified by the severity value; based on the dynamically determined severity value and controllability value for each respective vehicle safety system of the plurality of vehicle safety systems, dynamically determine, in real-time, whether any triggering condition of a plurality of triggering conditions associated with the respective vehicle safety system is met; and by automatically operating at least one of a braking system, a steering system, or an acceleration system of the vehicle.

Nilsson however does not teach in response to determining that a triggering condition of the plurality of triggering conditions for the respective vehicle safety system is met, implement, in real-time and based at least in part on the determined severity value and the controllability value, a safety of intended functionality (SOTIF) mitigation measure of multiple SOTIF measures associated with the respective vehicle safety system, whereas Yamaoka does.

Yamaoka teaches in response to determining that a triggering condition of the plurality of triggering conditions for the respective vehicle safety system is met, (Yamaoka: Paragraph 0101: “The behavior planning function may include a function of generating a condition related to a state transition of the vehicle 1. The condition related to the state transition of the vehicle 1 may correspond to a triggering condition.”; Paragraph 0261: “A triggering condition may be a specific condition of a scenario that serves as an initiator for a subsequent system reaction contributing to either a hazardous behavior and reasonably foreseeable indirect misuse, which is a subsequent reaction of the system.”)

implement, in real-time and based at least in part on the determined severity value and the controllability value, a safety of intended functionality (SOTIF) mitigation measure of multiple SOTIF measures associated with the respective vehicle safety system, (Yamaoka: Paragraphs 0044-0045: “Architecture of the driving system 2 that enables implementation of an efficient safety of the intended functionality (SOTIF) process is selected. For example, the architecture of the driving system 2 may be configured based on a sense-plan-act model. The sense-plan-act model includes a sense (perception) element, a plan (planning) element, and an act (action) element as main system elements. The sense element, the plan element, and the act element interact with each other. Sense can be replaced with perception, plan can be replaced with judgement, and act can be replaced with control. As illustrated in FIG. 1, at a vehicle level in the driving system 2, a vehicle level function 3 is implemented based on a vehicle level safety strategy (VSLL). At a functional level (in other words, a functional perspective), a perception function, a judgement function, and a control function are implemented. At a technical level (in other words, a technical perspective), at least multiple sensors 40 corresponding to the perception function, at least one processing system 50 corresponding to the judgement function, and multiple motion actuators 60 corresponding to the control function are implemented.”)

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Nilsson with the teachings of Yamaoka with a reasonable expectation of success. Please refer to the rejection of claim 1, as both claims recite the same functional language and are therefore rejected on the same grounds.

Regarding claim 13, Nilsson, as modified, teaches wherein the severity value and the controllability value correspond to automotive safety integrity levels (ASILs). (Nilsson: Claim 11: “The processor of claim 1, wherein the first threshold level corresponds to a first ASIL level, and the second threshold level corresponds to a second ASIL level that is higher than the first ASIL level.”)

Regarding claim 15, Nilsson, as modified, teaches wherein the computing system is included as a component of an advanced driver assistance system (ADAS) of the vehicle. (Nilsson: Paragraph 0024: “Systems and methods are disclosed related to combining rule-based and learned sensor fusion for autonomous machine applications. Although the present disclosure may be described with respect to an example autonomous vehicle 1500 (alternatively referred to herein as “vehicle 1500,” “ego-machine 1500,” or “ego-vehicle 1500,” an example of which is described with respect to FIGS. 15A-15D), this is not intended to be limiting. For example, the systems and methods described herein may be used by, without limitation, non-autonomous vehicles, semi-autonomous vehicles (e.g., in one or more adaptive driver assistance systems (ADAS)), piloted and un-piloted robots or robotic platforms, warehouse vehicles, off-road vehicles, vehicles coupled to one or more trailers, flying vessels, boats, shuttles, emergency response vehicles, motorcycles, manufacturing, construction, or warehouse equipment or robots, electric or motorized bicycles, aircraft, construction vehicles, underwater craft, rail transport, drones, space vehicles, and/or other vehicle types.”)

Regarding claim 16, Nilsson, as modified, teaches wherein the computing system is included as a component of an autonomous vehicle control system that autonomously operates the vehicle along a travel route. (Nilsson: Paragraph 0069: “A steering system 1554, which may include a steering wheel, may be used to steer the vehicle 1500 (e.g., along a desired path or route) when the propulsion system 1550 is operating (e.g., when the vehicle is in motion). The steering system 1554 may receive signals from a steering actuator 1556. The steering wheel may be optional for full automation (Level 5) functionality.”)

Regarding claim 17, Nilsson, as modified, teaches wherein the sensor system of the vehicl…
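For orientation on the technology at issue: the SEC aggregation the rejection repeatedly cites from Nilsson's Paragraph 0041 matches the hazard-classification scheme standardized in ISO 26262, where severity (S1-S3), exposure (E1-E4), and controllability (C1-C3) classes jointly determine the required automotive safety integrity level (ASIL). A minimal sketch of that risk-graph lookup follows; the additive shortcut, function name, and example inputs are our illustration and appear in neither Nilsson nor the application.

```python
# A minimal sketch, assuming the standard ISO 26262 risk graph: severity
# (S1-S3), exposure (E1-E4), and controllability (C1-C3) class indices
# jointly determine the required integrity level. The additive shortcut
# below reproduces the published lookup table; the encoding is ours.

ASIL = ["QM", "ASIL A", "ASIL B", "ASIL C", "ASIL D"]

def asil_from_sec(s: int, e: int, c: int) -> str:
    """Map severity, exposure, and controllability classes to an ASIL."""
    if not (1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3):
        raise ValueError("class index out of range")
    # Combinations that never sum high enough stay quality-managed (QM).
    return ASIL[max(0, s + e + c - 6)]

# Consistent with the examples in Nilsson's Paragraph 0041: an uncommon,
# easily controlled, low-severity hazard needs little integrity, while a
# severe, common, hard-to-control hazard demands ASIL D.
assert asil_from_sec(1, 2, 1) == "QM"      # e.g., low-speed fender bender
assert asil_from_sec(3, 4, 3) == "ASIL D"  # worst case on every axis
```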
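The claim 1 sequence the examiner maps across Nilsson and Yamaoka (per safety system: derive dynamic severity and controllability values from live sensor data, test the system's triggering conditions, and on a hit apply a SOTIF mitigation measure through braking, steering, or acceleration) can likewise be pictured as a short control loop. The sketch below is a hypothetical illustration of that claimed flow; every name and threshold is invented for the example, and neither reference discloses source code.

```python
# Hypothetical sketch of the claimed control flow only; all identifiers
# and thresholds are invented and appear in neither cited reference.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RiskState:
    severity: int         # dynamic S value derived from live sensor data
    controllability: int  # dynamic C value: can the driver still avert harm?

@dataclass
class SafetySystem:
    name: str
    assess: Callable[[dict], RiskState]          # sensor data -> S/C values
    triggers: list[Callable[[RiskState], bool]]  # triggering conditions
    mitigate: Callable[[RiskState], str]         # SOTIF mitigation measure

def control_cycle(systems: list[SafetySystem], sensors: dict) -> list[str]:
    """One real-time pass: assess each safety system, fire any met trigger."""
    actions = []
    for system in systems:
        risk = system.assess(sensors)  # fluctuates with dynamic factors
        if any(trigger(risk) for trigger in system.triggers):
            actions.append(system.mitigate(risk))
    return actions

# Toy AEB instance: severity tracks closing speed, controllability tracks
# time-to-collision; the mitigation actuates the braking system.
aeb = SafetySystem(
    name="AEB",
    assess=lambda d: RiskState(
        severity=3 if d["closing_speed_mps"] > 15 else 1,
        controllability=3 if d["ttc_s"] < 1.0 else 1,
    ),
    triggers=[lambda r: r.severity >= 3 and r.controllability >= 3],
    mitigate=lambda r: "apply emergency braking",
)
print(control_cycle([aeb], {"closing_speed_mps": 20.0, "ttc_s": 0.8}))
```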

Prosecution Timeline

Feb 22, 2023: Application Filed
Nov 06, 2024: Non-Final Rejection — §103
Feb 05, 2025: Response Filed
May 15, 2025: Final Rejection — §103
Aug 26, 2025: Request for Continued Examination
Sep 08, 2025: Response after Non-Final Action
Dec 02, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12491869
METHOD FOR CONTROLLING VEHICLE, VEHICLE AND ELECTRONIC DEVICE
2y 5m to grant • Granted Dec 09, 2025
Patent 12485897
METHOD FOR DETERMINING PASSAGE OF AUTONOMOUS VEHICLE AND RELATED DEVICE
2y 5m to grant • Granted Dec 02, 2025
Patent 12434722
METHODS AND SYSTEMS FOR LATERAL CONTROL OF A VEHICLE
2y 5m to grant • Granted Oct 07, 2025
Patent 12427919
VEHICLE BLIND-SPOT REDUCTION DEVICE
2y 5m to grant • Granted Sep 30, 2025
Patent 12406535
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
2y 5m to grant • Granted Sep 02, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 43% (-1.3%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
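As a quick sanity check on these figures, the headline numbers can be reproduced from the examiner stats above, assuming, as the caption states, that grant probability is simply the career allow rate and that the interview figure adds the observed lift. The variable names below are ours, not the dashboard's.

```python
# Assumed reconstruction of the dashboard arithmetic, not the vendor's code.
granted, resolved = 15, 34                  # examiner career stats above
grant_probability = granted / resolved      # 0.441 -> reported as 44%
interview_lift = -0.013                     # reported -1.3% interview lift
with_interview = grant_probability + interview_lift   # 0.428 -> 43%
print(f"base {grant_probability:.0%}, with interview {with_interview:.0%}")
```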
