Prosecution Insights
Last updated: April 19, 2026
Application No. 18/891,741

COGNITIVE ROBOTIC SYSTEMS AND METHODS WITH FEAR BASED ACTION/REACTION

Non-Final OA: §102, §103
Filed: Sep 20, 2024
Examiner: CAIN, AARON G
Art Unit: 3656
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Intel Corporation
OA Round: 1 (Non-Final)

Grant Probability: 40% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability with Interview: 66%

Examiner Intelligence

Career Allow Rate: 40% (52 granted / 130 resolved; -12.0% vs TC avg)
Interview Lift: +26.1% on resolved cases with interview
Avg Prosecution (typical timeline): 3y 3m
Currently Pending: 42
Total Applications: 172 (across all art units)

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 57.4% (+17.4% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)
Tech Center average is an estimate • Based on career data from 130 resolved cases
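For readers who want to sanity-check the headline figures, the card metrics above reduce to simple ratios. The sketch below is a hypothetical reconstruction of that arithmetic; the function and variable names are illustrative assumptions, not part of any real analytics tool:

```python
# Hypothetical sketch of how the examiner-card metrics above could be derived.
# All names here are illustrative assumptions, not a real API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(52, 130)     # 40.0 -> matches the 40% card
delta_vs_tc = career - 52.0      # -12.0, assuming a 52% TC average as implied by the card
interview_lift = 66.0 - career   # ~26-point lift implied by the 66% with-interview figure

print(f"Career allow rate: {career:.1f}%")
print(f"vs TC avg: {delta_vs_tc:+.1f}%")
print(f"Interview lift: {interview_lift:+.1f}%")
```

Note that 66.0 - 40.0 gives +26.0 rather than the displayed +26.1%, which suggests the underlying with- and without-interview rates carry more precision than the rounded card values shown here.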

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the application filed on 09/20/2024. Claims 1-20 are presently pending and are presented for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 09/20/ is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 3-4, 6-7, 11, 13-14, and 16-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Levinson et al. US 20170316333 A1 (“Levinson”).

Regarding Claim 1. Levinson teaches a computer-assisted or autonomous driving (CAD) system of a vehicle, comprising: at least one sensor to collect sensor data (FIG. 1, a sensor is shown at 139, and other sensors are available but not shown [paragraph 53]. FIG. 9 shows how a teleoperating computer system can be used to provide assistance to the driving of the vehicle [paragraph 80], but the system can also work for autonomous vehicles, and FIG. 38 is a diagram depicting an inferred semantic classification implemented in mapping data for autonomous vehicles in a fleet of autonomous vehicles [paragraph 43]); and processing circuitry configured to: process the sensor data to identify at least one potential threat to safe operation of the vehicle during movement of the vehicle, wherein the at least one potential threat is identified on a path that the vehicle is to travel (FIG. 2 shows a flow diagram to monitor a fleet of autonomous vehicles, wherein at 204, data representing an event associated with a calculated confidence level for a vehicle is detected. An event may be a condition or situation affecting operation, or potentially affecting operation, of an autonomous vehicle. The events may be internal to an autonomous vehicle, or external [paragraph 58]); evaluate a heat map, wherein the heat map is based on observations and actions of other vehicles on the path that the vehicle is to travel (FIG. 40 is a diagram depicting implementation of automated extraction of semantic information based on heat map data over a period of time, according to some examples. In diagram 4000, over a period of time 4015, inferred semantic classification based on object activity, over time, in or around a location in the region 3623 may be determined using heat map data generated from sensor data output from one or more sensors of the sensor system of the autonomous vehicles in a road network. The heat map data may be communicated to an external resource (e.g., network 4099) for processing and analysis. In some examples, heat map data may be generated external of the autonomous vehicles by communicating the sensor data from which the heat map data will be generated to an external resource [paragraph 158]), and wherein the heat map depicts varying amounts of risk encountered on the path by the other vehicles based on the at least one potential threat (In FIG. 37, at stage 3710, based on the tracking in step 3708, a probability that the subset of objects will be positioned at a location in the region over the period of time may be determined [paragraph 149]); determine at least one safety action to perform with the CAD system, based on the at least one potential threat identified from the sensor data and the varying amounts of risk depicted in the heat map (Confidence level generator 1123 may be configured to analyze perception data 1132 to determine a state for the autonomous vehicle. For example, confidence level generator 1123 may use semantic information associated with static and dynamic objects, as well as associated probabilistic estimations, to enhance a degree of certainty that planner 1164 is determining a safe course of action [FIG. 11, paragraph 86]); and generate commands to cause the vehicle to implement the at least one safety action with the CAD system, to respond to the at least one potential threat (According to some examples, the autonomous vehicle service platform can respond to the detection of an object obscuring a path or trajectory, by generating a response identifying geographic areas to exclude from planning a path and providing a path to follow, or the teleoperator may define areas of locations that the autonomous vehicle must avoid [paragraph 55]).

Regarding Claim 3. Levinson teaches the CAD system of claim 1. Levinson also teaches: wherein the processing circuitry is further configured to identify a current context of the vehicle, and wherein the commands to respond to the at least one potential threat are based on the current context of the vehicle (FIG. 13 depicts an example in which a planner may generate a trajectory, according to some examples. Diagram 1300 includes a trajectory evaluator 1320 and a trajectory generator 1324. Trajectory evaluator 1320 includes a confidence level generator 1322 and a teleoperator query messenger 1329.
As shown, trajectory evaluator 1320 is coupled to a perception engine 1366 to receive static map data 1301, and current and predicted object state data 1303 [paragraph 89]. FIG. 15 shows an example of control over an autonomous vehicle, which involves message data being received at a teleoperator for managing a fleet of autonomous vehicles. The message data may indicate event attributes associated with a non-normative state of operation in the context of a planned path for an autonomous vehicle. For example, an event may be characterized as a particular intersection that becomes problematic due to, for example, a large number of pedestrians hurriedly crossing the street against a traffic light. The event attributes describe the characteristics of the event, such as, for example, the number of people crossing the street, the traffic delays resulting from an increased number of pedestrians, etc. This leads into 1510, where data signals representing a selection (e.g., by teleoperator) of a recommended course of action are delivered and received by the autonomous vehicle [FIG. 15, paragraph 95]).

Regarding Claim 4. Levinson teaches the CAD system of claim 3. Levinson also teaches: wherein the processing circuitry is further configured to learn operational experiences of the other vehicles, and to identify the current context based on the operational experiences (Examples of external objects likely to be labeled as dynamic include bicyclists, pedestrians, animals, other vehicles, etc. If the external object is labeled as dynamic, further data about the external object may indicate a typical level of activity and velocity, as well as behavior patterns associated with the classification type. Further data about the external object may be generated by tracking the external object.
As such, the classification type can be used to predict or otherwise determine the likelihood that an external object may, for example, interfere with an autonomous vehicle traveling along a planned path. For example, an external object that is classified as a pedestrian may be associated with some maximum speed, as well as an average speed (e.g., based on tracking data) [paragraph 64]. While the primary example given involves the speed and movement of pedestrians, Levinson explicitly teaches that other vehicles can be included as external dynamic objects as well, and tracking the speed of external vehicles (a type of operational experience) would also be within the scope of the disclosure while also providing context information).

Regarding Claim 6. Levinson teaches the CAD system of claim 1. Levinson also teaches: wherein the processing circuitry is further configured to evaluate at least one message that identifies a potential threat to safe operation of the other vehicles (FIG. 15 shows an example of control over an autonomous vehicle, which involves message data being received at a teleoperator for managing a fleet of autonomous vehicles. The message data may indicate event attributes associated with a non-normative state of operation in the context of a planned path for an autonomous vehicle. For example, an event may be characterized as a particular intersection that becomes problematic due to, for example, a large number of pedestrians hurriedly crossing the street against a traffic light, which is a type of obstacle [paragraph 95, FIG. 15]), and wherein the heat map is created based on the at least one message (FIG. 40 is a diagram depicting implementation of automated extraction of semantic information based on heat map data over a period of time, according to some examples.
In diagram 4000, over a period of time 4015, inferred semantic classification based on object activity, over time, in or around a location in the region 3623 may be determined using heat map data generated from sensor data output from one or more sensors of the sensor system of the autonomous vehicles in a road network. The heat map data may be communicated to an external resource (e.g., network 4099) for processing and analysis. In some examples, heat map data may be generated external of the autonomous vehicles by communicating the sensor data from which the heat map data will be generated to an external resource [paragraph 158]).

Regarding Claim 7. Levinson teaches the CAD system of claim 6. Levinson also teaches: wherein the at least one message identifies at least one of adverse weather impact, road hazards, speed bumps, or steep terrain encountered by the other vehicles (An event may include weather-related conditions (e.g., loss of friction due to ice or rain) or the angle at which the sun is shining (e.g., at sunset), such as a low angle to the horizon that causes the sun to shine brightly in the eyes of human drivers of other vehicles [paragraph 58]).

Regarding Claim 11. Levinson teaches at least one non-transitory computer-readable medium comprising instructions, wherein execution of the instructions by one or more processors is to cause a computer-assisted or autonomous driving (CAD) system of a vehicle to: process sensor data of the CAD system to identify at least one potential threat to safe operation of the vehicle during movement of the vehicle, wherein the at least one potential threat is identified on a path that the vehicle is to travel (FIG. 1, a sensor is shown at 139, and other sensors are available but not shown [paragraph 53]. FIG. 9 shows how a teleoperating computer system can be used to provide assistance to the driving of the vehicle [paragraph 80], but the system can also work for autonomous vehicles, and FIG. 38 is a diagram depicting an inferred semantic classification implemented in mapping data for autonomous vehicles in a fleet of autonomous vehicles [paragraph 43]. FIG. 2 shows a flow diagram to monitor a fleet of autonomous vehicles, wherein at 204, data representing an event associated with a calculated confidence level for a vehicle is detected. An event may be a condition or situation affecting operation, or potentially affecting operation, of an autonomous vehicle. The events may be internal to an autonomous vehicle, or external [paragraph 58]); evaluate a heat map, wherein the heat map is based on observations and actions of other vehicles on the path that the vehicle is to travel (FIG. 40 is a diagram depicting implementation of automated extraction of semantic information based on heat map data over a period of time, according to some examples. In diagram 4000, over a period of time 4015, inferred semantic classification based on object activity, over time, in or around a location in the region 3623 may be determined using heat map data generated from sensor data output from one or more sensors of the sensor system of the autonomous vehicles in a road network. The heat map data may be communicated to an external resource (e.g., network 4099) for processing and analysis. In some examples, heat map data may be generated external of the autonomous vehicles by communicating the sensor data from which the heat map data will be generated to an external resource [paragraph 158]), and wherein the heat map depicts varying amounts of risk encountered on the path by the other vehicles based on the at least one potential threat (In FIG. 37, at stage 3710, based on the tracking in step 3708, a probability that the subset of objects will be positioned at a location in the region over the period of time may be determined [paragraph 149]); determine at least one safety action to perform with the CAD system, based on the at least one potential threat identified from the sensor data and the varying amounts of risk depicted in the heat map (Confidence level generator 1123 may be configured to analyze perception data 1132 to determine a state for the autonomous vehicle. For example, confidence level generator 1123 may use semantic information associated with static and dynamic objects, as well as associated probabilistic estimations, to enhance a degree of certainty that planner 1164 is determining a safe course of action [FIG. 11, paragraph 86]); and generate commands to cause the vehicle to implement the at least one safety action with the CAD system, to respond to the at least one potential threat (According to some examples, the autonomous vehicle service platform can respond to the detection of an object obscuring a path or trajectory, by generating a response identifying geographic areas to exclude from planning a path and providing a path to follow, or the teleoperator may define areas of locations that the autonomous vehicle must avoid [paragraph 55]).

Regarding Claim 13. Levinson teaches the non-transitory computer-readable medium of claim 11. Levinson also teaches: wherein the one or more processors are to execute the instructions to identify a current context of the vehicle, and the commands to respond to the at least one potential threat are based on the current context of the vehicle (FIG. 13 depicts an example in which a planner may generate a trajectory, according to some examples. Diagram 1300 includes a trajectory evaluator 1320 and a trajectory generator 1324. Trajectory evaluator 1320 includes a confidence level generator 1322 and a teleoperator query messenger 1329.
As shown, trajectory evaluator 1320 is coupled to a perception engine 1366 to receive static map data 1301, and current and predicted object state data 1303 [paragraph 89]. FIG. 15 shows an example of control over an autonomous vehicle, which involves message data being received at a teleoperator for managing a fleet of autonomous vehicles. The message data may indicate event attributes associated with a non-normative state of operation in the context of a planned path for an autonomous vehicle. For example, an event may be characterized as a particular intersection that becomes problematic due to, for example, a large number of pedestrians hurriedly crossing the street against a traffic light. The event attributes describe the characteristics of the event, such as, for example, the number of people crossing the street, the traffic delays resulting from an increased number of pedestrians, etc. This leads into 1510, where data signals representing a selection (e.g., by teleoperator) of a recommended course of action are delivered and received by the autonomous vehicle [FIG. 15, paragraph 95]).

Regarding Claim 14. Levinson teaches the non-transitory computer-readable medium of claim 11. Levinson also teaches: wherein the instructions cause the CAD system to learn operational experiences of the other vehicles, and to identify the current context based on the operational experiences (Examples of external objects likely to be labeled as dynamic include bicyclists, pedestrians, animals, other vehicles, etc. If the external object is labeled as dynamic, further data about the external object may indicate a typical level of activity and velocity, as well as behavior patterns associated with the classification type. Further data about the external object may be generated by tracking the external object.
As such, the classification type can be used to predict or otherwise determine the likelihood that an external object may, for example, interfere with an autonomous vehicle traveling along a planned path. For example, an external object that is classified as a pedestrian may be associated with some maximum speed, as well as an average speed (e.g., based on tracking data) [paragraph 64]. While the primary example given involves the speed and movement of pedestrians, Levinson explicitly teaches that other vehicles can be included as external dynamic objects as well, and tracking the speed of external vehicles (a type of operational experience) would also be within the scope of the disclosure while also providing context information).

Regarding Claim 16. Levinson teaches the non-transitory computer-readable medium of claim 11. Levinson also teaches: wherein the instructions cause the CAD system to evaluate at least one message that identifies a potential threat to safe operation of the other vehicles (FIG. 15 shows an example of control over an autonomous vehicle, which involves message data being received at a teleoperator for managing a fleet of autonomous vehicles. The message data may indicate event attributes associated with a non-normative state of operation in the context of a planned path for an autonomous vehicle. For example, an event may be characterized as a particular intersection that becomes problematic due to, for example, a large number of pedestrians hurriedly crossing the street against a traffic light, which is a type of obstacle [paragraph 95, FIG. 15]), and wherein the heat map is created based on the at least one message (FIG. 40 is a diagram depicting implementation of automated extraction of semantic information based on heat map data over a period of time, according to some examples.
In diagram 4000, over a period of time 4015, inferred semantic classification based on object activity, over time, in or around a location in the region 3623 may be determined using heat map data generated from sensor data output from one or more sensors of the sensor system of the autonomous vehicles in a road network. The heat map data may be communicated to an external resource (e.g., network 4099) for processing and analysis. In some examples, heat map data may be generated external of the autonomous vehicles by communicating the sensor data from which the heat map data will be generated to an external resource [paragraph 158]).

Regarding Claim 17. Levinson teaches the non-transitory computer-readable medium of claim 16. Levinson also teaches: wherein the at least one message identifies at least one of adverse weather impact, road hazards, speed bumps, or steep terrain encountered by the other vehicles (An event may include weather-related conditions (e.g., loss of friction due to ice or rain) or the angle at which the sun is shining (e.g., at sunset), such as a low angle to the horizon that causes the sun to shine brightly in the eyes of human drivers of other vehicles [paragraph 58]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. US 20170316333 A1 (“Levinson”) as applied to claims 1 and 11 above, and further in view of Ray et al. US 11158188 B2 (“Ray”).

Regarding Claim 2. Levinson teaches the CAD system of claim 1. Levinson does not teach: wherein the at least one safety action is determined based at least in part on: an evaluation of whether the actions of the other vehicles are applicable to the at least one potential threat; an evaluation of whether the vehicle includes capabilities to avoid or mitigate the at least one potential threat using the actions of the other vehicles taken in response to the at least one potential threat; and an evaluation of a determined level of threat from the at least one potential threat.

However, Ray teaches: wherein the at least one safety action is determined based at least in part on: an evaluation of whether the actions of the other vehicles are applicable to the at least one potential threat; an evaluation of whether the vehicle includes capabilities to avoid or mitigate the at least one potential threat using the actions of the other vehicles taken in response to the at least one potential threat (The vehicle optimization program shown at 122 of FIG. 1 can access safety data on other autonomous vehicles (e.g., various other computing devices) [Column 12, lines 9-24]. The server system can monitor safety data transmitted from either the computing device shown in 120 of FIG. 1, or other computing devices regarding the status of the autonomous vehicle’s trajectory (i.e., velocity, direction, etc.) and the surrounding driving patterns [Column 11, lines 23-43].
Additionally, in one embodiment, the vehicle optimization program receives data based on the trajectory and driving patterns of one or more autonomous vehicles [Column 8, lines 46-51], which in combination with the safety data transmitted from other autonomous vehicles means that the system can receive data regarding the actions taken by other robots in response to the adversities faced by the other robots); and an evaluation of a determined level of threat from the at least one potential threat (The factors detailed above are used to determine a level of safety associated with autonomous vehicles and to minimize potential negative interactions that may result from human perception and judgment [Column 6, lines 20-35]. A level of safety can also be interpreted as a level of threat, or an inverted level of threat to safe operation of the robotic system).

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Levinson with wherein the at least one safety action is determined based at least in part on: an evaluation of whether the actions of the other vehicles are applicable to the at least one potential threat; an evaluation of whether the vehicle includes capabilities to avoid or mitigate the at least one potential threat using the actions of the other vehicles taken in response to the at least one potential threat; and an evaluation of a determined level of threat from the at least one potential threat as taught by Ray so that the autonomous vehicle can benefit from observations of how other vehicles respond to a potential threat.

Regarding Claim 12. Levinson teaches the non-transitory computer-readable medium of claim 11.
Levinson does not teach: wherein the at least one safety action is determined based at least in part on: an evaluation of whether the actions of the other vehicles are applicable to the at least one potential threat; an evaluation of whether the vehicle includes capabilities to avoid or mitigate the at least one potential threat using the actions of the other vehicles taken in response to the at least one potential threat; and an evaluation of a determined level of threat from the at least one potential threat.

However, Ray teaches: wherein the at least one safety action is determined based at least in part on: an evaluation of whether the actions of the other vehicles are applicable to the at least one potential threat; an evaluation of whether the vehicle includes capabilities to avoid or mitigate the at least one potential threat using the actions of the other vehicles taken in response to the at least one potential threat (The vehicle optimization program shown at 122 of FIG. 1 can access safety data on other autonomous vehicles (e.g., various other computing devices) [Column 12, lines 9-24]. The server system can monitor safety data transmitted from either the computing device shown in 120 of FIG. 1, or other computing devices regarding the status of the autonomous vehicle’s trajectory (i.e., velocity, direction, etc.) and the surrounding driving patterns [Column 11, lines 23-43].
Additionally, in one embodiment, the vehicle optimization program receives data based on the trajectory and driving patterns of one or more autonomous vehicles [Column 8, lines 46-51], which in combination with the safety data transmitted from other autonomous vehicles means that the system can receive data regarding the actions taken by other robots in response to the adversities faced by the other robots); and an evaluation of a determined level of threat from the at least one potential threat (The factors detailed above are used to determine a level of safety associated with autonomous vehicles and to minimize potential negative interactions that may result from human perception and judgment [Column 6, lines 20-35]. A level of safety can also be interpreted as a level of threat, or an inverted level of threat to safe operation of the robotic system).

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Levinson with wherein the at least one safety action is determined based at least in part on: an evaluation of whether the actions of the other vehicles are applicable to the at least one potential threat; an evaluation of whether the vehicle includes capabilities to avoid or mitigate the at least one potential threat using the actions of the other vehicles taken in response to the at least one potential threat; and an evaluation of a determined level of threat from the at least one potential threat as taught by Ray so that the autonomous vehicle can benefit from observations of how other vehicles respond to a potential threat.

Claim(s) 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. US 20170316333 A1 (“Levinson”) as applied to claims 3 and 13 above, and further in view of Konrardy et al. US 10134278 B1 (“Konrardy”).

Regarding Claim 5. Levinson teaches the CAD system of claim 3.
Levinson does not teach: wherein the processing circuitry is further configured to obtain data associated with observed errors of the other vehicles, process the data associated with the observed errors of the other vehicles to learn about environmental conditions in an immediate surrounding area of the vehicle, and identify the current context based on the environmental conditions.

However, Konrardy teaches: wherein the processing circuitry is further configured to obtain data associated with observed errors of the other vehicles (A server in communication with a vehicle controller keeps databases with information on vehicle accidents, road conditions, etc. [Column 10, lines 42-49]. Smart infrastructure can display a warning message that an accident has been detected ahead and/or on a specific road [Column 15, lines 55-60]), process the data associated with the observed errors of the other vehicles to learn about environmental conditions in an immediate surrounding area of the vehicle, and identify the current context based on the environmental conditions (In FIG. 5, Konrardy also shows that the autonomous vehicle system can determine if the vehicle will collide with a second vehicle (or other obstacle), identify a maneuver for the first vehicle to avoid the obstacle, and cause the first vehicle to move in accordance with the maneuver [FIG. 5, Column 28, lines 44-63]. The front-end components 102 may also obtain information regarding a vehicle 108 (e.g., a car, truck, motorcycle, etc.). Combined with the server in communication with a database on information regarding vehicle accidents and road conditions, this reads on identifying the current context based on the learned environmental conditions).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Levinson with wherein the processing circuitry is further configured to obtain data associated with observed errors of the other vehicles, process the data associated with the observed errors of the other vehicles to learn about environmental conditions in an immediate surrounding area of the vehicle, and identify the current context based on the environmental conditions as taught by Konrardy so that the vehicle can receive information from other vehicles that have already crashed due to hazardous conditions.

Regarding Claim 15. Levinson teaches the non-transitory computer-readable medium of claim 13. Levinson does not teach: wherein the instructions cause the CAD system to obtain data associated with observed errors of the other vehicles, process the data associated with the observed errors of the other vehicles to learn about environmental conditions in an immediate surrounding area of the vehicle, and identify the current context based on the environmental conditions.

However, Konrardy teaches: wherein the instructions cause the CAD system to obtain data associated with observed errors of the other vehicles (A server in communication with a vehicle controller keeps databases with information on vehicle accidents, road conditions, etc. [Column 10, lines 42-49]. Smart infrastructure can display a warning message that an accident has been detected ahead and/or on a specific road [Column 15, lines 55-60]), process the data associated with the observed errors of the other vehicles to learn about environmental conditions in an immediate surrounding area of the vehicle, and identify the current context based on the environmental conditions (In FIG. 5, Konrardy also shows that the autonomous vehicle system can determine if the vehicle will collide with a second vehicle (or other obstacle), identify a maneuver for the first vehicle to avoid the obstacle, and cause the first vehicle to move in accordance with the maneuver [FIG. 5, Column 28, lines 44-63]. The front-end components 102 may also obtain information regarding a vehicle 108 (e.g., a car, truck, motorcycle, etc.). Combined with the server in communication with a database on information regarding vehicle accidents and road conditions, this reads on identifying the current context based on the learned environmental conditions).

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Levinson with wherein the instructions cause the CAD system to obtain data associated with observed errors of the other vehicles, process the data associated with the observed errors of the other vehicles to learn about environmental conditions in an immediate surrounding area of the vehicle, and identify the current context based on the environmental conditions as taught by Konrardy so that the vehicle can receive information from other vehicles that have already crashed due to hazardous conditions.

Claim(s) 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. US 20170316333 A1 (“Levinson”) as applied to claims 1 and 11 above, and further in view of Vose et al. US 9656606 B1 (“Vose”).

Regarding Claim 8. Levinson teaches the CAD system of claim 1. Levinson does not teach: wherein the at least one safety action includes output of an indicator of a level of threat to the safe operation of the vehicle, wherein the indicator comprises one or more of an audio alert, a visual alert, or a mechanical alert, and wherein the indicator is output to an operator of the vehicle and identifies the level of threat on a spectrum of levels.
However, Vose teaches: wherein the at least one safety action includes output of an indicator of a level of threat to the safe operation of the vehicle (a system and method for alerting a driver of a vehicle to collision risks. FIG. 2 illustrates how the system interacts with a vehicle/customer device (such as a cell phone, shown in FIG. 1). The system may determine whether there is an elevated level of risk by accessing environment data and assessing the risk level [FIG. 2, numerals 226-230]. If the risk level is elevated, the system generates a notification at numeral 232 and communicates the notification to the user through their device, which can be either an on-board infotainment console inside the vehicle or a separate device such as a smartphone [Column 5, lines 64-67, Column 6, lines 1-14]. The system can determine whether there is an elevated level of risk for a collision based upon the assessment of the risk level, which can have a threshold level of acceptable risk (for example, a 10% chance) compared to a calculated overall level of risk [Column 9, lines 28-40]. This can be either a level of risk represented by a number or measurement, or a qualitative risk level such as “low”), wherein the indicator comprises one or more of an audio alert, a visual alert, or a mechanical alert (The alert can be in the form of an image, an audio alert, or haptic feedback [Column 10, lines 50-63]), and wherein the indicator is output to an operator of the vehicle and identifies the level of threat on a spectrum of levels (The insurance provider may generate and communicate an alert or notification to the electronic device, where the alert or notification warns or notifies the vehicle operator that the vehicle may be at an elevated risk for an animal collision.
In particular, the alert may include information identifying a specific type of animal, a reason for the elevated risk, and/or any other relevant information [Column 6, lines 39-52], which, in addition to the levels of threat, reads on a spectrum of threat levels).

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Levinson with wherein the at least one safety action includes output of an indicator of a level of threat to the safe operation of the vehicle, wherein the indicator comprises one or more of an audio alert, a visual alert, or a mechanical alert, and wherein the indicator is output to an operator of the vehicle and identifies the level of threat on a spectrum of levels as taught by Vose so as to allow the operator to receive notice of the adversity levels for potential adversities.

Regarding Claim 18. Levinson teaches the non-transitory computer-readable medium of claim 11. Levinson does not teach: wherein the at least one safety action includes output of an indicator of a level of threat to the safe operation of the vehicle, wherein the indicator comprises one or more of an audio alert, a visual alert, or a mechanical alert, and wherein the indicator is output to an operator of the vehicle and identifies the level of threat on a spectrum of levels.

However, Vose teaches: wherein the at least one safety action includes output of an indicator of a level of threat to the safe operation of the vehicle (a system and method for alerting a driver of a vehicle to collision risks. FIG. 2 illustrates how the system interacts with a vehicle/customer device (such as a cell phone, shown in FIG. 1). The system may determine whether there is an elevated level of risk by accessing environment data and assessing the risk level [FIG. 2, numerals 226-230].
If the risk level is elevated, the system generates a notification at numeral 232 and communicates the notification to the user through their device, which can be either an on-board infotainment console inside the vehicle or a separate device such as a smartphone [Column 5, lines 64-67, Column 6, lines 1-14]. The system can determine whether there is an elevated level of risk for a collision based upon the assessment of the risk level, which can have a threshold level of acceptable risk (for example, a 10% chance) compared to a calculated overall level of risk [Column 9, lines 28-40]. This can be either a level of risk represented by a number or measurement, or a qualitative risk level such as “low”), wherein the indicator comprises one or more of an audio alert, a visual alert, or a mechanical alert (The alert can be in the form of an image, an audio alert, or haptic feedback [Column 10, lines 50-63]), and wherein the indicator is output to an operator of the vehicle and identifies the level of threat on a spectrum of levels (The insurance provider may generate and communicate an alert or notification to the electronic device, where the alert or notification warns or notifies the vehicle operator that the vehicle may be at an elevated risk for an animal collision. In particular, the alert may include information identifying a specific type of animal, a reason for the elevated risk, and/or any other relevant information [Column 6, lines 39-52], which, in addition to the levels of threat, reads on a spectrum of threat levels).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Levinson with wherein the at least one safety action includes output of an indicator of a level of threat to the safe operation of the vehicle, wherein the indicator comprises one or more of an audio alert, a visual alert, or a mechanical alert, and wherein the indicator is output to an operator of the vehicle and identifies the level of threat on a spectrum of levels as taught by Vose so as to allow the operator to receive notice of the adversity levels for potential adversities.

Claim(s) 9-10 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al. US 20170316333 A1 (“Levinson”) as applied to claims 1 and 11 above, and further in view of Nister et al. US 20190243371 A1 (“Nister”).

Regarding Claim 9. Levinson teaches the CAD system of claim 1. Levinson does not teach: wherein the processing circuitry is further configured to identify a level of threat from the at least one potential threat to safe operation of the vehicle, and the level of threat is based on a risk of the vehicle being operated into a potential emergency situation.
However, Nister teaches: wherein the processing circuitry is further configured to identify a level of threat from the at least one potential threat to safe operation of the vehicle (When determining whether to implement a safety procedure or another set of controls, the system may calculate a safety potential associated with the safety procedure (in some examples, the safety potential is a representation of the degree of overlap between the vehicle-occupied trajectory(ies) and the object-occupied trajectory(ies)—e.g., the area or volume of overlap between the two)), and the level of threat is based on a risk of the vehicle being operated into a potential emergency situation (a wait perceiver may be responsible for determining constraints on the vehicle as a result of rules, conventions, and/or practical considerations, such as traffic lights, police, or other emergency personnel [paragraph 65]).

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Levinson with wherein the processing circuitry is further configured to identify a level of threat from the at least one potential threat to safe operation of the vehicle, and the level of threat is based on a risk of the vehicle being operated into a potential emergency situation as taught by Nister so as to allow the robotic vehicle to identify potential emergency situations, such as an emergency vehicle, which is subject to different rules than a normal civilian vehicle.

Regarding Claim 10. Levinson teaches the CAD system of claim 1. Levinson does not teach: wherein the at least one safety action to respond to the at least one potential threat includes providing a stimulus to a human driver of the vehicle to pay extra attention. However, Nister teaches: wherein the at least one safety action to respond to the at least one potential threat includes providing a stimulus to a human driver of the vehicle to pay extra attention.
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Levinson with wherein the at least one safety action to respond to the at least one potential threat includes providing a stimulus to a human driver of the vehicle to pay extra attention as taught by Nister so as to alert a human driver and ensure that they are paying attention to an upcoming adversity.

Regarding Claim 19. Levinson teaches the non-transitory computer-readable medium of claim 11. Levinson does not teach: wherein the instructions cause the CAD system to identify a level of threat from the at least one potential threat to safe operation of the vehicle, and the level of threat is based on a risk of the vehicle being operated into a potential emergency situation.

However, Nister teaches: wherein the instructions cause the CAD system to identify a level of threat from the at least one potential threat to safe operation of the vehicle (When determining whether to implement a safety procedure or another set of controls, the system may calculate a safety potential associated with the safety procedure (in some examples, the safety potential is a representation of the degree of overlap between the vehicle-occupied trajectory(ies) and the object-occupied trajectory(ies)—e.g., the area or volume of overlap between the two)), and the level of threat is based on a risk of the vehicle being operated into a potential emergency situation (a wait perceiver may be responsible for determining constraints on the vehicle as a result of rules, conventions, and/or practical considerations, such as traffic lights, police, or other emergency personnel [paragraph 65]).
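The "safety potential" idea Nister is cited for (the degree of overlap between the trajectory a vehicle will occupy and the trajectory an object will occupy) can be sketched as below. The grid-cell quantization, function names, and zero-overlap tolerance are illustrative assumptions, not Nister's actual method, which may use areas or volumes computed differently.

```python
# Sketch: safety potential as overlap between a vehicle-occupied trajectory
# and an object-occupied trajectory, using a coarse occupancy grid.

def occupied_cells(trajectory, cell=1.0):
    """Quantize (x, y) trajectory points onto grid cells of side `cell`."""
    return {(int(x // cell), int(y // cell)) for x, y in trajectory}

def safety_potential(vehicle_traj, object_traj, cell=1.0):
    """Overlap area (shared cell count times cell area) between trajectories."""
    overlap = occupied_cells(vehicle_traj, cell) & occupied_cells(object_traj, cell)
    return len(overlap) * cell * cell

def should_trigger_safety_procedure(vehicle_traj, object_traj, max_overlap=0.0):
    """Trigger the safety procedure when overlap exceeds the tolerated amount."""
    return safety_potential(vehicle_traj, object_traj) > max_overlap
```

Under this sketch, a planned path through (2, 0) and an object predicted to pass through the same cell yield a nonzero overlap and trigger the safety procedure; disjoint trajectories yield zero.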
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Levinson with wherein the instructions cause the CAD system to identify a level of threat from the at least one potential threat to safe operation of the vehicle, and the level of threat is based on a risk of the vehicle being operated into a potential emergency situation as taught by Nister so as to alert a human driver and ensure that they are paying attention to an upcoming adversity.

Regarding Claim 20. Levinson teaches the non-transitory computer-readable medium of claim 11. Levinson does not teach: wherein the at least one safety action to respond to the at least one potential threat includes providing a stimulus to a human driver of the vehicle to pay extra attention. However, Nister teaches: wherein the at least one safety action to respond to the at least one potential threat includes providing a stimulus to a human driver of the vehicle to pay extra attention.

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to modify the invention of Levinson with wherein the at least one safety action to respond to the at least one potential threat includes providing a stimulus to a human driver of the vehicle to pay extra attention as taught by Nister so as to alert a human driver and ensure that they are paying attention to an upcoming adversity.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON G CAIN, whose telephone number is (571)272-7009. The examiner can normally be reached Monday to Friday, 7:30am - 4:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON G CAIN/
Examiner, Art Unit 3656

Prosecution Timeline

Sep 20, 2024
Application Filed
Feb 09, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573302
METHOD FOR INFRASTRUCTURE-SUPPORTED ASSISTING OF A MOTOR VEHICLE
2y 5m to grant Granted Mar 10, 2026
Patent 12558790
METHOD AND COMPUTING SYSTEMS FOR PERFORMING OBJECT DETECTION
2y 5m to grant Granted Feb 24, 2026
Patent 12552019
MACHINE LEARNING METHOD AND ROBOT SYSTEM
2y 5m to grant Granted Feb 17, 2026
Patent 12544144
DENTAL ROBOT AND ORAL NAVIGATION METHOD
2y 5m to grant Granted Feb 10, 2026
Patent 12541205
MOVEMENT CONTROL SUPPORT DEVICE AND METHOD
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
40%
Grant Probability
66%
With Interview (+26.1%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 130 resolved cases by this examiner. Grant probability derived from career allow rate.
