DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/30/2024 has been considered by the examiner.
Drawings
The drawings that were filed on 09/30/2024 have been considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (US 20250065916 A1), hereinafter referred to as Gupta, in view of Herman et al. (US 20200257308 A1), hereinafter referred to as Herman, and in further view of Haggblade et al. (US 20210271241 A1), hereinafter referred to as Haggblade.
Regarding Claim 1, Gupta teaches a processor implemented method, comprising steps of:
perceiving, via one or more hardware processors, a surrounding environment of an autonomous vehicle to identify one or more surrounding vehicles and a target vehicle from the one or more surrounding vehicles (Using processor and sensors to perceive the environment and identifying the ego vehicle and surrounding vehicles as distinct nodes in a spatial network; [0022] [0039]);
determining, via the one or more hardware processors, (i) a set of input features for the target vehicle with respect to the autonomous vehicle for an observation time window using a plurality of driving and sensor related information of the autonomous vehicle and the target vehicle (Determining input features, kinematic data like speed/distance for surrounding vehicles over a sequence of time frames; [0034] [0036]), and (ii) a set of classes of maneuvers for the target vehicle with respect to the autonomous vehicle (Random Forest Classifiers and clustering algorithms are used to categorize the input data; [0027]), wherein the set of classes of maneuvers comprises a non-incident maneuver, Lane change to Left (LCL), the Lane change to Right (LCR)… (Predicting the trajectory of the encoded surrounding vehicles, which the trajectory can be a lane change to the left/right; [0040] [0035]), and
wherein each class from the set of classes of maneuvers corresponds to a specific condition from a set of conditions (The collision coefficient is compared to threshold values for specific conditions and outcomes; [0044]);
training, via the one or more hardware processors, a stacked autoencoder based singular deep neural network using the set of input features and the set of classes of maneuvers, wherein a stacked autoencoder is used to initialize a plurality of weights and a plurality of biases of each layer of the stacked autoencoder based singular deep neural network (A multi-layer encoder trained to reconstruct the input data and is pretrained; [0036-0038]);
predicting, via the one or more hardware processors, a maneuver from the set of classes of maneuvers to be performed by the target vehicle with respect to the autonomous vehicle within a prediction time window using the stacked autoencoder based singular deep neural network, wherein the prediction time window is indicative of a time interval between a time when the prediction is made and actual time of occurrence of the maneuver (The GNN predicting future trajectories of surrounding vehicles within a defined prediction horizon; [0040]);
determining in real time, via the one or more hardware processors, an outcome of a rule from a set of rules corresponding to a predicted maneuver from the set of predicted maneuvers of the target vehicle wherein the set of rules comprises: (a) occurrence of a safe event when the corresponding predicted maneuver is at least one of (Determining the outcome of a rule by calculating a collision coefficient, where a collision coefficient equal to or less than the warning threshold repeats the process of generating a collision coefficient; [0044]):
(i) the Lane change to Left and (ii) the Lane change to Right (Predicting the trajectory of the encoded surrounding vehicles, which the trajectory can be a lane change to the left/right; [0040] [0035]); and
(c) computing the set of input features for the target vehicle with respect to each surrounding vehicle from the one or more surrounding vehicles for the observation time window when the corresponding predicted maneuver is the non-incident maneuver (Computing the spatial relationships between all vehicles in the graph network; [0039]);
predicting, via the one or more hardware processors, a maneuver from the set of classes of maneuvers to be performed by the target vehicle with respect to each surrounding vehicle from the one or more surrounding vehicles within a prediction time window using the stacked autoencoder based singular deep neural network (The trained GNN generates future trajectory predictions of each surrounding vehicle based on the GNN spatial relationships; [0040-0041]).
Gupta does not explicitly teach communicating, via the one or more hardware processors, the predicted maneuver from the set of classes of maneuvers to be performed by the target vehicle with respect to each surrounding vehicle from the one or more surrounding vehicles from the autonomous vehicle to each surrounding vehicle using a vehicle-to-vehicle connection.
However, Herman discloses an autonomous vehicle system that utilizes vehicle-to-vehicle communication to communicate with nearby vehicles and determines the collision probability for an adjacent vehicle in order to perform a defensive driving maneuver ([0024] [0027]). This teaching is equivalent to the claimed limitation because the system communicates with the adjacent vehicle using vehicle-to-vehicle communication, determines a collision probability for the adjacent vehicle, and performs a defensive driving maneuver when the collision probability is greater than a threshold.
Gupta and Herman are considered to be analogous to the claimed invention because they are in the same field of autonomous vehicle maneuver prediction. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta to incorporate the teachings of the vehicle-to-vehicle communication and collision probability thresholds as taught by Herman, based on the motivation to transmit warnings to the surrounding vehicles so that they can react and perform defensive maneuvers. This provides the benefit of extending the host vehicle’s sensing capabilities to the surrounding vehicles, improving overall road safety.
Gupta and Herman do not explicitly teach Cut-in to Left (CTL) and Cut-in to Right (CTR), or the occurrence of a non-safe event suggesting applying brakes when the corresponding predicted maneuver is at least one of: (i) the Cut-in to Left and (ii) the Cut-in to Right.
However, Haggblade discloses techniques for training a model to detect the likelihood of a vehicle performing a cut-in maneuver. Haggblade teaches detecting cut-in maneuvers associated with left and right turns and predicting whether a vehicle will enter the driving lane ([0076] [0090] [0092]). This teaching is equivalent to the claimed limitation of Cut-in to Left (CTL) and Cut-in to Right (CTR) because the model identifies cut-in maneuvers associated with left and right turns and distinguishes them from other, non-cut-in behaviors. Furthermore, Haggblade teaches that the onboard model determines the trajectory to accommodate other vehicles, decelerating or stopping to yield to other vehicles ([0012] [0093]). This teaching is equivalent to the claimed limitation of occurrence of a non-safe event suggesting applying brakes when the corresponding predicted maneuver is at least one of: (i) the Cut-in to Left and (ii) the Cut-in to Right, because the system generates instructions for the vehicle to decelerate or stop, i.e., apply the brakes, in response to a cut-in event.
Gupta, Herman, and Haggblade are considered to be analogous to the claimed invention because they are in the same field of autonomous vehicle maneuver prediction. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta and Herman to incorporate the teachings of the cut-in maneuver detection and braking responses as taught by Haggblade, based on the motivation to improve the autonomous vehicle’s safety operations by handling vehicles that aggressively cut into the vehicle’s lane.
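As general background for the stacked-autoencoder weight initialization recited in the claim 1 limitations above, the following minimal sketch illustrates greedy layer-wise autoencoder pretraining in the abstract. It is illustrative only; it is not drawn from the instant application or the cited references, and all names, layer sizes, and data are hypothetical.

```python
# Sketch of greedy layer-wise autoencoder pretraining used to initialize the
# weights and biases of a deep network's layers -- a generic technique, not
# the specific method of the application or the cited references.
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder_layer(X, hidden, epochs=200, lr=0.1):
    """Train a one-hidden-layer autoencoder on X; return encoder weights/bias."""
    n, d = X.shape
    W_enc = rng.normal(0, 0.1, (d, hidden)); b_enc = np.zeros(hidden)
    W_dec = rng.normal(0, 0.1, (hidden, d)); b_dec = np.zeros(d)
    for _ in range(epochs):
        H = np.tanh(X @ W_enc + b_enc)      # encode
        X_hat = H @ W_dec + b_dec           # linear decode
        err = X_hat - X                     # reconstruction error
        # Gradient descent on the mean squared reconstruction loss
        gW_dec = H.T @ err / n; gb_dec = err.mean(0)
        dH = (err @ W_dec.T) * (1 - H**2)   # backprop through tanh
        gW_enc = X.T @ dH / n;  gb_enc = dH.mean(0)
        W_dec -= lr * gW_dec; b_dec -= lr * gb_dec
        W_enc -= lr * gW_enc; b_enc -= lr * gb_enc
    return W_enc, b_enc

# Toy input features (stand-ins for, e.g., range, range rate, speed, yaw angle)
X = rng.normal(size=(64, 5))

# Greedy stacking: each layer is pretrained on the previous layer's codes, and
# the learned encoder weights initialize the corresponding network layer.
layer_sizes = [4, 3]
init_weights, inputs = [], X
for h in layer_sizes:
    W, b = train_autoencoder_layer(inputs, h)
    init_weights.append((W, b))
    inputs = np.tanh(inputs @ W + b)
```

After this loop, `init_weights` would seed the corresponding layers of a classifier network before supervised fine-tuning on the maneuver labels.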
Regarding Claim 2, Gupta, Herman, and Haggblade remain as applied above to claim 1. Gupta further teaches the set of input features comprises range, range rate, transversal, speed, and yaw angle (The kinematic data includes speed and acceleration to provide the range rate, distance that provides the range, spatiotemporal coordinates that necessitate transversal and longitudinal positions, and a yaw rate; [0034] [0040-0041]).
Regarding Claim 3, Gupta, Herman, and Haggblade remain as applied above to claim 1. Gupta further teaches the set of conditions for determining the set of classes of maneuvers for the autonomous vehicle and each surrounding vehicle includes (Determining the class of maneuvers/trajectories based on conditions like spatial relationships and kinematic data; [0027] [0040] [0044]):
(a) selecting the…maneuver when a lane identity value at a current time is greater than the lane identity value at a previous time and the lane identity value of the autonomous vehicles and the target vehicle are same at the current time and a parameter…is less than a predefined threshold value (Tracking lane geometry and comparing a calculated collision coefficient to a threshold value to determine a specific maneuver class such as an escape route regardless of direction; [0017-0019] [0044]);
(b) selecting the Lane change to Right maneuver when the lane identity value at the current time is greater than the lane identity value at the previous time and the lane identity value of the autonomous vehicles and the target vehicle are same at the current time and the parameter…exceeds the predefined threshold value (The GNN predicts future trajectories of a vehicle moving from one lane to another and uses the collision alert module to estimate the future trajectories. When the collision coefficient is equal to or less than the warning threshold value, the process of projecting future trajectories repeats; [0017] [0044] [0045]);
(c) selecting the…maneuver when the lane identity value at the current time is lesser than the lane identity value at the previous time and the lane identity value of the autonomous vehicles and the target vehicle are same at the current time and the parameter…is less than the predefined threshold value (Tracking lane geometry and comparing a calculated collision coefficient to a threshold value to determine a specific maneuver class such as an escape route regardless of direction; [0017-0019] [0044]);
(d) selecting the Lane change to Left maneuver when the lane identity value at the current time is lesser than the lane identity value at the previous time and the lane identity value of the autonomous vehicles and the target vehicle are same at the current time and the parameter…exceeds the predefined threshold value (The GNN predicts future trajectories of a vehicle moving from one lane to another and uses the collision alert module to estimate the future trajectories. When the collision coefficient is equal to or less than the warning threshold value, the process of projecting future trajectories repeats; [0017] [0044] [0045]); and
(e) selecting the non-incident maneuver when the lane identity of the autonomous vehicle is not equal to the lane identity of the target vehicle at the current time (Tracking a target vehicle in the left lane while the Ego vehicle is in the middle lane through spatial relationships where the collision coefficient is equal to or less than the warning threshold value; [0019] [0039] [0044]).
Gupta does not explicitly teach a parameter indicative of division of the range by the range rate.
However, Herman discloses an autonomous vehicle that uses vehicle-to-vehicle communication and determines a collision probability parameter based on the time-to-collision of an adjacent vehicle ([0038]). Herman further teaches that the controller determines the time-to-collision of the adjacent vehicle from the velocity, acceleration, direction-of-travel, the distance to the object, steering angle, steering angle rate-of-change, etc. ([0039]). These teachings are equivalent to the claimed limitation of a parameter indicative of division of the range by the range rate because range divided by range rate (the rate of change of range, i.e., the closing velocity) yields a time, which in the context of collision probability is the time-to-collision parameter. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta to incorporate the teachings of the collision probability parameter based on the time-to-collision as taught by Herman, based on the motivation to improve the collision coefficient metric and ensure that defensive actions are triggered only when the time-to-collision parameter crosses the imminent collision threshold value.
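The dimensional reasoning above can be written out explicitly (illustrative arithmetic only; the symbols R and Ṙ are generic, not taken from the cited references):

```latex
% Range divided by range rate carries units of time, i.e., time-to-collision (TTC):
\[
  t_{\mathrm{TTC}} \;=\; \frac{R\ [\mathrm{m}]}{\dot{R}\ [\mathrm{m/s}]}
  \qquad\text{e.g.,}\qquad
  \frac{30\ \mathrm{m}}{10\ \mathrm{m/s}} \;=\; 3\ \mathrm{s}.
\]
```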
Gupta and Herman do not explicitly teach the Cut-in to Right and Cut-in to Left.
However, Haggblade discloses detecting cut-in maneuvers based on vehicle position relative to a lane region and the direction of travel. Haggblade teaches detecting when a vehicle is going to enter the lane region of the driving lane in front of the detecting vehicle; when another vehicle enters that lane region by performing a left-turn “cut-in” into traffic in front of the detecting vehicle, the detecting vehicle’s planning component changes its trajectory in response by decelerating to increase distance, yielding/stopping for the other vehicle, and/or performing any other combination of maneuvers the vehicle is capable of performing ([0029-0030]). This teaching is equivalent to the claim limitations because the perception component labels the act of the other vehicle entering the lane in front of the ego vehicle and selects a corresponding deceleration or yield maneuver. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta and Herman to incorporate the teachings of classifying maneuvers where a vehicle enters the lane region and selecting the corresponding deceleration or yielding actions as taught by Haggblade, based on the motivation to improve the system’s maneuver classification of standard lane changes and cut-in maneuvers. This provides the benefit of the autonomous vehicle applying more targeted and appropriate responses to surrounding vehicles.
Regarding Claim 4, Gupta teaches a system, comprising: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to (A system with processor, memory, communication interfaces, and local interface coupled with processors; [0022-0024]):
perceive a surrounding environment of an autonomous vehicle to identify one or more surrounding vehicles and a target vehicle from the one or more surrounding vehicles (Using processor and sensors to perceive the environment and identifying the ego vehicle and surrounding vehicles as distinct nodes in a spatial network; [0022] [0039]);
determine (i) a set of input features for the target vehicle with respect to the autonomous vehicle for an observation time window using a plurality of driving and sensor related information of the autonomous vehicle and the target vehicle (Determining input features, kinematic data like speed/distance for surrounding vehicles over a sequence of time frames; [0034] [0036]), and
(ii) a set of classes of maneuvers for the target vehicle with respect to the autonomous vehicle (Random Forest Classifiers and clustering algorithms are used to categorize the input data; [0027]), wherein the set of classes of maneuvers comprises a non-incident maneuver, lane change to left, lane change to right…(Predicting the trajectory of the encoded surrounding vehicles, which the trajectory can be a lane change to the left/right; [0040] [0035]), and
wherein each class from the set of classes of maneuvers corresponds to a specific condition from a set of conditions (The collision coefficient is compared to threshold values for specific conditions and outcomes; [0044]);
train a stacked autoencoder based singular deep neural network using the set of input features and the set of classes of maneuvers, wherein a stacked autoencoder is used to initialize a plurality of weights and a plurality of biases of each layer of the stacked autoencoder based singular deep neural network (A multi-layer encoder trained to reconstruct the input data and is pretrained; [0036-0038]);
predict a maneuver from the set of classes of maneuvers to be performed by the target vehicle with respect to the autonomous vehicle within a prediction time window using the stacked autoencoder based singular deep neural network, wherein the prediction time window is indicative of a time interval between a time when the prediction is made and actual time of occurrence of the maneuver (The GNN predicting future trajectories of surrounding vehicles within a defined prediction horizon; [0040]);
determine in real time an outcome of a rule from a set of rules corresponding to a predicted maneuver from the set of predicted maneuvers of the target vehicle wherein the set of rules comprises: (a) occurrence of a safe event when the corresponding predicted maneuver is at least one of (Determining the outcome of a rule by calculating a collision coefficient, where a collision coefficient equal to or less than the warning threshold repeats the process of generating a collision coefficient; [0044]):
(i) the Lane change to Left (LCL) and (ii) the Lane change to Right (LCR) (Predicting the trajectory of the encoded surrounding vehicles, which the trajectory can be a lane change to the left/right; [0040] [0035]); and
(c) computing the set of input features for the target vehicle with respect to each surrounding vehicle from the one or more surrounding vehicles for the observation time window when the corresponding predicted maneuver is the non-incident maneuver (Computing the spatial relationships between all vehicles in the graph network; [0039]);
predict a maneuver from the set of classes of maneuvers to be performed by the target vehicle with respect to each surrounding vehicle from the one or more surrounding vehicles within a prediction time window using the stacked autoencoder based singular deep neural network (The trained GNN generates future trajectory predictions of each surrounding vehicle based on the GNN spatial relationships; [0040-0041]).
Gupta does not explicitly teach communicate the predicted maneuver from the set of classes of maneuvers to be performed by the target vehicle with respect to each surrounding vehicle from the one or more surrounding vehicles from the autonomous vehicle to each surrounding vehicle using a vehicle-to-vehicle connection.
However, Herman discloses an autonomous vehicle system that utilizes vehicle-to-vehicle communication to communicate with nearby vehicles and determines the collision probability for an adjacent vehicle in order to perform a defensive driving maneuver ([0024] [0027]). This teaching is equivalent to the claimed limitation because the system communicates with the adjacent vehicle using vehicle-to-vehicle communication, determines a collision probability for the adjacent vehicle, and performs a defensive driving maneuver when the collision probability is greater than a threshold. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta to incorporate the teachings of the vehicle-to-vehicle communication and collision probability thresholds as taught by Herman, based on the motivation to transmit warnings to the surrounding vehicles so that they can react and perform defensive maneuvers. This provides the benefit of extending the host vehicle’s sensing capabilities to the surrounding vehicles, improving overall road safety.
Gupta and Herman do not explicitly teach cut-in to left and cut-in to right, or the occurrence of a non-safe event suggesting applying brakes when the corresponding predicted maneuver is at least one of: (i) the Cut-in to Left (CTL) and (ii) the Cut-in to Right (CTR).
However, Haggblade discloses techniques for training a model to detect the likelihood of a vehicle performing a cut-in maneuver. Haggblade teaches detecting cut-in maneuvers associated with left and right turns and predicting whether a vehicle will enter the driving lane ([0076] [0090] [0092]). This teaching is equivalent to the claimed limitation of Cut-in to Left (CTL) and Cut-in to Right (CTR) because the model identifies cut-in maneuvers associated with left and right turns and distinguishes them from other, non-cut-in behaviors. Furthermore, Haggblade teaches that the onboard model determines the trajectory to accommodate other vehicles, decelerating or stopping to yield to other vehicles ([0012] [0093]). This teaching is equivalent to the claimed limitation of occurrence of a non-safe event suggesting applying brakes when the corresponding predicted maneuver is at least one of: (i) the Cut-in to Left and (ii) the Cut-in to Right, because the system generates instructions for the vehicle to decelerate or stop, i.e., apply the brakes, in response to a cut-in event. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta and Herman to incorporate the teachings of the cut-in maneuver detection and braking responses as taught by Haggblade, based on the motivation to improve the autonomous vehicle’s safety operations by handling vehicles that aggressively cut into the vehicle’s lane.
Regarding Claim 5, Gupta, Herman, and Haggblade remain as applied above to claim 4. Gupta further teaches the set of input features comprises range, range rate, transversal, speed, and yaw angle (The kinematic data includes speed and acceleration to provide the range rate, distance that provides the range, spatiotemporal coordinates that necessitate transversal and longitudinal positions, and a yaw rate; [0034] [0040-0041]).
Regarding Claim 6, Gupta, Herman, and Haggblade remain as applied above to claim 4. Gupta further teaches the set of conditions for determining the set of classes of maneuvers for the autonomous vehicle and each surrounding vehicle include (Determining the class of maneuvers/trajectories based on conditions like spatial relationships and kinematic data; [0027] [0040] [0044]):
(a) selecting the…maneuver when a lane identity value at a current time is greater than the lane identity value at a previous time and the lane identity value of the autonomous vehicles and the target vehicle are same at the current time and a parameter…is less than a predefined threshold value (Tracking lane geometry and comparing a calculated collision coefficient to a threshold value to determine a specific maneuver class such as an escape route regardless of direction; [0017-0019] [0044]);
(b) selecting the lane change to right maneuver when the lane identity value at the current time is greater than the lane identity value at the previous time and the lane identity value of the autonomous vehicles and the target vehicle are same at the current time and the parameter…exceeds the predefined threshold value (The GNN predicts future trajectories of a vehicle moving from one lane to another and uses the collision alert module to estimate the future trajectories. When the collision coefficient is equal to or less than the warning threshold value, the process of projecting future trajectories repeats; [0017] [0044] [0045]);
(c) selecting the…maneuver when the lane identity value at the current time is lesser than the lane identity value at the previous time and the lane identity value of the autonomous vehicles and the target vehicle are same at the current time and the parameter…is less than the predefined threshold value (Tracking lane geometry and comparing a calculated collision coefficient to a threshold value to determine a specific maneuver class such as an escape route regardless of direction; [0017-0019] [0044]);
(d) selecting the lane change to left maneuver when the lane identity value at the current time is lesser than the lane identity value at the previous time and the lane identity value of the autonomous vehicles and the target vehicle are same at the current time and the parameter indicative of division of the range by the range rate exceeds the predefined threshold value (The GNN predicts future trajectories of a vehicle moving from one lane to another and uses the collision alert module to estimate the future trajectories. When the collision coefficient is equal to or less than the warning threshold value, the process of projecting future trajectories repeats; [0017] [0044] [0045]); and
(e) selecting the non-incident maneuver when the lane identity of the autonomous vehicle is not equal to the lane identity of the target vehicle at the current time (Tracking a target vehicle in the left lane while the Ego vehicle is in the middle lane through spatial relationships where the collision coefficient is equal to or less than the warning threshold value; [0019] [0039] [0044]).
Gupta does not explicitly teach a parameter indicative of division of the range by the range rate.
However, Herman discloses an autonomous vehicle that uses vehicle-to-vehicle communication and determines a collision probability parameter based on the time-to-collision of an adjacent vehicle ([0038]). Herman further teaches that the controller determines the time-to-collision of the adjacent vehicle from the velocity, acceleration, direction-of-travel, the distance to the object, steering angle, steering angle rate-of-change, etc. ([0039]). These teachings are equivalent to the claimed limitation of a parameter indicative of division of the range by the range rate because range divided by range rate (the rate of change of range, i.e., the closing velocity) yields a time, which in the context of collision probability is the time-to-collision parameter. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta to incorporate the teachings of the collision probability parameter based on the time-to-collision as taught by Herman, based on the motivation to improve the collision coefficient metric and ensure that defensive actions are triggered only when the time-to-collision parameter crosses the imminent collision threshold value.
Gupta and Herman do not explicitly teach the Cut-in to Right and Cut-in to Left.
However, Haggblade discloses detecting cut-in maneuvers based on vehicle position relative to a lane region and the direction of travel. Haggblade teaches detecting when a vehicle is going to enter the lane region of the driving lane in front of the detecting vehicle; when another vehicle enters that lane region by performing a left-turn “cut-in” into traffic in front of the detecting vehicle, the detecting vehicle’s planning component changes its trajectory in response by decelerating to increase distance, yielding/stopping for the other vehicle, and/or performing any other combination of maneuvers the vehicle is capable of performing ([0029-0030]). This teaching is equivalent to the claim limitations because the perception component labels the act of the other vehicle entering the lane in front of the ego vehicle and selects a corresponding deceleration or yield maneuver. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta and Herman to incorporate the teachings of classifying maneuvers where a vehicle enters the lane region and selecting the corresponding deceleration or yielding actions as taught by Haggblade, based on the motivation to improve the system’s maneuver classification of standard lane changes and cut-in maneuvers. This provides the benefit of the autonomous vehicle applying more targeted and appropriate responses to surrounding vehicles.
Regarding Claim 7, Gupta teaches one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause (The controller comprising processor and memory such as a non-transitory computer readable memory of storing instructions and executed by the processor; [0023-0024]):
perceiving a surrounding environment of an autonomous vehicle to identify one or more surrounding vehicles and a target vehicle from the one or more surrounding vehicles (Using processor and sensors to perceive the environment and identifying the ego vehicle and surrounding vehicles as distinct nodes in a spatial network; [0022] [0039]);
determining (i) a set of input features for the target vehicle with respect to the autonomous vehicle for an observation time window using a plurality of driving and sensor related information of the autonomous vehicle and the target vehicle (Determining input features, kinematic data like speed/distance for surrounding vehicles over a sequence of time frames; [0034] [0036]), and
(ii) a set of classes of maneuvers for the target vehicle with respect to the autonomous vehicle (Random Forest Classifiers and clustering algorithms are used to categorize the input data; [0027]), wherein the set of classes of maneuvers comprises a non-incident maneuver, Lane change to Left (LCL), the Lane change to Right (LCR)…(Predicting the trajectory of the encoded surrounding vehicles, which the trajectory can be a lane change to the left/right; [0040] [0035]), and
wherein each class from the set of classes of maneuvers corresponds to a specific condition from a set of conditions (The collision coefficient is compared to threshold values for specific conditions and outcomes; [0044]);
training a stacked autoencoder based singular deep neural network using the set of input features and the set of classes of maneuvers, wherein a stacked autoencoder is used to initialize a plurality of weights and a plurality of biases of each layer of the stacked autoencoder based singular deep neural network (A multi-layer encoder is pretrained to reconstruct the input data; [0036-0038]);
predicting a maneuver from the set of classes of maneuvers to be performed by the target vehicle with respect to the autonomous vehicle within a prediction time window using the stacked autoencoder based singular deep neural network, wherein the prediction time window is indicative of a time interval between a time when the prediction is made and actual time of occurrence of the maneuver (The GNN predicting future trajectories of surrounding vehicles within a defined prediction horizon; [0040]);
determining in real time an outcome of a rule from a set of rules corresponding to a predicted maneuver from the set of predicted maneuvers of the target vehicle wherein the set of rules comprises: (a) occurrence of a safe event when the corresponding predicted maneuver is at least one of (Determining the outcome of a rule by calculating a collision coefficient; when the collision coefficient is equal to or less than the warning threshold, the process of generating a collision coefficient repeats; [0044]):
(i) the Lane change to Left and (ii) the Lane change to Right (Predicting the trajectory of the encoded surrounding vehicles, where the trajectory can be a lane change to the left/right; [0040] [0035]);
(c) computing the set of input features for the target vehicle with respect to each surrounding vehicle from the one or more surrounding vehicles for the observation time window when the corresponding predicted maneuver is the non-incident maneuver (Computing the spatial relationships between all vehicles in the graph network; [0039]);
predicting a maneuver from the set of classes of maneuvers to be performed by the target vehicle with respect to each surrounding vehicle from the one or more surrounding vehicles within a prediction time window using the stacked autoencoder based singular deep neural network (The trained GNN generates future trajectory predictions of each surrounding vehicle based on the GNN spatial relationships; [0040-0041]).
Gupta does not explicitly teach communicating the predicted maneuver from the set of classes of maneuvers to be performed by the target vehicle with respect to each surrounding vehicle from the one or more surrounding vehicles from the autonomous vehicle to each surrounding vehicle using a vehicle-to-vehicle connection.
However, Herman discloses an autonomous vehicle system that utilizes vehicle-to-vehicle communication to communicate with nearby vehicles and determines the collision probability for the adjacent vehicle to perform a defensive driving maneuver ([0024] [0027]). This teaching is equivalent to the claimed limitation because the system communicates with the adjacent vehicle using vehicle-to-vehicle communication, determines a collision probability for the adjacent vehicle, and performs a defensive driving maneuver when the collision probability is greater than a threshold. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta to incorporate the teachings of the vehicle-to-vehicle communication and collision probability thresholds as taught by Herman, based on the motivation to transmit warnings to the surrounding vehicles so that they can react and perform defensive maneuvers. This provides the benefit of extending the host vehicle’s sensing capabilities to the surrounding vehicles to improve overall road safety.
Gupta and Herman do not explicitly teach cut-in to left and cut-in to right, and occurrence of a non-safe event suggesting applying brakes when the corresponding predicted maneuver is at least one of: (i) the Cut-in to Left (CTL) and (ii) the Cut-in to Right (CTR).
However, Haggblade discloses techniques for training a model to detect the likelihood that a vehicle will perform a cut-in maneuver. Haggblade teaches detecting cut-in maneuvers associated with left and right turns and predicting whether a vehicle will enter the driving lane ([0076] [0090] [0092]). This teaching is equivalent to the claimed limitation of Cut-in to Left (CTL) and Cut-in to Right (CTR) because the model identifies cut-in maneuvers associated with left and right turns and distinguishes them from other, non-cut-in behaviors. Furthermore, Haggblade teaches that the onboard model determines the trajectory to accommodate other vehicles, decelerating or stopping to yield to other vehicles ([0012] [0093]). This teaching is equivalent to the claimed limitation of occurrence of a non-safe event suggesting applying brakes when the corresponding predicted maneuver is at least one of: (i) the Cut-in to Left and (ii) the Cut-in to Right, because the system generates instructions for the vehicle to decelerate or stop by applying the brakes in response to a cut-in event. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta and Herman to incorporate the teachings of the cut-in maneuver detection and braking responses as taught by Haggblade, based on the motivation to improve the autonomous vehicle’s safety operations by handling vehicles that aggressively cut into the vehicle’s lane.
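The rule set discussed for claim 7 above — a safe event for lane changes, a non-safe event with braking for cut-ins, and recomputation of input features for the non-incident maneuver — can be summarized as a simple mapping. The sketch below is illustrative only; the function name, labels, and return values are assumptions for illustration and are not drawn from the claims or the cited references.

```python
# Illustrative sketch only: the maneuver labels, function name, and return
# values are assumptions for illustration, not from the claims.
SAFE = {"Lane change to Left", "Lane change to Right"}
NON_SAFE = {"Cut-in to Left", "Cut-in to Right"}

def rule_outcome(predicted_maneuver: str) -> tuple[str, str]:
    """Map a predicted maneuver class to (event, suggested action)."""
    if predicted_maneuver in SAFE:
        # occurrence of a safe event for lane changes
        return ("safe event", "no braking required")
    if predicted_maneuver in NON_SAFE:
        # occurrence of a non-safe event suggesting applying brakes
        return ("non-safe event", "apply brakes")
    # non-incident maneuver: recompute the input features for the target
    # vehicle with respect to each surrounding vehicle
    return ("non-incident", "recompute input features")
```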
Regarding Claim 8, Gupta, Herman, and Haggblade remain as applied above in claim 7. Gupta further teaches the set of input features comprises range, range rate, transversal, speed, and yaw angle (The kinematic data includes speed and acceleration, which provide the range rate; distance, which provides the range; spatiotemporal coordinates, which necessitate transversal and longitudinal positions; and a yaw rate; [0034] [0040-0041]).
Regarding Claim 9, Gupta, Herman, and Haggblade remain as applied above in claim 7. Gupta further teaches the set of conditions for determining the set of classes of maneuvers for the autonomous vehicle and each surrounding vehicle includes (Non-transitory medium for determining the class of maneuvers/trajectories based on conditions like spatial relationships and kinematic data; [0023-0024] [0027] [0040] [0044]):
(a) selecting the…maneuver when a lane identity value at a current time is greater than the lane identity value at a previous time and the lane identity values of the autonomous vehicle and the target vehicle are the same at the current time and a parameter…is less than a predefined threshold value (Tracking lane geometry and comparing a calculated collision coefficient to a threshold value to determine a specific maneuver class such as an escape route regardless of direction; [0017-0019] [0044]);
(b) selecting the Lane change to Right maneuver when the lane identity value at the current time is greater than the lane identity value at the previous time and the lane identity values of the autonomous vehicle and the target vehicle are the same at the current time and the parameter…exceeds the predefined threshold value (The GNN predicts future trajectories of a vehicle moving from one lane to another and uses the collision alert module to estimate the future trajectories. When the collision coefficient is equal to or less than the warning threshold value, the process of projecting future trajectories repeats; [0017] [0044] [0045]);
(c) selecting the…maneuver when the lane identity value at the current time is lesser than the lane identity value at the previous time and the lane identity values of the autonomous vehicle and the target vehicle are the same at the current time and the parameter…is less than the predefined threshold value (Tracking lane geometry and comparing a calculated collision coefficient to a threshold value to determine a specific maneuver class such as an escape route regardless of direction; [0017-0019] [0044]);
(d) selecting the Lane change to Left maneuver when the lane identity value at the current time is lesser than the lane identity value at the previous time and the lane identity values of the autonomous vehicle and the target vehicle are the same at the current time and the parameter…exceeds the predefined threshold value (The GNN predicts future trajectories of a vehicle moving from one lane to another and uses the collision alert module to estimate the future trajectories. When the collision coefficient is equal to or less than the warning threshold value, the process of projecting future trajectories repeats; [0017] [0044] [0045]); and
(e) selecting the non-incident maneuver when the lane identity of the autonomous vehicle is not equal to the lane identity of the target vehicle at the current time (Tracking a target vehicle in the left lane while the Ego vehicle is in the middle lane through spatial relationships where the collision coefficient is equal to or less than the warning threshold value; [0019] [0039] [0044]).
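The conditions (a)-(e) above amount to a decision rule over the lane identity values and the threshold comparison. The sketch below is illustrative only: the function and argument names are hypothetical, and the elided maneuver names in conditions (a) and (c) are labeled here as generic cut-in classes purely as an assumption for illustration, not taken from the claims.

```python
def classify_maneuver(tv_lane_now: int, tv_lane_prev: int,
                      av_lane_now: int, parameter: float,
                      threshold: float) -> str:
    """Illustrative mapping of conditions (a)-(e).

    tv_* : lane identity values of the target vehicle at the current and
    previous times; av_lane_now : lane identity of the autonomous vehicle.
    `parameter` is the claimed parameter compared against the predefined
    threshold value. All names here are assumptions for illustration.
    """
    if av_lane_now != tv_lane_now:
        # (e): lane identities differ at the current time
        return "non-incident"
    if tv_lane_now > tv_lane_prev:
        # (a)/(b): lane identity increased; below-threshold parameter is
        # assumed here to select the elided (cut-in) class
        return "cut-in (assumed)" if parameter < threshold else "Lane change to Right"
    if tv_lane_now < tv_lane_prev:
        # (c)/(d): lane identity decreased
        return "cut-in (assumed)" if parameter < threshold else "Lane change to Left"
    return "non-incident"
```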
Gupta does not explicitly teach a parameter indicative of division of the range by the range rate.
However, Herman discloses an autonomous vehicle that uses vehicle-to-vehicle communication and determines a collision probability parameter based on the time-to-collision of an adjacent vehicle ([0038]). Herman further teaches that the controller determines the time-to-collision of the adjacent vehicle from the velocity, acceleration, direction of travel, the distance to the object, steering angle, steering angle rate-of-change, etc. ([0039]). These teachings are equivalent to the claimed limitation of a parameter indicative of division of the range by the range rate because range divided by range rate (the rate of change of range, i.e., the closing velocity) yields a quantity with units of time, which, in the context of collision probability, is the time-to-collision parameter. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta to incorporate the teachings of the collision probability parameter based on the time-to-collision as taught by Herman, based on the motivation to improve the collision coefficient metric and ensure that defensive actions are triggered only when the imminent collision threshold value is exceeded.
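The arithmetic point above — that range divided by range rate has units of time and corresponds to a time-to-collision — can be sketched as follows. This is an illustrative sketch only; the function name and the sign convention (positive range rate meaning a closing gap) are assumptions, not taken from Herman.

```python
def time_to_collision(range_m: float, range_rate_mps: float) -> float:
    """Time-to-collision as range divided by range rate.

    range_m: distance to the target vehicle, in meters.
    range_rate_mps: closing speed, in m/s (assumed positive when the
    gap is shrinking; this sign convention is an assumption).
    """
    if range_rate_mps <= 0:
        # Not closing: no collision is predicted.
        return float("inf")
    # meters divided by meters-per-second yields seconds
    return range_m / range_rate_mps
```

For example, a 30 m gap closing at 10 m/s gives a time-to-collision of 3 seconds, which a controller could then compare against an imminent-collision threshold.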
Gupta and Herman do not explicitly teach the Cut-in to Right and Cut-in to Left.
However, Haggblade discloses detecting cut-in maneuvers based on a vehicle's position relative to a lane region and its direction of travel. Haggblade teaches detecting when a vehicle is going to enter the lane region of the driving lane in front of the detecting vehicle, including when another vehicle (“the other vehicle”) enters the same lane region by performing a left-turn “cut-in” into traffic in front of the detecting vehicle; in response, the detecting vehicle’s planning component changes the trajectory of the detecting vehicle by decelerating to increase distance and yield/stop to the other vehicle, and/or performing any other combination of maneuvers the vehicle is capable of performing ([0029-0030]). This teaching is equivalent to the claim limitations because the perception component labels the act of the other vehicle entering the lane in front of the ego vehicle and a corresponding deceleration or yield maneuver is selected. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Gupta and Herman to incorporate the teachings of classifying maneuvers where a vehicle enters the lane region and selecting the corresponding deceleration or yielding actions as taught by Haggblade, based on the motivation to improve the system’s classification of standard lane-change and cut-in maneuvers. This provides the benefit of the autonomous vehicle applying more targeted and appropriate responses to surrounding vehicles.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Nagaraja (US 20210070320 A1)
Rosales (US 20200249683 A1)
Villegas (US 20230088912 A1)
Chen (US 20240329639 A1)
Ng (US 20240059285 A1)
Kim (US 20190266421 A1)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD ANDREW IZON DIZON whose telephone number is (571)272-4834. The examiner can normally be reached M-F 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Angela Ortiz can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWARD ANDREW IZON DIZON/Examiner, Art Unit 3663
/ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663