Prosecution Insights
Last updated: April 19, 2026
Application No. 18/652,096

Device and Method for Determining an Intention of a Driver to Turn

Final Rejection (§101, §103)
Filed
May 01, 2024
Examiner
TRAN, THANG DUC
Art Unit
2686
Tech Center
2600 — Communications
Assignee
BAYERISCHE MOTOREN WERKE AKTIENGESELLSCHAFT
OA Round
2 (Final)
Grant Probability
76% (Favorable)
Predicted OA Rounds
3-4
Estimated Time to Grant
2y 0m
Grant Probability With Interview
99%

Examiner Intelligence

Career Allow Rate
76% (356 granted / 468 resolved), +14.1% vs. TC avg (above average)
Interview Lift
+23.7% higher allowance rate in resolved cases with an interview (strong)
Average Prosecution Time
2y 0m (fast prosecutor); 31 applications currently pending
Total Applications
499 across all art units (career history)
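The allow rate and interview lift above are simple ratios over the examiner's resolved cases. The sketch below shows how such figures can be derived; the with/without-interview split used here is a hypothetical placeholder, since only the aggregate 356/468 counts and the +23.7% lift are reported.

```python
# Illustrative only: derives allow-rate and interview-lift style metrics
# from resolved-case counts. The interview split below is hypothetical;
# the report states only the aggregate 356/468 figures and the lift.

def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that were granted."""
    return granted / resolved

def interview_lift(granted_iv: int, resolved_iv: int,
                   granted_no_iv: int, resolved_no_iv: int) -> float:
    """Allowance-rate difference between interviewed and non-interviewed cases."""
    return allow_rate(granted_iv, resolved_iv) - allow_rate(granted_no_iv, resolved_no_iv)

career = allow_rate(356, 468)                              # ~0.761, i.e. 76%
lift = interview_lift(granted_iv=130, resolved_iv=140,     # hypothetical split
                      granted_no_iv=226, resolved_no_iv=328)
print(f"career allow rate: {career:.1%}, interview lift: {lift:+.1%}")
```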

Statute-Specific Performance

§101
3.7% (-36.3% vs. TC avg)
§103
59.5% (+19.5% vs. TC avg)
§102
11.6% (-28.4% vs. TC avg)
§112
9.7% (-30.3% vs. TC avg)
Tech Center averages are estimates; based on career data from 468 resolved cases.
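Each delta above is the examiner's per-statute figure minus the Tech Center average. A small sketch of that comparison follows; the TC averages used are assumptions back-calculated from the deltas shown, since the report lists only the differences.

```python
# Illustrative: reproduces the "vs. TC avg" deltas shown above.
# The tc_average values are assumptions (rate minus reported delta);
# they are not figures stated in the report.

examiner_rate = {"§101": 3.7, "§103": 59.5, "§102": 11.6, "§112": 9.7}   # percent
tc_average   = {"§101": 40.0, "§103": 40.0, "§102": 40.0, "§112": 40.0}  # assumed

for statute, rate in examiner_rate.items():
    delta = rate - tc_average[statute]
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs. TC avg)")
```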

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed on 11/11/2025 has been considered. Claims 1-15 remain pending in the application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite mental processes (concepts performed in the human mind, such as evaluation, comparison, and decision making) and abstract information. This judicial exception is not integrated into a practical application because the claims are directed to the mental process and abstract information. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because the additional elements in the claims are generic, well understood, routine, and conventional in network communication systems. The analysis follows.

Claim 1 recites "A device for determining a vehicle driver's intent to turn, the device comprising: a vehicle interior camera configured to capture image data depicting an area of the vehicle interior; a processing unit configured to process the image data, wherein the processing unit is configured to: determine a first result, based on the image data, wherein the first result is a focus area of a driver outside the vehicle, determine a second result, based on odometry data from the vehicle, wherein the second result is the possibility of the vehicle turning in the direction of the determined focus area, determine a third result, via an environmental capture unit, wherein the third result verifies whether odometry data indicating the possibility of the turn has another cause distinct from the intent to turn, and determine the intent to turn based on the three results."

Step 2A, prong one: Yes, the claim recites an abstract idea in the following limitations:

"A device for determining a vehicle driver's intent to turn, the device comprising: a vehicle interior camera configured to capture image data depicting an area of the vehicle interior" is a step of data collection, which is directed to a mental process and is abstract.

"a processing unit configured to process the image data, wherein the processing unit is configured to: determine a first result, based on the image data, wherein the first result is a focus area of a driver outside the vehicle" is a step of observation and of categorizing information as the first result, which is directed to a mental process and is abstract.

"determine a second result, based on odometry data from the vehicle, wherein the second result is the possibility of the vehicle turning in the direction of the determined focus area" is a step of evaluation based on the collected data and of categorizing information as the second result, which is directed to a mental process and is abstract.
"determine a third result, via an environmental capture unit, wherein the third result verifies whether odometry data indicating the possibility of the turn has another cause distinct from the intent to turn, and determine the intent to turn based on the three results." is a step of evaluation based on the collected data, which is directed to a mental process and is abstract.

Step 2A, prong two: Yes, the claim is directed to the abstract idea because it does not recite any additional elements that integrate the judicial exception into a practical application. The claim uses a generic camera, environmental sensor, and processing unit, which are well understood, routine, and conventional in the art for data collection and evaluation.

Claims 2-14 depend from claim 1, and their limitations do not recite significantly more than the abstract idea identified above for claim 1; therefore, claims 2-14 are rejected for the same reasons.

Claim 2 recites "The device according to claim 1, wherein the processing unit is configured to verify, in order to determine the first result, whether at least one turning option of the vehicle exists in the visual capture area of the driver and whether this turning option is within the determined focus area." and is directed to a mental process and is abstract. It does not add any technological improvement.

Claim 3 recites "The device according to claim 1, wherein the processing unit is configured to determine at least one possible turning direction of the vehicle in the visual capture area of the driver based on the environmental information of the visual capture area of the vehicle provided by the environment capture unit and to verify the first result based thereon, wherein the processing unit is configured to change the first result based on the result of the verification." and is directed to a mental process and is abstract. It does not add any technological improvement.

Claim 4 recites "The device according to claim 1, wherein the processing unit is configured to verify whether safe turning is possible based on odometry data from the vehicle in order to determine the second result." and is directed to a mental process and is abstract. It does not add any technological improvement.

Claim 5 recites "The device according to claim 1, wherein the processing unit is configured to verify whether, based on stored odometry data associated with the driver and/or the vehicle and the odometry data of the vehicle, the intent to turn is probable in order to determine the second result." and is directed to a mental process and is abstract. It does not add any technological improvement.

Claim 6 recites "The device according to claim 1, wherein the processing unit is configured to verify whether there is a traffic-related reason for the odometry data of the vehicle in order to determine the third result." and is directed to a mental process and is abstract. It does not add any technological improvement.

Claim 7 recites "The device according to claim 1, wherein the processing unit is further configured to determine: a positive first result if the focus area exists outside the vehicle in a specific turning direction, a positive second result if the odometry data allow the vehicle to turn in the direction of the determined focus area, a positive third result if the determined odometry data have no other cause, and the intent to turn only when all three results are positive." and is directed to a mental process and is abstract. It does not add any technological improvement.
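Claims 1 and 7 define the decision as a conjunction of the three results: a focus area outside the vehicle, odometry consistent with turning toward it, and no other cause for that odometry. The sketch below is a minimal illustration of that logic, not the claimed implementation; the input fields are hypothetical stand-ins for the processing the claims assign to the interior camera, odometry, and environment capture unit.

```python
# Illustrative sketch of the three-result decision in claims 1 and 7.
# The inputs are assumed to come from upstream processing not shown here
# (interior camera, odometry, environment capture unit).

from dataclasses import dataclass
from typing import Optional

@dataclass
class TurnIntentInputs:
    focus_direction: Optional[str]   # e.g. "left"/"right" if a focus area exists outside the vehicle
    odometry_allows_turn: bool       # odometry consistent with turning toward the focus area
    other_cause_detected: bool       # environment data otherwise explains the odometry (e.g. obstacle avoidance)

def determine_intent_to_turn(x: TurnIntentInputs) -> bool:
    first = x.focus_direction is not None        # first result: focus area outside the vehicle
    second = first and x.odometry_allows_turn    # second result: turn toward that area is possible
    third = not x.other_cause_detected           # third result: odometry has no other cause
    return first and second and third            # claim 7: intent only when all three are positive

# Example: driver looks right, odometry permits the turn, no obstacle explains it.
print(determine_intent_to_turn(TurnIntentInputs("right", True, False)))  # True
```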
Claim 8 recites "The device according to claim 1, wherein the processing unit is further configured to determine: as a first result, the probability of the existence of a focus area contingent on the intent to turn outside the vehicle, as a second result, the probability of the vehicle turning in the direction of the determined focus area based on the odometry data, an overall probability, at least based on the three results, and the intent to turn when the determined overall probability exceeds a preset stored limit value." and is directed to a mental process and is abstract. It does not add any technological improvement. (An illustrative sketch of this probability-based combination follows the claim 14 discussion below.)

Claim 9 recites "The device according to claim 1, wherein the processing unit is configured to: output information to the driver via an output unit of the vehicle, actuate the turn indicator automatically after a preset time if the driver does not abort." and is directed to a mental process and is abstract. It does not add any technological improvement.

Claim 10 recites "The device according to claim 1, wherein the processing unit is configured to detect the gaze direction of the driver over a preset period of time of several seconds in order to determine the first result and to determine the focus area based on the gazes of the driver directed outside the vehicle." and is directed to a mental process and is abstract. It does not add any technological improvement.

Claim 11 recites "The device according to claim 1, wherein the processing unit is configured to detect a head-eye rotation of the driver over a preset period of several seconds in order to determine the first result and to determine the focus area based on the detected head-eye rotation." and is directed to a mental process and is abstract. It does not add any technological improvement.

Claim 12 recites "The device according to claim 1, wherein the processing unit is configured to determine an odometry pattern over a preset period of several seconds in order to determine the second result and preferably to compare it with odometry patterns stored for the driver and/or the vehicle." and is directed to a mental process and is abstract. It does not add any technological improvement.

Claim 13 recites "The device according to claim 1, wherein the processing unit is configured to: determine a possible turning direction in order to determine the second result, determine the plausible driving intervals required to safely perform a turning maneuver in the determined turning direction, and verify the possibility of performing the determined driving intervals with the current driving speed data and direction data in order to render the intent to turn plausible." and is directed to a mental process and is abstract. It does not add any technological improvement.

Claim 14 recites "The device according to claim 1, wherein the processing unit is configured to verify whether there are objects in the vehicle trajectory which account for the current travel speed data and direction data, and/or whether a collision or convergence of the trajectories of the detected objects with the vehicle trajectory is predictable." and is directed to a mental process and is abstract. It does not add any technological improvement.
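Claim 8, discussed above, recasts the same decision probabilistically: an overall probability is formed from the three results and compared against a preset stored limit value. The sketch below illustrates one such combination under assumed equal weights and an assumed 0.7 threshold; neither the weighting nor the threshold value comes from the claim.

```python
# Illustrative sketch of the probabilistic variant in claim 8.
# Weights and threshold are assumptions; the claim only requires an overall
# probability based on the three results and a preset stored limit value.

def overall_turn_probability(p_focus: float, p_turn: float, p_no_other_cause: float,
                             weights=(1/3, 1/3, 1/3)) -> float:
    """Combine the three result probabilities into one overall probability."""
    parts = (p_focus, p_turn, p_no_other_cause)
    return sum(w * p for w, p in zip(weights, parts))

PRESET_LIMIT = 0.7  # hypothetical stored limit value

p = overall_turn_probability(p_focus=0.9, p_turn=0.8, p_no_other_cause=0.75)
intent_to_turn = p > PRESET_LIMIT
print(f"overall probability {p:.2f} -> intent to turn: {intent_to_turn}")
```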
Claim 15 recites "A method for determining a driver a vehicle driver's intent to turn, the method comprising: capturing an area of the interior via an interior camera; determining, via a processing unit: a first result based on the image data, wherein the first result is a focus area of a driver outside the vehicle, a second result based on odometry data from the vehicle, wherein the second result is the possibility of the vehicle turning in the direction of the determined focus area, a third result based on data from an environment capture unit, wherein the third result verifies whether odometry data indicating the possibility of the turn has another cause distinct from the intent to turn, and the intent to turn based on the three obtained results."

Step 2A, prong one: Yes, the claim recites an abstract idea in the following limitations:

"A method for determining a driver a vehicle driver's intent to turn, the method comprising: capturing an area of the interior via an interior camera" is a step of data collection, which is directed to a mental process and is abstract.

"determining, via a processing unit: a first result based on the image data, wherein the first result is a focus area of a driver outside the vehicle" is a step of observation and of categorizing information as the first result, which is directed to a mental process and is abstract.

"a second result based on odometry data from the vehicle, wherein the second result is the possibility of the vehicle turning in the direction of the determined focus area" is a step of evaluation based on the collected data and of categorizing information as the second result, which is directed to a mental process and is abstract.

"a third result based on data from an environment capture unit, wherein the third result verifies whether odometry data indicating the possibility of the turn has another cause distinct from the intent to turn, and the intent to turn based on the three obtained results." is a step of evaluation based on the collected data, which is directed to a mental process and is abstract.

Step 2A, prong two: Yes, the claim is directed to the abstract idea because it does not recite any additional elements that integrate the judicial exception into a practical application. The claim uses a generic camera, environmental sensor, and processing unit, which are well understood, routine, and conventional in the art for data collection and evaluation.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-8, 10-11 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Qiao et al. US 20220089163 in view of Hartmann et al. US 20150371095. Regarding claim 1, Qiao et al. teach A device for determining a vehicle driver’s intent to turn, the device comprising: a vehicle interior camera configured to capture image data depicting an area of the vehicle interior; (Qiao et al. US 20220089163 abstract; paragraphs [0002]-[0009]; [0011]-[0015]; [0024]; [0028]-[0035]; [0039]; [0045]-[0049]; [0053]-[0055]; figures 1-6;) In accordance with an exemplary embodiment, a method is provided for controlling a vehicle. The method includes: monitoring an eye gaze of a driver of the vehicle; monitoring current traffic conditions surrounding the vehicle; predicting an intention of the driver to perform a lane change maneuver based on the eye gaze, a history of the eye gaze of the driver, and the current traffic conditions; and controlling, by the processor, the vehicle based on the predicted intention of the driver to perform a lane change maneuver (Qiao et al. par. 4). In various embodiments, the computer system 140 receives the camera images from the camera 132 and identifies the gaze direction of the eyes (or eye) of the driver of the vehicle 100 using the camera images. In various embodiments, the computer system 140 receives the sensor data from the perception system and identifies the current traffic conditions using the sensor data. (Qiao et al. par. 32). a processing unit configured to process the image data, wherein the processing unit is configured to: determine a first result, based on the image data, wherein the first result is a focus area of a driver outside the vehicle, In various embodiments, the processor is configured to monitor the eye gaze by counting a number of driver eye switches from a first on-road direction to a second side mirror direction, and wherein the processor is configured to predict the intention of the driver to perform a lane change maneuver based on the number (Qiao et al. par. 12). In various embodiments, the processor is configured to monitor the eye gaze by accumulating a time of focus of the eye gaze on a side mirror, and wherein the processor is configured to predict the intention of the driver to perform the lane change maneuver based on the accumulated time of focus (Qiao et al. par. 13). According to the cite passages and figures, examiner interpret the number of time driver focus of the eye gaze on the side mirror as the focus area of the driver look outside the vehicle. Qiao et al. do not explicitly teach determine a second result, based on odometry data from the vehicle, wherein the second result is the possibility of the vehicle turning in the direction of the determined focus area, determine a third result, via an environmental capture unit, wherein the third result verifies whether odometry data indicating the possibility of the turn has another cause distinct from the intent to turn, and determine the intent to turn based on the three results. Hartmann et al. teach determine a second result, based on odometry data from the vehicle, wherein the second result is the possibility of the vehicle turning in the direction of the determined focus area, determine a third result, via an environmental capture unit, wherein the third result verifies whether odometry data indicating the possibility of the turn has another cause distinct from the intent to turn, and determine the intent to turn based on the three results. (Hartmann et al. 
US 20150371095 abstract; paragraphs [0005]; [0021];[0042]-[0054]; [0060]-[0065]; [0094]; [0097]-[0108]; [0112]; [0115]; [0118]-[0123]; figures 1-10;) The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on GPS vehicle data, preferably in accordance with the vehicle speed and heading angle (or yaw angle), and tracked dynamically (Hartmann et al. par. 50). The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on vehicle odometry data and tracked dynamically (Hartmann et al. par. 51). Optionally, the trajectory or path of one's own vehicle may be predicted in step S14. Data from the vehicle's own sensors (V), e.g. steering angle, speed, etc. navigation system data or map data (N), or data from other environmental sensors such as radar, lidar, telematics unit, etc. may be taken into account here (Hartmann et al. par. 94). FIG. 2 shows an example of an image (I) of the vehicle environment lying ahead as taken from the front camera (6) of a moving vehicle. Camera-based driver assistance functionality can be implemented from the same image, e. g. a lane departure warning (LDW) function, a lane keeping assistance/system (LKA/LKS), a traffic sign recognition (TSR) function, an intelligent headlamp control (IHC) function, a forward collision warning (FCW) function, a precipitation detection function, an adaptive cruise control (ACC) function, a parking assistance function, an automatic emergency brake assist (EBA) function or emergency steering assist (ESA) function (Hartmann et al. par. 97). In such an emergency maneuver, however, determining the road condition or camera-based estimation of the friction coefficient is extremely important since the brake and steering system brakes or steers up to the limit of the friction coefficient. A puddle (2) on an otherwise dry road (1) as shown in FIG. 2 could mean that a collision with the obstacle cannot be avoided or that one's own vehicle leaves the road. FIG. 10 shows a camera image (I) depicting a stationary obstacle (7), e.g. a vehicle, in the traffic lane used by the ego vehicle (6). It shows in addition to the calculated vehicle path (or corridor of movement) T with the continuous median trajectory and the dotted sidelines for an avoiding maneuver how a prediction horizon X.sub.pVeh, Y.sub.pVeh determined from FIG. 9 can be transformed in the image (I) by adjusting the image area from R1 to R1″. An intermediate step of the adjustment (R1′) is also shown (Hartmann et al par. 122). According to the cited passages and figures, examiner interpret a front camera 6 of the moving vehicle as the environment sensor that detect the obstacle 7 in front of the vehicle. At least one of vehicle driving assistance of vehicle like forward collision warning and emergency steering assist disclose in par. 97 help the driver to turn a vehicle into the different direction to avoid the obstacle 7 as show in the figure 10 based on those image data captured by the vehicle camera show in the figures 3 and 10. Therefore, the changing direction of the vehicle cause by a detection of the surround environment. Therefore, it would have been obviously to one of ordinary skill in the art before the effective filing date of the claim invention to incorporate the odometry data associated with environmental sensing to enhance reliability of turn determination as taught by Hartmann et al. reference into Qiao et al. 
reference and the result would be predictable with the turn or intent to turn base on all those three factors above. Regarding claim 2, the combination of Qiao et al. and Hartmann et al. disclose The device according to claim 1, wherein the processing unit is configured to verify, in order to determine the first result, whether at least one turning option of the vehicle exists in the visual capture area of the driver and whether this turning option is within the determined focus area. The intention prediction module 162 evaluates the condition flags 172, the driver gaze data 180, and the driver facial recognition data 182 as driver identification to load the driver's history of eye gaze activity pattern during a lane change maneuver in order to predict the driver's intentions 184 to perform a lane change maneuver (e.g., right lane change, left lane change, overtaking, etc.). For example, the intention prediction module 162 evaluates the driver gaze data 180 during a lane change maneuver over time to determine driver gaze behavior. The driver gaze behavior can include, for example, driver eye activity including the number of times the driver's eyes switch from on-road to a side mirror or window (left or right) in a short time interval and the time the driver's eyes are focused on the side mirror or window (left or right). with different lane change maneuver traffic conditions (Qiao et al. par. 45). The intention prediction module 162 then recognizes the driver based on the driver facial data and retrieves the same driver gaze behavior history data 186 for the recognized driver. The intention prediction module 162 then evaluates the condition flags 172 and compares the current driver gaze behavior data with the history data to determine the predicted intentions 184. For example, the intention prediction module 162 sets a left lane change flag to TRUE when the left lane change traffic condition flag 174 is TRUE and the current behavior data is less than or equal to the history data 186 for the left lane change (plus or minus an offset in some cases) with similar lane change traffic conditions such as ego vehicle speed, and adjacent lane traffics, etc. In another example, the intention prediction module 162 sets a right lane change flag to TRUE when the right lane change condition flag 176 is TRUE and the current behavior data is less than or equal to the history data 185 for the right lane change (plus or minus an offset in some cases) with similar lane change traffic conditions such as ego vehicle speed, and adjacent lane traffics, etc. In another example, the intention prediction module 162 sets an overtaking flag to TRUE when intention prediction flag for the right lane change or the left lane change is TRUE and the overtaking change condition flag 178 is TRUE (Qiao et al. par. 46). Regarding claim 3, the combination of Qiao et al. and Hartmann et al. 
disclose The device according to claim 1, wherein the processing unit is configured to determine at least one possible turning direction of the vehicle in the visual capture area of the driver based on the environmental information of the visual capture area of the vehicle provided by the environment capture unit The intention prediction module 162 evaluates the condition flags 172, the driver gaze data 180, and the driver facial recognition data 182 as driver identification to load the driver's history of eye gaze activity pattern during a lane change maneuver in order to predict the driver's intentions 184 to perform a lane change maneuver (e.g., right lane change, left lane change, overtaking, etc.). For example, the intention prediction module 162 evaluates the driver gaze data 180 during a lane change maneuver over time to determine driver gaze behavior. The driver gaze behavior can include, for example, driver eye activity including the number of times the driver's eyes switch from on-road to a side mirror or window (left or right) in a short time interval and the time the driver's eyes are focused on the side mirror or window (left or right). with different lane change maneuver traffic conditions (Qiao et al. par. 45). The intention prediction module 162 then recognizes the driver based on the driver facial data and retrieves the same driver gaze behavior history data 186 for the recognized driver. The intention prediction module 162 then evaluates the condition flags 172 and compares the current driver gaze behavior data with the history data to determine the predicted intentions 184. For example, the intention prediction module 162 sets a left lane change flag to TRUE when the left lane change traffic condition flag 174 is TRUE and the current behavior data is less than or equal to the history data 186 for the left lane change (plus or minus an offset in some cases) with similar lane change traffic conditions such as ego vehicle speed, and adjacent lane traffics, etc. In another example, the intention prediction module 162 sets a right lane change flag to TRUE when the right lane change condition flag 176 is TRUE and the current behavior data is less than or equal to the history data 185 for the right lane change (plus or minus an offset in some cases) with similar lane change traffic conditions such as ego vehicle speed, and adjacent lane traffics, etc. In another example, the intention prediction module 162 sets an overtaking flag to TRUE when intention prediction flag for the right lane change or the left lane change is TRUE and the overtaking change condition flag 178 is TRUE (Qiao et al. par. 46). and to verify the first result based thereon, wherein the processing unit is configured to change the first result based on the result of the verification. The history learning module 166 updates the history data datastore 168 with current driver eye gaze data at a corresponding datastore cell indexed by vehicle speed and the time waiting for traffic conditions met after receiving the confirmation information 192. For example, the history learning module 166 learns driver gaze behavior for a lane change maneuver for a driver and stores the information in a learning cell history data structure dedicated to that driver. FIG. 6 illustrates an exemplary history data structure 500. In various embodiments, the data structure 500 is defined by vehicle speed (MPH) on the x-axis 502 and the time of waiting for a lane change maneuver traffic conditions met on the y-axis 504. 
Each cell 506 of the data structure 500 stores a computed moving average of the driver gaze behavior data including a computed moving average of counts of eyes turning to the side mirrors, and a computed moving average of the accumulated time the eyes are on the side mirror during a lane change maneuver. The stored history data is then used by the intention prediction module 162 to determine the next prediction for the same driver (Qiao et al. par. 49). Regarding claim 4, the combination of Qiao et al. and Hartmann et al. disclose The device according to claim 1, wherein the processing unit is configured to verify whether safe turning is possible based on odometry data from the vehicle in order to determine the second result. The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on GPS vehicle data, preferably in accordance with the vehicle speed and heading angle (or yaw angle), and tracked dynamically (Hartmann et al. par. 50). The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on vehicle odometry data and tracked dynamically (Hartmann et al. par. 51). In such an emergency maneuver, however, determining the road condition or camera-based estimation of the friction coefficient is extremely important since the brake and steering system brakes or steers up to the limit of the friction coefficient. A puddle (2) on an otherwise dry road (1) as shown in FIG. 2 could mean that a collision with the obstacle cannot be avoided or that one's own vehicle leaves the road. FIG. 10 shows a camera image (I) depicting a stationary obstacle (7), e.g. a vehicle, in the traffic lane used by the ego vehicle (6). It shows in addition to the calculated vehicle path (or corridor of movement) T with the continuous median trajectory and the dotted sidelines for an avoiding maneuver how a prediction horizon X.sub.pVeh, Y.sub.pVeh determined from FIG. 9 can be transformed in the image (I) by adjusting the image area from R1 to R1″. An intermediate step of the adjustment (R1′) is also shown (Hartmann et al par. 122). According to the cited passages and figures, examiner interpret a camera detect the obstacle 7 in front of the vehicle. The system turn a vehicle into the different direction to avoid the obstacle 7 as show in the figure 10 based on those image data captured by the vehicle camera show in the figures 3 and 10. Regarding claim 6, the combination of Qiao et al. and Hartmann et al. disclose The device according to claim 1, wherein the processing unit is configured to verify whether there is a traffic-related reason for the odometry data of the vehicle in order to determine the third result. The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on GPS vehicle data, preferably in accordance with the vehicle speed and heading angle (or yaw angle), and tracked dynamically (Hartmann et al. par. 50). The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on vehicle odometry data and tracked dynamically (Hartmann et al. par. 51). In such an emergency maneuver, however, determining the road condition or camera-based estimation of the friction coefficient is extremely important since the brake and steering system brakes or steers up to the limit of the friction coefficient. 
A puddle (2) on an otherwise dry road (1) as shown in FIG. 2 could mean that a collision with the obstacle cannot be avoided or that one's own vehicle leaves the road. FIG. 10 shows a camera image (I) depicting a stationary obstacle (7), e.g. a vehicle, in the traffic lane used by the ego vehicle (6). It shows in addition to the calculated vehicle path (or corridor of movement) T with the continuous median trajectory and the dotted sidelines for an avoiding maneuver how a prediction horizon X.sub.pVeh, Y.sub.pVeh determined from FIG. 9 can be transformed in the image (I) by adjusting the image area from R1 to R1″. An intermediate step of the adjustment (R1′) is also shown (Hartmann et al par. 122). According to the cited passages and figures, examiner interpret a camera detect the obstacle 7 in front of the vehicle. The system turn a vehicle into the different direction to avoid the obstacle 7 as show in the figure 10 based on those image data captured by the vehicle camera show in the figures 3 and 10. Regarding claim 7, the combination of Qiao et al. and Hartmann et al. disclose The device according to claim 1, wherein the processing unit is further configured to determine: a positive first result if the focus area exists outside the vehicle in a specific turning direction, In various embodiments, the processor is configured to monitor the eye gaze by counting a number of driver eye switches from a first on-road direction to a second side mirror direction, and wherein the processor is configured to predict the intention of the driver to perform a lane change maneuver based on the number (Qiao et al. par. 12). In various embodiments, the processor is configured to monitor the eye gaze by accumulating a time of focus of the eye gaze on a side mirror, and wherein the processor is configured to predict the intention of the driver to perform the lane change maneuver based on the accumulated time of focus (Qiao et al. par. 13). According to the cite passages and figures, examiner interpret the number of time driver focus of the eye gaze on the side mirror as the focus area of the driver look outside the vehicle. a positive second result if the odometry data allow the vehicle to turn in the direction of the determined focus area, a positive third result if the determined odometry data have no other cause, and the intent to turn only when all three results are positive. The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on GPS vehicle data, preferably in accordance with the vehicle speed and heading angle (or yaw angle), and tracked dynamically (Hartmann et al. par. 50). The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on vehicle odometry data and tracked dynamically (Hartmann et al. par. 51). In such an emergency maneuver, however, determining the road condition or camera-based estimation of the friction coefficient is extremely important since the brake and steering system brakes or steers up to the limit of the friction coefficient. A puddle (2) on an otherwise dry road (1) as shown in FIG. 2 could mean that a collision with the obstacle cannot be avoided or that one's own vehicle leaves the road. FIG. 10 shows a camera image (I) depicting a stationary obstacle (7), e.g. a vehicle, in the traffic lane used by the ego vehicle (6). 
It shows in addition to the calculated vehicle path (or corridor of movement) T with the continuous median trajectory and the dotted sidelines for an avoiding maneuver how a prediction horizon X.sub.pVeh, Y.sub.pVeh determined from FIG. 9 can be transformed in the image (I) by adjusting the image area from R1 to R1″. An intermediate step of the adjustment (R1′) is also shown (Hartmann et al par. 122). According to the cited passages and figures, examiner interpret a camera detect the obstacle 7 in front of the vehicle. The system turn a vehicle into the different direction to avoid the obstacle 7 as show in the figure 10 based on those image data captured by the vehicle camera show in the figures 3 and 10. Regarding claim 8, the combination of Qiao et al. and Hartmann et al. disclose The device according to claim 1, wherein the processing unit is further configured to determine: as a first result, the probability of the existence of a focus area contingent on the intent to turn outside the vehicle, In various embodiments, the processor is configured to monitor the eye gaze by counting a number of driver eye switches from a first on-road direction to a second side mirror direction, and wherein the processor is configured to predict the intention of the driver to perform a lane change maneuver based on the number (Qiao et al. par. 12). In various embodiments, the processor is configured to monitor the eye gaze by accumulating a time of focus of the eye gaze on a side mirror, and wherein the processor is configured to predict the intention of the driver to perform the lane change maneuver based on the accumulated time of focus (Qiao et al. par. 13). FIG. 4 illustrates an exemplary method of determining the driver behavior data (step 210 of FIG. 3) including the driver switch count and the driver focus time. In FIG. 4, the method may begin at 315. The driver gaze data 180 is received and eye movement of the driver is evaluated to determine a direction or point of interest of driver's eye gaze at 320. When it is determined that the driver's eye gaze switches from on-road to a side mirror or window (left or right) at 325, timers and counter that track the driver's gaze behavior are updated (Qiao et al. par. 53). According to the cite passages and figures, examiner interpret the number of time driver focus of the eye gaze on the side mirror as the focus area of the driver look outside the vehicle. as a second result, the probability of the vehicle turning in the direction of the determined focus area based on the odometry data, an overall probability, at least based on the three results, and the to turn when the determined overall probability exceeds a preset stored limit value. The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on GPS vehicle data, preferably in accordance with the vehicle speed and heading angle (or yaw angle), and tracked dynamically (Hartmann et al. par. 50). The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on vehicle odometry data and tracked dynamically (Hartmann et al. par. 51). In such an emergency maneuver, however, determining the road condition or camera-based estimation of the friction coefficient is extremely important since the brake and steering system brakes or steers up to the limit of the friction coefficient. A puddle (2) on an otherwise dry road (1) as shown in FIG. 
2 could mean that a collision with the obstacle cannot be avoided or that one's own vehicle leaves the road. FIG. 10 shows a camera image (I) depicting a stationary obstacle (7), e.g. a vehicle, in the traffic lane used by the ego vehicle (6). It shows in addition to the calculated vehicle path (or corridor of movement) T with the continuous median trajectory and the dotted sidelines for an avoiding maneuver how a prediction horizon X.sub.pVeh, Y.sub.pVeh determined from FIG. 9 can be transformed in the image (I) by adjusting the image area from R1 to R1″. An intermediate step of the adjustment (R1′) is also shown (Hartmann et al par. 122). According to the cited passages and figures, examiner interpret a camera detect the obstacle 7 in front of the vehicle. The system turn a vehicle into the different direction to avoid the obstacle 7 as show in the figure 10 based on those image data captured by the vehicle camera show in the figures 3 and 10. Regarding claim 10, the combination of Qiao et al. and Hartmann et al. disclose The device according to claim 1, wherein the processing unit is configured to detect the gaze direction of the driver over a preset period of time of several seconds in order to determine the first result and to determine the focus area based on the gazes of the driver directed outside the vehicle. The intention prediction module 162 evaluates the condition flags 172, the driver gaze data 180, and the driver facial recognition data 182 as driver identification to load the driver's history of eye gaze activity pattern during a lane change maneuver in order to predict the driver's intentions 184 to perform a lane change maneuver (e.g., right lane change, left lane change, overtaking, etc.). For example, the intention prediction module 162 evaluates the driver gaze data 180 during a lane change maneuver over time to determine driver gaze behavior. The driver gaze behavior can include, for example, driver eye activity including the number of times the driver's eyes switch from on-road to a side mirror or window (left or right) in a short time interval and the time the driver's eyes are focused on the side mirror or window (left or right). with different lane change maneuver traffic conditions (Qiao et al. par. 45). According to the cite passages and figures, examiner interpret the number of time driver focus of the eye gaze on the side mirror as the focus area of the driver look outside the vehicle. Regarding claim 11, the combination of Qiao et al. and Hartmann et al. disclose The device according to claim 1, wherein the processing unit is configured to detect a head-eye rotation of the driver over a preset period of several seconds in order to determine the first result and to determine the focus area based on the detected head-eye rotation. The intention prediction module 162 evaluates the condition flags 172, the driver gaze data 180, and the driver facial recognition data 182 as driver identification to load the driver's history of eye gaze activity pattern during a lane change maneuver in order to predict the driver's intentions 184 to perform a lane change maneuver (e.g., right lane change, left lane change, overtaking, etc.). For example, the intention prediction module 162 evaluates the driver gaze data 180 during a lane change maneuver over time to determine driver gaze behavior. 
The driver gaze behavior can include, for example, driver eye activity including the number of times the driver's eyes switch from on-road to a side mirror or window (left or right) in a short time interval and the time the driver's eyes are focused on the side mirror or window (left or right). with different lane change maneuver traffic conditions (Qiao et al. par. 45). According to the cite passages and figures, examiner interpret the number of time driver focus of the eye gaze on the side mirror as the focus area of the driver look outside the vehicle. Regarding claim 13, the combination of Qiao et al. and Hartmann et al. disclose The device according to claim 1, wherein the processing unit is configured to: determine a possible turning direction in order to determine the second result, determine the plausible driving intervals required to safely perform a turning maneuver in the determined turning direction, and verify the possibility of performing the determined driving intervals with the current driving speed data and direction data in order to render the intent to turn plausible. The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on GPS vehicle data, preferably in accordance with the vehicle speed and heading angle (or yaw angle), and tracked dynamically (Hartmann et al. par. 50). The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on vehicle odometry data and tracked dynamically (Hartmann et al. par. 51). As an example of such an adjustment of the first image area (R1), we use an adjustment based on the vehicle's own speed, the course of the traffic lane while driving through a bend, and the predicted vehicle path in an avoiding maneuver (Hartmann et al. par. 108). In such an emergency maneuver, however, determining the road condition or camera-based estimation of the friction coefficient is extremely important since the brake and steering system brakes or steers up to the limit of the friction coefficient. A puddle (2) on an otherwise dry road (1) as shown in FIG. 2 could mean that a collision with the obstacle cannot be avoided or that one's own vehicle leaves the road. FIG. 10 shows a camera image (I) depicting a stationary obstacle (7), e.g. a vehicle, in the traffic lane used by the ego vehicle (6). It shows in addition to the calculated vehicle path (or corridor of movement) T with the continuous median trajectory and the dotted sidelines for an avoiding maneuver how a prediction horizon X.sub.pVeh, Y.sub.pVeh determined from FIG. 9 can be transformed in the image (I) by adjusting the image area from R1 to R1″. An intermediate step of the adjustment (R1′) is also shown (Hartmann et al par. 122). According to the cited passages and figures, examiner interpret a camera detect the obstacle 7 in front of the vehicle. The system turn a vehicle into the different direction to avoid the obstacle 7 as show in the figure 10 based on those image data captured by the vehicle camera show in the figures 3 and 10. Regarding claim 14, the combination of Qiao et al. and Hartmann et al. disclose The device according to claim 1, wherein the processing unit is configured to verify whether there are objects in the vehicle trajectory which account for the current travel speed data and direction data, and/or whether a collision or convergence of the trajectories of the detected objects with the vehicle trajectory is predictable. 
The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on GPS vehicle data, preferably in accordance with the vehicle speed and heading angle (or yaw angle), and tracked dynamically (Hartmann et al. par. 50). The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on vehicle odometry data and tracked dynamically (Hartmann et al. par. 51). As an example of such an adjustment of the first image area (R1), we use an adjustment based on the vehicle's own speed, the course of the traffic lane while driving through a bend, and the predicted vehicle path in an avoiding maneuver (Hartmann et al. par. 108). In such an emergency maneuver, however, determining the road condition or camera-based estimation of the friction coefficient is extremely important since the brake and steering system brakes or steers up to the limit of the friction coefficient. A puddle (2) on an otherwise dry road (1) as shown in FIG. 2 could mean that a collision with the obstacle cannot be avoided or that one's own vehicle leaves the road. FIG. 10 shows a camera image (I) depicting a stationary obstacle (7), e.g. a vehicle, in the traffic lane used by the ego vehicle (6). It shows in addition to the calculated vehicle path (or corridor of movement) T with the continuous median trajectory and the dotted sidelines for an avoiding maneuver how a prediction horizon X.sub.pVeh, Y.sub.pVeh determined from FIG. 9 can be transformed in the image (I) by adjusting the image area from R1 to R1″. An intermediate step of the adjustment (R1′) is also shown (Hartmann et al par. 122). According to the cited passages and figures, examiner interpret a camera detect the obstacle 7 in front of the vehicle. The system turn a vehicle into the different direction to avoid the obstacle 7 as show in the figure 10 based on those image data captured by the vehicle camera show in the figures 3 and 10. Regarding claim 15, Qiao et al. teach A method for determining a vehicle driver’s intent to turn, the method comprising: capturing image data depicting an area of the interior via an interior camera; (Qiao et al. US 20220089163 abstract; paragraphs [0002]-[0009]; [0011]-[0015]; [0024]; [0028]-[0035]; [0039]; [0045]-[0049]; [0053]-[0055]; figures 1-6;) In accordance with an exemplary embodiment, a method is provided for controlling a vehicle. The method includes: monitoring an eye gaze of a driver of the vehicle; monitoring current traffic conditions surrounding the vehicle; predicting an intention of the driver to perform a lane change maneuver based on the eye gaze, a history of the eye gaze of the driver, and the current traffic conditions; and controlling, by the processor, the vehicle based on the predicted intention of the driver to perform a lane change maneuver (Qiao et al. par. 4). In various embodiments, the computer system 140 receives the camera images from the camera 132 and identifies the gaze direction of the eyes (or eye) of the driver of the vehicle 100 using the camera images. In various embodiments, the computer system 140 receives the sensor data from the perception system and identifies the current traffic conditions using the sensor data. (Qiao et al. par. 32). 
determining, via a processing unit: a first result based on the image data, wherein the first result is a focus area of a driver outside the vehicle, In various embodiments, the processor is configured to monitor the eye gaze by counting a number of driver eye switches from a first on-road direction to a second side mirror direction, and wherein the processor is configured to predict the intention of the driver to perform a lane change maneuver based on the number (Qiao et al. par. 12). In various embodiments, the processor is configured to monitor the eye gaze by accumulating a time of focus of the eye gaze on a side mirror, and wherein the processor is configured to predict the intention of the driver to perform the lane change maneuver based on the accumulated time of focus (Qiao et al. par. 13). According to the cite passages and figures, examiner interpret the number of time driver focus of the eye gaze on the side mirror as the focus area of the driver look outside the vehicle. Qiao et al. do not explicitly teach a second result based on odometry data from the vehicle, wherein the second result is the possibility of the vehicle turning in the direction of the determined focus area, a third result based on data from an environment capture unit, wherein the third result verifies whether odometry data indicating the possibility of the turn has another cause distinct from the intent to turn, and the intent to turn based on the three obtained results. Hartmann et al. teach a second result based on odometry data from the vehicle, wherein the second result is the possibility of the vehicle turning in the direction of the determined focus area, a third result based on data from an environment capture unit, wherein the third result verifies whether odometry data indicating the possibility of the turn has another cause distinct from the intent to turn, and the intent to turn based on the three obtained results. (Hartmann et al. US 20150371095 abstract; paragraphs [0005]; [0021];[0042]-[0054]; [0060]-[0065]; [0094]; [0097]-[0108]; [0112]; [0115]; [0118]-[0123]; figures 1-10;) The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on GPS vehicle data, preferably in accordance with the vehicle speed and heading angle (or yaw angle), and tracked dynamically (Hartmann et al. par. 50). The first image area is at least one dynamic image section which is projected in the direction of travel in front of the vehicle based on vehicle odometry data and tracked dynamically (Hartmann et al. par. 51). Optionally, the trajectory or path of one's own vehicle may be predicted in step S14. Data from the vehicle's own sensors (V), e.g. steering angle, speed, etc. navigation system data or map data (N), or data from other environmental sensors such as radar, lidar, telematics unit, etc. may be taken into account here (Hartmann et al. par. 94). FIG. 2 shows an example of an image (I) of the vehicle environment lying ahead as taken from the front camera (6) of a moving vehicle. Camera-based driver assistance functionality can be implemented from the same image, e. g. 
a lane departure warning (LDW) function, a lane keeping assistance/system (LKA/LKS), a traffic sign recognition (TSR) function, an intelligent headlamp control (IHC) function, a forward collision warning (FCW) function, a precipitation detection function, an adaptive cruise control (ACC) function, a parking assistance function, an automatic emergency brake assist (EBA) function or emergency steering assist (ESA) function (Hartmann et al. par. 97). In such an emergency maneuver, however, determining the road condition or camera-based estimation of the friction coefficient is extremely important since the brake and steering system brakes or steers up to the limit of the friction coefficient. A puddle (2) on an otherwise dry road (1) as shown in FIG. 2 could mean that a collision with the obstacle cannot be avoided or that one's own vehicle leaves the road. FIG. 10 shows a camera image (I) depicting a stationary obstacle (7), e.g. a vehicle, in the traffic lane used by the ego vehicle (6). It shows in addition to the calculated vehicle path (or corridor of movement) T with the continuous median trajectory and the dotted sidelines for an avoiding maneuver how a prediction horizon X.sub.pVeh, Y.sub.pVeh determined from FIG. 9 can be transformed in the image (I) by adjusting the image area from R1 to R1″. An intermediate step of the adjustment (R1′) is also shown (Hartmann et al par. 122). According to the cited passages and figures, examiner interpret a front camera 6 of the moving vehicle as the environment sensor that detect the obstacle 7 in front of the vehicle. At least one of vehicle driving assistance of vehicle like forward collision warning and emergency steering assist disclose in par. 97 help the driver to turn a vehicle into the different direction to avoid the obstacle 7 as show in the figure 10 based on those image data captured by the vehicle camera show in the figures 3 and 10. Therefore, the changing direction of the vehicle cause by a detection of the surround environment. Therefore, it would have been obviously to one of ordinary skill in the art before the effective filing date of the claim invention to incorporate the odometry data associated with environmental sensing to enhance reliability of turn determination as taught by Hartmann et al. reference into Qiao et al. reference and the result would be predictable with the turn or intent to turn base on all those three factors above. Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Qiao et al. US 20220089163 in view of Hartmann et al. US 20150371095 and further in view of Glas US 20190135299. Regarding claim 5, the combination of Qiao et al. and Hartmann et al. teach all the limitation in the claim 1. The combination of Qiao et al. and Hartmann et al. do not explicitly teach The device according to claim 1, wherein the processing unit is configured to verify whether, based on stored odometry data associated with the driver and/or the vehicle and the odometry data of the vehicle, the intent to turn is probable in order to determine the second result. Glas teaches The device according to claim 1, wherein the processing unit is configured to verify whether, based on stored odometry data associated with the driver and/or the vehicle and the odometry data of the vehicle, the intent to turn is probable in order to determine the second result. 
(Glas US 20190135299 abstract; paragraphs [0011]; [0030]-[0034]; [0040]; figures 1-2) Accordingly, a method for providing driver assistance is provided, which method comprises the following steps, namely recording at least one movement pattern of a vehicle together with activated vehicle functions, and providing the respective vehicle function on the basis of detection of at least one part of a movement pattern which has already been recorded during a journey, wherein the movement pattern is created by way of odometry sensors (Glas par. 11). According to another aspect of the present invention, the detection of at least one part of a movement pattern which has already been recorded comprises comparing captured movement patterns with stored movement patterns, wherein both substantially match (Glas par. 31). According to the invention, a driver assistance system provides driver assistance, having a sensor unit set up to record at least one movement pattern of a vehicle together with activated vehicle functions, and an output unit set up to provide the respective vehicle function on the basis of detection of at least one part of a movement pattern which has already been recorded during a journey, wherein the movement pattern is created using odometry sensors (Glas par. 34). FIG. 1 shows a schematic flowchart of a method for providing driver assistance, having the steps of recording 100 at least one movement pattern of a vehicle together with activated vehicle functions, and providing 102 the respective vehicle function on the basis of detection 101 of at least one part of a movement pattern which has already been recorded during a journey, wherein the movement pattern is created 100 using odometry sensors (Glas par. 40). Therefore, it would have been obviously to one of ordinary skill in the art before the effective filing date of the claim invention to combine Qiao et al. and Hartmann et al. with Glas by comprising the teaching of Glas into the system of Qiao et al. and Hartmann et al.. The motivation to combine these arts to store the movement pattern created by odometry sensors from Glas reference into Qiao et al. and Hartmann et al. reference so the system can easily determine any abnormal movement pattern of the vehicle by verify with the historical data to avoid any mishap. Regarding claim 12, the combination of Qiao et al., Hartmann et al. and Glas disclose The device according to claim 1, wherein the processing unit is configured to determine an odometry pattern over a preset period of several seconds in order to determine the second result and preferably to compare it with odometry patterns stored for the driver and/or the vehicle. Accordingly, a method for providing driver assistance is provided, which method comprises the following steps, namely recording at least one movement pattern of a vehicle together with activated vehicle functions, and providing the respective vehicle function on the basis of detection of at least one part of a movement pattern which has already been recorded during a journey, wherein the movement pattern is created by way of odometry sensors (Glas par. 11). According to another aspect of the present invention, the detection of at least one part of a movement pattern which has already been recorded comprises comparing captured movement patterns with stored movement patterns, wherein both substantially match (Glas par. 31). 
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Qiao et al. (US 20220089163) in view of Hartmann et al. (US 20150371095), and further in view of Winner et al. (US 20030163239).

Regarding claim 9, the combination of Qiao et al. and Hartmann et al. teaches all the limitations of claim 1. The combination of Qiao et al. and Hartmann et al. does not explicitly teach "The device according to claim 1, wherein the processing unit is configured to: output information to the driver via an output unit of the vehicle, actuate the turn indicator automatically after a preset time if the driver does not abort." Winner et al. teaches this limitation (Winner et al. US 20030163239, abstract; paragraphs [0038]-[0040]; figures 1-4): If, in countries with right-hand traffic, the left turn signal indicator is actuated in the situation shown in FIG. 2, then this can mean that the driver would like to pass preceding vehicle 32. However, it can also mean that the driver, without intention of passing, would simply like to change lanes for other reasons (Winner et al. par. 38).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Qiao et al. and Hartmann et al. with Winner et al. by incorporating the teaching of Winner et al. into the system of Qiao et al. and Hartmann et al. The motivation to combine these references is the simple substitution of the actuated signal indicator of Winner et al. into the system of Qiao et al. and Hartmann et al.; the result of the substitution would have been predictable, namely informing the surrounding traffic of the vehicle's intent to turn or change lanes.
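For illustration only, the claim 9 behavior (inform the driver, then actuate the turn indicator automatically after a preset time unless the driver aborts) might be sketched as follows. The function names, callbacks and timing value are assumptions and are not taken from the application or from any of the cited references.

```python
import time
from typing import Callable

# Hypothetical sketch of the claim 9 behavior: notify the driver, then actuate
# the turn indicator automatically after a preset time unless the driver aborts.
# 'notify', 'driver_aborted' and 'actuate_indicator' are assumed callbacks.

def auto_actuate_turn_indicator(notify: Callable[[str], None],
                                driver_aborted: Callable[[], bool],
                                actuate_indicator: Callable[[], None],
                                preset_time_s: float = 2.0,
                                poll_s: float = 0.1) -> bool:
    """Return True if the indicator was actuated, False if the driver aborted in time."""
    notify("Turn intent detected; indicator will be set automatically.")
    deadline = time.monotonic() + preset_time_s
    while time.monotonic() < deadline:
        if driver_aborted():      # driver aborts within the preset time -> do nothing
            return False
        time.sleep(poll_s)
    actuate_indicator()           # no abort within the preset time -> set the indicator
    return True

# Example with stubbed callbacks: the driver never aborts, so the indicator is set.
print(auto_actuate_turn_indicator(print, lambda: False, lambda: print("indicator on")))
```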
Response to Arguments

Applicant's arguments filed 11/11/2025 have been fully considered, but they are not persuasive. In the remarks, applicant argues in substance:

First, applicant argues that the combination of Qiao et al. and Hartmann et al. fails to teach or suggest "determining the second result – i.e., the possibility of the vehicle turning in the direction of a determined focus area based on vehicle odometry data."

Second, applicant argues that the combination of Qiao et al. and Hartmann et al. fails to teach or suggest "determining the third result – i.e., whether the odometry data indicating the possibility of the turn has another cause distinct from the intent to turn – via an environmental capture unit."

Third, applicant argues that the combination of Qiao et al. and Hartmann et al. fails to teach or suggest "determining the intent to turn based on the three results."

Examiner response: The examiner respectfully disagrees with applicant.

First, the examiner respectfully submits that the combination of Qiao et al. and Hartmann et al. does teach or suggest "determining the second result – i.e., the possibility of the vehicle turning in the direction of a determined focus area based on vehicle odometry data." According to the cited passages and figures above, the Qiao et al. reference discloses an interior camera that captures images of the driver within the vehicle interior and monitors driver eye gaze using the camera, as depicted in paragraphs 4, 12 and 32 of Qiao et al. (the examiner interprets the images monitoring driver eye gaze as the first result). The Hartmann et al. reference discloses the use of vehicle odometry data; for example, paragraph 50 depicts the direction of travel in front of the vehicle based on GPS vehicle data, preferably in accordance with the vehicle speed and heading angle, and paragraph 51 depicts the direction of travel in front of the vehicle based on vehicle odometry data. The examiner interprets this odometry data as the second result, consistent with paragraph 13 of the specification, which defines odometry data as "The odometry data of the vehicle include, in particular, steering angle, speed and/or acceleration (positive and negative acceleration)."

Second, the examiner respectfully submits that the combination of Qiao et al. and Hartmann et al. does teach or suggest "determining the third result – i.e., whether the odometry data indicating the possibility of the turn has another cause distinct from the intent to turn – via an environmental capture unit." The Hartmann et al. reference discloses a front camera that detects an obstacle 7 in front of the vehicle, which causes the driver to turn the vehicle in a different direction to avoid an accident, as depicted in paragraphs 50-51, 94, 97 and 122 and figure 10. The examiner interprets the turn caused by obstacle 7 as another cause due to environmental factors (an obstacle, pedestrian, debris or other moving object such as traffic).

Third, the examiner respectfully submits that the combination of Qiao et al. and Hartmann et al. does teach or suggest "determining the intent to turn based on the three results." According to the cited passages and figures, both the Qiao et al. and Hartmann et al. references are in the same field, teaching a camera or sensor that monitors the vehicle maneuver to promote vehicle safety: the first result is taught by Qiao et al., and the second and third results are taught by Hartmann et al. Therefore, it would have been obvious to one of ordinary skill in the art to incorporate the odometry data associated with environmental sensing, as taught by Hartmann et al., into the Qiao et al. reference to enhance the reliability of the turn determination, and the result of determining the turn or intent to turn from all three factors above would have been predictable.

Since the cited references still read on the claimed invention, the rejections stand. Please see the rejections above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THANG D TRAN, whose telephone number is (408) 918-7546. The examiner can normally be reached Monday - Friday, 8:00 am - 5:30 pm (Pacific time). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian A Zimmerman, can be reached at 571-272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THANG D TRAN/
Examiner, Art Unit 2686

/BRIAN A ZIMMERMAN/
Supervisory Patent Examiner, Art Unit 2686
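For concreteness, and assuming the January 19, 2026 date shown in the prosecution timeline below is the mailing date of this final action, the reply-period rule quoted above works out as follows: the three-month shortened statutory period runs to April 19, 2026; a first reply filed by March 19, 2026 preserves the advisory-action benefit described above; and extensions under 37 CFR 1.136(a) can carry the reply no later than July 19, 2026, the six-month statutory maximum.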

Prosecution Timeline

May 01, 2024
Application Filed
Aug 11, 2025
Non-Final Rejection — §101, §103
Nov 11, 2025
Response Filed
Jan 19, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592143
SYSTEM AND DEVICE FOR THREAT MONITORING
2y 5m to grant Granted Mar 31, 2026
Patent 12581277
VEHICLE-MOUNTED COMMUNICATION DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12559121
DRIVING ASSISTANCE DEVICE, DRIVING ASSISTANCE METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12548445
METHOD TO LEARN PARKING RESTRICTIONS BASED ON PARKING BEHAVIORS OF RELIABLE PARKERS
2y 5m to grant Granted Feb 10, 2026
Patent 12535478
SYSTEMS, DEVICES, AND METHODS FOR WIRELESS COMMUNICATIONS IN ANALYTE MONITORING SYSTEMS
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+23.7%)
2y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 468 resolved cases by this examiner. Grant probability derived from career allow rate.
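One plausible reading of these figures, stated here as an assumption rather than anything shown on this page: the with-interview projection is roughly the career allow rate plus the interview lift in percentage points, i.e. 76% + 23.7 ≈ 99%, with the displayed value presumably capped below 100%.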
