Prosecution Insights
Last updated: April 19, 2026
Application No. 18/941,485

LANE DEPARTURE INTENTION ESTIMATION DEVICE

Non-Final OA: §101, §102, §103, §112
Filed: Nov 08, 2024
Examiner: SU, STEPHANIE T
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (96 granted / 139 resolved; +17.1% vs TC avg) — above average
Interview Lift: +32.3% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 3y 5m (typical timeline); 35 applications currently pending
Career History: 174 total applications across all art units
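The headline examiner metrics above reduce to simple ratios over resolved cases. A minimal sketch of the arithmetic (the 96/139 counts come from the report; the per-cohort interview counts are hypothetical placeholders chosen only to sum to the reported totals, since the underlying case-level data is not shown):

```python
# Allow rate and interview lift as ratios over resolved cases.
# granted/resolved come from the report above; the interview cohort
# counts are hypothetical, for illustration only.
granted, resolved = 96, 139
allow_rate = granted / resolved  # ~0.69, i.e. the 69% career allow rate

# Interview lift = allow rate among resolved cases that had an
# interview, minus the allow rate among those that did not.
with_iv = {"granted": 40, "resolved": 46}      # hypothetical split
without_iv = {"granted": 56, "resolved": 93}   # hypothetical split

lift = (with_iv["granted"] / with_iv["resolved"]
        - without_iv["granted"] / without_iv["resolved"])

print(f"career allow rate: {allow_rate:.1%}")
print(f"interview lift: {lift:+.1%}")
```

The reported +32.3% lift would come from the tool's actual cohort counts, which are not disclosed here.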

Statute-Specific Performance

§101: 18.5% (-21.5% vs TC avg)
§103: 51.6% (+11.6% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 139 resolved cases.
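Each "vs TC avg" delta implies an estimated Tech Center baseline (the statute-specific rate minus its delta). A quick sketch recovering those baselines from the figures above:

```python
# Recover the implied Tech Center (TC) baseline for each statute:
# baseline = examiner's statute-specific rate - delta vs TC avg.
# Rates and deltas are the percentages shown in the report above.
stats = {
    "§101": (18.5, -21.5),
    "§103": (51.6, +11.6),
    "§102": (13.5, -26.5),
    "§112": (15.9, -24.1),
}

tc_baseline = {s: rate - delta for s, (rate, delta) in stats.items()}
for statute, baseline in tc_baseline.items():
    print(f"{statute}: implied TC average = {baseline:.1f}%")
```

All four statutes back out to roughly the same 40% baseline, which may indicate the tool compares each statute against a single overall Tech Center estimate rather than per-statute averages.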

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on November 8, 2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Status of the Claims

This Office Action is in response to the claims filed on November 8, 2024. Claims 1-5 have been presented for examination. Claims 1-5 are currently rejected. Claims 4-5 are rejected under 35 U.S.C. 112. Claims 1-5 are rejected under 35 U.S.C. 101. Claims 1-3 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kuehnle (U.S. Patent Publication Number 2022/0379900). Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kuehnle (U.S. Patent Publication Number 2022/0379900) in view of Zhu et al. (U.S. Patent Publication Number 2019/0071091).

Claim Interpretation

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "an estimation section" and "a prediction section" in at least claim 1; "a control section" in at least claim 2; and "a driver monitor section" in at least claim 4.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Structure is provided for these limitations in Fig. 1 and corresponding paragraph 17, which describes the sections as components of processor 163. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid such interpretation.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 4 recites the following limitation: "the result of the estimation indicating that the driver of the subject vehicle has no lane departure intention, the result of the prediction indicating that the driver of the subject vehicle has a lane departure intention." As written, the claim requires both "no lane departure intention" and "lane departure intention," which cannot occur simultaneously, rendering the claim unclear. If the claim is intended to cover either having no lane departure intention or having a lane departure intention, the Examiner recommends amending the claim to recite: "the result of the estimation indicating that the driver of the subject vehicle has no lane departure intention, or the result of the prediction indicating that the driver of the subject vehicle has a lane departure intention." For purposes of prior art examination, the claim is interpreted as outlined above.

Claim 5 depends from claim 4, inherits the deficiencies of claim 4, and is thereby rejected under 35 U.S.C. 112.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1

Claim 1. A lane departure intention estimation device comprising:

an estimation section that estimates that a driver of a subject vehicle has no lane departure intention in a case where the driver of the subject vehicle is in a distracted state or a non-awake state; and

a prediction section that uses a learned machine learning model to predict whether or not the driver of the subject vehicle has a lane departure intention based on a time-series signal and a first classification signal in a case where the driver of the subject vehicle is in neither the distracted state nor the non-awake state, the time-series signal including vehicle information that is information regarding the subject vehicle, lane information that is information regarding a lane in which the subject vehicle is traveling, and target information that is information regarding a target present around the subject vehicle, the first classification signal including a signal indicating that the driver of the subject vehicle is in neither the distracted state nor the non-awake state,

wherein the learned machine learning model is obtained by learning in which a learning time-series signal and learning data are used, the learning time-series signal including learning vehicle information that is information regarding a learning vehicle, learning lane information that is information regarding a lane in which the learning vehicle is traveling, and learning target information that is information regarding a target present around the learning vehicle, the learning data being a data set of a learning first classification signal and a label, the learning first classification signal indicating that a driver of the learning vehicle is in neither the distracted state nor the non-awake state, the label indicating whether or not the driver of the learning vehicle has a lane departure intention.

101 Analysis - Step 1: Statutory category – Yes

The claim recites a method including at least one step.
The claim falls within one of the four statutory categories. See MPEP 2106.03.

101 Analysis - Step 2A Prong one evaluation: Judicial Exception – Yes – Mental processes

In Step 2A, Prong one of the 2019 Patent Eligibility Guidance (PEG), a claim is analyzed to determine whether it recites subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) mental processes, and/or c) certain methods of organizing human activity. The Office submits that the following limitations constitute judicial exceptions in terms of "mental processes" because, under the broadest reasonable interpretation, the limitations can be "performed in the human mind, or by a human using a pen and paper". See MPEP 2106.04(a)(2)(III).

The claim recites the limitations of: estimates that a driver of a subject vehicle has no lane departure intention in a case where the driver of the subject vehicle is in a distracted state or a non-awake state; and predict whether or not the driver of the subject vehicle has a lane departure intention based on a time-series signal and a first classification signal in a case where the driver of the subject vehicle is in neither the distracted state nor the non-awake state, the time-series signal including vehicle information that is information regarding the subject vehicle, lane information that is information regarding a lane in which the subject vehicle is traveling, and target information that is information regarding a target present around the subject vehicle, the first classification signal including a signal indicating that the driver of the subject vehicle is in neither the distracted state nor the non-awake state, wherein ... learning in which a learning time-series signal and learning data are used, the learning time-series signal including learning vehicle information that is information regarding a learning vehicle, learning lane information that is information regarding a lane in which the learning vehicle is traveling, and learning target information that is information regarding a target present around the learning vehicle, the learning data being a data set of a learning first classification signal and a label, the learning first classification signal indicating that a driver of the learning vehicle is in neither the distracted state nor the non-awake state, the label indicating whether or not the driver of the learning vehicle has a lane departure intention.

These limitations, as drafted, describe a simple process that, under the broadest reasonable interpretation, covers performance in the mind but for the recitation of "a learned machine learning model". That is, other than reciting "a learned machine learning model," nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the "a learned machine learning model" language, the claim encompasses a person looking at collected data and forming a simple judgement. Specifically, the claim encompasses visually estimating that a driver of the vehicle does not intend to depart from the lane over intervals of time, and mentally classifying that the driver is either distracted or non-awake, or neither distracted nor non-awake. In combination with observations made of the vehicle in the lane, or learning lane information, the person may use the observation of the driver and vehicle to learn the driver intention with respect to the lane lines. The mere nominal recitation of "a learned machine learning model" does not take the claim limitations out of the mental process grouping. Thus, the claim recites a mental process.
101 Analysis - Step 2A Prong two evaluation: Practical Application - No

In Step 2A, Prong two of the 2019 PEG, a claim is evaluated as to whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in MPEP 2106.04(d), it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. The courts have indicated that additional elements such as merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."

The Office submits that the identified limitations recite additional elements that do not integrate the recited judicial exception into a practical application. The claim recites additional elements or steps of an estimation section, a prediction section that uses a learned machine learning model, and that the learned machine learning model is obtained. The obtaining step is recited at a high level of generality (i.e., as a general means of using an already learned machine learning model) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. The "learned machine learning model" merely describes how to generally "apply" the otherwise mental judgements using a generic or general-purpose computing environment (i.e., a computer) and merely automates the evaluating step. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis - Step 2B evaluation: Inventive concept - No

In Step 2B of the 2019 PEG, a claim is evaluated as to whether the claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in Step 2B: mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.

Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. Here, the receiving steps and the displaying step were considered to be insignificant extra-solution activity in Step 2A, and thus they are re-evaluated in Step 2B to determine if they are more than what is well-understood, routine, conventional activity in the field. The background recites that the sensors are all conventional sensors mounted on the vehicle, and the specification does not provide any indication that the vehicle controller is anything other than a conventional computer within a vehicle. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). Further, the Federal Circuit in Trading Techs. Int'l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir. 2017), for example, indicated that the mere displaying of data is a well-understood, routine, and conventional function. Accordingly, a conclusion that the collecting step is well-understood, routine, conventional activity is supported under Berkheimer. Thus, the claim is ineligible.

Dependent Claims

Dependent claims 2-5 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-5 are not patent eligible under the same rationale as provided in the rejection of independent claim 1. Therefore, claims 1-5 are ineligible under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kuehnle (U.S. Patent Publication Number 2022/0379900).

Regarding claim 1, Kuehnle discloses a lane departure intention estimation device comprising:

an estimation section that estimates that a driver of a subject vehicle has no lane departure intention in a case where the driver of the subject vehicle is in a distracted state or a non-awake state; and (Kuehnle ¶ 3 discloses "a driver behavior model [i.e., an estimation section]" which includes a "driver's lateral behavior when operating the vehicle," such that a "driver's lateral behavior includes maintaining an approximately central position of the vehicle in a driving lane [i.e., no lane departure intention]")

a prediction section that uses a learned machine learning model to predict whether or not the driver of the subject vehicle has a lane departure intention (Kuehnle ¶ 42 discloses that a "fit line 410 [i.e., a prediction section] may also be used to extrapolate future blocks, thus predicting future driver performance," such that the "neural network [i.e., a learned machine learning model] may learn to predictively replicate how the driver reacts with time, using a prior time interval to predict a current value," see ¶ 51)

based on a time-series signal and a first classification signal in a case where the driver of the subject vehicle is in neither the distracted state nor the non-awake state, the time-series signal including vehicle information that is information regarding the subject vehicle, lane information that is information regarding a lane in which the subject vehicle is traveling, and target information that is information regarding a target present around the subject vehicle, the first classification signal including a signal indicating that the driver of the subject vehicle is in neither the distracted state nor the non-awake state, (Kuehnle ¶ 51 discloses "the neural network may learn to predictively replicate how the driver reacts with time, using a prior time interval to predict a current value [i.e., a time-series signal]" such that "any input ... may be provided to predict how the driver will react." For example, "the system may evaluate whether the driver's pattern of behavior, based on the trend-over-time model, exhibits unsatisfactory performance, and/or predicts unsatisfactory performance in the near future" and "If not [i.e., first classification signal indicating that the driver is in neither the distracted state nor the non-awake state], process 300 returns to S301 to continue accumulating blocks of driver behavior data")

wherein the learned machine learning model (Kuehnle ¶ 51) is obtained by learning in which a learning time-series signal and learning data are used, (Kuehnle ¶ 51 discloses that "Once the relationship between the lane position and the driver 164 has been modeled in the form of a lateral dynamics network, the system may attempt to answer the question, 'Will the driver respond quickly enough?'" based on determining a "step response time [i.e., time-series signal]," wherein the system will "learn the relationships between lane position and driver action" using the collected behavior data)

the learning time-series signal including learning vehicle information that is information regarding a learning vehicle, learning lane information that is information regarding a lane in which the learning vehicle is traveling, and learning target information that is information regarding a target present around the learning vehicle, (Kuehnle ¶ 51 discloses that the system "may consider what is happening now; e.g., how fast the vehicle is traveling, the distance to the lane marking or vehicle ahead [i.e., learning target information regarding a target present around the learning vehicle], the yaw angle with respect to the lane marking, the curve radius of a turn, and the recent history of these values to determine how to react")

the learning data being a data set of a learning first classification signal and a label, the learning first classification signal indicating that a driver of the learning vehicle is in neither the distracted state nor the non-awake state, the label indicating whether or not the driver of the learning vehicle has a lane departure intention. (Kuehnle ¶ 17 discloses identifying "a driver's trend of lateral and/or longitudinal control to assess whether he or she is suffering from drowsiness and intervene once his/her performance has diminished to a threshold level" and using the collected behavior data to "learn the relationships between lane position and driver action," such as learning to determine a "delay in responding" and to "predict how the driver will react," such as if the driver 164 is deemed "too drowsy")

Regarding claim 2, Kuehnle discloses the lane departure intention estimation device according to claim 1, comprising:

a control section that causes an output of a lane departure alert to be restricted in a case where the prediction section predicts that the driver of the subject vehicle has a lane departure intention, (Kuehnle Fig. 3 depicts determining whether the driver behavior model trend values indicate imminent poor performance, such as "poor lane keeping [i.e., lane departure intention]," see ¶ 47, and based on determining driver drowsiness, generating "alerts to a fleet manager or supervisor, to intervene by providing warnings to the driver")

the lane departure alert being an alert for a lane departure of the subject vehicle.
(Kuehnle ¶ 19 discloses a Lane Departure Warning (LDW) system 222 that generates "signals indicative of actual lane departure, such as lane wandering or crossing," which includes "a warning sound, or provide other forms of haptic, visual, or audio feedback," see ¶ 60, because "Doing so may alert the driver 164 to his or her drowsy condition, thus improving the safety of operating vehicles, such as commercial trucks," see ¶ 60)

Regarding claim 3, Kuehnle discloses the lane departure intention estimation device according to claim 1, comprising:

a control section that causes execution of a lane keeping assist to be restricted in a case where the prediction section predicts that the driver of the subject vehicle has a lane departure intention. (Kuehnle ¶ 19 discloses "a system 200 may include, for example, a Lane Departure Warning (LDW) system 222 (FIG. 2A) that may generate signals indicative of an actual lane departure, such as lane wandering or crossing," wherein the system "predicts unsatisfactory performance in the near future," see ¶ 51, such that "The control signal may instruct the systems 233 to ... intervene in the operation of the vehicle 112 to initiate corrective action," see ¶ 28. Also see ¶ 52 disclosing "deploying an active vehicle dynamic addition (e.g., active lane-keeping assist)" as an intervention measure based on identifying driver drowsiness)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kuehnle (U.S. Patent Publication Number 2022/0379900) in view of Zhu et al. (U.S. Patent Publication Number 2019/0071091).
Regarding claim 4, Kuehnle discloses the lane departure intention estimation device according to claim 1, comprising:

a driver monitor section that outputs the first classification signal and a second classification signal based on an image captured by a driver monitor camera, (Kuehnle ¶ 52 discloses a driver-facing camera 245 to identify driver drowsiness, such as "identifying a decreasing variance in the driver's head yaw over time, identifying the driver's head pitching forward, and/or identifying the driver's eyes closing." One having ordinary skill in the art would recognize that identifying that the driver's eyes are closed would include identifying that the driver's eyes are open, thus providing a first and second classification signal.)

the image including the driver of the subject vehicle, (Kuehnle ¶ 31 discloses driver-facing imaging devices)

the first classification signal including the signal indicating that the driver of the subject vehicle is in neither the distracted state nor the non-awake state, (Kuehnle ¶ 41 discloses "In S305, the system may evaluate whether the driver's pattern of behavior, based on the trend-over-time model, exhibits unsatisfactory performance, and/or predicts unsatisfactory performance in the near future," and "If not [i.e., a first classification signal including the signal indicating that the driver of the subject vehicle is in neither the distracted state nor the non-awake state], process 300 returns to S301 to continue accumulating blocks of driver behavior data," wherein the driver drowsiness is identified by collecting driver behavior data using driver-facing cameras 245, see ¶ 52. Also see Fig. 3.)
the second classification signal including a signal indicating that the driver of the subject vehicle is in the distracted state or the non-awake state, (Kuehnle ¶ 47 discloses that "If any of the triplet values," such as delay, corrective force, or damping, see ¶ 45, "meet or exceed a threshold indicating current or future poor performance in S305, such as a delay of five seconds, the system may intervene in S306," wherein the triplet values are used in combination with the driver-facing camera 245 to determine driver drowsiness, see ¶ 52)

wherein in a case where the driver monitor section outputs the second classification signal including the signal indicating that the driver of the subject vehicle is in the distracted state or the non-awake state (Kuehnle ¶ 38 discloses identifying driver drowsiness by collecting "driver behavior data as it relates to the driver's lateral and longitudinal performance," such that "If any of the triplet values meet or exceed a threshold indicating current or future poor performance in S305, such as a delay of five seconds, the system may intervene in S306," wherein the triplet values are used in combination with the driver-facing camera 245 to determine driver drowsiness, see ¶ 52)

and the prediction section predicts that the driver of the subject vehicle has a lane departure intention, (Kuehnle ¶ 19 discloses "system 200 may include, for example, a Lane Departure Warning (LDW) system 222 (FIG. 2A) that may generate signals indicative of an actual lane departure, such as lane wandering or crossing," wherein the blocks of driver behavior data are collected over time and used for "predicting future driver performance ... to determine whether the system will cause one or more acts of intervention to be carried out," see ¶ 42)

Kuehnle does not expressly disclose: a result of the estimation by the estimation section is given priority over a result of the prediction by the prediction section, the result of the estimation indicating that the driver of the subject vehicle has no lane departure intention, the result of the prediction indicating that the driver of the subject vehicle has a lane departure intention.

However, Zhu discloses:

a result of the estimation by the estimation section is given priority over a result of the prediction by the prediction section, (Zhu ¶ 36 discloses performing a prediction "based on the perception data perceiving the driving environment at the point in time in view of a set of map/rout information 311 and traffic rules 312," and "if a turn signal has been turned on indicating a lane changing direction conforming to the drifting direction [i.e., prediction] of the vehicle, it is determined that the driver intends to change lane [i.e., an estimation]." One having ordinary skill in the art would recognize that a driver turning on a turn signal to indicate a lane change is prioritized over a prediction that the vehicle is drifting to determine user intention.)

the result of the estimation indicating that the driver of the subject vehicle has no lane departure intention, (Zhu ¶ 44 discloses "If the moving direction of the vehicle is not aligned with the lane markings, the system will consider that the driver is unintentionally leaving the lane," also see ¶ 46: "user intention determination module 350 is invoked to determine whether the driver of the vehicle intentionally drives the vehicle off the lane based user actions of the driver and the driving environment surrounding the vehicle at the point in time")

the result of the prediction indicating that the driver of the subject vehicle has a lane departure intention.
(Zhu ¶ 55 discloses “if the vehicle is drifting towards to the right side of the lane and a right turn signal has been turned on by the driver, in operation 602, it is considered the driver intends to change lane”) It would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the lane keeping and lane departure warning of Kuehnle with prioritizing an estimation over a prediction, as disclosed by Zhu, with reasonable expectation of success, to provide lane assistance to a driver based on the driver’s intention (Zhu ¶ 14), such that the vehicle can be controlled safely and efficiently (Zhu ¶ 27). By prioritizing the estimation over the prediction, such a combination would further accurately catch the driver’s intention and prevent uncomfortable or unsafe situations (Zhu ¶ 3 and MPEP 2143.01(G)), rendering the limitation to be an obvious modification. Regarding claim 5, Kuehnle discloses the lane departure intention estimation device according to claim 4, wherein: the first classification signal output from the driver monitor section is input to the prediction section; and (Kuehnle ¶ 41 discloses evaluating a driver’s pattern of behavior, wherein driver drowsiness is identified using driver-facing cameras 245, see ¶ 52, and predicting unsatisfactory performance in the near future, also see ¶ 42 disclosing collecting blocks of driver behavior over time to generate fit line 410 used to extrapolate future driver performance. See Fig. 4.) the first classification signal includes a signal indicating an internal state of the driver of the subject vehicle, (Kuehnle ¶ 52 discloses using driver-facing cameras 245 to identify driver drowsiness, also see Fig. 
3 and corresponding ¶ 38) the internal state being estimated by the driver monitor section based on the image captured by the driver monitor camera, (Kuehnle ¶ 52 discloses a driver-facing camera 245 to identify driver drowsiness, such as “identifying a decreasing variance in the driver's head yaw over time, identifying the driver's head pitching forward, and/or identifying the driver's eyes closing.” One having ordinary skill in the art would recognize that identifying that the driver’s eyes are closed would include identifying that the driver’s eyes are open, thus providing a first and second classification signal) the image including the driver of the subject vehicle. (Kuehnle ¶ 31 discloses driver-facing imaging devices) Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Kim et al. (U.S. Patent Publication Number 2019/0135303) discloses a driver assistance system that collects personal vehicle information and information for a personal driving tendency and predicting a driver-customized machine learning model, including providing an ID for a drowsy driving warning during lane-departure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHANIE T SU whose telephone number is (571)272-5326. The examiner can normally be reached Monday to Friday, 9:30AM - 5:00PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANISS CHAD can be reached at (571)270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STEPHANIE T SU/
Patent Examiner, Art Unit 3662
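The estimation-over-prediction arbitration that the rejection maps to Zhu can be illustrated with a short sketch. This is purely illustrative: the function and parameter names below are invented, and neither the application, Kuehnle, nor Zhu discloses this code. It only models the claimed priority rule, under which an estimation of "no lane departure intention" derived from a driver operation overrides a behavior-based prediction of intention.

```python
# Illustrative sketch only: models the claimed priority of the estimation
# result over the prediction result. All names are hypothetical and are
# not taken from the application, Kuehnle, or Zhu.

def lane_departure_intention(turn_signal_on: bool, drift_detected: bool) -> bool:
    """Return True when the driver is deemed to intend a lane departure.

    estimation: driver operation (turn signal) taken as an explicit
        indication of intention (cf. Zhu para. 36).
    prediction: vehicle behavior (lane drift) taken as an inferred
        indication of intention (cf. Kuehnle para. 19, LDW signals).
    """
    estimation_intends = turn_signal_on   # estimation section's result
    prediction_intends = drift_detected   # prediction section's result

    # Claimed limitation: an estimation of "no intention" is given
    # priority over a prediction of "intention".
    if not estimation_intends and prediction_intends:
        return False
    return estimation_intends or prediction_intends

# Drift without a turn signal: the prediction says "intention", but the
# estimation ("no intention") wins, so the departure is treated as
# unintentional (cf. Zhu para. 44).
assert lane_departure_intention(turn_signal_on=False, drift_detected=True) is False
```

On this reading, the turn signal acts as ground truth for the driver's operation, while drift is only circumstantial evidence, which is why the estimation dominates whenever the two conflict.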

Prosecution Timeline

Nov 08, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §101, §102, §103
Mar 27, 2026
Interview Requested
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12542054
Managing Vehicle Behavior Based On Predicted Behavior Of Other Vehicles
2y 5m to grant · Granted Feb 03, 2026
Patent 12539916
Method for Maneuvering a Vehicle
2y 5m to grant · Granted Feb 03, 2026
Patent 12539859
CONTROL DEVICE FOR HYBRID VEHICLE
2y 5m to grant · Granted Feb 03, 2026
Patent 12534082
VEHICLE FOR CONTROLLING REGENERATIVE BRAKING AND A METHOD OF CONTROLLING THE SAME
2y 5m to grant · Granted Jan 27, 2026
Patent 12529575
SYSTEM AND METHOD FOR DETECTING ACTIVE ROAD WORK ZONES
2y 5m to grant · Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 99% (+32.3%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 139 resolved cases by this examiner. Grant probability derived from career allow rate.
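The interview-lift figure above is presumably the difference in allowance rate between this examiner's resolved cases with an interview and those without one. A minimal sketch of how such a statistic could be computed, using invented per-bucket counts (the page reports only the aggregates: 96 granted of 139 resolved, +32.3% lift):

```python
# Hypothetical computation of an interview-lift statistic. The per-bucket
# counts below are invented for illustration; only the aggregate figures
# (96 granted / 139 resolved, +32.3% lift) appear on the page.

def allow_rate(granted: int, resolved: int) -> float:
    """Fraction of resolved cases that were allowed."""
    return granted / resolved

def interview_lift(granted_with: int, resolved_with: int,
                   granted_without: int, resolved_without: int) -> float:
    """Percentage-point gap between with- and without-interview allow rates."""
    return (allow_rate(granted_with, resolved_with)
            - allow_rate(granted_without, resolved_without))

# Invented split consistent with the aggregates: 41/45 allowed with an
# interview vs. 55/94 without (41 + 55 = 96 granted, 45 + 94 = 139 resolved).
print(f"{interview_lift(41, 45, 55, 94):+.1%}")  # prints "+32.6%"
```

The displayed +32.3% implies a slightly different split; without the underlying per-bucket counts the exact figure cannot be reproduced here.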
