Prosecution Insights
Last updated: April 19, 2026
Application No. 18/834,959

Method for Monitoring a State of a Driver of a Vehicle by a Camera in an Interior of the Vehicle, Computer-readable Medium, System and Vehicle

Non-Final OA: §101, §102, §103, §112
Filed: Jul 31, 2024
Examiner: EDWARDS, TYLER B
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: BAYERISCHE MOTOREN WERKE AKTIENGESELLSCHAFT
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 91%

Examiner Intelligence

Grants 77% — above average
Career Allow Rate: 77% (359 granted / 468 resolved; +18.7% vs TC avg)
Interview Lift: +14.5% among resolved cases with interview (moderate lift)
Avg Prosecution: 2y 5m (typical timeline); 14 currently pending
Total Applications: 482 across all art units (career history)

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§102: 25.4% (-14.6% vs TC avg)
§103: 44.0% (+4.0% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 468 resolved cases
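The headline examiner figures above are simple ratios. As a quick sketch, the counts (359 granted of 468 resolved) and the +18.7% delta come from the dashboard itself, while the rounding convention and the derived Tech Center average are assumptions:

```python
# Reproduce the headline examiner statistics shown above.
granted, resolved = 359, 468

career_allow_rate = granted / resolved
print(f"career allow rate: {career_allow_rate:.1%}")  # 76.7%, displayed as 77%

# "+18.7% vs TC avg" implies a Tech Center 2400 average near 58% (assumption:
# the delta is an absolute difference in percentage points).
tc_average = career_allow_rate - 0.187
print(f"implied TC average: {tc_average:.1%}")
```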

Office Action

Rejections under §101, §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 07/31/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the computer readable medium, system for monitoring a state of a driver, and vehicle containing the system described in claims 25-28 must be shown or the feature(s) canceled from the claim(s). No new matter should be entered. The only drawing shows method 100, which includes method steps 102-110. As such, there are no drawings pertaining to the system and its components, or the vehicle and its components.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures.
Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claims 29-31 are objected to because of the following informalities: These claims are dependent from claim 28, which refers to a system for monitoring a state of a driver of a vehicle. Dependent claims 29-31 refer to this claim as if it were a method claim. This appears to be a typographical error, as these claims are substantially similar to claims 13-14 and 19, which are dependent upon a method claim. Appropriate correction is required. For the purpose of the prior art rejections below, these claims will be assumed to be system claims, and not method claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 25 is rejected under 35 U.S.C. 101 because the computer readable medium is not defined in the specification. The United States Patent and Trademark Office (USPTO) is required to give claims their broadest reasonable interpretation consistent with the specification during proceedings before the USPTO. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow).
The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility under 35 U.S.C. 101, Aug. 24, 2009, p. 2.

The Examiner suggests that the Applicant add the limitation “non-transitory” to the computer readable medium as recited in the claim(s) in order to properly render the claim(s) in statutory form in view of their broadest reasonable interpretation in light of the originally filed specification. The Examiner also suggests that the specification may be amended to include the term “non-transitory computer readable medium” disclosed in the claims and specification to avoid a potential objection to the specification for a lack of antecedent basis of the claimed terminology.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 17 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. The claim limitations of claim 16 from which this claim depends include that the target viewing sequence of the active driving situation is determined depending on at least one of the group consisting of, among other options, an operating interaction of the driver with a driving-relevant operating component of the vehicle. Based on the “depending on at least one of” limitation of this claim, one of the broadest reasonable scopes of this claim is only that limitation regarding the driving-relevant operating component. In this case, the limitations of claim 17 are merely a repetition of the same limitation, and as such, do not further limit the claim. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim 18 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.
The claim limitations of claim 16 from which this claim depends include that the target viewing sequence of the active driving situation is determined depending on at least one of the group consisting of, among other options, a model of the surroundings of the vehicle. Based on the “depending on at least one of” limitation of this claim, one of the broadest reasonable scopes of this claim is only that limitation regarding the model of the surroundings of the vehicle. In this case, the limitations of claim 18 are merely a repetition of the same limitation, and as such, do not further limit the claim. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 12-18 and 20-30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nania (U.S. Publication No. 2018/0015825), hereinafter referred to as Nania.

In regard to claim 12, Nania teaches a method of monitoring a state of a driver of a vehicle using a camera in the interior of the vehicle (Nania paragraph 18 noting occupant monitor 108 monitors the driver 114 for signs of drowsiness. To detect the signs of drowsiness, the occupant monitor 108, via the driver cameras 106, detects (a) the position of the head of the driver 114, (b) the state of the eyes (e.g., open, partially open, or closed) of the driver 114, and/or (c) the direction of the gaze of the driver 114. The occupant monitor 108 determines that the driver 114 is drowsy), the method comprising:

determining a viewing behavior of a driver of the vehicle during an active driving situation with the vehicle using the camera in the interior of the vehicle (Nania Fig. 4 showing the flowchart of driver monitoring during active driving; and Nania paragraph 31 noting occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closed state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five seconds, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy);

determining a target viewing sequence of the active driving situation (Nania paragraphs 18 and 31 noting the possible criteria of signs of drowsiness, and that each are compared to thresholds, implying that there are acceptable ranges of eye activity during the driving situation that would be a target viewing sequence of a non-drowsy driver);

determining an actual viewing sequence during the active driving situation depending on the viewing behavior of the driver of the vehicle (Nania paragraph 18 noting occupant monitor 108 monitors the driver 114 for signs of drowsiness. To detect the signs of drowsiness, the occupant monitor 108, via the driver cameras 106, detects (a) the position of the head of the driver 114, (b) the state of the eyes (e.g., open, partially open, or closed) of the driver 114, and/or (c) the direction of the gaze of the driver 114. The occupant monitor 108 determines that the driver 114 is drowsy);

determining a change in the actual viewing sequence from the target viewing sequence during the active driving situation (Nania Fig. 4 showing the flowchart of the active process of monitoring the driver during a driving scenario, and that the monitoring driver step at 402 would loop and continue if the driver is not drowsy, but would enter a different stage of the flowchart once a drowsiness determination is made, indicating a change in viewing sequence from non-drowsy to drowsy); and

determining the state of the driver as tired (Nania paragraph 31 noting If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406) if the change in the actual viewing sequence from the target viewing sequence indicates that a time interval between at least two views of the actual viewing sequence exceeds a predetermined threshold value (Nania paragraph 31 noting FIG. 4 is a flow diagram of a method to provide alerts for occupant alertness that may be implemented by the electronic components 300 of FIG. 3. Initially, at block 402, the occupant monitor 108 monitors the driver 114 via the driver camera(s) 106. At block 404, the occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closed state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five seconds, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406).

In regard to claim 13, Nania teaches all of the limitations presented in claim 12 as discussed above. In addition, Nania teaches wherein the camera is integrated into the interior of the vehicle and enables an unrestricted view of a driver (Nania Fig. 2 showing the camera integrated into the rearview mirror 116 of the vehicle; and Nania paragraph 17 noting the driver cameras 106 monitor a driver 114 to detect when the driver 114 is drowsy. The driver cameras 106 are mounted on a front of the rear-view mirror 116. In the illustrated example, the driver cameras 106 include integrated facial-feature recognition with infrared thermal imaging. The driver cameras 106 detect (a) the position of the head of the driver 114, (b) the state of the eyes (e.g., open, partially open, or closed) of the driver 114, and/or (c) the direction of the gaze of the driver 114).

In regard to claim 14, Nania teaches all of the limitations of claim 13 as discussed above. In addition, Nania teaches wherein the camera is integrated into an interior mirror of the vehicle (Nania Fig. 2 showing the camera integrated into the rearview mirror 116 of the vehicle; and Nania paragraph 17 noting the driver cameras 106 monitor a driver 114 to detect when the driver 114 is drowsy. The driver cameras 106 are mounted on a front of the rear-view mirror 116. In the illustrated example, the driver cameras 106 include integrated facial-feature recognition with infrared thermal imaging. The driver cameras 106 detect (a) the position of the head of the driver 114, (b) the state of the eyes (e.g., open, partially open, or closed) of the driver 114, and/or (c) the direction of the gaze of the driver 114).

In regard to claim 15, Nania teaches all of the limitations of claim 12 as discussed above. In addition, Nania teaches wherein the camera is integrated into an interior mirror of the vehicle (Nania Fig. 2 showing the camera integrated into the rearview mirror 116 of the vehicle; and Nania paragraph 17 noting the driver cameras 106 monitor a driver 114 to detect when the driver 114 is drowsy. The driver cameras 106 are mounted on a front of the rear-view mirror 116. In the illustrated example, the driver cameras 106 include integrated facial-feature recognition with infrared thermal imaging. The driver cameras 106 detect (a) the position of the head of the driver 114, (b) the state of the eyes (e.g., open, partially open, or closed) of the driver 114, and/or (c) the direction of the gaze of the driver 114).

In regard to claim 16, Nania teaches all of the limitations presented in claim 12 as discussed above.
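As an aside on the cited art: the five drowsiness criteria of Nania paragraph 31, quoted in the claim 12 mapping above, reduce to simple per-snapshot threshold checks. A minimal Python sketch of that logic follows; the data structure, field names, and the thresholds Nania leaves unspecified are hypothetical, not from Nania:

```python
from dataclasses import dataclass

@dataclass
class DriverSample:
    """One snapshot of driver-camera output (hypothetical structure)."""
    nod_duration_s: float            # how long the head has been nodding
    nod_followed_by_jerk: bool       # nod followed by a sharp jerk
    eyes_closed_s: float             # how long the eyes have been closed
    eye_closing_time_s: float        # open-to-closed transition time
    gaze_below_horizon_deg: float    # gaze angle below the horizon
    low_gaze_duration_s: float       # how long the gaze held that angle

def is_drowsy(s: DriverSample) -> bool:
    """Criteria (i)-(v) of Nania paragraph 31 as threshold checks.

    Where Nania says only "a threshold period of time", the concrete
    number below is a placeholder; the others follow Nania's examples.
    """
    return (
        s.nod_duration_s >= 3.0                 # (i) nod for ~3 s
        or s.nod_followed_by_jerk               # (ii) nod plus sharp jerk
        or s.eyes_closed_s >= 3.0               # (iii) eyes closed (placeholder)
        or s.eye_closing_time_s >= 2.0          # (iv) slow open-to-closed (~2 s)
        or (s.gaze_below_horizon_deg >= 45.0    # (v) gaze 45 degrees below horizon
            and s.low_gaze_duration_s >= 5.0)   #     held for ~5 s
    )
```

Any one criterion firing suffices, mirroring the "or" chain of the quoted paragraph.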
In addition, Nania teaches wherein the target viewing sequence of the active driving situation (Nania paragraphs 18 and 31 noting the possible criteria of signs of drowsiness, and that each are compared to thresholds, implying that there are acceptable ranges of eye activity during the driving situation that would be a target viewing sequence of a non-drowsy driver) is determined depending on at least one of the group consisting of: a navigation task along a navigation route of the vehicle; an operating interaction of the driver with a driving-relevant operating component of the vehicle; a driving event; and a model of the surroundings of the vehicle (Nania paragraph 31 noting FIG. 4 is a flow diagram of a method to provide alerts for occupant alertness that may be implemented by the electronic components 300 of FIG. 3. Initially, at block 402, the occupant monitor 108 monitors the driver 114 via the driver camera(s) 106. At block 404, the occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closed state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five seconds, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406).

In regard to claim 17, Nania teaches all of the limitations presented in claim 16 as discussed above. In addition, Nania teaches wherein the target viewing sequence of the active driving situation (Nania paragraphs 18 and 31 noting the possible criteria of signs of drowsiness, and that each are compared to thresholds, implying that there are acceptable ranges of eye activity during the driving situation that would be a target viewing sequence of a non-drowsy driver) is determined depending on an operating interaction of the driver with a driving-relevant operating component of the vehicle (Nania paragraph 31 noting FIG. 4 is a flow diagram of a method to provide alerts for occupant alertness that may be implemented by the electronic components 300 of FIG. 3. Initially, at block 402, the occupant monitor 108 monitors the driver 114 via the driver camera(s) 106. At block 404, the occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closed state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five seconds, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406).

In regard to claim 18, Nania teaches all of the limitations presented in claim 16 as discussed above. In addition, Nania teaches wherein the target viewing sequence of the active driving situation (Nania paragraphs 18 and 31 noting the possible criteria of signs of drowsiness, and that each are compared to thresholds, implying that there are acceptable ranges of eye activity during the driving situation that would be a target viewing sequence of a non-drowsy driver) is determined depending on a model of the surroundings of the vehicle (Nania paragraph 31 noting FIG. 4 is a flow diagram of a method to provide alerts for occupant alertness that may be implemented by the electronic components 300 of FIG. 3. Initially, at block 402, the occupant monitor 108 monitors the driver 114 via the driver camera(s) 106. At block 404, the occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closed state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five seconds, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406).

In regard to claim 20, Nania teaches all of the limitations of claim 12 as discussed above. In addition, Nania teaches wherein the target viewing sequence includes a sequence of monitoring views of the driver for monitoring the driving events in the active driving situation (Nania Fig. 4 showing the flowchart of the active process of monitoring the driver during a driving scenario, and that the monitoring driver step at 402 would loop and continue if the driver is not drowsy, but would enter a different stage of the flowchart once a drowsiness determination is made, indicating a change in viewing sequence from non-drowsy to drowsy; and Nania paragraph 31 noting FIG. 4 is a flow diagram of a method to provide alerts for occupant alertness that may be implemented by the electronic components 300 of FIG. 3. Initially, at block 402, the occupant monitor 108 monitors the driver 114 via the driver camera(s) 106. At block 404, the occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closed state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five seconds, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406).

In regard to claim 21, Nania teaches all of the limitations of claim 20 as discussed above. In addition, Nania teaches wherein the target viewing sequence includes a time interval between each two views of the target viewing sequence (Nania paragraph 31 noting FIG. 4 is a flow diagram of a method to provide alerts for occupant alertness that may be implemented by the electronic components 300 of FIG. 3. Initially, at block 402, the occupant monitor 108 monitors the driver 114 via the driver camera(s) 106. At block 404, the occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closed state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five seconds, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406).

In regard to claim 22, Nania teaches all of the limitations of claim 12 as discussed above. In addition, Nania teaches wherein the actual viewing sequence includes at least one of the group consisting of: a sequence of monitoring views of the driver for monitoring the driving events during the active driving situation; and a time interval between each two views of the target viewing sequence during the active driving situation (Nania Fig. 4 showing the flowchart of the active process of monitoring the driver during a driving scenario, and that the monitoring driver step at 402 would loop and continue if the driver is not drowsy, but would enter a different stage of the flowchart once a drowsiness determination is made, indicating a change in viewing sequence from non-drowsy to drowsy; and Nania paragraph 31 noting FIG. 4 is a flow diagram of a method to provide alerts for occupant alertness that may be implemented by the electronic components 300 of FIG. 3. Initially, at block 402, the occupant monitor 108 monitors the driver 114 via the driver camera(s) 106. At block 404, the occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closed state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five seconds, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406).

In regard to claim 23, Nania teaches all of the limitations of claim 12 as discussed above. In addition, Nania teaches determining the state of the driver as inattentive if the change in the actual viewing sequence from the target viewing sequence indicates a change or an omission of a view of the target viewing sequence (Nania Fig. 4 showing the flowchart of the active process of monitoring the driver during a driving scenario, and that the monitoring driver step at 402 would loop and continue if the driver is not drowsy, but would enter a different stage of the flowchart once a drowsiness determination is made, indicating a change in viewing sequence from non-drowsy to drowsy).

In regard to claim 24, Nania teaches all of the limitations of claim 12 as discussed above.
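For contrast with Nania's per-snapshot criteria, the claimed tiredness determination of claim 12 turns on the time interval between views of an actual viewing sequence. A rough sketch of that comparison follows; the function name, the list-of-timestamps representation, and the 2.0-second threshold are illustrative assumptions, not taken from the application:

```python
def driver_state(actual_view_times: list[float],
                 threshold_s: float = 2.0) -> str:
    """Return 'tired' when the time interval between two consecutive views
    of the actual viewing sequence exceeds the predetermined threshold,
    per the final limitation of claim 12; otherwise 'attentive'.
    (The claim says "at least two views"; consecutive views are checked
    here as one simple reading of that limitation.)
    """
    for earlier, later in zip(actual_view_times, actual_view_times[1:]):
        if later - earlier > threshold_s:
            return "tired"
    return "attentive"

print(driver_state([0.0, 1.2, 2.1, 5.9]))  # 3.8 s gap -> "tired"
```

This highlights the distinction an applicant might press: the claim compares intervals within a viewing sequence against a target sequence, whereas Nania's quoted criteria threshold individual gaze and eye-state measurements.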
In addition, Nania teaches determining the state of the driver as tired and inattentive if the change in the actual viewing sequence from the target viewing sequence indicates that a time interval between at least two views of the actual viewing sequence exceeds a predetermined threshold value (Nania paragraph 13 noting that the system looks for signs of drowsiness, and takes an escalating series of actions based on determining drowsiness characteristics, and after taking one action, if a subsequent detection of the signs of drowsiness occurs, a second action is taken. As such, it can be seen that at least two views of the actual viewing sequence exceeded predetermined thresholds in such a scenario), and the change in the actual viewing sequence from the target viewing sequence indicates a change or an omission of a view of the target viewing sequence (Nania Fig. 4 showing the flowchart of the active process of monitoring the driver during a driving scenario, and that the monitoring driver step at 402 would loop and continue if the driver is not drowsy, but would enter a different stage of the flowchart once a drowsiness determination is made, indicating a change in viewing sequence from non-drowsy to drowsy; and Nania paragraph 31 noting FIG. 4 is a flow diagram of a method to provide alerts for occupant alertness that may be implemented by the electronic components 300 of FIG. 3. Initially, at block 402, the occupant monitor 108 monitors the driver 114 via the driver camera(s) 106. At block 404, the occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. 
For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closes state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five second, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406). In regard to claim 25, Nania teaches all of the limitations of claim 12 as discussed above. In addition, Nania teaches a computer-readable medium for monitoring a state of a driver of a vehicle using a camera in an interior of the vehicle, wherein the computer-readable medium contains instructions which, when executed on a control unit or a computer (Nania paragraph 6 noting an example tangible computer readable medium comprises instructions that, when executed, cause a vehicle to monitor, with a camera affixed to a rear-view mirror, the driver to detect drowsiness events. The example instructions also cause the vehicle to, in response to detecting a first drowsiness event, provide feedback to a driver. Additionally, the example instructions cause the vehicle to, in response to detecting a second drowsiness event after the first select an accommodation in the geographic vicinity of the vehicle, and set a navigation system to navigate to the selected accommodation), carry out the method as claimed in claim 12 (Nania teaches all of the limitations of claim 12 as discussed in the rejection of claim 12 above). In regard to claim 26, Nania teaches all of the limitations of claim 12 as discussed above. 
In addition, Nania teaches a system for monitoring a state of a driver of a vehicle using a camera in an interior of the vehicle (Nania abstract noting systems and methods are disclosed for occupant alertness-based navigation. An example vehicle includes a camera and an occupant monitor. The example camera is affixed to a rear-view mirror of the vehicle to detect drowsiness events associated with a driver), wherein the system is designed to carry out the method as claimed in claim 12 (Nania teaches all of the limitations of claim 12 as discussed in the rejection of claim 12 above). In regard to claim 27, Nania teaches all of the limitations of claim 26 as discussed above. In addition, Nania teaches a vehicle containing the system for monitoring the state of the driver of the vehicle using the camera in an interior of the vehicle as claimed in claim 26 (Nania abstract noting systems and methods are disclosed for occupant alertness-based navigation. An example vehicle includes a camera and an occupant monitor. The example camera is affixed to a rear-view mirror of the vehicle to detect drowsiness events associated with a driver; and Nania Fig. 1-2 showing a vehicle containing the system for monitoring, and comprising a camera 106 in the interior of the vehicle). In regard to claim 28, Nania teaches a system for monitoring a state of a driver of a vehicle, comprising: at least a first camera configured to obtain information representative of a viewing behavior of a driver of the vehicle during an active driving situation (Nania Fig. 4 showing the flowchart of driver monitoring during active driving; and Nania paragraph 31 noting occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. 
For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closed state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five seconds, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy) with the vehicle using the camera in the interior of the vehicle (Nania abstract noting systems and methods are disclosed for occupant alertness-based navigation. An example vehicle includes a camera and an occupant monitor. The example camera is affixed to a rear-view mirror of the vehicle to detect drowsiness events associated with a driver; and Nania Fig. 1-2 showing a vehicle containing the system for monitoring, and comprising a camera 106 in the interior of the vehicle; Nania paragraph 18 noting occupant monitor 108 monitors the driver 114 for signs of drowsiness. To detect the signs of drowsiness, the occupant monitor 108, via the driver cameras 106, detects (a) the position of the head of the driver 114, (b) the state of the eyes (e.g., open, partially open, or closed) of the driver 114, and/or (c) the direction of the gaze of the driver 114. The occupant monitor 108 determines that the driver 114 is drowsy); a control unit configured (Nania paragraph 25 noting on-board computing platform 302 includes a processor or controller 312 and memory 314.
In some examples, the on-board computing platform 302 is structured to include the occupant monitor 108) to, determine a target viewing sequence of the active driving situation (Nania paragraph 18 and 31 noting the possible criteria of signs of drowsiness, and that each are compared to thresholds, implying that there are acceptable ranges of eye activity during the driving situation that would be a target viewing sequence of a non-drowsy driver); determine an actual viewing sequence during the active driving situation depending on the information representative of the viewing behavior of the driver of the vehicle (Nania paragraph 18 noting occupant monitor 108 monitors the driver 114 for signs of drowsiness. To detect the signs of drowsiness, the occupant monitor 108, via the driver cameras 106, detects (a) the position of the head of the driver 114, (b) the state of the eyes (e.g., open, partially open, or closed) of the driver 114, and/or (c) the direction of the gaze of the driver 114. The occupant monitor 108 determines that the driver 114 is drowsy); determine a change in the actual viewing sequence from the target viewing sequence during the active driving situation (Nania Fig. 4 showing the flowchart of the active process of monitoring the driver during a driving scenario, and that the monitoring driver step at 402 would loop and continue if the driver is not drowsy, but would enter a different stage of the flowchart once a drowsiness determination is made, indicating a change in viewing sequence from non-drowsy to drowsy); and determine the state of the driver as tired (Nania paragraph 31 noting If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406) if the change in the actual viewing sequence from the target viewing sequence indicates that a time interval between at least two views of the actual viewing sequence exceeds a predetermined threshold value (Nania paragraph 31 noting FIG.
4 is a flow diagram of a method to provide alerts for occupant alertness that may be implemented by the electronic components 300 of FIG. 3. Initially, at block 402, the occupant monitor 108 monitors the driver 114 via the driver camera(s) 106. At block 404, the occupant monitor 108 determines whether there is an indication that the driver 114 is drowsy. For example, the occupant monitor 108 may determine that the driver 114 is drowsy when (i) the head of the driver 114 nods for a threshold period of time (e.g., three seconds, etc.), (ii) the head of the driver 114 nods and is followed by a sharp jerk, (iii) the eyes of the driver 114 are closed for a threshold period of time, (iv) the time for the eyes of the driver 114 to transition from the open state to the closed state is greater than a threshold period of time (e.g., two seconds, etc.) or (v) the gaze of the driver 114 is at a threshold angle (e.g., 45 degrees below the horizon, etc.) for a threshold period of time (e.g., five seconds, etc.). If the occupant monitor 108 determines that the driver 114 is drowsy, the method continues at block 406).

In regard to claim 29, Nania teaches all of the limitations presented in claim 28 as discussed above. In addition, Nania teaches wherein the camera is integrated into the interior of the vehicle and enables an unrestricted view of a driver (Nania Fig. 2 showing the camera integrated into the rearview mirror 116 of the vehicle; and Nania paragraph 17 noting the driver cameras 106 monitor a driver 114 to detect when the driver 114 is drowsy. The driver cameras 106 are mounted on a front of the rear-view mirror 116. In the illustrated example, the driver cameras 106 include integrated facial-feature recognition with infrared thermal imaging. The driver cameras 106 detects (a) the position of the head of the driver 114, (b) the state of the eyes (e.g., open, partially open, or closed) of the driver 114, and/or (c) the direction of the gaze of the driver 114).
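The five drowsiness criteria the examiner maps from Nania paragraph 31, and the block 402/404/406 flow of FIG. 4, can be expressed as a short sketch. This is an illustrative reconstruction only, not code from the record: the names (`DriverState`, `is_drowsy`) are hypothetical, and the eyes-closed threshold value is an assumption since Nania gives no example figure for criterion (iii).

```python
from dataclasses import dataclass

# Thresholds follow the examples in Nania paragraph 31 (3 s nod, 2 s
# open-to-closed transition, 45 degrees below horizon held for 5 s).
NOD_THRESHOLD_S = 3.0
EYE_CLOSE_TRANSITION_THRESHOLD_S = 2.0
EYES_CLOSED_THRESHOLD_S = 3.0      # "a threshold period of time" (value assumed)
GAZE_ANGLE_THRESHOLD_DEG = -45.0   # 45 degrees below the horizon
GAZE_HOLD_THRESHOLD_S = 5.0

@dataclass
class DriverState:
    """Hypothetical per-frame summary produced by the occupant monitor."""
    nod_duration_s: float          # how long the head has been nodding
    nod_followed_by_jerk: bool     # nod ending in a sharp jerk
    eyes_closed_duration_s: float  # continuous eyes-closed time
    eye_close_transition_s: float  # open-to-closed transition time
    gaze_angle_deg: float          # gaze elevation (negative = below horizon)
    gaze_hold_duration_s: float    # time spent at the current gaze angle

def is_drowsy(s: DriverState) -> bool:
    """Block 404: return True if any of criteria (i)-(v) is met."""
    return (
        s.nod_duration_s >= NOD_THRESHOLD_S                             # (i)
        or s.nod_followed_by_jerk                                       # (ii)
        or s.eyes_closed_duration_s >= EYES_CLOSED_THRESHOLD_S          # (iii)
        or s.eye_close_transition_s > EYE_CLOSE_TRANSITION_THRESHOLD_S  # (iv)
        or (s.gaze_angle_deg <= GAZE_ANGLE_THRESHOLD_DEG                # (v)
            and s.gaze_hold_duration_s >= GAZE_HOLD_THRESHOLD_S)
    )
```

In the FIG. 4 flow, block 402 would loop (monitor the driver) while `is_drowsy` returns False, and control would pass to block 406 (provide feedback) on the first True result.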
In regard to claim 30, Nania teaches all of the limitations of claim 28 as discussed above. In addition, Nania teaches wherein the camera is integrated into an interior mirror of the vehicle (Nania Fig. 2 showing the camera integrated into the rearview mirror 116 of the vehicle; and Nania paragraph 17 noting the driver cameras 106 monitor a driver 114 to detect when the driver 114 is drowsy. The driver cameras 106 are mounted on a front of the rear-view mirror 116. In the illustrated example, the driver cameras 106 include integrated facial-feature recognition with infrared thermal imaging. The driver cameras 106 detects (a) the position of the head of the driver 114, (b) the state of the eyes (e.g., open, partially open, or closed) of the driver 114, and/or (c) the direction of the gaze of the driver 114).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 19 and 31 are rejected under 35 U.S.C.
103 as being unpatentable over Nania (U.S. Publication No. 2018/0015825), hereinafter referred to as Nania, in view of Pugh et al. (U.S. Publication No. 2017/0031159), hereinafter referred to as Pugh.

In regard to claim 19, Nania teaches all of the limitations of claim 12 as discussed above. However, Nania does not expressly disclose wherein the target viewing sequence is a driver-specific target viewing sequence of the active driving situation; and the driver-specific target viewing sequence is preferably learned within a predetermined time interval using a historical viewing behavior of the driver of the vehicle in the active driving situation. In the same field of endeavor, Pugh teaches wherein the target viewing sequence is a driver-specific target viewing sequence of the active driving situation; and the driver-specific target viewing sequence is preferably learned within a predetermined time interval using a historical viewing behavior of the driver of the vehicle in the active driving situation (Pugh paragraph 15 noting providing monitoring of the aware/drowsiness levels of workers, such as truck drivers; Pugh paragraph 86 noting eye ratios and thresholds are user-specific; and paragraph 130 noting that the system inputs, reactions, and decision thresholds can be trained or part of an adaptive/learning algorithm which refines the response in operation based on an individual’s deviations from the general expected response and/or changes in blink frequency or general characteristics of eye movements. As such, it can be seen that the thresholds for detecting drowsy/non-drowsy eye movements can be set based on a learning process that is unique to the individual driver).
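Pugh paragraph 130, as cited, describes decision thresholds refined by an adaptive/learning algorithm based on an individual's deviations from the expected response. A minimal sketch of such driver-specific calibration follows. Everything here is an assumption for illustration: Pugh does not specify an update rule, so an exponential moving average over alert-state blink durations is used, and the class and parameter names (`DriverThresholdLearner`, `alpha`, `margin`) are hypothetical.

```python
class DriverThresholdLearner:
    """Adapt a drowsiness threshold to one driver's baseline blink duration.

    Illustrative only: Pugh describes adaptive/learning thresholds in
    general; the EMA update rule and the margin multiplier are assumptions.
    """

    def __init__(self, initial_threshold_s: float = 2.0,
                 alpha: float = 0.1, margin: float = 3.0):
        self.baseline_s = None          # learned mean alert-blink duration
        self.alpha = alpha              # EMA learning rate
        self.margin = margin            # multiple of baseline treated as drowsy
        self.threshold_s = initial_threshold_s

    def observe_alert_blink(self, duration_s: float) -> None:
        """Update the driver-specific baseline from an alert-state blink,
        e.g. during a predetermined learning interval."""
        if self.baseline_s is None:
            self.baseline_s = duration_s
        else:
            self.baseline_s += self.alpha * (duration_s - self.baseline_s)
        self.threshold_s = self.margin * self.baseline_s

    def is_drowsy_blink(self, duration_s: float) -> bool:
        """Compare a new blink against the learned, per-driver threshold."""
        return duration_s > self.threshold_s
```

A driver with naturally slow blinks would thus raise the threshold above the population default, which is the adaptation-to-individual-idiosyncrasies benefit the rejection attributes to Pugh.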
It would have been obvious, for a person having ordinary skill in the art before the effective filing date, to combine the teachings of Nania with the teachings of Pugh, because both disclosures relate to systems that use electronic devices comprising cameras to monitor someone such as the driver of a vehicle to determine drowsiness levels based on characteristics and movements of their eyes. Both disclosures describe comparing the activity of the eyes of the driver of a vehicle to thresholds to determine whether their actions are within an acceptable range of alertness, and to take action based on determining whether the driver is drowsy. The teachings of Pugh would benefit the teachings of Nania by providing additional details regarding training the system to consider individual idiosyncrasies of a specific driver and how their normal alert actions might deviate from what would be considered normal or average thresholds, to better adapt the system to a specific driver. As such, modified to incorporate these teachings, the system taught by Nania would include all of the limitations presented in claim 19.

In regard to claim 31, Nania teaches all of the limitations of claim 28 as discussed above. However, Nania does not expressly disclose wherein the target viewing sequence is a driver-specific target viewing sequence of the active driving situation; and the driver-specific target viewing sequence is preferably learned within a predetermined time interval using a historical viewing behavior of the driver of the vehicle in the active driving situation.
In the same field of endeavor, Pugh teaches wherein the target viewing sequence is a driver-specific target viewing sequence of the active driving situation; and the driver-specific target viewing sequence is preferably learned within a predetermined time interval using a historical viewing behavior of the driver of the vehicle in the active driving situation (Pugh paragraph 15 noting providing monitoring of the aware/drowsiness levels of workers, such as truck drivers; Pugh paragraph 86 noting eye ratios and thresholds are user-specific; and paragraph 130 noting that the system inputs, reactions, and decision thresholds can be trained or part of an adaptive/learning algorithm which refines the response in operation based on an individual’s deviations from the general expected response and/or changes in blink frequency or general characteristics of eye movements. As such, it can be seen that the thresholds for detecting drowsy/non-drowsy eye movements can be set based on a learning process that is unique to the individual driver). It would have been obvious, for a person having ordinary skill in the art before the effective filing date, to combine the teachings of Nania with the teachings of Pugh for the same reasons as discussed above regarding claim 19.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Reed et al. – U.S. Publication No. 2016/0180677: An apparatus to indicate that the driver is distracted in response to the eye gaze of the driver being diverted off of a road for a period of time that exceeds an associated time threshold.

Higgins-Luthman et al. – U.S. Publication No. 2010/0020170: A vision system that may detect characteristics of the driver that may be indicative of the driver being inattentive, drowsy, under the influence of substance use, bored, young/old, healthy, having less than 20/20 vision, color deficient, poor field of view, poor contrast sensitivity, poor clutter analysis, poor reaction time, poor car maintenance, or encountering a challenging environment, such as rain, snow, fog, traffic in cocoon, safe space around car, poor lighting, unsafe area-past accidents, icy conditions, curves, intersections and/or the like.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TYLER B EDWARDS whose telephone number is (571)272-2738. The examiner can normally be reached 9:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sathyanarayanan Perungavoor, can be reached at (571)272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TYLER B. EDWARDS/
Examiner, Art Unit 2488

/HOWARD D BROWN JR/
Examiner, Art Unit 2488

Prosecution Timeline

Jul 31, 2024
Application Filed
Mar 20, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581042
EVENT RECOGNITION SYSTEMS AND METHODS
2y 5m to grant Granted Mar 17, 2026
Patent 12561983
SENSOR PROCESSING METHOD, APPARATUS, COMPUTER PROGRAM PRODUCT, AND AUTOMOTIVE SENSOR SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12556689
INTRA PREDICTION METHOD AND DEVICE
2y 5m to grant Granted Feb 17, 2026
Patent 12552316
VEHICULAR MIRROR CONTROL SYSTEM
2y 5m to grant Granted Feb 17, 2026
Patent 12556693
INTRA PREDICTION METHOD AND DEVICE
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
91%
With Interview (+14.5%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 468 resolved cases by this examiner. Grant probability derived from career allow rate.
