Prosecution Insights
Last updated: April 19, 2026
Application No. 18/523,923

SELF-DRIVING TAKEOVER DETERMINING METHOD AND SYSTEM THEREOF

Status: Non-Final OA (§103)
Filed: Nov 30, 2023
Examiner: ALZATEEMEH, HUSSAM ALDEEN
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Automotive Research & Testing Center
OA Round: 1 (Non-Final)

Grant Probability: 50% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 89%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 11 granted / 22 resolved; -2.0% vs TC avg)
Interview Lift: +39.3% (allow rate for resolved cases with an interview vs. without)
Typical Timeline: 2y 9m avg prosecution; 31 applications currently pending
Career History: 53 total applications across all art units
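These panel figures are simple ratios over the examiner's resolved cases, so they can be sanity-checked. The sketch below recomputes them; the per-case records are invented placeholders (the real docket data is not on this page), chosen only so the totals match 11 of 22 granted and the +39.3% lift.

```python
# Recomputes the Examiner Intelligence ratios from per-case records.
# The records below are hypothetical, constructed so the totals match the
# panel above (11 of 22 resolved cases granted; +39.3% interview lift).

def allow_rate(cases):
    """Share of resolved cases that ended in a grant."""
    return sum(c["granted"] for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases):
    """Allow rate with an interview minus allow rate without one."""
    with_iv = [c for c in cases if c["interview"]]
    without_iv = [c for c in cases if not c["interview"]]
    return allow_rate(with_iv) - allow_rate(without_iv)

cases = (
    [{"granted": True, "interview": True}] * 6    # toy split: 6/8 granted
    + [{"granted": False, "interview": True}] * 2  #   with an interview,
    + [{"granted": True, "interview": False}] * 5  #   5/14 granted without
    + [{"granted": False, "interview": False}] * 9
)

print(f"career allow rate: {allow_rate(cases):.0%}")       # 50%
print(f"interview lift:    {interview_lift(cases):+.1%}")  # +39.3%
```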

Statute-Specific Performance

§101: 7.3% (-32.7% vs TC avg)
§103: 57.3% (+17.3% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 22 resolved cases
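The "vs TC avg" deltas are plain differences, and every listed delta backs out the same baseline: a Tech Center average of 40.0% per statute (e.g., 57.3% - 40.0% = +17.3%). A small sketch of that arithmetic, treating the flat 40% baseline as an assumption since the page never states it:

```python
# Reproduces the "vs TC avg" deltas above. The flat Tech Center baseline is
# an assumption: it is implied by every listed delta (each works out to a
# 40.0% reference), but the page never states it directly.

TC_AVG_ESTIMATE = 0.40

examiner_rate = {"§101": 0.073, "§103": 0.573, "§102": 0.270, "§112": 0.073}

for statute, rate in examiner_rate.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```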

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-14 have been presented for examination. Claims 1-3, 5, 7, and 12-14 are rejected.

Allowable Subject Matter

Claims 4, 6, and 8-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 11/30/2023. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "a self-driving unit" and "a processing unit" in claims 12, 13, and 14. See specification [0020-0021]. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 7, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Oba (US 20220289250 A1) in view of Tahara (US 20220346684 A1).

Regarding Claim 1: Oba discloses a self-driving takeover determining method, for determining whether a driver located on a driver's seat in a vehicle satisfies a self-driving takeover condition in a self-driving mode ([0138] "The driver information acquisition unit 12 acquires, for example, information for determining the arousal level of the driver, state information of the driver, and the like."; [0208] "In step S3, whether or not the driver has been seated and recovered is confirmed. In step S4, an internal arousal level state of the driver is confirmed by analyzing a face or an eyeball behavior such as saccade. In step S5, stability of an actual steering situation of the driver is monitored. Then, in step S6, the handover from the automatic driving to the manual driving is completed."), the self-driving takeover determining method comprising:

a driver calibrating step comprising, before entering the self-driving mode of the vehicle, capturing a plurality of calibration images of the driver by at least one camera and generating a plurality of driver calibration parameters by a calibration module according to the calibration images, wherein the driver calibration parameters are a plurality of relative position parameters of the driver in a cockpit of the vehicle ([0135] "The position sensor is, for example, a GPS receiver or the like, and the ambient information detection sensor is, for example, a camera, a stereo camera, a ToF sensor, an ultrasonic sensor, a radar, a light detection and ranging or a laser imaging detection and ranging (LiDAR), a sonar, or the like."; [0139] "the driver information acquisition unit 12 includes an imaging device that images a driver, a biosensor that detects biometric information of the driver, a microphone that collects sound in the interior of the vehicle, and the like. The biosensor is provided on, for example, a seating surface, a steering wheel, or the like, and detects a seating state of an occupant sitting on a seat or biometric information of the driver holding the steering wheel."; [0144] "FIG. 2 illustrates an example of various sensors for obtaining information of the driver inside the vehicle included in the driver information acquisition unit 12. For example, the driver information acquisition unit 12 includes a camera, a stereo camera, a ToF sensor, a seat strain gauge, and the like as detectors for detecting the position and posture of the driver."; [0212] "First, in step S11, driver authentication is performed. This driver authentication is performed using knowledge authentication using a password, a PIN, or the like, biometric authentication using the face, a fingerprint, an iris of a pupil, a voiceprint, or the like, or the knowledge authentication and the biometric authentication together. By performing the driver authentication in this way, information for determining the notification timing can be accumulated in association with each driver even in a case where a plurality of drivers drives the same vehicle."). Because Oba's driver authentication process includes biometric authentication using the face, a fingerprint, an iris of a pupil, a voiceprint, or the like, it necessarily requires capturing one or more driver biometric samples (e.g., face/iris images) and extracting/storing driver-specific reference parameters for later matching; these extracted reference parameters correspond to the claimed "driver calibration parameters" generated from calibration images prior to entering/using the self-driving mode;

an image capturing step comprising, during a first detection time period in the self-driving mode, capturing a plurality of driver images of the driver by the camera ([0139], quoted above);

a face detecting step comprising, by the detection module according to the driver images, detecting whether the driver satisfies at least one face characteristic condition, which is at least one face detection result ([0145] "the driver information acquisition unit 12 includes a driver authentication (driver identification) unit. Note that, as an authentication method, biometric authentication using a face, a fingerprint, an iris of a pupil, a voiceprint, or the like can be considered in addition to knowledge authentication using a password, a personal identification number, or the like."; [0208], quoted above);

a driver availability determining step comprising, by an availability determination module according to the at least one face detection result, determining whether the driver satisfies an availability condition, which is an availability determination result ([0033] "calculates a driver evaluation value that is an index value indicating whether or not the driver is in a state of being able to start the manual driving on the basis of the observation information, and stores the calculated driver evaluation value in the storage unit as the conversion unnecessary data."; [0156] "The data processing unit 11 further calculates safety index values indicating the state of the driver in the vehicle, for example, whether or not the driver in the automatic driving vehicle is in a safe manual driving executable state, and moreover, whether or not the driver in the manual driving is executing safe driving, for example."; [0208], quoted above);

and a self-driving takeover determining step comprising determining whether the self-driving takeover condition is satisfied according to the availability determination result ([0157] "Moreover, for example, in a case where necessity of switching from the automatic driving mode to the manual driving mode arises, the data processing unit 11 executes processing of issuing notification for switching to the manual driving mode via the notification unit 15."; [0208] and [0033], quoted above).

Oba does not appear to teach the full claim limitation regarding "a detection module updating step comprising updating a detection module according to the driver calibration parameters." However, Tahara teaches an equivalent detection module updating step comprising updating a detection module according to the driver calibration parameters ([0106] "When completing the setting of the abnormal-state determination threshold, the threshold setting unit 14 sets the estimatable flag "1". Specifically, in a case where the estimatable flag is "0", the control unit causes the threshold setting unit 14 to perform the process of setting the abnormal-state determination threshold."; [0109] "When the threshold setting unit 14 sets the estimatable flag to "1" and then the control unit determines that the estimatable flag is "1", the control unit causes the estimation unit 15 to estimate the abnormal state of the occupant. That is, in the driver availability detection device 1, after the threshold setting unit 14 sets the abnormal-state determination threshold, the abnormal state of the occupant is estimated."; [0112] "The estimation unit 15 estimates that the occupant is in the abnormal state in a case where the abnormal state score obtained by inputting the feature amount related to the occupant calculated by the feature-amount calculation unit 13 to the machine learning model 17 is larger than the abnormal-state determination threshold set by the threshold setting unit 14.").

It would have been obvious to a person skilled in the art before the effective filing date to combine Oba and Tahara to make the system wherein a detection module updating step comprises updating a detection module according to the driver calibration parameters. A person skilled in the art would have been motivated to combine Oba and Tahara to improve overall system operation and reduce erroneous estimation of the abnormal state of the occupant ([Tahara 0011] "According to the present invention, in the driver availability detection device that estimates the abnormal state of the occupant of the vehicle on the basis of the information related to the occupant and the machine learning model, it is possible to prevent erroneous estimation of the abnormal state of the occupant when estimating whether or not the occupant is in the abnormal state.").

Regarding Claim 2: The combination of Oba with Tahara teaches the self-driving takeover determining method of claim 1, further comprising: Oba discloses a vehicle body signal acquiring step comprising, during the first detection time period in the self-driving mode, by at least one vehicle body sensor, acquiring a plurality of vehicle body signals of the driver's seat, which comprise at least partial signals of a plurality of seat belt buckle signals and a plurality of driver's seat pressure signals ([0139], quoted above; [0274] "action information of the driver: also including smoothness information of posture transition such as orientation of the body of the driver, seat rotation,"; [0476] "in addition to posture tracking by the ToF sensors or the cameras, a rotational driving posture recovery detection of the seat, a seating sensor, a body temperature distribution and a vital signal detection, a seat belt wearing sensor, and the like. It is possible to evaluate the recovery quality on the basis of the detection information over time.");

a driver presence determining step comprising, by a presence determination module according to at least one of the driver images and the vehicle body signals, determining whether the driver satisfies a presence condition, which is a presence determination result ([0138], [0139], and [0208], quoted above);

and an alarming step ([0458] "In a case where the system detects that the driver is sleeping in the passive monitoring period at or before time t0, the system needs to calculate optimum timing of sounding the wake-up alarm before the handover processing.");

wherein the self-driving takeover determining step comprises determining whether the self-driving takeover condition is satisfied according to the availability determination result and the presence determination result ([0208], quoted above). Oba's step S3 corresponds to the claimed "presence determination" (driver seated/recovered), step S4 corresponds to the claimed "availability determination" (arousal level via face/eyeball behavior), and step S6 completes the handover based on those evaluated conditions (i.e., takeover satisfaction is determined according to the presence and availability results);

wherein when the self-driving takeover condition is not satisfied in the self-driving takeover determining step, the alarming step comprises generating at least one of a visual alarm, an auditory alarm and a vibration alarm to alarm the driver ([0139] and [0458], quoted above).

Regarding Claim 3: The combination of Oba with Tahara teaches the self-driving takeover determining method of claim 1, further comprising: Oba discloses a physiological signal acquiring step comprising, during the first detection time period in the self-driving mode, by at least one physiological sensor, acquiring a plurality of physiological signals of the driver, which comprise at least one of a plurality of heart rate signals and a plurality of respiratory signals ([0139], quoted above, the biosensor being the recited physiological sensor; [0140] "As a vital signal, diversified observable data is available such as heart rate, pulse rate, blood flow, respiration, mind-body correlation, visual stimulation, EEG, sweating state, head posture behavior, eye, gaze, blink, saccade, microsaccade, fixation, drift, stare, percentage of eye closure evaluation value (PERCLOS), and iris pupil reaction.");

a human body posture detecting step comprising, by the detection module according to the driver images, detecting whether the driver satisfies a non-sleeping posture characteristic condition, which is a non-sleeping posture detection result ([0140] and [0458], quoted above);

and a heart rate detecting step comprising, by the detection module according to the physiological signals, detecting whether the driver satisfies a heart rate characteristic condition, which is a heart rate detection result ([0140], quoted above; [0476] "The system (data processing unit 11) monitors that a necessary recovery procedure is normally performed, such as the driver recovering to the seat in the rotated state from the driving posture, and returning the seat to the direction where the driver can drive and wears a seat belt. This recovery procedure evaluation is performed by using, in addition to posture tracking by the ToF sensors or the cameras, a rotational driving posture recovery detection of the seat, a seating sensor, a body temperature distribution and a vital signal detection, a seat belt wearing sensor, and the like. It is possible to evaluate the recovery quality on the basis of the detection information over time."). Oba expressly shows "heart rate" among "vital signals" and further teaches using "vital signal detection" over time to evaluate recovery quality. Thus, the cited vital-signal detection corresponds to detecting whether the driver satisfies a heart-rate/vital-signal characteristic condition and generating the claimed "heart rate detection result";

wherein the driver availability determining step comprises, by the availability determination module according to the at least one face detection result, the non-sleeping posture detection result and the heart rate detection result, determining whether the driver satisfies the availability condition, which is the availability determination result ([0033], [0140], [0156], [0208], and [0458], quoted above).
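The Tahara passages relied on above for the "detection module updating step" ([0106], [0109], [0112]) reduce to a small gate: set a per-occupant abnormal-state threshold first, flip an "estimatable" flag from "0" to "1", and only then compare machine-learning scores against that threshold. A minimal sketch of that control flow follows; the class and method names and the scoring model are hypothetical, since Tahara discloses no code-level detail.

```python
# Minimal sketch of the threshold-gated estimation flow Tahara [0106]-[0112]
# describes, as quoted in the Claim 1 analysis above. Names and the scoring
# model are hypothetical; only the control flow (set a per-occupant threshold,
# flip the "estimatable" flag, then compare scores to the threshold) follows
# the cited paragraphs.

from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class AbnormalStateEstimator:
    # Stand-in for Tahara's "machine learning model 17": features -> score.
    score_model: Callable[[Sequence[float]], float]
    threshold: float | None = None
    estimatable: bool = False  # Tahara's estimatable flag, "0" until set

    def set_threshold(self, calibration_features: Sequence[Sequence[float]],
                      margin: float = 0.1) -> None:
        """Set the abnormal-state determination threshold from normal-state
        observations of this occupant, then mark estimation available
        (flag "1")."""
        baseline = max(self.score_model(f) for f in calibration_features)
        self.threshold = baseline + margin
        self.estimatable = True

    def estimate(self, features: Sequence[float]) -> bool | None:
        """True if the occupant is estimated abnormal; None while the
        threshold is unset (flag still "0"), per [0106]/[0109]."""
        if not self.estimatable:
            return None
        return self.score_model(features) > self.threshold
```

The point the rejection leans on is the ordering: estimation stays inert until the occupant-specific threshold is in place, which the examiner maps to updating the detection module according to the driver calibration parameters.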
Regarding Claim 7: The combination of Oba with Tahara teaches the self-driving takeover determining method of claim 1. Oba discloses wherein the detection module comprises an eye-opening detection portion, a view angle detection portion and a head deflection detection portion, which are all detection algorithms based on some learning… ([0140], quoted above; [0144] "the driver information acquisition unit 12 includes a camera, a stereo camera, a ToF sensor, a seat strain gauge, and the like as detectors for detecting the position and posture of the driver. Furthermore, the driver information acquisition unit 12 includes a face recognition device (face (head) recognition), a driver eye tracker, a driver head tracker, and the like, as detectors for obtaining activity observable information of the driver."), wherein the face detecting step comprises:

an eye-opening detecting step comprising, by the eye-opening detection portion of the detection module according to the driver images, detecting whether the driver satisfies an eye-opening characteristic condition, which is an eye-opening detection result ([0140], quoted above);

a view angle detecting step comprising, by the view angle detection portion of the detection module according to the driver images, detecting whether the driver satisfies a view angle characteristic condition, which is a view angle detection result ([0291] "There is behavior analysis for an eyeball as an effective means for confirming the driver's consciousness state. For example, it is conventionally known that it is possible to analyze a line-of-sight by analyzing a direction in which the line-of-sight is directed. By further developing this technology and analyzing the line-of-sight behavior at high speed, more detailed behavior detection of the eyeball can be performed."; [0570] "The second-order parameters (driver information) include a percentage of eye closure evaluation value (PERCLOS), a face orientation, and a line-of-sight stability evaluation value (coordinates and indexing data).");

and a head deflection detecting step comprising, by the head deflection detection portion of the detection module according to the driver images, detecting whether the driver satisfies a head deflection characteristic condition, which is a head deflection detection result ([0275] "(A3) driver's face and head information: face, head orientation, posture, movement information, and the like, and [0276] (A4) biometric information of the driver: heart rate, pulse rate, blood flow, respiration, electroencephalogram, sweating state, eye movement, eyeball behavior, gaze, blinking, saccade, microsaccade, fixation, drift, stare, percentage of eye closure evaluation value (PERCLOS), iris pupil reaction, and the like."; [0572] "The percentage of eye closure evaluation value (PERCLOS), the face orientation, and the line-of-sight stability evaluation value (coordinates and indexing data), which are the second-order parameters (driver information), are used for evaluation of fatigue, drowsiness, sleepiness sign, reduced consciousness, and line-of-sight stability, and steering and pedal steering stability, for example.");

wherein a number of the at least one face detection result is at least three, and the face detection results comprise the eye-opening detection result, the view angle detection result and the head deflection detection result ([0140], [0570], and [0275]-[0276], quoted above). Oba's system shows at least three distinct face/driver monitoring outputs: blink/PERCLOS (eye opening), line-of-sight direction/stability (view angle), and head/face orientation (head deflection).

Oba does not teach the claim limitation regarding performing the detection "based on a machine learning." However, Tahara teaches equivalent teachings wherein the detection is performed based on a machine learning ([0112], quoted above). It would have been obvious to a person skilled in the art before the effective filing date to combine Oba and Tahara to make the system wherein the detection is performed based on a machine learning. A person skilled in the art would have been motivated to combine Oba and Tahara to improve overall system operation and reduce erroneous estimation of the abnormal state of the occupant ([Tahara 0011], quoted above).

Regarding Claim 12: The claim recites a system (see Fig. 7 and [Oba 0076] "FIG. 7 is a diagram illustrating an example of a mode switching sequence from an automatic driving mode to a manual driving mode executed by the mobile device of the present disclosure.") with limitations parallel to those of claim 1, addressed for the reasons discussed above. Therefore, claim 12 is rejected on the same rationale.

Regarding Claim 13: The claim recites a system (see Figs. 3, 15, and 16) with limitations parallel to those of claims 2 and 3, addressed for the reasons discussed above. Therefore, claim 13 is rejected on the same rationale.

Regarding Claim 14: The claim recites a system with limitations parallel to those of claim 7, addressed for the reasons discussed above. Therefore, claim 14 is rejected on the same rationale.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Oba (US 20220289250 A1), in view of Tahara (US 20220346684 A1), and further in view of Wacquant (US 20150296135 A1).

Regarding Claim 5: The combination of Oba with Tahara teaches the self-driving takeover determining method of claim 1. Oba discloses wherein the driver calibration parameters comprise a sight range related information of the driver ([0140] and [0570], quoted above). The combination of Oba with Tahara does not appear to teach the full claim limitation regarding "a lateral distance and a longitudinal distance between the driver and at least one target object among a rearview mirror, a left rearview mirror, a right rearview mirror, a carputer, a steering wheel, an instrument panel and a glove compartment of the vehicle." However, Wacquant teaches equivalent teachings of a lateral distance and a longitudinal distance between the driver and at least one such target object ([0004] "to capture image data representative of the driver's head and eyes to determine a head and gaze direction of the driver. The system includes a control having an image processor operable to process image data captured by the cameras. The control, responsive to processing by the image processor of image data captured by both cameras of the pair of cameras, is operable to determine a three-dimensional eye position and a three-dimensional gaze vector for at least one of the driver's eyes."; [0022] "FIG. 17 is an in vehicle cabin shot from the right eye tracker camera which is installed beside the vehicle steering wheel facing inbound capturing a mirror image at a target mirror which shows a target fixed (in real) in the in cabin mirror region in a virtual distance within the virtual space, with the target mirror also having a target stitched to the mirror plane"; [0027] "The vision system 12 includes a control or electronic control unit (ECU) or processor 18 that is operable to process image data captured by the cameras and may provide displayed images at a display device 16 for viewing by the driver of the vehicle (although shown in FIG. 1 as being part of or incorporated in or at an interior rearview mirror assembly 20 of the vehicle, the control and/or the display device may be disposed elsewhere at or in the vehicle)."; [0076] "a system that provides enhanced eye and gaze detection to determine a driver's eye gaze direction and focus distance via image processing of image data captured by cameras disposed in the vehicle and having fields of view that encompass the driver's head region. The determination of the driver's eye gaze direction may be used to actuate or control or adjust a vehicle system or accessory or function. For example, the captured image data may be processed for determination of the driver's or passenger's eye gaze direction and focus distance for various applications or functions, such as for use in association with activation of a display or the like").

Wacquant explicitly teaches calibrating the driver's eye position in space, using in-cabin mirror/display targets as reference points, and converting data into an eye-tracker coordinate system. The "three-dimensional eye position and a three-dimensional gaze vector," the "virtual distance within the virtual space, with the target mirror," and the "driver's eye gaze direction and focus distance via image processing of image data captured by cameras disposed in the vehicle and having fields of view that encompass the driver's head region" show that lateral/longitudinal distances are considered and used in the calibration parameters.

It would have been obvious to a person skilled in the art before the effective filing date to combine Oba, Tahara, and Wacquant to make the system wherein a lateral distance and a longitudinal distance between the driver and at least one target object among a rearview mirror, a left rearview mirror, a right rearview mirror, a carputer, a steering wheel, an instrument panel and a glove compartment of the vehicle are considered and used in the calibration parameters. A person skilled in the art would have been motivated to combine Oba, Tahara, and Wacquant to improve overall system operation and reduce erroneous estimation ([Wacquant 0011] "For system calibration it is known to try to calibrate eye gaze systems without interaction with the user/driver. It is also known to try to calibrate eye gaze systems in a way that the user/driver doesn't notice the calibration. A calibration to a fixating point that the system may assume the driver may focus at a point of time may just deliver one gaze direction measurement reference, but the x, y, z positional error may falsify the result. To accommodate this, the present invention may measure several gaze vectors of fixated points (by the user/driver) which may differ in position, especially the distance, whereby the system may be able to calibrate both the eye gaze origin (the eye position in space) and the eye gaze.").

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUSSAM ALZATEEMEH, whose telephone number is (703) 756-1013. The examiner can normally be reached 8:00-5:00 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aniss Chad, can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUSSAM ALDEEN ALZATEEMEH/
Examiner, Art Unit 3662

/Madison R. Inserra/
Primary Examiner, Art Unit 3662
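Stepping back from the office action text: the rejection maps the claimed pipeline onto a small set of boolean detector outputs, with claim 7's three face detection results (eye opening, view angle, head deflection) feeding claim 1's availability determination. The sketch below shows that shape only; every threshold and field name is hypothetical, and PERCLOS and face/gaze orientation come from the cited Oba paragraphs ([0140], [0570]), not from the application's actual conditions.

```python
# Schematic sketch of the claim-7 face detecting step as the rejection maps it:
# three detector outputs (eye opening, view angle, head deflection) computed
# from driver images over a detection window, combined into the claim-1
# availability determination. All thresholds and field names are hypothetical.

from dataclasses import dataclass

@dataclass
class FrameFeatures:
    eye_openness: float  # 0.0 (closed) .. 1.0 (fully open)
    gaze_yaw_deg: float  # gaze direction relative to straight ahead
    head_yaw_deg: float  # head orientation relative to straight ahead

def face_detection_results(frames: list[FrameFeatures]) -> dict[str, bool]:
    """Evaluate the three face characteristic conditions over a (non-empty)
    detection window of per-frame features."""
    n = len(frames)
    # PERCLOS-style measure: fraction of frames with eyes mostly closed.
    perclos = sum(f.eye_openness < 0.2 for f in frames) / n
    return {
        "eye_opening":     perclos < 0.3,
        "view_angle":      sum(abs(f.gaze_yaw_deg) < 30 for f in frames) / n > 0.5,
        "head_deflection": sum(abs(f.head_yaw_deg) < 45 for f in frames) / n > 0.5,
    }

def driver_available(frames: list[FrameFeatures]) -> bool:
    """Claim-1 style availability determination: all three face results hold."""
    return all(face_detection_results(frames).values())
```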

Prosecution Timeline

Nov 30, 2023: Application Filed
Jan 30, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591235: SYSTEM AND METHOD FOR CONTROLLING UNMANNED AUTONOMOUS VEHICLES
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12555480: INFORMATION PROCESSING APPARATUS, MOVING OBJECT, SYSTEM, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM TO IDENTIFY A RISK AREA
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12554267: AUTOMATIC DRIVING METHOD, APPARATUS AND SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12547191: CONTROL DEVICE FOR ROBOT IN MULTI-AGENT SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12528432: APPARATUS AND METHOD FOR REDUCING CURRENT DRAINAGE FROM A BATTERY OF A VEHICLE
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 50%
With Interview: 89% (+39.3%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
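The projection arithmetic is reconstructable from the panels above: the base grant probability is the 50% career allow rate, and the with-interview figure adds the +39.3% interview lift (50% + 39.3% ≈ 89%). A sketch of that reading, which is an inference from the page's own footnote rather than a disclosed formula:

```python
# How the projection numbers above fit together (a reconstruction, not the
# vendor's disclosed formula): base grant probability is the examiner's
# career allow rate, and the with-interview figure adds the interview lift.

career_allow_rate = 11 / 22  # 50% — "derived from career allow rate"
interview_lift = 0.393       # +39.3%, from the Examiner Intelligence panel

grant_probability = career_allow_rate
with_interview = min(grant_probability + interview_lift, 1.0)  # cap at 100%

print(f"grant probability: {grant_probability:.0%}")  # 50%
print(f"with interview:    {with_interview:.0%}")     # 89%
```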
