Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Applicant’s election without traverse of Invention I, claims 1-6, in the reply filed on November 25, 2025, is acknowledged.
Claims 7-29 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to nonelected Inventions II, III, and IV, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on November 25, 2025.
Claim Objections
Claim 1 is objected to because of the following informalities: all "optional" claim limitations (such as limitations that begin with "if used") should be required, because paragraph [0034] of the specification and Figure 1 show these "modules" and their outputs as connected to one another and not optional. These limitations have been read as required for purposes of examination.
Appropriate correction is required.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "module" in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Senechal et al. (US-20210001862-A1) in view of el Kaliouby et al. (US-20200226355-A1) and further in view of Gallagher et al. (US-20190332902-A1).
Regarding claim 1, Senechal teaches a system comprising:
a task for an automobile interior (“The in-cabin sensor data includes images of the vehicle interior,” Para [0023]) having at least one subject (“an occupant of the vehicle,” Para [0024]) that creates a video input (“Images of a vehicle interior can be collected. The images can include video,” Para [0025]), an audio input (“Other in-vehicle sensors can be used for data collection, such as a microphone for collecting audio data or voice data,” Para [0025]), and a context descriptor input (“other sensors to collect physiological data,” Para [0025]);
wherein the video input relating to the at least one subject is processed by a face detection module (“The set of seating locations is scanned for performing facial detection for each of the seating locations using a facial detection model,” Para [0023]) and a facial point registration module to produce a first output (“The input layer can then perform processing tasks such as identifying boundaries of the face, identifying landmarks of the face, extracting features of the face,” Para [0091]);
wherein the first output is further processed by at least one of: a facial point tracking module (“the deep learning can be applied to vehicular in-cabin facial tracking,” Para [0086]), a head orientation tracking module (“Facial data as contained in the raw video data can include… head gestures,” Para [0088]), a body tracking module (“A third track 564 can include upper body data 540,” Para [0078]), and an action unit intensity tracking module (“The processing can include analysis of… action units,” Para [0088]);
wherein, if used, the facial point tracking module produces a facial point coordinates output (“In embodiments, a much less computationally intense model, such a facial landmark model, can be used to track movement, especially facial movement,” Para [0063]);
wherein, if used, the head orientation tracking module produces a head orientation angles output (“Gestures can also be identified, and can include a head tilt to the side,” Para [0088]);
wherein, if used, the body tracking module produces a body point coordinates output (“The flow 200 further includes tracking upper body movement 250 of the vehicle occupant, based on analysis of further additional images,” Para [0063]);
wherein, if used, the action unit intensity tracking module produces an action unit intensities output (“The action units can be used to identify smiles, frowns, and other facial indicators of expressions,” Para [0088]);
wherein a temporal behavior primitives buffer processes: the face bounding box output; the valence and arousal scores output; if used, the facial point coordinates output; if used, the head orientation angles output; if used, the body point coordinates output; if used, the gaze direction output; and, if used, the action unit intensities output, all to produce a temporal behavior output (“The cognitive state data that is analyzed can include image data, facial data, audio data, voice data, speech data, non-speech vocalizations, physiological data, and the like,” Para [0042]);
wherein the context descriptor input relating to the at least one subject produces a context descriptor output (“Respiration, heart rate, heart rate variability, perspiration, temperature, and other physiological indicators of cognitive state can be determined by analyzing the images and video data,” Para [0088]);
wherein a mental state prediction module processes the context descriptor output, the second output, and the temporal behavior output to predict a mental state of the at least one subject in the automobile interior (“The multiple mobile devices, vehicles, and locations 900 can be used separately or in combination to collect images, video data, audio data, physio data, etc., on a user 910… The data collected on the user 910 can be analyzed and viewed for a variety of purposes including… expression analysis, mental state analysis, cognitive state analysis, and so on,” Para [0104]).
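For clarity of the interpretation applied during examination, the following is a minimal illustrative sketch of the recited module arrangement and data flow. All class and function names are hypothetical, and the placeholder logic is not drawn from the specification or from any cited reference.

    # Illustrative sketch of the claimed module arrangement as interpreted
    # during examination; placeholder logic stands in for the face-analysis
    # models discussed in the cited references.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class BehaviorPrimitives:
        """One frame's worth of outputs fed to the temporal behavior buffer."""
        face_box: Tuple[int, int, int, int]        # face bounding box output
        valence_arousal: Tuple[float, float]       # valence and arousal scores output
        facial_points: List[Tuple[float, float]]   # facial point coordinates output (if used)
        head_angles: Tuple[float, float, float]    # head orientation angles output (if used)

    @dataclass
    class TemporalBehaviorBuffer:
        """Accumulates per-frame primitives to produce a temporal behavior output."""
        frames: List[BehaviorPrimitives] = field(default_factory=list)

        def push(self, primitives: BehaviorPrimitives) -> List[BehaviorPrimitives]:
            self.frames.append(primitives)
            return self.frames                      # temporal behavior output

    def predict_mental_state(context_descriptor: dict,
                             affect_output: Tuple[float, float],
                             temporal_behavior: List[BehaviorPrimitives]) -> str:
        # Placeholder decision standing in for the mental state prediction module.
        valence, arousal = affect_output
        if arousal > 0.7 and valence < 0.0:
            return "anxiety"
        return "engagement"

    buffer = TemporalBehaviorBuffer()
    frame_primitives = BehaviorPrimitives(
        face_box=(40, 60, 120, 160),
        valence_arousal=(-0.3, 0.8),
        facial_points=[(55.0, 80.0), (95.0, 82.0)],
        head_angles=(5.0, -2.0, 12.0),
    )
    temporal_output = buffer.push(frame_primitives)
    print(predict_mental_state({"location": "driver seat"}, (-0.3, 0.8), temporal_output))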
Senechal fails to teach the following limitations as further claimed. el Kaliouby, however, further teaches a social gaze tracking module (el Kaliouby, “The cognitive state data can include one or more of… gaze direction,” Para [0054]),
wherein, the face detection module produces a face bounding box output (el Kaliouby, “the detection of the first face, the second face, and multiple faces can include identifying facial landmarks, generating a bounding box, and predicting of a bounding box and landmarks for a next frame,” Para [0082]);
and wherein, if used, the social gaze tracking module produces a gaze direction output (el Kaliouby, “The cognitive state data can include one or more of… gaze direction. Various cognitive states can be inferred,” Para [0054]).
Senechal and el Kaliouby fail to teach the following limitations as further claimed. Gallagher, however, further teaches
wherein the audio input relating to the at least one subject (Senechal, “The cognitive state data that is analyzed can include image data, facial data, audio data,” Para [0042]) is processed by a valence and arousal affect states tracking module (Gallagher, “a process 300C,” Para [0072]) to produce a second output (Gallagher, Fig. 4, “stress”, “frustrated”, or other emotions in the graph) and to produce a valence and arousal scores output (Gallagher, “vector 404” plotted on graph in Fig. 4);
[Gallagher, Fig. 4 reproduced here (greyscale image, media_image1.png)]
wherein the valence and arousal affect states tracking module (Gallagher, “a process 300C,” Para [0072]) processes the temporal behavior output (Gallagher, “A large magnitude arousal state may indicate that the occupant is fearful or angry or is drowsy or distracted. At 405, an output to the occupant of the vehicle is produced in an attempt to return the occupant to a calm and alter state,” Para [0075]).
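As an illustrative aid only (hypothetical code, not taken from Gallagher or any other cited reference), an affect state of the kind plotted in Gallagher's Fig. 4 may be represented as a valence-arousal vector whose magnitude reflects the strength of the detected emotion:

    import math

    def affect_vector_magnitude(valence: float, arousal: float) -> float:
        """Length of a valence-arousal vector; a larger magnitude suggests a
        stronger affect state (e.g., fearful or angry at strongly negative
        valence and high arousal)."""
        return math.hypot(valence, arousal)

    # Example: a stressed or frustrated occupant plotted in the
    # negative-valence, high-arousal quadrant.
    print(affect_vector_magnitude(-0.6, 0.8))  # 1.0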
El Kaliouby is considered to be analogous to the claimed invention because both are in the field of determining an occupant’s mental state in a vehicle using multiple sensors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of el Kaliouby into Senechal for the benefit of improved mental state detection.
Gallagher is considered to be analogous to the claimed invention because both are in the field of determining an occupant’s mental state in a vehicle using multiple sensors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Gallagher into Senechal for the benefit of improved mental state detection.
Additionally, with regard to the limitation “the head orientation tracking module produces a head orientation angles output”: considering that Senechal teaches identification of “a head tilt to the side” (see [0088]), one of ordinary skill in the art would have found it obvious to determine, as claimed, the head orientation angles output. Specifically, tilting one’s head to the side inherently involves a change in a head orientation angle. The Supreme Court set forth in KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) an "obvious to try" rationale that encompasses choosing from a finite number of identified, predictable solutions with a reasonable expectation of success.
Specifically, the rationale to support a conclusion that the claim would have been obvious is that "a person of ordinary skill has good reason to pursue the known options within his or her technical grasp. If this leads to the anticipated success, it is likely the product not of innovation but of ordinary skill and common sense. In that instance the fact that a combination was obvious to try might show that it was obvious under § 103." KSR, 550 U.S. at 421, 82 USPQ2d at 1397.
Therefore, one of ordinary skill in the art of image analysis would have found it obvious to look to head tilt angles in order to determine whether a person’s head is tilted in a captured image. Considering head tilt angles would have been one of a finite number of predictable head tilt identification solutions, and an artisan would have had a reasonable expectation of success in doing so.
See also MPEP 2143(I)(E).
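By way of illustration only (hypothetical code, not drawn from Senechal or any other cited reference), a side-to-side head tilt observed from facial landmarks corresponds directly to a roll angle of the head:

    import math

    def head_roll_angle(left_eye: tuple, right_eye: tuple) -> float:
        """Roll (side-tilt) angle of the head, in degrees, from the line
        joining two eye landmarks; a non-zero result indicates a head
        tilt to the side."""
        dx = right_eye[0] - left_eye[0]
        dy = right_eye[1] - left_eye[1]
        return math.degrees(math.atan2(dy, dx))

    # Example: the right eye sits 10 px lower than the left eye in image
    # coordinates, i.e., the head is tilted to the side by about 9.5 degrees.
    print(head_roll_angle((100.0, 120.0), (160.0, 130.0)))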
Regarding claim 2, the rejection of claim 1 is incorporated herein. Senechal in view of el Kaliouby and Gallagher teach the system of claim 1, and Senechal further teaches wherein the mental states comprise at least one of: pain, mood, drowsiness, engagement, depression, and anxiety (“A human perception metric can include a quantification of activity, involvement, cognitive load, distractedness, drowsiness, or impairment evaluation for the occupant, demographics, mood, etc,” Para [0044]).
Regarding claim 5, the rejection of claim 1 is incorporated herein. Senechal in view of el Kaliouby and Gallagher teach the system of claim 1, and Gallagher further teaches the task activating a self-driving system in response to the mental state of the at least one subject (Gallagher, “The present system can be used in an autonomous vehicle, e.g., a levels 1-2 automobile(s), where the vehicle uses the level of distraction, a determination of distractedness, or the multiple sensor determination of a distracted driver, to be able to judge the most appropriate time to switch from manual to autonomous drive and vice-versa,” Para [0111]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Gallagher into Senechal for the benefit of improved safety for occupants in a vehicle that are unfit to drive.
Regarding claim 6, the rejection of claim 1 is incorporated herein. Senechal in view of el Kaliouby and Gallagher teach the system of claim 1, and Senechal further teaches the task activating an emergency communication system in response to the mental state of the at least one subject (“In embodiments, V2V or V2I communications can be used to alert authorities that a particular vehicle occupant is impaired, can contact emergency services for a sick vehicle occupant, and the like,” Para [0060]).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Senechal et al. (US-20210001862-A1), el Kaliouby et al. (US-20200226355-A1), and Gallagher et al. (US-20190332902-A1) as applied to claim 1 above, and further in view of Ferren et al. (US-20120157127-A1).
Regarding claim 3, the rejection of claim 1 is incorporated herein. Senechal in view of el Kaliouby and Gallagher teach the system of claim 1, but fail to teach the following limitations as further claimed. Ferren, however, further teaches wherein the task verifies which of the at least one subject is creating the audio input (“audio inputs may be processed to identify things such as but not limited to… the identity of the person speaking,” Para [0026]).
Ferren is considered to be analogous to the claimed invention because both are in the field of systems used to determine the mental state of a user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated Ferren into Senechal, el Kaliouby, and Gallagher for the benefit of a system that can be used on multiple passengers of a vehicle.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Senechal et al. (US-20210001862-A1), el Kaliouby et al. (US-20200226355-A1), and Gallagher et al. (US-20190332902-A1) as applied to claim 1 above, and further in view of Sobhany (US-20200242421-A1).
Regarding claim 4, the rejection of claim 1 is incorporated herein. Senechal in view of el Kaliouby and Gallagher teach the system of claim 1, but fail to teach the following limitations as further claimed. Sobhany, however, further teaches: a query to the at least one subject about the mental state of the at least one subject (“upon detecting an emotional state of the user, the system may provide a prompt to the user requesting confirmation of the detected emotional state. For instance, the prompt may include the question “We have detected that you are stressed, is that correct?”,” Para [0151]).
Sobhany is considered to be analogous to the claimed invention because both are in the field of detecting an emotional state of a person in a vehicle. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Sobhany into Senechal, el Kaliouby, and Gallagher for the benefit of fewer false emotional state detections.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Tao et al. (CN-113642522-A) teaches a method for detecting driver fatigue using an array of sensors.
Tamrakar et al. (US-20210129748-A1) teaches a method for monitoring facial features and body language of a driver of a vehicle to determine their drowsiness state.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL A OMETZ whose telephone number is (571)272-2535. The examiner can normally be reached 6:45am-4:00pm ET Monday-Thursday, 6:45am-1:00pm ET every other Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le, can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Rachel Anne Ometz/Examiner, Art Unit 2668 12/16/25
/VU LE/Supervisory Patent Examiner, Art Unit 2668