DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending and examined below. This action is in response to the claims filed 2/9/26.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/9/26 has been entered.
Response to Amendment
Applicant’s arguments, see Section I of Applicant’s Remarks filed 2/9/26, regarding the 35 USC § 103 rejections are persuasive in view of the amendments filed 2/9/26.
However, upon further consideration, new grounds of rejection are made in view of further citations to the art of record below.
Claim Objections
Claim 20 is objected to because of the following informalities:
Claim 20 is currently amended as follows:
[media_image1.png: greyscale image reproducing the amended text of claim 20]
It appears that the amendment was intended to read “the virtual mode is one of an AR mode and a VR mode”; however, “AR” was removed by the amendments of 2/9/26. This removal appears to be accidental, and the claim will be interpreted as not having removed “AR” unless otherwise explicitly noted.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Vijaya Kumar et al. (US 2017/0355377), herein “Kumar”, in view of Harvey (US 2025/0086399).
Regarding claims 1, 10, and 12, Kumar discloses a system for detecting and responding to physiological characteristics of a driver including a prediction system/non-transitory computer-readable medium/method comprising (Abstract and ¶136):
a memory storing instructions that, when executed by a processor (¶55), cause the processor to:
acquire multi-modal data about a vehicle occupant within a virtual mode of an automated vehicle, and the multi-modal data includes a description of an environment and a location (¶22, ¶44 and ¶68 – virtual processing environment corresponding to the recited virtual mode of an autonomous vehicle corresponding to the recited automated vehicle which processes sensor data including driver characteristics corresponding to the recited multi-modal data about a vehicle occupant as well as vehicle position/pose and the environment about the vehicle corresponding to the recited a description of an environment and a location);
estimate a physiological state and an emotional state associated with the vehicle occupant and match the physiological state and the emotional state with preference data using a learning model (¶17-21 – determining driver state corresponding to the recited physiological state and emotional state associated with the vehicle occupant which is matched with prior activity data using learning functions corresponding to the recited matching the states with preference data using a learning model); and
adapt a vehicle surrounding and a travel plan using a pre-trained generative model to decrease a negative emotion parameter associated with the physiological state and the emotional state within the virtual mode, and control a maneuver of the automated vehicle with the travel plan, the virtual mode is associated with a historical state of the vehicle occupant and displayed (¶44, ¶143-144, and ¶230-237 - navigation maneuver timing and automated driving corresponding to the recited maneuver control can be modified based on driver profile information including present driver state information, historic driver information, and learned information corresponding to the recited historical state of the vehicle occupant, as processed within the virtual processing environment corresponding to the recited virtual mode, utilizing pre-generated learning data corresponding to the recited pre-trained generative model in order to decrease low consumer morale corresponding to the recited decrease a negative emotion parameter associated with the physiological state, where this information may be provided to the driver via notifications corresponding to the recited displayed virtual mode vehicle surrounding/travel plan information); and
receive from a server the model that is retrained according to the vehicle surrounding (¶21-22 and ¶201-213 – driver profile model is stored on a remote server which is updated at the server utilizing driver and other contextual data corresponding to the recited retrained model according to the vehicle surrounding).
While Kumar does disclose utilizing AI learning models for generating determinations and responses, it does not explicitly disclose the AI is a neural network model or the use of AR/VR.
However, Harvey discloses a smart vehicle assistant including a neural network model (¶251 – the learning model may be based upon convolutional neural networks, fully-connected neural networks, or other types of neural networks), and
the virtual mode is one of an augmented reality (AR) mode and a virtual reality (VR) mode (¶195 – driving assistance may be provided on AR glasses or VR headsets).
The combination of the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the machine learning model based displays of Harvey fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the machine learning model based displays of Harvey in order to facilitate providing driving assistance and/or driving instructions to drivers and/or vehicles (Harvey - ¶195).
Regarding claims 2, 11, and 13, Kumar further discloses wherein the instructions to estimate the physiological state and the emotional state further include instructions to (¶17-21 – determining driver state corresponding to the recited physiological state and emotional state associated with the vehicle occupant):
receive information from a galvanic-response sensor and an image from a camera associated with the vehicle occupant, wherein the information includes pulse data and temperature data (¶80 and ¶228 – information is collected during the drive corresponding to the recited continuously receive information including Galvanic Skin Response data, camera data as well as temperature and pulse data);
derive facial features of the vehicle occupant from the image using the learning model; and predict arousal and sentiment associated with the vehicle occupant by correlating variability of the information with the facial features (¶21 and ¶93-95 – driver features including facial features are determined utilizing learning functions corresponding to the recited learning model derived facial features from the image, which is used to determine drowsiness, fatigue, anxiety, or other impairing condition corresponding to the recited predict arousal and sentiment associated with the vehicle occupant).
While Kumar does disclose utilizing AI learning models for generating determinations and responses, it does not explicitly disclose the AI is a neural network model.
However, Harvey discloses a smart vehicle assistant including a neural network model (¶251 – the learning model may be based upon convolutional neural networks, fully-connected neural networks, or other types of neural networks).
The combination of the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the machine learning model details of Harvey fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the machine learning model details of Harvey in order to facilitate providing driving assistance and/or driving instructions to drivers and/or vehicles (Harvey - ¶195).
Regarding claims 3 and 14, Kumar further discloses wherein the instructions to derive the facial features further include instructions to (¶21 and ¶93-95 – determining driver features including facial features):
adjust hyperparameters of the learning model according to the facial features that reduce stress data outputted by the galvanic-response sensor and increase a happiness factor of the vehicle occupant, and according to the location associated with a past destination of the vehicle occupant (¶147-153, ¶162-167, and ¶179-187 – the learning database is updated utilizing present driver characteristics corresponding to the recited adjust hyperparameters of the learning model and driver preferences corresponding to the recited happiness factor of the vehicle occupant, in order to update more personalized/accurate relationships between conditions and driver performance, such as more-effective provision of alerts or notifications to the driver and more-effective changes to vehicle dynamic operations, corresponding to the recited association of driver states including anxiety with facial features and galvanic response data; the associated provisions to reduce negative driver states, such as the driver pulling over and resting as recommended under present conditions based on driver performance data gathered from previous trips, correspond to the recited location associated with a past destination of the vehicle occupant).
While Kumar does disclose utilizing AI learning models for generating determinations and responses, it does not explicitly disclose the AI is a neural network model.
However, Harvey discloses a smart vehicle assistant including a neural network model (¶251 – the learning model may be based upon convolutional neural networks, fully-connected neural networks, or other types of neural networks).
The combination of the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the machine learning model details of Harvey fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the machine learning model details of Harvey in order to facilitate providing driving assistance and/or driving instructions to drivers and/or vehicles (Harvey - ¶195).
Regarding claims 4 and 15, Kumar further discloses predict by the learning model a travel stop that will reduce stress data outputted by the galvanic-response sensor; and add the travel stop to the travel plan and update the vehicle surrounding for the travel stop (¶182 and ¶200 – learning particular characteristics, behaviors, tendencies, and/or other qualities related to the driver, such as pulling over and resting as recommended under present conditions, corresponding to the recited predict by the learning model a travel stop that will reduce stress data outputted by the galvanic-response sensor, where the vehicle can take over and pull over to a rest area corresponding to the recited add the travel stop to the travel plan and update the vehicle surrounding for the travel stop).
While Kumar does disclose utilizing AI learning models for generating determinations and responses, it does not explicitly disclose the AI is a neural network model or that the generated audiovisual content is displayed on a window display.
However, Harvey discloses generating audiovisual content on a window display of the automated vehicle using the pre-trained generative model, wherein the audiovisual content is associated with the vehicle surrounding and the travel stop, and the pre-trained generative model is a generative pre-trained transformer (GPT) model that is independent of the neural network model (¶142, ¶196, and ¶251 – the learning model may be based upon convolutional neural networks, fully-connected neural networks, or other types of neural networks to generate driving assistance for a chatbot or voice bot corresponding to the recited audiovisual content, which may be displayed on a surface/window display used to provide the driving assistance, directions, instructions, and/or indicators via audible or verbal assistance, directions, or instructions, where the neural network is used for determining vehicle maneuvers and the chatbot/audiovisual content is generated utilizing a GPT model or other generative AI corresponding to the recited generative pre-trained transformer (GPT) model that is independent of the neural network model).
The combination of the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the machine learning model based display of Harvey fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the machine learning model based display of Harvey in order to facilitate providing driving assistance and/or driving instructions to drivers and/or vehicles (Harvey - ¶195).
Regarding claims 5 and 16, Kumar further discloses wherein the instructions to adapt the vehicle surrounding and the travel plan further include instructions to: recreate a dream state for the travel plan that reduces negative parameters associated with the physiological state and the emotional state (¶182 and ¶200 – learning particular characteristics, behaviors, tendencies, and/or other qualities related to the driver, such as pulling over and resting as recommended under present conditions, where pulling over and resting corresponds to the recited dream state that reduces negative parameters associated with the driver state; “dream state” is being interpreted under the broadest reasonable interpretation (BRI) as a resting state); and
generate audiovisual content on a window display of the automated vehicle using the pre-trained generative model (¶39, ¶144, and ¶147 – alerts include tone, volume, and features of visual alerts corresponding to the recited audiovisual content, provided on a rear-view mirror screen corresponding to the recited window display, utilizing pre-generated learning data corresponding to the recited pre-trained generative model).
While Kumar does disclose utilizing AI learning models for generating determinations and responses, it does not explicitly disclose the AI is a GPT model.
However, Harvey further discloses wherein the pre-trained generative model is a generative pre-trained transformer (GPT) model that is independent of the neural network model (¶142, ¶195-196, and ¶251 – GPT based bots for processing sensor data to output assistance to a driver or autonomous vehicle where the neural network is used for determining vehicle maneuvers and the chatbot/audiovisual content is generated utilizing a GPT model or other generative AI corresponding to the recited a generative pre-trained transformer (GPT) model that is independent of the neural network model).
The combination of the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the GPT based bot of Harvey fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the GPT based bot of Harvey in order to facilitate providing driving assistance and/or driving instructions to drivers and/or vehicles (Harvey - ¶195).
Regarding claims 6 and 17, Kumar further discloses wherein the instructions to adapt the vehicle surrounding and the travel plan further include instructions to (¶44 and ¶233 - navigation maneuver timing and automated driving can be modified based on present driver state information):
produce interactive commentary and interactive narration about the physiological state and the emotional state using the pre-trained generative model (¶143 and ¶219-220 - dialogue with the user, such as oral or other verbal interactions corresponding to the recited interactive commentary and interactive narration that raise driver alertness corresponding to the recited about the physiological state and the emotional state, utilizing pre-generated learning data corresponding to the recited pre-trained generative model).
Regarding claims 7 and 18, Kumar further discloses wherein the instructions to estimate the physiological state and the emotional state further include instructions to (¶17-21 – determining driver state corresponding to the recited estimate physiological state and emotional state):
prevent dangerous maneuvers during the travel plan that reduce a safety parameter by removing negative states from the physiological state and the emotional state (¶152 and ¶182 - changes to vehicle dynamic operations based on data showing how the driver reacted to stimuli corresponding to the recited preventing maneuvers that reduce a safety parameter by removing negative states from the driver's state. The changes in vehicle dynamic operations utilizing prior data which indicates more effective driver response disclose the prevention of maneuvers that reduce a safety parameter, therefore removing negative states from the driver's state).
Regarding claims 8 and 19, Kumar further discloses wherein the physiological state and the emotional state include responses that are one of eye movement, gaze estimates, galvanic skin inputs, conversational responses, tone, and audible sentiment (¶162-166 – driver characteristics from which driver state is derived, corresponding to the recited physiological state and emotional state, include monitoring head or eye movement corresponding to the recited eye movement and gaze estimates, Galvanic Skin Response corresponding to the recited galvanic skin inputs, and statements or utterances of the driver corresponding to the recited conversational responses, tone, and audible sentiment. The claim element “that are one of” only requires one of the listed responses to be present to disclose the claim as written).
Regarding claims 9 and 20, Kumar further discloses the preference data includes historical selections by the vehicle occupant (¶17-21 – prior activity data corresponding to the recited preference data);
While Kumar does disclose virtual processing, it does not explicitly disclose that this is in AR or VR. However, Harvey further discloses the virtual mode is one of an AR mode and a VR mode (¶195 – driving assistance may be provided on AR glasses or VR headsets); and
the neural network model is a data-driven model and the pre-trained generative model is a large language model; and the neural network model is separate from the pre-trained generative model (¶41, ¶67-69, and ¶251 – the machine learning model may be based upon convolutional neural networks, fully-connected neural networks, or other types of neural networks, which is a pre-trained model, and the chatbot corresponding to the recited pre-trained generative model may be based upon a generative pre-trained transformer (GPT) model, similar to other GPT-based models such as ChatGPT®; such a GPT-based chatbot can be based upon a large language model, where the neural network model is separate from the chatbot corresponding to the recited generative model).
The combination of the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the VR/AR processing and detailed machine learning models of Harvey fully discloses the elements as claimed.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the system for detecting and responding to physiological characteristics of a driver utilizing artificial intelligence models of Kumar with the VR/AR processing and detailed machine learning models of Harvey in order to facilitate providing driving assistance and/or driving instructions to drivers (Harvey - ¶195).
Additional References Cited
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Cuddihy et al. (US 2011/0224875) discloses a vehicular based driver biometric monitoring system which modifies vehicle control parameters based on the driver’s biometric state including reducing stress and improving happiness (¶22).
Zijderveld et al. (US 2018/0143635) discloses a vehicle control system which monitors and improves a cognitive state profile for the occupant of a vehicle including cognitive states for the occupant such as sadness, stress, happiness, mirth, etc. (¶40).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Matthew J Reda whose telephone number is (408)918-7573. The examiner can normally be reached on Monday - Friday 7-4 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hunter Lonsberry can be reached on (571) 272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW J. REDA/Primary Examiner, Art Unit 3665