DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/20/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-29 stand rejected:
Claims 1 and 22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to a “chatbot”, also defined as “an artificially intelligent companion” (Preamble Cl. 1), that “proactively starts a conversation when context indicates occurrence of a condition” (Sp. ¶ 0023 S1) in a “vehicle” with a “person” in that “vehicle”. The “context” is defined very broadly in the disclosure; e.g., per Sp. ¶¶ 0013-0014 it “comprises” the “vehicle’s kinematics state”, a “future location”, “readings from sensors in the vehicle”, “biometric information”, “an emotional state of a person”, “prosodic information”, the “vehicle’s environment”, “local news”, “new movies”, “books”, and “weather”, wherein the “sensor” information according to Sp. ¶ 0048 further comprises “weight sensors in the vehicle” “such as weight sensor on seats, cabin air various in-car settings”. Moreover, according to Sp. ¶ 0045 it can detect the “person’s” “drowsiness” by detecting “physiological” “signals” “measur[ed]” by “electrical activity of the person’s brain”, thereby “identifying drowsiness” (Sp. ¶ 0045 S1). The “person” in the “vehicle” communicates with the “chatbot” by the aid of a “prompter” that delivers the “person’s” “prompt”, derived from “audio” “receiv[ed]” from a “microphone”, and the “chatbot” responds back via a “loudspeaker”.
These limitations, under their broadest reasonable interpretation, cover their performance in the mind but for the recitation of generic software (the “artificially intelligent companion”/“chatbot”), a “microphone”, and a “loudspeaker”; i.e., all of these steps can be done by a human accompanying the claim’s “person” in the “vehicle”; e.g., that human can easily detect whether the “person” is “drowsy”, and/or can guess the “vehicle’s” “location”, and/or determine the state of “emotion” of that “person”, and/or report the weather, news, information about movies, books, etc. The microphone and loudspeaker are solely being used to perform the pre-solution activity of data gathering and the post-solution activity of outputting the data. Therefore, other than reciting the “chatbot”, “microphone”, and/or “loudspeaker”, nothing in the claim elements precludes their limitations from practically being performed in the mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components and/or hardware, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
The judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of a chatbot, a microphone, and a loudspeaker, wherein the chatbot is responsible for the last three limitations of the claims and is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using generic computer components and/or software. Accordingly, this additional element, defined merely as “chatbot 42 as generative language model” (Sp. ¶ 0036 last S), does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea; that definition is merely a definition of a generic computer element with a generic computer function, tasked with three of the limitations, that imposes no meaningful limits on practicing the abstract idea. The claims are thus directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of the chatbot, used to perform all the above limitations, amounts to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are therefore not patent eligible.
Regarding claim 2, the “large language model” is simply an additional element, and excluding it will not alter the claim; the only definition in the specification is ¶ [0021]: “Among the practices of the method are those in which the chatbot implements a large-language model and those in which it implements a generative model”, which does not indicate that it is anything more than a general-purpose “large language model”.
Regarding claim 3, the human accompanying the driver could be a close friend who inherently knows the friend’s name, family, personal preferences, and other personal information, and can use them while engaging in conversation.
Regarding claim 4, the human accompanying the driver could base his conversation with the “driver” not only on what the driver had initiated but also on a conversation that preceded the “driver’s” initiation.
Regarding claim 5, it is quite normal for, e.g., the accompanying human to discuss the trip route based on the current and/or destination location (kinematic information).
Regarding claim 6, the accompanying human could speak about the destination location (a future location of their vehicle).
Regarding claim 7, the human accompanying the driver could be a close friend who inherently knows the friend’s name, family, personal preferences, and other personal information pertaining specifically to that driver, and can use it while engaging in conversation.
Regarding claim 8, the “sensors” are additional elements involved in pre-solution activity; excluding them does not alter the claim limitation; i.e., other than mentioning the word “sensor”, everything else in the claim can be done by the human accompanying the driver. All the “sensors” mentioned in the disclosure (paragraphs 0013, 0017, and 0043 (“haptic”); 0030, 0032, 0042-43, 45, 46, 47 (“cabin”); 0044 (“electrodermal-activity”, “galvanic skin response”, “sweat”); and 0048 (“weight”)) are general-purpose sensors that are well known and amount to routine use.
Regarding claim 9, the accompanying human can easily detect, e.g., the accent (a biometric parameter) of the driver and adjust his manner of speech, e.g., by speaking more slowly with the driver.
Regarding claim 10, the accompanying human can detect a change in tone (a prosodic parameter) of the driver while he was speaking in a first round and try to, e.g., provide comfort if needed.
Regarding claim 11, the accompanying human can detect a change in emotion of the driver while he was speaking in a first round and try to, e.g., provide comfort if needed.
Regarding claim 12, the accompanying human can engage in conversation regarding local news and/or current affairs and/or the weather with the driver.
Regarding claim 13, the accompanying human can engage in conversations to adjust the driver’s cognitive load should he detect drowsiness and/or a lack of response from the driver.
Regarding claim 14, the accompanying human can engage in conversation about topics that do not require any judgment, such as the weather.
Regarding claim 15, the accompanying human can engage in debates with the driver.
Regarding claim 16, the accompanying human can converse with the driver about, e.g., the weather (associated with sensor data and a topic of the conversation).
Regarding claim 17, the accompanying human can initiate a conversation in particular when he feels the driver is not attentive (due to, e.g., being sleepy).
Regarding claim 19, the “camera” is an additional element involved in capturing an image (a data-gathering activity of context information). Excluding the “camera” does not alter anything in the claim, whose steps can all be done by the accompanying human. According to Sp. ¶ 0017 last S: “Examples of cameras include those sensitive to visible radiation and those sensitive to radiation outside the visible range, such as infrared radiation, ultraviolet radiation, and ionizing radiation, such as X-rays”, which implies it is merely a general-purpose camera.
Regarding claim 20, the “haptic sensor” is an additional element involved in capturing, e.g., a human movement in the vehicle. Excluding this “haptic sensor” does not alter the impact of what is claimed, and the accompanying human could very well carry out the claim steps. According to Sp. ¶ 0043 S2: “Examples include a haptic sensor coupled to the vehicle's seat, in which case a person's frequent movement on the seat may indicate restlessness. Other examples include a haptic sensor coupled to the vehicle's steering wheel, which measures a person's grip and thus provides an indicator of stress”. These examples simply imply that the “haptic sensor” used is a general-purpose “haptic sensor”.
Regarding claim 21, the “sensor that obtains a physiological signal” is an additional element, excluding which has no impact on the functionality of the claim. According to Sp. ¶ 0044 S2: “Examples include sensors that measure electrical properties of the skin, such as the skin's conductance and/or capacitance”. This implies that the said “sensors” amount to no more than general-purpose sensors.
Regarding claim 23, the human accompanying the driver could warn the “driver” that he appears, e.g., drowsy because he is not being attentive, and offer to take over the driving.
Regarding claim 24, the human accompanying the driver could base his conversation with the “driver” not only on what the driver had initiated but also on a conversation, which involved him, that preceded the “driver’s” initiation.
Regarding claim 25, the conversation of the human accompanying the driver with the “driver” basically begins once the “driver” initiates it, and any prior conversation (a zeroth round) need not be counted.
Regarding claim 26, the human accompanying the driver could base his conversation with the “driver” not only on what the driver had initiated but also on a conversation that preceded the “driver’s” initiation but was initiated by the accompanying human.
Regarding claim 27, the “generative model” amounts to extra-solution activity.
Regarding claim 28, the human accompanying the driver could base his conversation with the “driver” not only on what the driver had initiated but also on a conversation, which involved him, that preceded the “driver’s” initiation, prompted by, e.g., some environmental event such as a thunderstorm.
Regarding claim 29, the human accompanying the driver could base his conversation with the “driver” not only on what the driver had initiated but also on a conversation, which involved him, that preceded the “driver’s” initiation, and the conversation can last as long (i.e., a length of time) as the driver continues to drive.
Claims 1-21 appear to fall within a statutory category (i.e., an “apparatus”), yet they recite nothing more than software structure; i.e., the “chatbot”, “prompter”, and “context source” are not hardware but software elements in the art; see Fig. 2 and Sp. ¶ 0006.
Thus claims 1-21 are directed to non-statutory subject matter because their scope includes software, an abstract data structure that does not fall within one of the four statutory categories, and the claims include no physical transformation (i.e., they are directed to a program or software per se). See MPEP § 2106.IV.B.1.a. Data structures not claimed as embodied in computer-readable media are descriptive material per se and are not statutory because they are not capable of causing functional change in the computer. See, e.g., Warmerdam, 33 F.3d at 1361, 31 USPQ2d at 1760 (claim to a data structure per se held nonstatutory). Such claimed data structures do not define any structural and functional interrelationships between the data structure and the computer software and hardware components which permit the data structure’s functionality to be realized, and are thus nonstatutory. Similarly, computer programs claimed as computer listings per se, i.e., the descriptions or expressions of the programs, are not physical “things.” They are neither computer components nor statutory processes, as they are not “acts” being performed. Such claimed computer programs do not define any structural and functional interrelationships between the computer program and other claimed elements of a computer which permit the computer program’s functionality to be realized. Furthermore, the recited steps do not cause any physical transformation, as they amount to simple manipulation of data.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3-26, and 28-29 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wipperfurth (US 2023/0186878).
Regarding claim 1, Wipperfurth does teach an apparatus comprising an artificially intelligent companion that is configured for engaging in a conversation with a person of a vehicle (¶ 0100 S1: “During the trip, the AI assistant” (an artificially intelligent companion) “may offer audio prompts” (engages in a conversation with) “to the driver” (with a person inside a vehicle); ¶ 0166 last S: “chat bot assistant” “is more of companion” (the AI (chatbot) is a companion chatbot)),
said artificially intelligent companion comprising
a chatbot (¶ 0165: “AI Sidekick/interactive chatbot” (the artificial intelligence companion comprising a chatbot)),
a prompter that provides a prompt to said chatbot, said prompt being derived from an audio microphone signal that is provided by a microphone that receives, from said person, a first round of said conversation (¶ 0160 S. before last: “user” (the person) “may be able to” “[interact] through a selection on one or more of the user interfaces” (via a prompter) “in the vehicle or by instructing” (prompting) “the AI chatbot” (the chatbot) “through an audio command” (using according to ¶ 0064 last S. “In-vehicle audio elements, such as a vehicle microphone” (via a microphone) “to receive user audio input” “and speakers to communicate and/or provide sound to the user”); ¶ 0169 last S: “chatbot” “may be set to simply be reactive and let the user initiate interaction” (a prompt from the “user” (person) to the chatbot to begin a first round of conversation)), and
a context source that provides context to said chatbot (¶ 0168 S1: “To implement the chatbot role, the Trip Brian may use various data sources including vehicle sensors” (context sources to the chatbot providing sensory (context) data); and/or ¶ 0169 last S: “the interactive chatbot may be set to either be more proactive and assess the validity of self-reported information or initiate appropriate questions based on sensory input” (the chatbot receives “sensory input” (context))),
wherein said chatbot uses said context to generate a second round of said conversation for delivery to said person via a loudspeaker (¶ 0169 last S: “the interactive chatbot may” “initiate appropriate questions” (chatbot to deliver to the said person) “based on sensory input” (based on the context to generate a second round of said conversation) “and let the user initiate interaction” (in response to the person initiating the first round, and this is done using “speakers to communicate and/or provide sound to the user” (loudspeaker (¶ 0064 last S)))).
Regarding claim 3, Wipperfurth does teach the apparatus of claim 1, wherein said context used by said chatbot to generate said second round includes personal information about said person (¶ 0176: “Interactive Chatbot or AI Sidekick” (the chatbot) “may be predictive and opportunistic, proactively starting conversations” (in engaging e.g., the second round) “may include utilizing personal information” (uses personal information) “and drive histories to learn preferences and interests and adjusting behavior accordingly, and yet may be ready to be used out of the box without a time-consuming set-up”).
Regarding claim 4, Wipperfurth does teach the apparatus of claim 1, wherein said chatbot is further configured to generate said second round based on context obtained from at least one of said first round and a preceding round that occurred prior to said first round (¶ 0169 last S: “the interactive Chatbot may be set to either more proactive and assess the validity of self-reported information” (a round of conversation by the chatbot) “or initiate appropriate questions based on sensory input, or may be set to simply be reactive and let the user initiate interaction” (before the first round to generate the “appropriate questions” (the second round of said conversation))).
Regarding claim 5, Wipperfurth does teach the apparatus of claim 1, wherein said context comprises information concerning said vehicle's kinematic state (¶ 0227 S1: “In implementations, a combination of GPS (start and end points) data” (information associated with a vehicle’s kinematic state obtained from “GPS” (a “sensor” (source of context information))) “calendar entry, time of day, pattern, and social dynamic in the car may be used by the system and/or ML model to determine or suggest an intent of a trip”).
Regarding claim 6, Wipperfurth does teach the apparatus of claim 1, wherein said context comprises a future location of said vehicle (¶ 0227 S1: “In implementations, a combination of GPS” (context information comprises) “(start and end points) data” (“end point” (a future location of said vehicle)) “calendar entry, time of day, pattern, and social dynamic in the car may be used by the system and/or ML model to determine or suggest an intent of a trip”).
Regarding claim 7, Wipperfurth does teach the apparatus of claim 1, wherein said context comprises information concerning said person and wherein said chatbot uses said information concerning said person to generate said second round of said conversation (¶ 0176: “Interactive Chatbot or AI Sidekick” (the chatbot) “may be predictive and opportunistic, proactively starting conversations” (in engaging e.g., the second round) “may include utilizing personal information” (uses personal information or information concerning the said person) “and drive histories to learn preferences” (other examples of information concerning said person) “and interests and adjusting behavior accordingly, and yet may be ready to be used out of the box without a time-consuming set-up”).
Regarding claim 8, Wipperfurth does teach the apparatus of claim 1, wherein said context comprises readings from sensors in said vehicle and wherein said chatbot uses said readings from said sensors when generating said second round of said conversation (¶ 0168 S1: “To implement the chatbot role, the Trip Brian may use various data sources including vehicle sensors” (context sources to the chatbot providing sensory (context) data); and/or ¶ 0169 last S: “the interactive chatbot may be set to either be more proactive and assess the validity of self-reported information or initiate appropriate questions” (for second round of conversation) “based on sensory input” (using the chatbot “sensory input” (context))).
Regarding claim 9, Wipperfurth does teach the apparatus of claim 1, wherein said context comprises biometric information concerning said person and wherein said chatbot uses said biometric information concerning said person in the course of generating said second round of said conversation (¶ 0012: “using data gathered from one of biometric sensors” (using “biometric” “sensor” (context data)) “and vehicle sensors, training a machine learning model to determine a mental state of a driver” “automatically initiating one or more interventions” (e.g., in a second round of conversations) “configured to alter the mental state of the driver”).
Regarding claim 10, Wipperfurth does teach the apparatus of claim 1, wherein said context comprises prosodic information concerning said person's speech and wherein said chatbot bases said second round of said conversation at least in part on said prosodic information (¶ 0223 1st and last sentences respectively: “In implementations biometric and vehicle sensor” (context information obtained) “information may be used by the ML model to determine or infer three emotional” (by determining prosody) “criteria” “a raised chin, a sucked lip, an inner brow raise, a lip corner depression, a lip stretch” (via a person’s speech) “and so forth, may be indicators of specific emotions”, ¶ 0182: “Companion: The Interactive Chatbot” (in a second round the chatbot) “invites a driver to channel his or her emotions” (conversation is based on the “emotion” (prosodic information)) “without judgement”).
Regarding claim 11, Wipperfurth does teach the apparatus of claim 1, wherein said context comprises information concerning an emotional state of said person and wherein said chatbot uses said information concerning said emotional state to generate said second round of said conversation (¶ 0223 1st sentence: “In implementations biometric and vehicle sensor” (context information obtained) “information may be used by the ML model to determine or infer three emotional” (by determining emotion) “criteria”, ¶ 0182: “Companion: The Interactive Chatbot” (in a second round the chatbot) “invites a driver to channel his or her emotions” (conversation is based on the “emotion” (the emotional state)) “without judgement”).
Regarding claim 12, Wipperfurth does teach the apparatus of claim 1, wherein said context comprises information concerning one or more of information concerning current affairs, information concerning the vehicle's environment, information concerning said person, local news, information concerning new movies, information concerning books, and information concerning weather (¶ 0221 lines 11+: “External environment sensors” (context information) “could determine external temperature, weather conditions” (pertaining to weather obtained) so that according to ¶ 0093 last S: “Chatbot” (in a second round) “narrate an overview of the trip to the driver synchronous with the animation, providing information that includes expected duration of trip, route, weather conditions” (the “weather” (context information) is provided to the drivers)).
Regarding claim 13, Wipperfurth does teach the apparatus of claim 1, wherein said chatbot is configured to adjust a cognitive load of said conversation in response to said context (¶ 0179 S1: “Helping manage children: The AI Sidekick” (the chatbot) “can help keep children in the car entertained, thereby reducing” (can adjust) “the cognitive load” (cognitive load) “on the driver”, wherein the said “cognitive” state is determined using, according to ¶ 0209 S1: “The industry is currently in an arms race to deliver sensor” (context data) “technology and software that can detect nuanced human emotions, complex cognitive states” (to determine the “cognitive load”)).
Regarding claim 14, Wipperfurth does teach the apparatus of claim 1, wherein said chatbot has been trained to engage in a non-judgmental conversation (¶ 0160 S. before last: “user” (the person) “may be able to” “[interact] through a selection on one or more of the user interfaces” (via a prompter) “in the vehicle or by instructing” (engages in conversation with) “the AI chatbot” (the chatbot) “through an audio command” (which is non-judgmental)).
Regarding claim 15, Wipperfurth does teach the apparatus of claim 1, wherein said chatbot has been trained to engage in a debate with said person (¶ 0289 S2: “The conversation agent” (the chatbot) “can use the gathered data to provide socially-aware conversation” (can engage conversations on social issues (e.g. debate)); ¶ 0180 “Social ice-breaker: If desired by the car inhabitants, when there is a lull in the conversation with more than one person in the vehicle, the AI Sidekick” (chatbot) “may be configured to initiate a conversation by, for example, talking about something in the news, sharing a dilemma” (engages in other examples of debating)).
Regarding claim 16, Wipperfurth does teach the apparatus of claim 1, wherein said chatbot chooses a topic of said conversation based on said context (¶ 0221 lines 11+: “External environment sensors” (context information) “could determine external temperature, weather conditions” (pertaining to weather obtained) so that according to ¶ 0093 last S: “Chatbot” (chatbot) “narrate” (converses) “an overview of the trip to the driver synchronous with the animation, providing information that includes expected duration of trip, route, weather conditions” (about “weather” (context or topic))).
Regarding claim 17, Wipperfurth does teach the apparatus of claim 1, wherein said artificially intelligent companion is configured to sense a lack of attention in said person and to initiate said conversation in response to having sensed said lack of attention (¶ 0180: “Social ice-breaker: If desired by the car inhabitants, when there is a lull in the conversation” (when a lack of attention is sensed) “with more than one person in the vehicle, the AI Sidekick may be configured to initiate” (the AI companion initiates a conversation) “a conversation by, for example, talking about something in the news, sharing a dilemma, or starting a game”).
Regarding claim 18, Wipperfurth does teach the apparatus of claim 1, wherein said artificially intelligent companion further comprises a classifier that receives information concerning said person and uses said information to provide context to said context source, wherein said information is obtained from said audio microphone signal (¶ 0115 S 5: “Places that are more than 10 minutes off-route also may not be displayed, though again this may in implementations also be changed by editing a user's preferences in a settings interface” (context source where a “user’s preferences” (information concerning a person) can be used as context and inputted by e.g., a “microphone” (microphone) since according to ¶ 0065 S3 “speakers and microphone, not shown) and biometric sensors which together comprise the vehicle user interface”).
Regarding claim 19, Wipperfurth does teach the apparatus of claim 1, wherein said artificially intelligent companion further comprises a classifier that receives information concerning said person and uses said information to provide context to said prompter for use in generating said prompt, wherein said information is obtained from a camera signal from a camera that points at said person (¶ 0082: “In implementations the occupants' state of mind can be determined via the vehicle's biometric, voice and face recognition” (a context based on a camera signal) “sensors, the usage of the climate control system (e.g., heat), infotainment selection or lack thereof, and so on. For example, a driver of the vehicle may be in a bad mood (as determined by gripping the steering wheel harder than usual and their tone of voice, use of language, or use of climate control system) and may be accelerating too quickly or driving at a high speed. The system may be configured to provide appropriate feedback” (used to generate a prompt based on the context (image)) “to the driver” (at said person) “responsive to such events”).
Regarding claim 20, Wipperfurth does teach the apparatus of claim 1, wherein said artificially intelligent companion further comprises a classifier that receives information concerning said person and uses said information to provide context to said prompter for use in generating said prompt, wherein said information is obtained from a haptic sensor that is in mechanical communication with said person (¶ 0224: “As indicated above, vehicle sensors may include pressure sensors. In implementations, seat pressure sensors” (sensors of haptic type that are in mechanical connection to e.g., the “driver” (the person)) “may measure body posture and/or may provide the following data types: body activity and direction leaning (i.e., a direction in which the traveler is leaning). Such information may be used by the system and/or ML model to determine or infer driver engagement, arousal and alertness” “The system and/or ML model may use this data to determine or infer valence, arousal, alertness” (provide context) “state of flow, the social dynamic in the car, and strength of social connection(s) amongst the passengers”; ¶ 0023: “Initiating the one or more interventions” (generating a prompt to the “driver” (person)) “to alter the mental state of the driver may include initiating one or more interventions to alter a valence level, an arousal level” (based on the context obtained by the haptic sensor) “and/or an alertness level of the driver”).
Regarding claim 21, Wipperfurth does teach the apparatus of claim 1, wherein said artificially intelligent companion further comprises a classifier that receives information concerning said person and uses said information to provide context to said prompter for use in generating said prompt, wherein said information is obtained from a sensor that obtains a physiological signal from said person (¶ 0082: “In implementations the occupants' state of mind can be determined via the vehicle's biometric” (a context based on a physiological signal) “voice and face recognition” “sensors, the usage of the climate control system (e.g., heat), infotainment selection or lack thereof, and so on. For example, a driver of the vehicle may be in a bad mood (as determined by gripping the steering wheel harder than usual and their tone of voice, use of language, or use of climate control system) and may be accelerating too quickly or driving at a high speed. The system may be configured to provide appropriate feedback” (used to generate a prompt based on the context (physiological signal)) “to the driver” (at said person) “responsive to such events”).
Regarding claim 22, Wipperfurth does teach a method comprising causing an artificially intelligent companion to engage in conversation with a person in a vehicle (¶ 0100 S1: “During the trip, the AI assistant” (an artificially intelligent companion) “may offer audio prompts” (engages in a conversation with) “to the driver” (with a person inside a vehicle); ¶ 0166 last S: “chat bot assistant” “is more of companion” (the AI (chatbot) is a companion chatbot)),
wherein causing said artificially intelligent companion to engage in said conversation comprises
receiving, via a microphone, a first round of said conversation, providing a prompt to a chatbot, said prompt having been derived at least in part from said first round (¶ 0165: “AI Sidekick/interactive chatbot” (the artificial intelligence companion comprising a chatbot); ¶ 0160 S. before last: “user” (the person) “may be able to” “[interact] through a selection on one or more of the user interfaces” (via a prompter) “in the vehicle or by instructing” (prompting) “the AI chatbot” (the chatbot) “through an audio command” (using according to ¶ 0064 last S. “In-vehicle audio elements, such as a vehicle microphone” (via a microphone) “to receive user audio input” “and speakers to communicate and/or provide sound to the user”); ¶ 0169 last S: “chatbot” “may be set to simply be reactive and let the user initiate interaction” (a prompt from the “user” (person) to the chatbot to begin a first round of conversation, which comprises the first round)), and
providing context to said chatbot (¶ 0168 S1: “To implement the chatbot role, the Trip Brian may use various data sources including vehicle sensors” (context sources to the chatbot providing sensory (context) data); and/or ¶ 0169 last S: “the interactive chatbot may be set to either be more proactive and assess the validity of self-reported information or initiate appropriate questions based on sensory input” (the chatbot receives “sensory input” (context))),
based on said prompt and on said context, generating a second round of said conversation (¶ 0169 last S: “the interactive chatbot may” “initiate appropriate questions” (chatbot to deliver to said person) “based on sensory input” (based on the context and the first round (prompt) to generate a second round of said conversation) “and let the user initiate interaction” (in response to the person initiating the first round, and this is done using “speakers to communicate and/or provide sound to the user” (loudspeaker (¶ 0064 last S)))).
Regarding claim 23, Wipperfurth does teach the method of claim 22, further comprising determining that said person in said vehicle is displaying signs of inattention and causing said artificially intelligent companion to initiate said conversation with said person (¶ 0056: “FIG. 24 representatively illustrates an environment of use of the system of FIG. 1 in which the system determines a distracted state of a driver” (determining that the person is displaying signs of inattention) “and initiates a safety alert” (and initiates, e.g., a conversation)).
Regarding claim 24, Wipperfurth does teach the method of claim 22, wherein said first round is preceded by a zeroth round of conversation that was generated by said chatbot (¶ 0169 last S: “the interactive chatbot may be set to either be more proactive and assess the validity of self-reported information” (a zeroth round of conversation by the chatbot) “or initiate appropriate questions based on sensory input, or may be set to simply be reactive and let the user initiate interaction” (before the first round)).
Regarding claim 25, Wipperfurth does teach the method of claim 22, wherein said first round initiates said conversation (¶ 0169 last S: “the interactive chatbot may be set to either be more proactive and assess the validity of self-reported information” (a zeroth round of conversation by the chatbot) “or initiate appropriate questions based on sensory input, or may be set to simply be reactive and let the user initiate interaction” (before the first round, which begins the conversation since the zeroth round is “self-reporting” (i.e., is not part of the conversation))).
Regarding claim 26, Wipperfurth does teach the method of claim 22, wherein a zeroth round that precedes said first round initiates said conversation, said zeroth round having been generated by said chatbot (¶ 0169 last S: “the interactive chatbot may be set to either be more proactive and assess the validity of self-reported information” (a zeroth round of conversation by the chatbot precedes) “or initiate appropriate questions based on sensory input, or may be set to simply be reactive and let the user initiate interaction” (the first round)).
Regarding claim 28, Wipperfurth does teach the method of claim 22, wherein a zeroth round that precedes said first round initiates said conversation, said zeroth round having been proactively generated by said chatbot upon occurrence of a condition (¶ 0169 last S: “the interactive chatbot” (chatbot) “may be set to either be more proactive” (proactively) “and assess the validity of self-reported information” (sets a zeroth round of conversation based on a “validity” “assess[ment]” (condition)) “or initiate appropriate questions based on sensory input, or may be set to simply be reactive and let the user initiate interaction” (before the first round)).
Regarding claim 29, Wipperfurth does teach the method of claim 22, wherein a zeroth round that precedes said first round initiates said conversation, said zeroth round having been proactively generated by said chatbot based upon an expected length of a journey to be undertaken by said person (¶ 0169 last S: “the interactive chatbot” (chatbot) “may be set to either be more proactive” (proactively) “and assess the validity of self-reported information” (sets a zeroth round of conversation) “or initiate appropriate questions based on sensory input, or may be set to simply be reactive and let the user initiate interaction” (before the first round; these events occur while the person is in the vehicle on a journey bound by an expected length, as long as the “driver” (the person) is driving)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Wipperfurth in view of Chen (US 2025/0036955).
Regarding claim 2, Wipperfurth does not specifically disclose the apparatus of claim 1, wherein said chatbot implements a large-language model.
Chen et al. do teach the apparatus of claim 1, wherein said chatbot implements a large-language model ([0039]: “In actual implementation, the artificial intelligence platform 10 can use a chatbot” (chatbot) “of the large language model” (uses a large language model)).
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “large language model” aspect of the “chatbot” of Chen et al. into the “chatbot” of Wipperfurth. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately and would further enable Wipperfurth “to achieve the technical effect of assisting in building a machine learning model without programming,” as disclosed in Chen et al. at ¶ 0055, last sentence.
Regarding claim 27, Wipperfurth does not specifically disclose the method of claim 22, wherein said chatbot implements a generative model.
Chen et al. do teach the method of claim 22, wherein said chatbot implements a generative model ([0039]: “In actual implementation, the artificial intelligence platform 10 can use a chatbot” (chatbot) “of the large language model, and the large language model can be a generative” (uses a generative model) “pre-trained transformer (GPT) model”).
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the “large language model” aspect of the “chatbot” of Chen et al. into the “chatbot” of Wipperfurth. Doing so would enable the combined systems and their associated methods to perform in combination as they do separately and would further enable Wipperfurth “to achieve the technical effect of assisting in building a machine learning model without programming,” as disclosed in Chen et al. at ¶ 0055, last sentence.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARZAD KAZEMINEZHAD whose telephone number is (571) 270-5860. The examiner can normally be reached 10:30 am to 11:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D. Shah can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Farzad Kazeminezhad/
Art Unit 2653
March 7, 2026.