Prosecution Insights
Last updated: April 19, 2026
Application No. 18/988,107

Method and System to Personalize User Experience in a Vehicle

Non-Final OA (§101, §102, §103)
Filed: Dec 19, 2024
Examiner: KINGSLAND, KYLE J
Art Unit: 3663
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mercedes-Benz Group AG
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 77%, above average (164 granted / 212 resolved; +25.4% vs TC avg)
Interview Lift: +6.5% on resolved cases with interview (moderate lift)
Typical Timeline: 2y 10m avg prosecution; 38 applications currently pending
Career History: 250 total applications across all art units

Statute-Specific Performance

§101: 7.5% (-32.5% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 24.5% (-15.5% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 212 resolved cases
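The headline figures above are simple ratios over the examiner's 212 resolved cases. As a minimal sketch (the `allow_rate` helper is hypothetical, not part of any analytics tool), the displayed numbers reconcile as follows:

```python
# Sketch: deriving the examiner statistics shown above from the raw counts.
# 164 granted / 212 resolved, +25.4% vs TC avg, and +6.5% interview lift
# are the dashboard's inputs; the helper below is purely illustrative.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(164, 212)   # ~77.4%, displayed as 77%
tc_avg = career - 25.4          # implied Tech Center average, ~52.0%
with_interview = career + 6.5   # ~83.9%, displayed as 84%

print(f"career {career:.1f}% | TC avg {tc_avg:.1f}% | w/ interview {with_interview:.1f}%")
```

Note that the rounded display values (77% and 84%) differ by 7 points even though the underlying lift is 6.5%; that is a rounding artifact, not an inconsistency in the data.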

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

This Office Action is in response to the Application filed on December 19, 2024. Claims 1-20 are presently pending and are presented for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on December 19, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis - Step 1

Claims 1-13 and 20 recite a system/apparatus, and therefore fall within at least one of the four statutory categories. Claims 14-19 recite a method/process, and therefore also fall within at least one of the four statutory categories.

101 Analysis - Step 2A, Prong 1

Regarding Prong 1 of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claim 1 includes limitations that recite mathematical concepts and/or mental processes (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection.
Claim 1 recites: A vehicle computing system for controlling functionality of a vehicle comprising: a control circuit configured to: receive a user prompt from a user associated with a vehicle, the user prompt indicative of a statement or a question; access sensor data associated with a surrounding environment of the vehicle, the sensor data comprising at least one of (i) an image of the user, (ii) weather data, (iii) a location of the vehicle, or (iv) a timestamp captured by one or more vehicle sensors; generate, using a context engine configured to determine context data associated with at least one of the user or the vehicle, a modified user prompt based on the user prompt and the sensor data, wherein the modified user prompt supplements the user prompt with the context data, the context data providing one or more conditions associated with the user prompt; and generate, based on the modified user prompt, a user response, wherein the user response implements an action corresponding to the statement or the question.

These limitations, as drafted, recite a system that, under its broadest reasonable interpretation, covers performance of the limitations as a mental process; nothing in the claim elements precludes the steps from practically being performed in the mind. For example, "generate… a modified user prompt…" and "determine context data…" encompass mental processes, as a human can perform these limitations using observations, evaluations, judgments, and/or opinions. "Generate… a modified user prompt…" involves a human evaluating and/or making a judgment concerning a modified user prompt in consideration of the user prompt, the sensor data, and the context data, while "determine context data…" involves a human making a judgment and/or evaluation to determine the context data of the user or the vehicle. Thus, the claim recites at least a mental process.
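As a reading aid only (not part of the record, and not the applicant's disclosed implementation), the flow recited in claim 1 can be pictured as a prompt-augmentation pipeline. All names below (`SensorData`, `ContextEngine`, `build_modified_prompt`) are hypothetical illustrations:

```python
# Illustrative sketch of the four recited steps of claim 1: receive a prompt,
# access sensor data, supplement the prompt with context data, and respond.
# All names here are invented for illustration; none appear in the application.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorData:
    user_image: Optional[bytes] = None  # (i) an image of the user
    weather: Optional[str] = None       # (ii) weather data
    location: Optional[str] = None      # (iii) a location of the vehicle
    timestamp: Optional[str] = None     # (iv) a timestamp from vehicle sensors

class ContextEngine:
    """Determines context data associated with the user and/or the vehicle."""
    def determine_context(self, prompt: str, sensors: SensorData) -> dict:
        context = {}
        if sensors.weather is not None:
            context["weather"] = sensors.weather
        if sensors.location is not None:
            context["location"] = sensors.location
        return context

def build_modified_prompt(prompt: str, sensors: SensorData,
                          engine: ContextEngine) -> str:
    """Supplement the user prompt with context data (the 'modified user prompt')."""
    context = engine.determine_context(prompt, sensors)
    conditions = "; ".join(f"{k}={v}" for k, v in sorted(context.items()))
    return f"{prompt} [context: {conditions}]" if conditions else prompt

sensors = SensorData(weather="rain", location="downtown")
print(build_modified_prompt("Should I take the highway?", sensors, ContextEngine()))
# prints: Should I take the highway? [context: location=downtown; weather=rain]
```

The examiner's mental-process characterization turns on exactly this point: each step above (observe conditions, append them to the question, answer) is something a human could in principle perform with pen and paper, which is why the rejection focuses on the additional elements rather than the supplementation logic itself.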
101 Analysis - Step 2A, Prong 2

Regarding Prong 2 of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether each claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements that merely use a computer to implement an abstract idea, add insignificant extra-solution activity, or generally link use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application."

In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"): A vehicle computing system for controlling functionality of a vehicle comprising: a control circuit configured to: receive a user prompt from a user associated with a vehicle, the user prompt indicative of a statement or a question; access sensor data associated with a surrounding environment of the vehicle, the sensor data comprising at least one of (i) an image of the user, (ii) weather data, (iii) a location of the vehicle, or (iv) a timestamp captured by one or more vehicle sensors; generate, using a context engine configured to determine context data associated with at least one of the user or the vehicle, a modified user prompt based on the user prompt and the sensor data, wherein the modified user prompt supplements the user prompt with the context data, the context data providing one or more conditions associated with the user prompt; and generate, based on the modified user prompt, a user response, wherein the user response implements an action
corresponding to the statement or the question. For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application.

Regarding the additional limitation of "a vehicle computing system for controlling functionality of a vehicle comprising a control circuit," the examiner submits that this limitation characterizes the claim as being associated with a vehicle having a computer system and a control circuit, which merely amounts to indicating a field of use or technological environment in which to apply a judicial exception and cannot integrate the judicial exception into a practical application or amount to significantly more than the exception itself (see MPEP 2106.05(h)).

Additionally, the claim limitations "receive a user prompt…", "access sensor data…", and "generate, based on the modified user prompt, a user response…" do not amount to an inventive concept, since they are insignificant extra-solution activity, being merely forms of data collection and outputting (MPEP § 2106.05(g)). It is noted that the "action" in response to a user prompt can merely be a form of data outputting, such as an output to a display or a voice response, which is insignificant extra-solution activity. It is noted that the "context engine" is merely a generic computing component used to implement the abstract idea. The examiner submits that these limitations are mere data collection and outputting components used to apply the above-noted abstract idea within an indicated field of use (MPEP § 2106.05). Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually.
For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular process for safety performance evaluation, implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis - Step 2B

Regarding Step 2B in the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "receive a user prompt…", "access sensor data…", and "generate, based on the modified user prompt, a user response…" amount to extra-solution data gathering and outputting.
Additionally, the specification demonstrates the well-understood, routine, and conventional nature of the additional elements, as it describes them as well-understood, routine, or conventional (or an equivalent term), as commercially available products, or in a manner indicating that the additional elements are sufficiently well known that the specification need not describe their particulars to satisfy 35 U.S.C. § 112(a). With respect to the displaying function, the Federal Circuit in Trading Techs. Int'l v. IBG LLC, 921 F.3d 1084, 1093 (Fed. Cir. 2019), and Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 1331 (Fed. Cir. 2017), indicated that the mere displaying of data is a well-understood, routine, and conventional function. With respect to "receive a user prompt…", "access sensor data…", and "generate, based on the modified user prompt, a user response…", it was ruled in Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362, and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015), both cited in MPEP 2106.05(d)(II), that mere data collection, or receiving/obtaining and transmitting data over a network, is a well-understood, routine, and conventional function when claimed in a merely generic manner, as it is here. Additionally, "a context engine", "a vehicle computing system", and "a control circuit" are each generic computing components that merely apply the judicial exception (see MPEP 2106.05(f)). Additionally, "a vehicle computing system for controlling functionality of a vehicle comprising a control circuit" merely links the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). Claims 14 and 20 recite limitations analogous to those of claim 1, and are therefore rejected on the same basis.
Dependent claims 2-13 and 15-19 specify limitations that elaborate on the abstract idea of claims 1 and 14, and thus are likewise directed to an abstract idea; nor do these claims recite additional limitations that integrate the abstract idea into a practical application or amount to "significantly more," for similar reasons.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 8-16, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Harvey (US 20250104708).

In regards to claim 1, Harvey discloses a vehicle computing system for controlling functionality of a vehicle (“A vehicle chatbot of a smart vehicle assistant engages in a conversation with a user associated with a vehicle, for instance to provide insurance information and range extension tips associated with vehicle operations. The vehicle chatbot may also engage in a conversation with an external entity in the event of a collision, to provide information to the external entity on behalf of vehicle occupants.
The smart vehicle assistant may also cause the vehicle to autonomously drive to a location following a collision. A responder dispatched to respond to the collision may use a smart responder assistant that includes a responder chatbot. The responder chatbot may engage in a conversation with the responder to obtain information identifying the vehicle and/or damage to the vehicle, and may provide the responder with information about the vehicle and recommendations regarding how to extract occupants of the vehicle.” (Abstract)) comprising: a control circuit (“In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) to perform certain operations). A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. 
It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.” (Para 0201)) configured to: receive a user prompt from a user associated with a vehicle, the user prompt indicative of a statement or a question (“A chatbot, such as the vehicle chatbot 120 or the responder chatbot 122, may be based upon a generative Artificial Intelligence (AI) system that may generate natural language text and/or audio responses to input data, such that a user may converse with the chatbot naturally by asking free-form questions or making other natural language statements, and receiving corresponding natural language responses generated by the chatbot instead of, or in addition to, prewritten responses or predetermined information. The chatbot may generate natural language output that expresses information conversationally via serious and/or humorous statements, that responds to statements and/or questions input by the user most recently and/or earlier during a conversation, that poses questions to the user, and/or that otherwise converses with the user.” (Para 0075), “In some examples, the smart vehicle assistant 102 may be executed at least in part via one or more computing systems that are integrated into the vehicle 106. For example, the vehicle 106 may have one or more on-board processors that may execute one or more elements of the smart vehicle assistant 102. 
In these examples, a user inside the vehicle 106, such as a driver or other occupant, may use the smart vehicle assistant 102 via a dashboard display of the vehicle 106, integrated speakers and/or microphones of the vehicle 106, and/or other elements of the vehicle 106.” (Para 0058)); access sensor data associated with a surrounding environment of the vehicle, the sensor data comprising at least of (i) an image of the user, (ii) weather data, (iii) a location of the vehicle, or (iv) a timestamp captured by one or more vehicle sensors (“The vehicle 106 may have one or more sensors 110 that are configured to capture corresponding types of sensor data, user input, or other input data. The sensors 110 may include accelerometers and/or other motion sensors, Global Positioning System (GPS) sensors and/or other location sensors, sensors associated with a transmission and/or braking system of the vehicle 106, cameras and/or other image-based sensors, Light Detection and Ranging (LiDAR) sensors, microphones, proximity sensors, weight sensors, seatbelt sensors, seat pressure sensors, payload sensors, and/or other types of sensors. Sensor data, user input, and/or other input data captured by the sensors 110 may be provided to an on-board computing system of the vehicle 106, for instance such that the on-board computing system may perform autonomous or semi-autonomous operations based upon received sensor data. 
In some examples, as described herein, sensor data, user input, and/or other input data captured by the sensors 110 may also, or alternately, be provided to the smart vehicle assistant 102, such that the smart vehicle assistant 102 may operate based upon the sensor data, user input, and/or other input data.” (Para 0056), “The additional data 146 may include one or more other types of information, such as weather data, traffic data, map data, image and/or audio data associated with collisions of vehicles, image and/or audio data associated with occupants of vehicles before, during, and/or after collisions, steering and driving data, and/or other types of data. The additional data 146 may be maintained in one or more databases or other data repositories, for instance in one or more databases maintained by an insurance company, an operator of the model training system 134 and/or a provider of the smart vehicle assistant 102 and/or the smart responder assistant 104.” (Para 0093), “The range data 142 may include information about how far vehicles powered by batteries are able to travel based upon State of Charge (SoC) levels of the batteries and/or other factors. For example, the range data 142 may include historical data indicating how SoC levels of vehicle batteries change over time, and/or how far vehicles have been able to travel based upon power from such vehicle batteries, in association with travel speeds, travel routes, traffic patterns along the travel routes, capabilities of vehicles, and/or other factors. The range data 142 may also include example scripts for communicating tips regarding extending travel ranges and/or battery SoC levels to users of the vehicle chatbot 120.
The range data 142 may be maintained in one or more databases or other data repositories, for instance in one or more databases maintained by manufacturers of vehicles and/or batteries, an operator of the model training system 134, and/or a provider of the smart vehicle assistant 102.” (Para 0091)); generate, using a context engine configured to determine context data associated with at least one of the user or the vehicle, a modified user prompt based on the user prompt and the sensor data, wherein the modified user prompt supplements the user prompt with the context data, the context data providing one or more conditions associated with the user prompt (“For example, as described further below, the smart vehicle assistant 102 may provide output associated with the vehicle 106 to occupants of the vehicle 106 and/or to other entities. Such output may include insurance information, battery range information, collision response information, and/or other types of information associated with the vehicle 106. The smart vehicle assistant 102 may also, or alternately, cause the vehicle 106 to perform actions autonomously in certain situations. For example, if the vehicle 106 is involved in a collision, and the smart vehicle assistant 102 determines that occupants of the vehicle 106 are unresponsive following the collision, the smart vehicle assistant 102 may cause the vehicle 106 to autonomously drive to a hospital or other destination.” (Para 0052), “The vehicle chatbot 120 may also, or alternately, provide the user with tips on how to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106, for instance by suggesting an alternate travel route, by suggesting that the vehicle 106 travel at reduced speeds, and/or by suggesting other adjustments to operations of the vehicle 106 that, based upon output of the range predictor 124, is expected to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106. 
The vehicle chatbot 120 may provide such tips to a user proactively during a conversation, and/or in response to user questions or statements during a conversation indicating that the user may be interested in extending the range of the vehicle 106 and/or preserving battery life of the vehicle 106.” (Para 0103), see also Para 0078, 0102, and 0105); and generate, based on the modified user prompt, a user response, wherein the user response implements an action corresponding to the statement or the question (“The vehicle chatbot 120 may also, or alternately, provide the user with tips on how to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106, for instance by suggesting an alternate travel route, by suggesting that the vehicle 106 travel at reduced speeds, and/or by suggesting other adjustments to operations of the vehicle 106 that, based upon output of the range predictor 124, is expected to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106. The vehicle chatbot 120 may provide such tips to a user proactively during a conversation, and/or in response to user questions or statements during a conversation indicating that the user may be interested in extending the range of the vehicle 106 and/or preserving battery life of the vehicle 106.” (Para 0103), “For example, as described further below, the smart vehicle assistant 102 may provide output associated with the vehicle 106 to occupants of the vehicle 106 and/or to other entities. Such output may include insurance information, battery range information, collision response information, and/or other types of information associated with the vehicle 106. The smart vehicle assistant 102 may also, or alternately, cause the vehicle 106 to perform actions autonomously in certain situations. 
For example, if the vehicle 106 is involved in a collision, and the smart vehicle assistant 102 determines that occupants of the vehicle 106 are unresponsive following the collision, the smart vehicle assistant 102 may cause the vehicle 106 to autonomously drive to a hospital or other destination.” (Para 0052), “In other examples, a user interface 118 may include a non-visual interface, such as an audio-based interface. Accordingly, the smart vehicle assistant 102 and/or the smart responder assistant 104 may present or convey information to users via audio with or without also displaying the information visually via a screen. As an example, the smart vehicle assistant 102 may be an audio-based system that may receive user input as audio voice input captured by a microphone of the vehicle 106 or a user device, and that may audibly present corresponding output voice data via speakers of the vehicle 106 or the user device. Accordingly, in these examples, a user of the smart vehicle assistant 102 may have a voice-based audio conversation with the smart vehicle assistant 102 instead of, or in addition to, interacting with the smart vehicle assistant 102 via a screen or other visual interface. Similarly, a user of the smart responder assistant 104 may have a voice-based audio conversation with the smart responder assistant 104 instead of, or in addition to, interacting with the smart responder assistant 104 via a screen or other visual interface.” (Para 0068), see also Para 0051).

In regards to claim 2, Harvey discloses the vehicle computing system of claim 1, wherein the context engine is configured to: analyze the user prompt from the user (“In some examples, the model training system 134 may be at least partially separate from the smart vehicle assistant 102 and/or the smart responder assistant 104, and may execute to train and/or re-train instances of the vehicle chatbot 120 and/or the responder chatbot 122.
A trained instance of the vehicle chatbot 120 may accordingly be deployed in the smart vehicle assistant 102, and a trained instance of the responder chatbot 122 may accordingly be deployed in the smart responder assistant 104. The model training system 134 may train a chatbot, such as the vehicle chatbot 120 or the responder chatbot 122, to generate conversational statements and/or other output during a conversation proactively and/or in response to user questions or statements. The chatbot may generate such statements or other output based upon information that was in a training dataset at the time the chatbot was trained and/or based upon other information that may be accessed by the chatbot.” (Para 0080), “In examples in which user input is audio-based voice data, a chatbot and/or other elements of the smart vehicle assistant 102 or the smart responder assistant 104 may use voice-to-text systems, Natural Language Processing (NLP), and/or other types of audio processing to interpret the audio-based voice data provided by the user. In other examples in which user input is text-based, a chatbot and/or other elements of the smart vehicle assistant 102 or the smart responder assistant 104 may similarly use NLP and/or other types of text processing systems to interpret text provided by a user.” (Para 0076)); based on the analysis of the user prompt, access user preference data associated with the user, the user preference data associated with the one or more conditions (“Accordingly, a user may inquire about a current travel range of the vehicle 106 via the vehicle chatbot 120, and the vehicle chatbot 120 may present information about the current travel range that is generated by the range predictor 124. 
In some examples, the vehicle chatbot 120 may ask the user questions about when and where the user plans to travel via the vehicle 106, what time the user wants to arrive at a destination, whether the user wants to avoid tolls, heavy traffic, accidents, and/or other elements along a route, and/or other information, such that the vehicle chatbot 120 or other elements of the smart vehicle assistant 102 may suggest a route for the user and determine a corresponding travel range to be presented via the vehicle chatbot 120. In other examples, the vehicle chatbot 120 or other elements of the smart vehicle assistant 102 may obtain information about a current or planned travel route via a GPS system of the vehicle or a connected user device, such that the range predictor 124 may generate range predictions that may be presented to the user via the vehicle chatbot 120.” (Para 0102), “Accordingly, based upon information about a type of the battery 108 of the vehicle 106, a current SoC of the battery 108, a particular travel route, current or expected traffic and/or weather conditions along the particular travel route, a current or expected travel speed of the vehicle 106 along the particular travel route, a historical driving profile of the driver of the vehicle 106, and/or other factors, the range predictor 124 may predict how far the vehicle 106 may travel and/or how much of the SoC of the battery 108 will be used during travel. 
Similarly, the range predictor 124 may predict how changes, such as alternate routes with different geographies, traffic patterns, weather conditions, and/or other factors that differ relative to factors associated with current travel route, adjustments to increase or decrease travel speeds of the vehicle 106, and/or other changes relative to current or expected operations of the vehicle 106, would change how far the vehicle 106 may travel and/or how much of the SoC of the battery 108 would be used during travel.” (Para 0105)); and generate, based on the user prompt and the user preference data, the context data, wherein the context data is indicative of one or more user preferences associated with the user prompt (“Accordingly, a user may inquire about a current travel range of the vehicle 106 via the vehicle chatbot 120, and the vehicle chatbot 120 may present information about the current travel range that is generated by the range predictor 124. In some examples, the vehicle chatbot 120 may ask the user questions about when and where the user plans to travel via the vehicle 106, what time the user wants to arrive at a destination, whether the user wants to avoid tolls, heavy traffic, accidents, and/or other elements along a route, and/or other information, such that the vehicle chatbot 120 or other elements of the smart vehicle assistant 102 may suggest a route for the user and determine a corresponding travel range to be presented via the vehicle chatbot 120. 
In other examples, the vehicle chatbot 120 or other elements of the smart vehicle assistant 102 may obtain information about a current or planned travel route via a GPS system of the vehicle or a connected user device, such that the range predictor 124 may generate range predictions that may be presented to the user via the vehicle chatbot 120.” (Para 0102), “Accordingly, based upon information about a type of the battery 108 of the vehicle 106, a current SoC of the battery 108, a particular travel route, current or expected traffic and/or weather conditions along the particular travel route, a current or expected travel speed of the vehicle 106 along the particular travel route, a historical driving profile of the driver of the vehicle 106, and/or other factors, the range predictor 124 may predict how far the vehicle 106 may travel and/or how much of the SoC of the battery 108 will be used during travel. Similarly, the range predictor 124 may predict how changes, such as alternate routes with different geographies, traffic patterns, weather conditions, and/or other factors that differ relative to factors associated with current travel route, adjustments to increase or decrease travel speeds of the vehicle 106, and/or other changes relative to current or expected operations of the vehicle 106, would change how far the vehicle 106 may travel and/or how much of the SoC of the battery 108 would be used during travel.” (Para 0105), “The vehicle chatbot 120 may also, or alternately, provide the user with tips on how to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106, for instance by suggesting an alternate travel route, by suggesting that the vehicle 106 travel at reduced speeds, and/or by suggesting other adjustments to operations of the vehicle 106 that, based upon output of the range predictor 124, is expected to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106. 
The vehicle chatbot 120 may provide such tips to a user proactively during a conversation, and/or in response to user questions or statements during a conversation indicating that the user may be interested in extending the range of the vehicle 106 and/or preserving battery life of the vehicle 106.” (Para 0103), see also Para 0052). In regards to claim 3, Harvey discloses the vehicle computing system of claim 2, wherein the context engine is configured to: concatenate the context data with one or more supplemental topics, the one or more supplemental topics comprising additional information associated with the user preference data (“Accordingly, a user may inquire about a current travel range of the vehicle 106 via the vehicle chatbot 120, and the vehicle chatbot 120 may present information about the current travel range that is generated by the range predictor 124. In some examples, the vehicle chatbot 120 may ask the user questions about when and where the user plans to travel via the vehicle 106, what time the user wants to arrive at a destination, whether the user wants to avoid tolls, heavy traffic, accidents, and/or other elements along a route, and/or other information, such that the vehicle chatbot 120 or other elements of the smart vehicle assistant 102 may suggest a route for the user and determine a corresponding travel range to be presented via the vehicle chatbot 120.
In other examples, the vehicle chatbot 120 or other elements of the smart vehicle assistant 102 may obtain information about a current or planned travel route via a GPS system of the vehicle or a connected user device, such that the range predictor 124 may generate range predictions that may be presented to the user via the vehicle chatbot 120.” (Para 0102), “Accordingly, based upon information about a type of the battery 108 of the vehicle 106, a current SoC of the battery 108, a particular travel route, current or expected traffic and/or weather conditions along the particular travel route, a current or expected travel speed of the vehicle 106 along the particular travel route, a historical driving profile of the driver of the vehicle 106, and/or other factors, the range predictor 124 may predict how far the vehicle 106 may travel and/or how much of the SoC of the battery 108 will be used during travel. Similarly, the range predictor 124 may predict how changes, such as alternate routes with different geographies, traffic patterns, weather conditions, and/or other factors that differ relative to factors associated with current travel route, adjustments to increase or decrease travel speeds of the vehicle 106, and/or other changes relative to current or expected operations of the vehicle 106, would change how far the vehicle 106 may travel and/or how much of the SoC of the battery 108 would be used during travel.” (Para 0105), “The vehicle chatbot 120 may also, or alternately, provide the user with tips on how to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106, for instance by suggesting an alternate travel route, by suggesting that the vehicle 106 travel at reduced speeds, and/or by suggesting other adjustments to operations of the vehicle 106 that, based upon output of the range predictor 124, is expected to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106. 
The vehicle chatbot 120 may provide such tips to a user proactively during a conversation, and/or in response to user questions or statements during a conversation indicating that the user may be interested in extending the range of the vehicle 106 and/or preserving battery life of the vehicle 106.” (Para 0103), see also Para 0052); and input the user prompt and the one or more supplemental topics into a machine-learned model, wherein the machine-learned model is configured to generate the user response (“The range predictor 124 may be a component of the vehicle chatbot 120, or may be a separate machine learning model, a separate rules-based model, or another separate system that may interact with users via the vehicle chatbot 120. For example, the range predictor 124 may be a machine learning model that based upon convolutional neural networks, recurrent neural networks, other types of neural networks, nearest-neighbor algorithms, regression analysis, deep learning algorithms, Gradient Boosted Machines (GBMs), Random Forest algorithms, and/or other types of artificial intelligence or machine learning frameworks. The model training system 134 may train the range predictor 124 based upon one or more types of data, such as the vehicle data 140, the range data 142, and/or the additional data 146. For example, the model training system 134 may train the range predictor 124 based upon historical data associated with battery types, battery SoC levels, travel routes, traffic levels, weather conditions, and/or other factors that correspond with known travel ranges indicated in the historical data.” (Para 0104), “One or more models associated with the vehicle chatbot 120, other elements of the smart vehicle assistant 102, the responder chatbot 122, and/or other elements of the smart responder assistant 104 may be trained by a model training system 134 using supervised learning, reinforcement learning, and/or other machine learning techniques. 
For example, one or more models associated with a chatbot may be trained, by the model training system 134, based upon a training dataset. As discussed further below, a training dataset used by the model training system 134 to train a chatbot may be based upon one or more types of information that may be provided and/or maintained by one or more data sources 116. The data sources 116 may include insurance policy data 136, collision response data 138, vehicle data 140, range data 142, responder data 144, and/or additional data 146. Accordingly, the chatbot may be trained to provide information indicated by, and/or derived from, one or more data sources 116 during conversations with users, and/or to steer the conversations towards such information as described further below.” (Para 0078)). In regards to claim 8, Harvey discloses the vehicle computing system of claim 1, wherein the user prompt is received from a user computing device (“The smart vehicle assistant 102 and the smart responder assistant 104 may be executed by one or more computing systems, as discussed further below. An exemplary architecture of a computing system that may execute one or more elements of the smart vehicle assistant 102 or the smart responder assistant 104 is shown in FIG. 5, and is discussed further with respect to that figure.” (Para 0057), “FIG. 1 shows an exemplary computing environment 100 associated with at least one of a smart vehicle assistant 102 or a smart responder assistant 104. The smart vehicle assistant 102 may be configured to assist one or more occupants of a vehicle 106, and/or other individuals or entities in association with the vehicle 106. The smart responder assistant 104 may assist one or more response entities, such as emergency services personnel, during a response to a collision, accident, or other incident involving the vehicle 106.
The smart vehicle assistant 102 and the smart responder assistant 104 may be computer-implemented systems that may receive data, such as user input, sensor data, and/or other data, and that may provide output to users and/or other elements proactively and/or in response to received input.” (Para 0051)). In regards to claim 9, Harvey discloses the vehicle computing system of claim 1, wherein the user prompt is received from a vehicle interface located within the vehicle and physically coupled to the vehicle (“FIG. 1 shows an exemplary computing environment 100 associated with at least one of a smart vehicle assistant 102 or a smart responder assistant 104. The smart vehicle assistant 102 may be configured to assist one or more occupants of a vehicle 106, and/or other individuals or entities in association with the vehicle 106. The smart responder assistant 104 may assist one or more response entities, such as emergency services personnel, during a response to a collision, accident, or other incident involving the vehicle 106. The smart vehicle assistant 102 and the smart responder assistant 104 may be computer-implemented systems that may receive data, such as user input, sensor data, and/or other data, and that may provide output to users and/or other elements proactively and/or in response to received input.” (Para 0051), “In some examples, the smart vehicle assistant 102 may be executed at least in part via one or more computing systems that are integrated into the vehicle 106. For example, the vehicle 106 may have one or more on-board processors that may execute one or more elements of the smart vehicle assistant 102. In these examples, a user inside the vehicle 106, such as a driver or other occupant, may use the smart vehicle assistant 102 via a dashboard display of the vehicle 106, integrated speakers and/or microphones of the vehicle 106, and/or other elements of the vehicle 106.” (Para 0058)).
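The range-prediction functionality that Harvey attributes to the range predictor 124 (Paras 0102, 0105) can be illustrated with a minimal sketch. All function names, parameters, and coefficients below are hypothetical illustrations, not drawn from Harvey or from the claims:

```python
# Hypothetical sketch of a range predictor: estimate remaining travel
# range from battery state of charge (SoC) and route conditions, and
# compare alternate routes as described in Harvey Para 0105.

def predict_range_km(soc_pct, battery_kwh, base_kwh_per_km,
                     traffic_factor=1.0, weather_factor=1.0):
    """Estimate remaining range in kilometers.

    soc_pct         -- current battery state of charge, 0-100
    battery_kwh     -- usable battery capacity in kWh
    base_kwh_per_km -- nominal consumption along the route
    traffic_factor  -- > 1.0 models heavier traffic (higher consumption)
    weather_factor  -- > 1.0 models adverse weather (higher consumption)
    """
    usable_kwh = battery_kwh * soc_pct / 100.0
    kwh_per_km = base_kwh_per_km * traffic_factor * weather_factor
    return usable_kwh / kwh_per_km

def rank_routes(soc_pct, battery_kwh, routes):
    """Rank candidate routes by predicted range, longest first,
    mirroring the alternate-route comparison in Para 0105.
    `routes` maps route name -> (base_kwh_per_km, traffic, weather)."""
    return sorted(
        ((name, predict_range_km(soc_pct, battery_kwh, *factors))
         for name, factors in routes.items()),
        key=lambda pair: pair[1],
        reverse=True)
```

For example, at 50% SoC on an 80 kWh battery with 0.2 kWh/km nominal consumption, the sketch predicts 200 km; the same route with a traffic factor of 1.25 predicts 160 km, which is the kind of alternate-route comparison the quoted passages describe.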
In regards to claim 10, Harvey discloses the vehicle computing system of claim 1, wherein the action comprises at least one of: emitting an audio response (“In other examples, a user interface 118 may include a non-visual interface, such as an audio-based interface. Accordingly, the smart vehicle assistant 102 and/or the smart responder assistant 104 may present or convey information to users via audio with or without also displaying the information visually via a screen. As an example, the smart vehicle assistant 102 may be an audio-based system that may receive user input as audio voice input captured by a microphone of the vehicle 106 or a user device, and that may audibly present corresponding output voice data via speakers of the vehicle 106 or the user device. Accordingly, in these examples, a user of the smart vehicle assistant 102 may have a voice-based audio conversation with the smart vehicle assistant 102 instead of, or in addition to, interacting with the smart vehicle assistant 102 via a screen or other visual interface. Similarly, a user of the smart responder assistant 104 may have a voice-based audio conversation with the smart responder assistant 104 instead of, or in addition to, interacting with the smart responder assistant 104 via a screen or other visual interface.” (Para 0068)); updating a user interface within the vehicle (“In other examples, a user interface 118 may include a non-visual interface, such as an audio-based interface. Accordingly, the smart vehicle assistant 102 and/or the smart responder assistant 104 may present or convey information to users via audio with or without also displaying the information visually via a screen. As an example, the smart vehicle assistant 102 may be an audio-based system that may receive user input as audio voice input captured by a microphone of the vehicle 106 or a user device, and that may audibly present corresponding output voice data via speakers of the vehicle 106 or the user device.
Accordingly, in these examples, a user of the smart vehicle assistant 102 may have a voice-based audio conversation with the smart vehicle assistant 102 instead of, or in addition to, interacting with the smart vehicle assistant 102 via a screen or other visual interface. Similarly, a user of the smart responder assistant 104 may have a voice-based audio conversation with the smart responder assistant 104 instead of, or in addition to, interacting with the smart responder assistant 104 via a screen or other visual interface.” (Para 0068)); adjusting a temperature setting within the vehicle; providing an entertainment suggestion; providing a destination suggestion (“The vehicle chatbot 120 may also, or alternately, provide the user with tips on how to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106, for instance by suggesting an alternate travel route, by suggesting that the vehicle 106 travel at reduced speeds, and/or by suggesting other adjustments to operations of the vehicle 106 that, based upon output of the range predictor 124, is expected to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106. The vehicle chatbot 120 may provide such tips to a user proactively during a conversation, and/or in response to user questions or statements during a conversation indicating that the user may be interested in extending the range of the vehicle 106 and/or preserving battery life of the vehicle 106.” (Para 0103), “In some examples, the vehicle chatbot 120 may also, or alternately, provide information about battery charging stations along a planned route and/or potential alternate routes, such as locations of the battery charging stations, costs associated with using the battery charging stations, and/or other information.
For instance, as the user is driving the vehicle 106, the user may notice that the battery 108 of the vehicle 106 should soon be recharged, and may ask the vehicle chatbot 120 where the nearest battery charging station is located.” (Para 0109)); or adjusting a comfort setting within the vehicle (“Accordingly, a user may inquire about a current travel range of the vehicle 106 via the vehicle chatbot 120, and the vehicle chatbot 120 may present information about the current travel range that is generated by the range predictor 124. In some examples, the vehicle chatbot 120 may ask the user questions about when and where the user plans to travel via the vehicle 106, what time the user wants to arrive at a destination, whether the user wants to avoid tolls, heavy traffic, accidents, and/or other elements along a route, and/or other information, such that the vehicle chatbot 120 or other elements of the smart vehicle assistant 102 may suggest a route for the user and determine a corresponding travel range to be presented via the vehicle chatbot 120. In other examples, the vehicle chatbot 120 or other elements of the smart vehicle assistant 102 may obtain information about a current or planned travel route via a GPS system of the vehicle or a connected user device, such that the range predictor 124 may generate range predictions that may be presented to the user via the vehicle chatbot 120.” (Para 0102)). In regards to claim 11, Harvey discloses the vehicle computing system of claim 1, wherein the control circuit is configured to: access the sensor data (“The vehicle 106 may have one or more sensors 110 that are configured to capture corresponding types of sensor data, user input, or other input data.
The sensors 110 may include accelerometers and/or other motion sensors, Global Positioning System (GPS) sensors and/or other location sensors, sensors associated with a transmission and/or braking system of the vehicle 106, cameras and/or other image-based sensors, Light Detection and Ranging (LiDAR) sensors, microphones, proximity sensors, weight sensors, seatbelt sensors, seat pressure sensors, payload sensors, and/or other types of sensors. Sensor data, user input, and/or other input data captured by the sensors 110 may be provided to an on-board computing system of the vehicle 106, for instance such that the on-board computing system may perform autonomous or semi-autonomous operations based upon received sensor data. In some examples, as described herein, sensor data, user input, and/or other input data captured by the sensors 110 may also, or alternately, be provided to the smart vehicle assistant 102, such that the smart vehicle assistant 102 may operate based upon the sensor data, user input, and/or other input data.” (Para 0056), “The additional data 146 may include one or more other types of information, such as weather data, traffic data, map data, image and/or audio data associated with collisions of vehicles, image and/or audio data associated with occupants of vehicles before, during, and/or after collisions, steering and driving data, and/or other types of data. The additional data 146 may be maintained in one or more databases or other data repositories, for instance in one or more databases maintained by an insurance company, an operator of the model training system 134 and/or a provider of the smart vehicle assistant 102 and/or the smart responder assistant 104.” (Para 0093), “The range data 142 may include information about how far vehicles powered by batteries are able to travel based upon State of Charge (SoC) levels of the batteries and/or other factors.
For example, the range data 142 may include historical data indicating how SoC levels of vehicle batteries change over time, and/or how far vehicles have been able to travel based upon power from such vehicle batteries, in association with travel speeds, travel routes, traffic patterns along the travel routes, capabilities of vehicles, and/or other factors. The range data 142 may also include example scripts for communicating tips regarding extending travel ranges and/or battery SoC levels to users of the vehicle chatbot 120. The range data 142 may be maintained in one or more databases or other data repositories, for instance in one or more databases maintained by manufacturers of vehicles and/or batteries, an operator of the model training system 134, and/or a provider of the smart vehicle assistant 102.” (Para 0091)); and based on the sensor data, generate an automated user prompt, wherein the automated user prompt is associated with a predicted user prompt from the user (“If the collision responder 128 detects that the vehicle 106 has been involved in a collision, the detection of the collision may prompt the vehicle chatbot 120 to ask questions to occupants of the vehicle 106. For instance, the vehicle chatbot 120 may ask occupants whether they are hurt and/or are in need of medical attention due to the collision, ask the occupants whether they are trapped in the vehicle 106 or may safely get out of the vehicle 106, and/or ask other questions that may help determine the state of the occupants following the collision. 
In some examples, if the occupants do not or cannot respond to such questions from the vehicle chatbot 120, the collision responder 128 may determine that the occupants are unconscious and/or may be in need of medical attention or other emergency services.” (Para 0113), (“The vehicle chatbot 120 may also, or alternately, provide the user with tips on how to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106, for instance by suggesting an alternate travel route, by suggesting that the vehicle 106 travel at reduced speeds, and/or by suggesting other adjustments to operations of the vehicle 106 that, based upon output of the range predictor 124, is expected to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106. The vehicle chatbot 120 may provide such tips to a user proactively during a conversation, and/or in response to user questions or statements during a conversation indicating that the user may be interested in extending the range of the vehicle 106 and/or preserving battery life of the vehicle 106.” (Para 0103)). In regards to claim 12, Harvey discloses the vehicle computing system of claim 11, wherein the control circuit is configured to: implement the action in response to the automated user prompt (“If the collision responder 128 detects that the vehicle 106 has been involved in a collision, the detection of the collision may prompt the vehicle chatbot 120 to ask questions to occupants of the vehicle 106. For instance, the vehicle chatbot 120 may ask occupants whether they are hurt and/or are in need of medical attention due to the collision, ask the occupants whether they are trapped in the vehicle 106 or may safely get out of the vehicle 106, and/or ask other questions that may help determine the state of the occupants following the collision.
In some examples, if the occupants do not or cannot respond to such questions from the vehicle chatbot 120, the collision responder 128 may determine that the occupants are unconscious and/or may be in need of medical attention or other emergency services.” (Para 0113), (“The vehicle chatbot 120 may also, or alternately, provide the user with tips on how to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106, for instance by suggesting an alternate travel route, by suggesting that the vehicle 106 travel at reduced speeds, and/or by suggesting other adjustments to operations of the vehicle 106 that, based upon output of the range predictor 124, is expected to extend the range of the vehicle 106 and/or preserve battery life of the vehicle 106. The vehicle chatbot 120 may provide such tips to a user proactively during a conversation, and/or in response to user questions or statements during a conversation indicating that the user may be interested in extending the range of the vehicle 106 and/or preserving battery life of the vehicle 106.” (Para 0103), “If the vehicle is involved in a collision, the vehicle chatbot may also initiate a 911 call or other emergency communication session on behalf of vehicle occupants who may be unconscious or otherwise unable to engage in the emergency communication session. For example, the vehicle chatbot may engage in a natural language conversation with a 911 operator, to provide information to the 911 operator and/or respond to questions from the 911 operator. In some situations, if the vehicle is an autonomous vehicle and a self-diagnosis indicates that the vehicle is still able to drive autonomously following the collision, the smart vehicle assistant may direct the vehicle to autonomously drive to a hospital or other location, so that occupants of the vehicle may obtain medical services or other assistance.” (Para 0049)). 
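The sensor-triggered behavior mapped to claims 11 and 12 (Paras 0103, 0113) can be sketched as a simple rule chain. The field names, prompt strings, and action labels below are hypothetical, not taken from Harvey:

```python
# Hypothetical sketch of sensor data triggering an automated user prompt
# (claim 11 mapping) and an action taken in response (claim 12 mapping),
# following the collision-responder behavior Harvey describes in Para 0113.

def generate_automated_prompt(sensor_data):
    """Produce an automated prompt from sensor readings, anticipating a
    question or need the user would likely raise."""
    if sensor_data.get("collision_detected"):
        return "Are you hurt or in need of medical attention?"
    if sensor_data.get("battery_soc_pct", 100) < 15:
        return "Battery is low. Route to the nearest charging station?"
    return None

def implement_action(prompt, occupant_response):
    """Select an action in response to the automated prompt. Per Para
    0113, no response to a collision prompt is treated as a possible
    need for emergency services."""
    if prompt is None:
        return "no_action"
    if "medical attention" in prompt and occupant_response is None:
        return "initiate_emergency_call"
    if occupant_response == "yes":
        return "carry_out_suggestion"
    return "no_action"
```

Under this sketch, a detected collision with no occupant response escalates to an emergency call, matching the quoted Para 0113 behavior, while a low-battery reading yields a proactive charging-station suggestion in the style of Para 0109.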
In regards to claim 13, Harvey discloses the vehicle computing system of claim 1, wherein the one or more conditions associated with the user prompt comprises at least one of (i) a cabin temperature, (ii) a comfort setting, or (iii) a navigation preset (“If the vehicle is involved in a collision, the vehicle chatbot may also initiate a 911 call or other emergency communication session on behalf of vehicle occupants who may be unconscious or otherwise unable to engage in the emergency communication session. For example, the vehicle chatbot may engage in a natural language conversation with a 911 operator, to provide information to the 911 operator and/or respond to questions from the 911 operator. In some situations, if the vehicle is an autonomous vehicle and a self-diagnosis indicates that the vehicle is still able to drive autonomously following the collision, the smart vehicle assistant may direct the vehicle to autonomously drive to a hospital or other location, so that occupants of the vehicle may obtain medical services or other assistance.” (Para 0049), “Accordingly, a user may inquire about a current travel range of the vehicle 106 via the vehicle chatbot 120, and the vehicle chatbot 120 may present information about the current travel range that is generated by the range predictor 124. In some examples, the vehicle chatbot 120 may ask the user questions about when and where the user plans to travel via the vehicle 106, what time the user wants to arrive at a destination, whether the user wants to avoid tolls, heavy traffic, accidents, and/or other elements along a route, and/or other information, such that the vehicle chatbot 120 or other elements of the smart vehicle assistant 102 may suggest a route for the user and determine a corresponding travel range to be presented via the vehicle chatbot 120.
In other examples, the vehicle chatbot 120 or other elements of the smart vehicle assistant 102 may obtain information about a current or planned travel route via a GPS system of the vehicle or a connected user device, such that the range predictor 124 may generate range predictions that may be presented to the user via the vehicle chatbot 120.” (Para 0102), see also Para 0056). In regards to claims 14 and 20, the claims recite analogous limitations to claim 1, and are therefore rejected on the same premise. In regards to claims 15-16, the claims recite analogous limitations to claims 2-3, respectively, and are therefore rejected on the same premise. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2.
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 4-7 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Harvey in view of Tzirkel-Hancock et al. (US 20170287476; hereinafter Tzirkel-Hancock). In regards to claim 4, Harvey discloses the vehicle computing system of claim 2. However, Harvey does not specifically disclose wherein the context engine is configured to: determine, based on the user prompt, the sensor data, and the user preference data, sentiment data associated with the user; and generate based on the sentiment data, the modified user prompt. Tzirkel-Hancock, in the same field of endeavor, teaches wherein the context engine is configured to: determine, based on the user prompt, the sensor data, and the user preference data, sentiment data associated with the user (“In various embodiments, the vehicle 12 includes a context data acquisition module 26 that communicates with sensors or other systems of the vehicle 12 to capture the context data.
The context data indicates a level or mode of automation of the vehicle 12, a vehicle state (e.g., parked, static, moving, in a maneuver, etc.), visibility conditions, road conditions (e.g., rainy, foggy, rough, busy, etc.), driving type (e.g., city, freeway, country roads, etc.), driver state (e.g., distracted or focused as indicated by camera, aware of the car situation or not aware, slurred speech, emotion in speech, etc.), etc. As can be appreciated, these examples of context data and events are merely some examples, as the list may be exhaustive. The disclosure is not limited to the present examples. In various embodiments, the context data acquisition module 26 captures context data and evaluates the context data in realtime.” (Para 0015), see also Para 0021); and generate based on the sentiment data, the modified user prompt (“In various embodiments, the vehicle 12 includes a context data acquisition module 26 that communicates with sensors or other systems of the vehicle 12 to capture the context data. The context data indicates a level or mode of automation of the vehicle 12, a vehicle state (e.g., parked, static, moving, in a maneuver, etc.), visibility conditions, road conditions (e.g., rainy, foggy, rough, busy, etc.), driving type (e.g., city, freeway, country roads, etc.), driver state (e.g., distracted or focused as indicated by camera, aware of the car situation or not aware, slurred speech, emotion in speech, etc.), etc. As can be appreciated, these examples of context data and events are merely some examples, as the list may be exhaustive. The disclosure is not limited to the present examples. In various embodiments, the context data acquisition module 26 captures context data and evaluates the context data in realtime.” (Para 0015), “The context data acquisition module 26 then communicates the context data to the HMI module 16. 
In response, the HMI module may optionally alter or add information to the data, and communicate the context data to the speech system 10 through the API 24. The speech system 10 is then updated based on the context data.” (Para 0016), and “Upon completion of speech processing by the speech system 10, the speech system 10 provides a dialog prompt, and a delivery method back to the HMI module 16 of the vehicle 12. The dialog prompt and the delivery method are then further processed by, for example, the HMI module 16 to deliver the prompt to the user or schedule an action by a system of the vehicle 12. By adjusting the delivery method based on the context data, the efficiency of communicating with the user via the speech system 10 is improved during various driving scenarios.” (Para 0017), see also Para 0021). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the modified user prompt, as taught by Harvey, to be based on sentiment data associated with the user, as taught by Tzirkel-Hancock, with a reasonable expectation of success in order to improve the efficiency of communicating with the user via the speech system during various driving scenarios (Tzirkel-Hancock Para 0017). In regards to claim 5, Harvey in view of Tzirkel-Hancock teaches the vehicle computing system of claim 4, wherein the sentiment data comprises at least one of (i) a mood, (ii) a feeling, or (iii) a tone of the user (“In various embodiments, the vehicle 12 includes a context data acquisition module 26 that communicates with sensors or other systems of the vehicle 12 to capture the context data.
The context data indicates a level or mode of automation of the vehicle 12, a vehicle state (e.g., parked, static, moving, in a maneuver, etc.), visibility conditions, road conditions (e.g., rainy, foggy, rough, busy, etc.), driving type (e.g., city, freeway, country roads, etc.), driver state (e.g., distracted or focused as indicated by camera, aware of the car situation or not aware, slurred speech, emotion in speech, etc.), etc. As can be appreciated, these examples of context data and events are merely some examples, as the list may be exhaustive. The disclosure is not limited to the present examples. In various embodiments, the context data acquisition module 26 captures context data and evaluates the context data in realtime.” (Tzirkel-Hancock Para 0015)). The motivation for combining Harvey and Tzirkel-Hancock is the same as that recited for claim 4 above. In regards to claim 6, Harvey in view of Tzirkel-Hancock teaches the vehicle computing system of claim 1, wherein the control circuit is configured to: generate voice analysis data for the user prompt, wherein the voice analysis data is indicative of a sentiment of the user (“In various embodiments, the vehicle 12 includes a context data acquisition module 26 that communicates with sensors or other systems of the vehicle 12 to capture the context data. The context data indicates a level or mode of automation of the vehicle 12, a vehicle state (e.g., parked, static, moving, in a maneuver, etc.), visibility conditions, road conditions (e.g., rainy, foggy, rough, busy, etc.), driving type (e.g., city, freeway, country roads, etc.), driver state (e.g., distracted or focused as indicated by camera, aware of the car situation or not aware, slurred speech, emotion in speech, etc.), etc. As can be appreciated, these examples of context data and events are merely some examples, as the list may be exhaustive. The disclosure is not limited to the present examples.
In various embodiments, the context data acquisition module 26 captures context data and evaluates the context data in realtime.” (Tzirkel-Hancock Para 0015)). The motivation for combining Harvey and Tzirkel-Hancock is the same as that recited for claim 4 above.

In regards to claim 7, Harvey in view of Tzirkel-Hancock teaches the vehicle computing system of claim 6, wherein the one or more conditions associated with the user prompt comprise the sentiment of the user (“In various embodiments, the vehicle 12 includes a context data acquisition module 26 that communicates with sensors or other systems of the vehicle 12 to capture the context data. The context data indicates a level or mode of automation of the vehicle 12, a vehicle state (e.g., parked, static, moving, in a maneuver, etc.), visibility conditions, road conditions (e.g., rainy, foggy, rough, busy, etc.), driving type (e.g., city, freeway, country roads, etc.), driver state (e.g., distracted or focused as indicated by camera, aware of the car situation or not aware, slurred speech, emotion in speech, etc.), etc. As can be appreciated, these examples of context data and events are merely some examples, as the list may be exhaustive. The disclosure is not limited to the present examples. In various embodiments, the context data acquisition module 26 captures context data and evaluates the context data in realtime.” (Tzirkel-Hancock Para 0015), “The context data acquisition module 26 then communicates the context data to the HMI module 16. In response, the HMI module may optionally alter or add information to the data, and communicate the context data to the speech system 10 through the API 24. The speech system 10 is then updated based on the context data.” (Tzirkel-Hancock Para 0016), and “Upon completion of speech processing by the speech system 10, the speech system 10 provides a dialog prompt, and a delivery method back to the HMI module 16 of the vehicle 12.
The dialog prompt and the delivery method are then further processed by, for example, the HMI module 16 to deliver the prompt to the user or schedule an action by a system of the vehicle 12. By adjusting the delivery method based on the context data, the efficiency of communicating with the user via the speech system 10 is improved during various driving scenarios.” (Tzirkel-Hancock Para 0017), see also Tzirkel-Hancock Para 0021). The motivation for combining Harvey and Tzirkel-Hancock is the same as that recited for claim 4 above.

In regards to claims 17-19, the claims recite limitations analogous to claims 4-6, respectively, and are therefore rejected on the same premise.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhao et al. (US 20240013776) discloses a context-aware voice assistant system that can assist with voice commands of a vehicle. Lakhani et al. (US 11404075) discloses a voice assistant that can determine if a passenger is drowsy and take appropriate action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kyle J Kingsland, whose telephone number is (571) 272-3268. The examiner can normally be reached Mon-Fri 8:00-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Flynn, can be reached at (571) 272-9855. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE J KINGSLAND/
Primary Examiner, Art Unit 3663

Prosecution Timeline

Dec 19, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600240
METHOD FOR OPERATING A BRAKE CONTROL SYSTEM, BRAKE CONTROL SYSTEM, COMPUTER PROGRAM, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12595699
VEHICLE INCLUDING A CAP THAT IS AUTOMATICALLY SEPARATED FROM A VEHICLE BODY
2y 5m to grant Granted Apr 07, 2026
Patent 12589784
SYSTEM AND METHOD FOR A VIRTUAL APPROACH SIGNAL
2y 5m to grant Granted Mar 31, 2026
Patent 12576727
DIFFERENTIAL ELECTRICAL DRIVE ARRANGEMENT FOR HEAVY DUTY VEHICLES
2y 5m to grant Granted Mar 17, 2026
Patent 12570246
MULTI-STANCE AERIAL DEVICE CONTROL AND DISPLAY
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
84%
With Interview (+6.5%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 212 resolved cases by this examiner. Grant probability derived from career allow rate.
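The projection figures above are internally consistent with a simple additive interview lift applied to the career allow rate. A minimal sketch of that arithmetic, assuming the dashboard derives the numbers this way (the page does not disclose its actual methodology):

```python
# Figures taken from the examiner statistics shown on this page.
granted = 164        # applications granted by this examiner
resolved = 212       # total resolved cases
interview_lift = 0.065  # +6.5% lift among resolved cases with interview

base_rate = granted / resolved          # career allow rate
with_interview = base_rate + interview_lift  # assumed additive adjustment

print(f"Grant probability: {base_rate:.0%}")      # 77%
print(f"With interview:    {with_interview:.0%}")  # 84%
```

This reproduces the 77% grant probability and the 84% with-interview figure; whether the tool actually combines the lift additively rather than, say, conditioning on interviewed cases only, is an assumption here.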
