Prosecution Insights
Last updated: April 19, 2026
Application No. 18/793,044

METHOD AND DEVICE FOR CLASSIFYING UTTERANCE INTENT CONSIDERING CONTEXT SURROUNDING VEHICLE AND DRIVER

Non-Final OA: §101, §102, §103
Filed: Aug 02, 2024
Examiner: DUGDA, MULUGETA TUJI
Art Unit: 2653
Tech Center: 2600 (Communications)
Assignee: Kia Corporation
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; 40 granted / 49 resolved; +19.6% vs TC avg)
Interview Lift: +18.8% (strong; allow rate among resolved cases with an interview vs. without)
Typical Timeline: 3y 1m average prosecution; 19 applications currently pending
Career History: 68 total applications across all art units
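As a check on the headline number, the career allow rate follows directly from the counts above:

$$\text{Career Allow Rate} = \frac{40\ \text{granted}}{49\ \text{resolved}} \approx 81.6\% \approx 82\%\ \text{(rounded)}$$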

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§103: 57.6% (+17.6% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)

Note: Tech Center averages are estimates. Based on career data from 49 resolved cases.
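Assuming each delta is simply the examiner's rate minus the estimated Tech Center average (the natural reading of "vs TC avg"), all four statutes imply the same 40.0% baseline estimate:

$$18.0\% + 22.0\% = 19.4\% + 20.6\% = 57.6\% - 17.6\% = 5.0\% + 35.0\% = 40.0\%$$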

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-18 are pending and claims 1, 7 and 13 are independent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The independent claims 1, 7 and 13 recite "obtaining… utterance …; generating… a prompt based …; obtaining… a context-aware sentence …; and providing… to determine the intent of the utterance," which as drafted covers an abstract idea of data analysis/retrieval and mental steps. More specifically, the claims recite "obtaining… utterance data representing an utterance occurred within a vehicle and context information related to the utterance; generating… a prompt based on the utterance data and the context information, the prompt including a task description, a function inventory, guided learning examples, the context information, and the utterance data; obtaining… a context-aware sentence …; and providing… the context-aware sentence to an intent classification model to determine the intent of the utterance," which requires just data analysis/retrieval steps and a mental process.

For instance, one can obtain some utterance data within a vehicle and also some context information related to the utterance. One may use such utterance data to generate prompts that specifically provide instructions or questions that can be done mentally. For instance, based on what a person is talking about, a human can prompt and ask questions for clarification of what the person is talking about. To give a specific example, if one is talking about music, one may ask questions mentally about the music choices or preferences or any related topic based on the context that person is talking about. A second person might be listening to a first person talking about music choices and interests, and the second person might then mentally prompt and ask questions to the first person to clarify whether that person is interested in Rock and Roll or Gospel music, etc. If such an utterance was made in writing, one may generate prompts by writing on paper. The setting or background information which provides the context of the utterance can automatically and mentally produce context-aware sentences that show or indicate the context of the utterance, which can be performed mentally, and this in turn can lead someone to mentally identify or classify the intent of the person who was uttering.

These independent claims include "processor" and "generative large language model," which can simply be considered as additional elements. The claimed invention is, therefore, directed to an abstract idea and a mental process without significantly more, and thus claims 1, 7 and 13 are rejected under 35 U.S.C. 101. Similarly, the dependent claims 2-6, 8-12 and 14-18 recite similar claim language as in claims 1, 7 and 13.
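To make the examiner's characterization easier to follow, the sketch below restates the four recited steps of independent claims 1, 7 and 13 as code. It is a minimal illustration of the claim language only, not the applicant's actual implementation; every identifier (build_prompt, determine_intent, the llm and intent_classifier callables, and the sample inventory and example) is invented for illustration.

```python
# Hypothetical sketch of the four steps recited in independent claims 1, 7 and 13.
# Nothing here comes from the application itself: build_prompt, determine_intent,
# the llm and intent_classifier callables, and the sample data are all invented.

def build_prompt(task_description, function_inventory, examples, context_info, utterance):
    """Assemble the five recited prompt components into a single prompt string."""
    shots = "\n".join(
        f"Utterance: {e['utterance']}\nReasoning: {e['reasoning']}\nSentence: {e['sentence']}"
        for e in examples  # the "guided learning examples" (few-shot demonstrations)
    )
    return (
        f"{task_description}\n\n"
        f"Available in-vehicle functions: {', '.join(function_inventory)}\n\n"
        f"Examples:\n{shots}\n\n"
        f"Context: {context_info}\n"
        f"Utterance: {utterance}\n"
        "Rewrite the utterance as one context-aware sentence:"
    )

def determine_intent(utterance, context_info, llm, intent_classifier):
    """Recited flow: build prompt -> LLM -> context-aware sentence -> classifier."""
    prompt = build_prompt(
        task_description="Rewrite an in-vehicle utterance using the given context.",
        function_inventory=["navigation", "climate_control", "media", "window_control"],
        examples=[{
            "utterance": "I'm cold",
            "reasoning": "Cabin temperature is 16 C, so the speaker wants more heat.",
            "sentence": "Raise the cabin temperature.",
        }],
        context_info=context_info,
        utterance=utterance,
    )
    context_aware_sentence = llm(prompt)              # generative large language model
    return intent_classifier(context_aware_sentence)  # separate intent classification model
```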
Claims 2, 8 and 14 recite "wherein the obtaining of the utterance data representing the utterance and the context information related to the utterance includes: obtaining the utterance data; providing the utterance data to the intent classification model to determine the intent of the utterance; and obtaining the context information related to the utterance in response to failing to determine the intent of the utterance," which requires just a mental step of obtaining the utterance data and providing the utterance data to the intent classification model. For instance, a first person can mentally determine the intent of a second person from a speech or utterance of the second person. However, the first person listening to the utterance of the second person may not be able to understand some subset of words which are directly relevant to the particular context or situation, and as a result the first person might not mentally be able to determine the intent of the second person. Thus, claims 2, 8 and 14 are directed to an abstract idea.

Claims 3, 9 and 15 recite "wherein the context information includes status information of the vehicle," which also requires just a mental step of recognizing or understanding the status or the environment of the vehicle that helps to identify the context. Thus, claims 3, 9 and 15 are directed to an abstract idea.

Claims 4, 10 and 16 recite "wherein the function inventory includes at least one in-vehicle function accessible through a vehicle voice recognition system," which also requires just a simple mental step of recognizing voices. A human brain can easily recognize voices, and therefore this step is a mental process too. For instance, the driver can listen to the passenger's request or command mentally and respond, to adjust, for instance, the speed, route, and other factors to improve the passenger's safety and comfort. For performing such in-vehicle functions the driver can just listen mentally to the passenger and act without any voice recognition system, and therefore this just requires a simple mental step of recognizing voices. Thus, claims 4, 10 and 16 are directed to an abstract idea.

Claims 5, 11 and 17 recite "wherein the guided learning examples include example utterance data, an example context-aware sentence, and an example process of reasoning the example context-aware sentence from the example utterance data." These claims require just gathering some learning/training utterance examples, which produce a context-aware sentence mentally or on paper, and an example process of mental reasoning for the example context-aware sentence from the example utterance data. In addition to being performed using mental steps, these claims can be performed using a conventional/generic (general-purpose) computer (the published Spec., para 0055) or using a simple calculator. Thus, claims 5, 11 and 17 are directed to an abstract idea.

Claims 6, 12 and 18 recite "wherein the guided learning examples include example utterance data and an example context-aware sentence," which also requires a simple mental step that can be easily performed with training/learning examples that include example utterance data and an example context-aware sentence. Thus, claims 6, 12 and 18 are directed to an abstract idea.
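Claims 2, 8 and 14 add a conditional ordering that is easy to miss in the block quote: classification is attempted on the bare utterance first, and context is gathered only when that attempt fails. A minimal sketch of that recited fallback, with all names invented for illustration:

```python
# Hypothetical sketch of the fallback recited in claims 2, 8 and 14: the intent
# classifier is tried on the bare utterance first, and context is gathered only
# when that attempt fails. All names are invented, not from the application.

def obtain_utterance_and_context(utterance, intent_classifier, get_vehicle_context):
    intent = intent_classifier(utterance)   # first attempt: utterance alone
    if intent is not None:                  # success: no context acquisition needed
        return utterance, None, intent
    context = get_vehicle_context()         # failure triggers obtaining the context
    return utterance, context, None         # caller continues with the LLM pipeline
```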
Thus, claims 1-18 as drafted cover a mental process and an abstract idea of data gathering/retrieval and analysis/processing steps, performed using a conventional/generic (general-purpose) computer, and all the claims are therefore directed to an abstract idea.

This judicial exception is not integrated into a practical application. In particular, claims 1, 7 and 13 recite the additional elements of "processor," "memory," "generative large language model," and "non-transitory computer-readable recording medium," as per the independent claims. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional general-purpose computer implementation (the published Spec., para 0055). Claims 1-18 are therefore not drawn to patent-eligible subject matter, as they are directed to an abstract idea without significantly more, and claims 1-18 are rejected under 35 U.S.C. 101.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer amounts to a generic computer. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (Spec., para 0055). Further, the additional limitations in the claims noted above are directed towards insignificant extra-solution activity. The claims are not patent eligible. Dependent claims 2-6, 8-12 and 14-18 are also directed toward an abstract idea and do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. Therefore, claims 1-18 do not contain patent-eligible subject matter as identified by the courts.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3-7, 9-13 and 15-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ullrich et al., Pat. App. No. US 20240419246 A1 (Ullrich).

Regarding Claim 1, Ullrich discloses a computer-implemented method for determining an intent of a user's utterance (Ullrich, para 0183, the context prompt may be a string of tokens representing an inference of the user's intent with regard to their multimedia artifacts), the method comprising: obtaining, by a processor, utterance data representing an utterance occurred within a vehicle and context information related to the utterance (Ullrich, para 0207-0209, the method includes receiving context data from the vehicle's surroundings and performance at block 1606. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data from the vehicle's surroundings and performance. Such data may be in the form of sensor data 110 from cameras and lidar, in addition to other device data 112 from the vehicle's computerized vehicle management unit… the method includes receiving at least one of the biosignals prompt, the context prompt, and an optional user input prompt at block 1610. For example, the prompt composer 116 illustrated in FIG. 1 may receive at least one of the biosignals prompt, the context prompt, and an optional user input prompt. The user input prompt 140 may include data from vehicle passengers; [i.e., "The user input prompt" can be an "utterance" from a driver or passenger in the vehicle]); generating, by the processor, a prompt based on the utterance data and the context information, the prompt including a task description, a function inventory, guided learning examples, the context information, and the utterance data (Ullrich, para 0208-0213, Figure 16, According to some examples, the method includes generating a context prompt based on vehicle surroundings and performance at block 1608. For example, the context subsystem 300 illustrated in FIG. 3 may generate a context prompt based on vehicle surroundings and performance…According to some examples, the method includes generating a string of tokens based on at least one of the biosignals prompt, the context prompt, and the optional user input prompt at block 1612. For example, the prompt composer 116 illustrated in FIG. 1 may generate a string of tokens based on at least one of the biosignals prompt, the context prompt, and the optional user input prompt…According to some examples, the method includes providing multimodal output of the real-time feedback at block 1616. For example, the GenAI 118 illustrated in FIG. 1 may provide multimodal output of the real-time feedback. In one embodiment, the real-time feedback may instruct the vehicle's autonomous control system to adjust speed, route, and other factors to improve passenger safety and comfort. The output of the GenAI 118 may be converted into multimodal sensations through vehicle instruments such as the steering wheel, driver heads up display and/or over the in-vehicle audio system.
In one embodiment the system may utilize estimates of the user's anxiety or comfort, derived from one or more biosensors, in order to adapt the navigation or driving style of an autonomous vehicle…Personalizing E-Commerce Experience: FIG. 17 illustrates an example routine 1700 for personalizing e-commerce experience. Although the example routine 1700 depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the routine 1700. In other examples, different components of an example device or system that implements the routine 1700 may perform functions at substantially the same time or in a specific sequence; [i.e., Various task descriptions, function inventory, guided learning examples, context information, and the audio/utterance data are described in these paragraphs and some more sample information with related paragraphs/citations, which make use of the same components described above, i.e., GenAIs 118, Figures 1, 3, 16 and 17, which are also used to create the various prompts described above are given below]; “task descriptions” sample: Ullrich, para 0230, The GenAI 118, in one embodiment an LLM, may utilize additional cloud-based resources to facilitate this translation function, including connecting to travel related booking services on the user's behalf in order to provide concrete option planning for the user 102; “function inventory” sample: Ullrich, para 0204-0212, FIG. 16 illustrates an example routine 1600 for enhancing autonomous vehicle safety and comfort. Although the example routine 1600 depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the routine 1600. In other examples, different components of an example device or system that implements the routine 1600 may perform functions at substantially the same time or in a specific sequence; “guided learning examples” and “context information” sample: Ullrich, para 0128, the user 102 may explicitly select or direct components of the system. For example, the user 102 may be able to choose between GenAIs 118 that have been trained on a different corpus or training set if they prefer to have a specific type of interaction. In one example, the user 102 may select between a GenAI 118 trained on clinical background data or a GenAI 118 trained on legal background data. These models may provide distinct output tokens that are potentially more appropriate for a specific user-intended task or context; “audio/utterance data” sample: Ullrich, para 0225, the method includes receiving context data related to the user's surroundings at block 1806. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data related to the user's surroundings. Context data may include sensor data 110 such as audio and video data capturing body language and spoken words from a user's conversation partner); obtaining, by the processor, a context-aware sentence from an output of a generative large language model by providing the prompt to the generative large language model (Ullrich, para 0086-0087, Use all contextual information and prior conversation history to modulate your responses. 
After each input, review the prior inputs and modify your subsequent predictions based on the context of the thread. Taking into account the current context, with spartan language, return a JSON string called 'suggestions' with three different and unique phrases without quotes. They should be complete sentences longer than two words. Do not include explanations. The phrases you respond with will be spoken by my speech generating device. The GenAI 118 may take in the prompt 144 from the prompt composer 116 and use this to generate a multimodal output 146. The GenAI 118 may consist of a pre-trained machine learning model, such as GPT. The GenAI 118 may generate a multimodal output 146 in the form of a token sequence that may be converted back into plaintext, or which may be consumed by a user agency process directly as a token sequence); and providing, by the processor, the context-aware sentence to an intent classification model to determine the intent of the utterance (Ullrich, para 0173, The context prompt 138 may indicate an inference of the user's conversation intent based on data such as historical speech patterns and known device identities).

Regarding Claim 3, Ullrich discloses the method of claim 1, wherein the context information includes status information of the vehicle (Ullrich, para 0065, This embodiment may provide the user 102 with capability augmentation or agency support by utilizing inference of the user's environment, physical state, history, and current desired capabilities as a user context, to be gathered at a context subsystem 300, described in greater detail with respect to FIG. 3.; Ullrich, para 0207-0208, the method includes receiving context data from the vehicle's surroundings and performance at block 1606. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data from the vehicle's surroundings and performance… the method includes generating a context prompt based on vehicle surroundings and performance at block 1608. For example, the context subsystem 300 illustrated in FIG. 3 may generate a context prompt based on vehicle surroundings and performance; [i.e., the received context data is based on the vehicle's surroundings and performance or based on the status of the vehicle]).

Regarding Claim 4, Ullrich discloses the method of claim 1, wherein the function inventory includes at least one in-vehicle function accessible through a vehicle voice recognition system (Ullrich, para 0070, Figure 3, the context subsystem 300 may generate a context prompt 138 token… Such a context prompt 138 may be generated by utilizing sensors… Such sensors may include … microphones configured to feed audio to a speech to text (STT) device; Ullrich, para 0212, The GenAI 118 … may provide multimodal output of the real-time feedback. In one embodiment, the real-time feedback may instruct the vehicle's autonomous control system to adjust speed, route, and other factors to improve passenger safety and comfort. The output of the GenAI 118 may be converted into multimodal sensations through vehicle instruments such as the steering wheel, driver heads up display and/or over the in-vehicle audio system).

Regarding Claim 5, Ullrich discloses the method of claim 1, wherein the guided learning examples include example utterance data (Ullrich, para 0225, the method includes receiving context data related to the user's surroundings at block 1806. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data related to the user's surroundings.
Context data may include sensor data 110 such as audio and video data capturing body language and spoken words from a user's conversation partner; OR, Ullrich, para 0091-0092, The user 102 may respond to the multimodal output 146 in a manner detectable through biosignals 106, and thus a channel may be provided to train the GenAI 118 based on user 102 response to multimodal output 146. In general, the user agency and capability augmentation system 100 may be viewed as a kind of application framework that uses the biosignals prompt 136, context prompt 138, and user input prompt 140 sequences to facilitate interaction with an application, much as a user 102 would use their finger to interact with a mobile phone application running on a mobile phone operating system. Unlike a touchscreen or mouse/keyboard interface, this system incorporates real time user inputs along with an articulated description of their physical context and historical context to facilitate extremely efficient interactions to enable user agency. FIG. 1 shows the pathways signals take from input, by sensing devices, stored data, or the user 102, to output in the form of text-to-speech utterances 124, written text 126; ["written text" as "a sentence"]), an example context-aware sentence, and an example process of reasoning the example context-aware sentence from the example utterance data (Ullrich, para 0086-0087, Use all contextual information and prior conversation history to modulate your responses. After each input, review the prior inputs and modify your subsequent predictions based on the context of the thread. Taking into account the current context, with spartan language, return a JSON string called 'suggestions' with three different and unique phrases without quotes. They should be complete sentences longer than two words. Do not include explanations. The phrases you respond with will be spoken by my speech generating device. The GenAI 118 may take in the prompt 144 from the prompt composer 116 and use this to generate a multimodal output 146. The GenAI 118 may consist of a pre-trained machine learning model, such as GPT. The GenAI 118 may generate a multimodal output 146 in the form of a token sequence that may be converted back into plaintext, or which may be consumed by a user agency process directly as a token sequence).

Regarding Claim 6, Ullrich discloses the method of claim 1, wherein the guided learning examples include example utterance data and an example context-aware sentence (Ullrich, para 0063, FIG. 1 illustrates a user agency and capability augmentation system 100 in accordance with one embodiment. The user agency and capability augmentation system 100 comprises a user 102, a wearable computing and biosignal sensing device 104, biosignals 106, background material 108, sensor data 110, other device data 112, application context 114, a prompt composer 116, a GenAI 118, a multimodal output stage 120, an encoder/parser 132, output modalities 122 such as an utterance 124, a written text 126, a multimodal artifact 128, an other user agency 130, and a non-language user agency device 134, a biosignals subsystem 200, and a context subsystem 300; ["written text" as "context-aware sentence"]; OR, Ullrich, para 0091-0092, The user 102 may respond to the multimodal output 146 in a manner detectable through biosignals 106, and thus a channel may be provided to train the GenAI 118 based on user 102 response to multimodal output 146.
In general, the user agency and capability augmentation system 100 may be viewed as a kind of application framework that uses the biosignals prompt 136, context prompt 138, and user input prompt 140 sequences to facilitate interaction with an application, much as a user 102 would use their finger to interact with a mobile phone application running on a mobile phone operating system. Unlike a touchscreen or mouse/keyboard interface, this system incorporates real time user inputs along with an articulated description of their physical context and historical context to facilitate extremely efficient interactions to enable user agency. FIG. 1 shows the pathways signals take from input, by sensing devices, stored data, or the user 102, to output in the form of text-to-speech utterances 124, written text 126; [“written text” as “a sentence”]). Regarding Claim 7, Ullrich discloses a computing apparatus comprising: at least one processor (Ullrich, para 0447, The components of computing device 5000 may include, but are not limited to, one or more processors or processing units 5004); and a memory operably coupled to the at least one processor (Ullrich, para 0447, The components of computing device 5000 may include, but are not limited to, one or more processors or processing units 5004, a system memory 5002, and a bus 5024 that couples various system components including system memory 5002 to processor processing units 5004.), wherein the memory stores instructions for causing the at least one processor to perform operations in response to instructions executed by the at least one processor (Ullrich, para 1063, A “credit distribution circuit configured to distribute credits to a plurality of processor cores” is intended to cover, for example, an integrated circuit that has Circuitry that performs this function during operation, even if the integrated circuit in question is not currently being used (e.g., a power supply is not connected to it). Thus, an entity described or recited as “configured to” perform some task refers to something physical, such as a device, circuit, memory storing program instructions executable to implement the task, etc.), the operations including: obtaining utterance data representing an utterance occurred within a vehicle and context information related to the utterance (Ullrich, para 0207-0209, the method includes receiving context data from the vehicle's surroundings and performance at block 1606. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data from the vehicle's surroundings and performance. Such data may be in the form of sensor data 110 from cameras and lidar, in addition to other device data 112 from the vehicle's computerized vehicle management unit… the method includes receiving at least one of the biosignals prompt, the context prompt, and an optional user input prompt at block 1610. For example, the prompt composer 116 illustrated in FIG. 1 may receive at least one of the biosignals prompt, the context prompt, and an optional user input prompt. 
The user input prompt 140 may include data from vehicle passengers; [i.e., “The user input prompt” can be an “utterance” from a driver or passenger in the vehicle]); generating a prompt based on the utterance data and the context information, the prompt including a task description, a function inventory, guided learning examples, the context information, and the utterance data (Ullrich, para 0208-0213, According to some examples, the method includes generating a context prompt based on vehicle surroundings and performance at block 1608. For example, the context subsystem 300 illustrated in FIG. 3 may generate a context prompt based on vehicle surroundings and performance…According to some examples, the method includes generating a string of tokens based on at least one of the biosignals prompt, the context prompt, and the optional user input prompt at block 1612. For example, the prompt composer 116 illustrated in FIG. 1 may generate a string of tokens based on at least one of the biosignals prompt, the context prompt, and the optional user input prompt…According to some examples, the method includes providing multimodal output of the real-time feedback at block 1616. For example, the GenAI 118 illustrated in FIG. 1 may provide multimodal output of the real-time feedback. In one embodiment, the real-time feedback may instruct the vehicle's autonomous control system to adjust speed, route, and other factors to improve passenger safety and comfort. The output of the GenAI 118 may be converted into multimodal sensations through vehicle instruments such as the steering wheel, driver heads up display and/or over the in-vehicle audio system. In one embodiment the system may utilize estimates of the user's anxiety or comfort, derived from one or more biosensors, in order to adapt the navigation or driving style of an autonomous vehicle…Personalizing E-Commerce Experience: FIG. 17 illustrates an example routine 1700 for personalizing e-commerce experience. Although the example routine 1700 depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the routine 1700. In other examples, different components of an example device or system that implements the routine 1700 may perform functions at substantially the same time or in a specific sequence; [i.e., Various task descriptions, function inventory, guided learning examples, context information, and the audio/utterance data are described in these paragraphs and some more sample information with related paragraphs/citations, which make use of the same components described above, i.e., GenAIs 118, Figures 1, 3, 16 and 17, which are also used to create the various prompts described above are given below]; “task descriptions” sample: Ullrich, para 0230, The GenAI 118, in one embodiment an LLM, may utilize additional cloud-based resources to facilitate this translation function, including connecting to travel related booking services on the user's behalf in order to provide concrete option planning for the user 102; “function inventory” sample: Ullrich, para 0204-0212, FIG. 16 illustrates an example routine 1600 for enhancing autonomous vehicle safety and comfort. Although the example routine 1600 depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. 
For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the routine 1600. In other examples, different components of an example device or system that implements the routine 1600 may perform functions at substantially the same time or in a specific sequence; "guided learning examples" and "context information" sample: Ullrich, para 0128, the user 102 may explicitly select or direct components of the system. For example, the user 102 may be able to choose between GenAIs 118 that have been trained on a different corpus or training set if they prefer to have a specific type of interaction. In one example, the user 102 may select between a GenAI 118 trained on clinical background data or a GenAI 118 trained on legal background data. These models may provide distinct output tokens that are potentially more appropriate for a specific user-intended task or context; "audio/utterance data" sample: Ullrich, para 0225, the method includes receiving context data related to the user's surroundings at block 1806. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data related to the user's surroundings. Context data may include sensor data 110 such as audio and video data capturing body language and spoken words from a user's conversation partner); obtaining a context-aware sentence from an output of a generative language model by providing the prompt to the generative language model (Ullrich, para 0086-0087, Use all contextual information and prior conversation history to modulate your responses. After each input, review the prior inputs and modify your subsequent predictions based on the context of the thread. Taking into account the current context, with spartan language, return a JSON string called 'suggestions' with three different and unique phrases without quotes. They should be complete sentences longer than two words. Do not include explanations. The phrases you respond with will be spoken by my speech generating device. The GenAI 118 may take in the prompt 144 from the prompt composer 116 and use this to generate a multimodal output 146. The GenAI 118 may consist of a pre-trained machine learning model, such as GPT. The GenAI 118 may generate a multimodal output 146 in the form of a token sequence that may be converted back into plaintext, or which may be consumed by a user agency process directly as a token sequence); and providing the context-aware sentence to an intent classification model to determine the intent of the utterance (Ullrich, para 0173, The context prompt 138 may indicate an inference of the user's conversation intent based on data such as historical speech patterns and known device identities).

Regarding Claim 9, Ullrich discloses the computing apparatus of claim 7, wherein the context information includes status information of a vehicle (Ullrich, para 0065, This embodiment may provide the user 102 with capability augmentation or agency support by utilizing inference of the user's environment, physical state, history, and current desired capabilities as a user context, to be gathered at a context subsystem 300, described in greater detail with respect to FIG. 3.; Ullrich, para 0207-0208, the method includes receiving context data from the vehicle's surroundings and performance at block 1606. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data from the vehicle's surroundings and performance… the method includes generating a context prompt based on vehicle surroundings and performance at block 1608. For example, the context subsystem 300 illustrated in FIG. 3 may generate a context prompt based on vehicle surroundings and performance; [i.e., the received context data is based on the vehicle's surroundings and performance or based on the status of the vehicle]).

Regarding Claim 10, Ullrich discloses the computing apparatus of claim 7, wherein the function inventory includes at least one in-vehicle function accessible through a vehicle voice recognition system (Ullrich, para 0070, Figure 3, the context subsystem 300 may generate a context prompt 138 token… Such a context prompt 138 may be generated by utilizing sensors… Such sensors may include … microphones configured to feed audio to a speech to text (STT) device; Ullrich, para 0212, The GenAI 118 … may provide multimodal output of the real-time feedback. In one embodiment, the real-time feedback may instruct the vehicle's autonomous control system to adjust speed, route, and other factors to improve passenger safety and comfort. The output of the GenAI 118 may be converted into multimodal sensations through vehicle instruments such as the steering wheel, driver heads up display and/or over the in-vehicle audio system).

Regarding Claim 11, Ullrich discloses the computing apparatus of claim 7, wherein the guided learning examples include example utterance data (Ullrich, para 0225, the method includes receiving context data related to the user's surroundings at block 1806. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data related to the user's surroundings. Context data may include sensor data 110 such as audio and video data capturing body language and spoken words from a user's conversation partner; OR, Ullrich, para 0091-0092, The user 102 may respond to the multimodal output 146 in a manner detectable through biosignals 106, and thus a channel may be provided to train the GenAI 118 based on user 102 response to multimodal output 146. In general, the user agency and capability augmentation system 100 may be viewed as a kind of application framework that uses the biosignals prompt 136, context prompt 138, and user input prompt 140 sequences to facilitate interaction with an application, much as a user 102 would use their finger to interact with a mobile phone application running on a mobile phone operating system. Unlike a touchscreen or mouse/keyboard interface, this system incorporates real time user inputs along with an articulated description of their physical context and historical context to facilitate extremely efficient interactions to enable user agency. FIG. 1 shows the pathways signals take from input, by sensing devices, stored data, or the user 102, to output in the form of text-to-speech utterances 124, written text 126; ["written text" as "a sentence"]), an example context-aware sentence, and an example process of reasoning the example context-aware sentence from the example utterance data (Ullrich, para 0086-0087, Use all contextual information and prior conversation history to modulate your responses. After each input, review the prior inputs and modify your subsequent predictions based on the context of the thread. Taking into account the current context, with spartan language, return a JSON string called 'suggestions' with three different and unique phrases without quotes.
They should be complete sentences longer than two words. Do not include explanations. The phrases you respond with will be spoken by my speech generating device. The GenAI 118 may take in the prompt 144 from the prompt composer 116 and use this to generate a multimodal output 146. The GenAI 118 may consist of a pre-trained machine learning model, such as GPT. The GenAI 118 may generate a multimodal output 146 in the form of a token sequence that may be converted back into plaintext, or which may be consumed by a user agency process directly as a token sequence).

Regarding Claim 12, Ullrich discloses the computing apparatus of claim 7, wherein the guided learning examples include example utterance data and an example context-aware sentence (Ullrich, para 0063, FIG. 1 illustrates a user agency and capability augmentation system 100 in accordance with one embodiment. The user agency and capability augmentation system 100 comprises a user 102, a wearable computing and biosignal sensing device 104, biosignals 106, background material 108, sensor data 110, other device data 112, application context 114, a prompt composer 116, a GenAI 118, a multimodal output stage 120, an encoder/parser 132, output modalities 122 such as an utterance 124, a written text 126, a multimodal artifact 128, an other user agency 130, and a non-language user agency device 134, a biosignals subsystem 200, and a context subsystem 300; ["written text" as "context-aware sentence"]; OR, Ullrich, para 0091-0092, The user 102 may respond to the multimodal output 146 in a manner detectable through biosignals 106, and thus a channel may be provided to train the GenAI 118 based on user 102 response to multimodal output 146. In general, the user agency and capability augmentation system 100 may be viewed as a kind of application framework that uses the biosignals prompt 136, context prompt 138, and user input prompt 140 sequences to facilitate interaction with an application, much as a user 102 would use their finger to interact with a mobile phone application running on a mobile phone operating system. Unlike a touchscreen or mouse/keyboard interface, this system incorporates real time user inputs along with an articulated description of their physical context and historical context to facilitate extremely efficient interactions to enable user agency. FIG. 1 shows the pathways signals take from input, by sensing devices, stored data, or the user 102, to output in the form of text-to-speech utterances 124, written text 126; ["written text" as "a sentence"]).

Regarding Claim 13, Ullrich discloses a non-transitory computer-readable recording medium in which instructions are stored, the instructions causing a computer including a processor to perform, when executed by the computer (Ullrich, para 0450, System memory 5002 may include computer system readable media in the form of volatile memory, such as Random access memory (RAM) 5006 and/or cache memory 5010. Computing device 5000 may further include other removable/non-removable, volatile/non-volatile computer system storage media…): obtaining utterance data representing an utterance occurred within a vehicle and context information related to the utterance (Ullrich, para 0207-0209, the method includes receiving context data from the vehicle's surroundings and performance at block 1606. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data from the vehicle's surroundings and performance.
Such data may be in the form of sensor data 110 from cameras and lidar, in addition to other device data 112 from the vehicle's computerized vehicle management unit… the method includes receiving at least one of the biosignals prompt, the context prompt, and an optional user input prompt at block 1610. For example, the prompt composer 116 illustrated in FIG. 1 may receive at least one of the biosignals prompt, the context prompt, and an optional user input prompt. The user input prompt 140 may include data from vehicle passengers; [i.e., “The user input prompt” can be an “utterance” from a driver or passenger in the vehicle]); generating a prompt based on the utterance data and the context information, the prompt including a task description, a function inventory, guided learning examples, the context information, and the utterance data (Ullrich, para 0208-0213, According to some examples, the method includes generating a context prompt based on vehicle surroundings and performance at block 1608. For example, the context subsystem 300 illustrated in FIG. 3 may generate a context prompt based on vehicle surroundings and performance…According to some examples, the method includes generating a string of tokens based on at least one of the biosignals prompt, the context prompt, and the optional user input prompt at block 1612. For example, the prompt composer 116 illustrated in FIG. 1 may generate a string of tokens based on at least one of the biosignals prompt, the context prompt, and the optional user input prompt…According to some examples, the method includes providing multimodal output of the real-time feedback at block 1616. For example, the GenAI 118 illustrated in FIG. 1 may provide multimodal output of the real-time feedback. In one embodiment, the real-time feedback may instruct the vehicle's autonomous control system to adjust speed, route, and other factors to improve passenger safety and comfort. The output of the GenAI 118 may be converted into multimodal sensations through vehicle instruments such as the steering wheel, driver heads up display and/or over the in-vehicle audio system. In one embodiment the system may utilize estimates of the user's anxiety or comfort, derived from one or more biosensors, in order to adapt the navigation or driving style of an autonomous vehicle…Personalizing E-Commerce Experience: FIG. 17 illustrates an example routine 1700 for personalizing e-commerce experience. Although the example routine 1700 depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the routine 1700. 
In other examples, different components of an example device or system that implements the routine 1700 may perform functions at substantially the same time or in a specific sequence; [i.e., Various task descriptions, function inventory, guided learning examples, context information, and the audio/utterance data are described in these paragraphs and some more sample information with related paragraphs/citations, which make use of the same components described above, i.e., GenAIs 118, Figures 1, 3, 16 and 17, which are also used to create the various prompts described above are given below]; "task descriptions" sample: Ullrich, para 0230, The GenAI 118, in one embodiment an LLM, may utilize additional cloud-based resources to facilitate this translation function, including connecting to travel related booking services on the user's behalf in order to provide concrete option planning for the user 102; "function inventory" sample: Ullrich, para 0204-0212, FIG. 16 illustrates an example routine 1600 for enhancing autonomous vehicle safety and comfort. Although the example routine 1600 depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the routine 1600. In other examples, different components of an example device or system that implements the routine 1600 may perform functions at substantially the same time or in a specific sequence; "guided learning examples" and "context information" sample: Ullrich, para 0128, the user 102 may explicitly select or direct components of the system. For example, the user 102 may be able to choose between GenAIs 118 that have been trained on a different corpus or training set if they prefer to have a specific type of interaction. In one example, the user 102 may select between a GenAI 118 trained on clinical background data or a GenAI 118 trained on legal background data. These models may provide distinct output tokens that are potentially more appropriate for a specific user-intended task or context; "audio/utterance data" sample: Ullrich, para 0225, the method includes receiving context data related to the user's surroundings at block 1806. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data related to the user's surroundings. Context data may include sensor data 110 such as audio and video data capturing body language and spoken words from a user's conversation partner); obtaining a context-aware sentence from an output of a generative language model by providing the prompt to the generative language model (Ullrich, para 0086-0087, Use all contextual information and prior conversation history to modulate your responses. After each input, review the prior inputs and modify your subsequent predictions based on the context of the thread. Taking into account the current context, with spartan language, return a JSON string called 'suggestions' with three different and unique phrases without quotes. They should be complete sentences longer than two words. Do not include explanations. The phrases you respond with will be spoken by my speech generating device. The GenAI 118 may take in the prompt 144 from the prompt composer 116 and use this to generate a multimodal output 146. The GenAI 118 may consist of a pre-trained machine learning model, such as GPT.
The GenAI 118 may generate a multimodal output 146 in the form of a token sequence that may be converted back into plaintext, or which may be consumed by a user agency process directly as a token sequence); and providing the context-aware sentence to an intent classification model to determine the intent of the utterance (Ullrich, para 0173, The context prompt 138 may indicate an inference of the user's conversation intent based on data such as historical speech patterns and known device identities).

Regarding Claim 15, Ullrich discloses the non-transitory computer-readable recording medium of claim 13, wherein the context information includes status information of the vehicle (Ullrich, para 0065, This embodiment may provide the user 102 with capability augmentation or agency support by utilizing inference of the user's environment, physical state, history, and current desired capabilities as a user context, to be gathered at a context subsystem 300, described in greater detail with respect to FIG. 3.; Ullrich, para 0207-0208, the method includes receiving context data from the vehicle's surroundings and performance at block 1606. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data from the vehicle's surroundings and performance… the method includes generating a context prompt based on vehicle surroundings and performance at block 1608. For example, the context subsystem 300 illustrated in FIG. 3 may generate a context prompt based on vehicle surroundings and performance; [i.e., the received context data is based on the vehicle's surroundings and performance or based on the status of the vehicle]).

Regarding Claim 16, Ullrich discloses the non-transitory computer-readable recording medium of claim 13, wherein the function inventory includes at least one in-vehicle function accessible through a vehicle voice recognition system (Ullrich, para 0070, Figure 3, the context subsystem 300 may generate a context prompt 138 token… Such a context prompt 138 may be generated by utilizing sensors… Such sensors may include … microphones configured to feed audio to a speech to text (STT) device; Ullrich, para 0212, The GenAI 118 … may provide multimodal output of the real-time feedback. In one embodiment, the real-time feedback may instruct the vehicle's autonomous control system to adjust speed, route, and other factors to improve passenger safety and comfort. The output of the GenAI 118 may be converted into multimodal sensations through vehicle instruments such as the steering wheel, driver heads up display and/or over the in-vehicle audio system).

Regarding Claim 17, Ullrich discloses the non-transitory computer-readable recording medium of claim 13, wherein the guided learning examples include example utterance data (Ullrich, para 0225, the method includes receiving context data related to the user's surroundings at block 1806. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data related to the user's surroundings. Context data may include sensor data 110 such as audio and video data capturing body language and spoken words from a user's conversation partner; OR, Ullrich, para 0091-0092, The user 102 may respond to the multimodal output 146 in a manner detectable through biosignals 106, and thus a channel may be provided to train the GenAI 118 based on user 102 response to multimodal output 146.
In general, the user agency and capability augmentation system 100 may be viewed as a kind of application framework that uses the biosignals prompt 136, context prompt 138, and user input prompt 140 sequences to facilitate interaction with an application, much as a user 102 would use their finger to interact with a mobile phone application running on a mobile phone operating system. Unlike a touchscreen or mouse/keyboard interface, this system incorporates real time user inputs along with an articulated description of their physical context and historical context to facilitate extremely efficient interactions to enable user agency. FIG. 1 shows the pathways signals take from input, by sensing devices, stored data, or the user 102, to output in the form of text-to-speech utterances 124, written text 126; ["written text" as "a sentence"]), an example context-aware sentence, and an example process of reasoning the example context-aware sentence from the example utterance data (Ullrich, para 0086-0087, Use all contextual information and prior conversation history to modulate your responses. After each input, review the prior inputs and modify your subsequent predictions based on the context of the thread. Taking into account the current context, with spartan language, return a JSON string called 'suggestions' with three different and unique phrases without quotes. They should be complete sentences longer than two words. Do not include explanations. The phrases you respond with will be spoken by my speech generating device. The GenAI 118 may take in the prompt 144 from the prompt composer 116 and use this to generate a multimodal output 146. The GenAI 118 may consist of a pre-trained machine learning model, such as GPT. The GenAI 118 may generate a multimodal output 146 in the form of a token sequence that may be converted back into plaintext, or which may be consumed by a user agency process directly as a token sequence).

Regarding Claim 18, Ullrich discloses the non-transitory computer-readable recording medium of claim 13, wherein the guided learning examples include example utterance data and an example context-aware sentence (Ullrich, para 0063, FIG. 1 illustrates a user agency and capability augmentation system 100 in accordance with one embodiment. The user agency and capability augmentation system 100 comprises a user 102, a wearable computing and biosignal sensing device 104, biosignals 106, background material 108, sensor data 110, other device data 112, application context 114, a prompt composer 116, a GenAI 118, a multimodal output stage 120, an encoder/parser 132, output modalities 122 such as an utterance 124, a written text 126, a multimodal artifact 128, an other user agency 130, and a non-language user agency device 134, a biosignals subsystem 200, and a context subsystem 300; ["written text" as "context-aware sentence"]; OR, Ullrich, para 0091-0092, The user 102 may respond to the multimodal output 146 in a manner detectable through biosignals 106, and thus a channel may be provided to train the GenAI 118 based on user 102 response to multimodal output 146. In general, the user agency and capability augmentation system 100 may be viewed as a kind of application framework that uses the biosignals prompt 136, context prompt 138, and user input prompt 140 sequences to facilitate interaction with an application, much as a user 102 would use their finger to interact with a mobile phone application running on a mobile phone operating system.
Unlike a touchscreen or mouse/keyboard interface, this system incorporates real time user inputs along with an articulated description of their physical context and historical context to facilitate extremely efficient interactions to enable user agency. FIG. 1 shows the pathways signals take from input, by sensing devices, stored data, or the user 102, to output in the form of text-to-speech utterances 124, written text 126; ["written text" as "a sentence"]).

Claims 2, 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ullrich in view of Last et al., Pat. App. No. US 20260011328 A1 (Last) (EFD: 2023-08-08 & Foreign Priority Date: 2022-08-09).

Regarding Claim 2, Ullrich discloses the method of claim 1, wherein the obtaining of the utterance data representing the utterance and the context information related to the utterance includes: obtaining the utterance data (Ullrich, para 0225, the method includes receiving context data related to the user's surroundings at block 1806. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data related to the user's surroundings. Context data may include sensor data 110 such as audio and video data capturing body language and spoken words from a user's conversation partner); providing the utterance data to the intent classification model to determine the intent of the utterance (Ullrich, para 0173, The context subsystem 300 may form a context prompt 138 by tokenizing received context data. The context prompt 138 may indicate an inference of the user's conversation intent based on data such as historical speech patterns and known device identities; OR, Ullrich, para 0183, the method includes generating a context prompt at block 1310. For example, the context subsystem 300 illustrated in FIG. 3 may generate a context prompt. The context prompt may be a string of tokens representing an inference of the user's intent with regard to their multimedia artifacts); and Ullrich does not specifically disclose obtaining the context information related to the utterance in response to failing to determine the intent of the utterance.
However, Last, in the same field of endeavor, discloses obtaining the context information related to the utterance in response to failing to determine the intent of the utterance (Last, para 0030, According to a third aspect, there is provided a computer-implemented method for determining a user intent from a speech input to effect a user intended action, the method comprising: … attempting to interpret, by each processing node, the same speech input based on the subset of words directly relevant to its associated context, to extract therefrom an output indicative of user intent, whereby each processing node is unable to interpret any portion of the same speech input containing a word outside of its subset of words, whereby a portion of the same speech input relating to the particular context of a first of the nodes is interpretable to the first processing node but is not interpretable to a second of the processing nodes; OR, Last, para 0031, According to a fourth aspect, there is provided herein a processing node of a computing system for determining a user intent from a speech input to effect a user intended action, wherein the processing node is capable of understanding only a subset of words directly relevant to a particular context associated with the processing node, wherein the processing node is configured to attempt to interpret the speech input based on the subset of words directly relevant to the associated context, to extract therefrom an output indicative of user intent, whereby each processing node is unable to interpret any portion of the same speech input containing a word outside of its subset of words …). Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of Last in the method of Ullrich, because this would enable implementation of different forms of intent recognition across various contexts, one such context being the currently more prevalent online ‘chatbots’, which typically interface with the user through text input typed on a physical or virtual keyboard or provided via some other text input mechanism, rather than through voice input (Last, para 0002).
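The limitation that distinguishes claim 2 is conditional: context information is obtained only in response to a failed intent determination. A minimal Python sketch of that fallback flow follows; classify_intent, fetch_context, and the confidence threshold are hypothetical placeholders, not disclosures of the application or of either reference.

# Illustrative sketch only: attempt intent classification on the utterance
# alone, and obtain context (then retry) only when the first attempt fails.
# The callables and the 0.7 threshold are assumptions for illustration.
from typing import Optional

CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff for "intent determined"

def determine_intent(utterance: str, classify_intent, fetch_context) -> Optional[str]:
    intent, confidence = classify_intent(utterance)
    if confidence >= CONFIDENCE_THRESHOLD:
        return intent  # intent determined without consulting context
    # Failure path: obtain context information related to the utterance,
    # then re-run classification with the context attached.
    context = fetch_context(utterance)
    intent, confidence = classify_intent(utterance, context=context)
    return intent if confidence >= CONFIDENCE_THRESHOLD else None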
Regarding Claim 8, Ullrich discloses the computing apparatus of claim 7, wherein the obtaining of the utterance data representing the utterance and the context information related to the utterance includes: obtaining the utterance data (Ullrich, para 0225, the method includes receiving context data related to the user's surroundings at block 1806. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data related to the user's surroundings. Context data may include sensor data 110 such as audio and video data capturing body language and spoken words from a user's conversation partner); and providing the utterance data to the intent classification model to determine the intent of the utterance (Ullrich, para 0173, The context subsystem 300 may form a context prompt 138 by tokenizing received context data. The context prompt 138 may indicate an inference of the user's conversation intent based on data such as historical speech patterns and known device identities; OR, Ullrich, para 0183, the method includes generating a context prompt at block 1310. For example, the context subsystem 300 illustrated in FIG. 3 may generate a context prompt. The context prompt may be a string of tokens representing an inference of the user's intent with regard to their multimedia artifacts). Ullrich does not specifically disclose obtaining the context information related to the utterance in response to failing to determine the intent of the utterance.

However, Last, in the same field of endeavor, discloses obtaining the context information related to the utterance in response to failing to determine the intent of the utterance (Last, para 0030, According to a third aspect, there is provided a computer-implemented method for determining a user intent from a speech input to effect a user intended action, the method comprising: … attempting to interpret, by each processing node, the same speech input based on the subset of words directly relevant to its associated context, to extract therefrom an output indicative of user intent, whereby each processing node is unable to interpret any portion of the same speech input containing a word outside of its subset of words, whereby a portion of the same speech input relating to the particular context of a first of the nodes is interpretable to the first processing node but is not interpretable to a second of the processing nodes; OR, Last, para 0031, According to a fourth aspect, there is provided herein a processing node of a computing system for determining a user intent from a speech input to effect a user intended action, wherein the processing node is capable of understanding only a subset of words directly relevant to a particular context associated with the processing node, wherein the processing node is configured to attempt to interpret the speech input based on the subset of words directly relevant to the associated context, to extract therefrom an output indicative of user intent, whereby each processing node is unable to interpret any portion of the same speech input containing a word outside of its subset of words …). Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of Last in the method of Ullrich, because this would enable implementation of different forms of intent recognition across various contexts, one such context being the currently more prevalent online ‘chatbots’, which typically interface with the user through text input typed on a physical or virtual keyboard or provided via some other text input mechanism, rather than through voice input (Last, para 0002).
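Note that the quoted Last passages describe an arrangement structurally different from a simple retry: several processing nodes each attempt the same speech input, and each node can interpret only the words inside its own context vocabulary, so each contextually relevant portion of the input is picked up by exactly the node whose context it matches. A minimal Python sketch under that reading follows; the node contexts and vocabularies below are invented for illustration and are not Last's code.

# Illustrative sketch only: context-scoped processing nodes in the spirit
# of Last (paras 0030-0031). Each node understands only its own vocabulary
# and cannot interpret any portion of the input outside that subset. The
# contexts and word lists are invented for illustration.
class ProcessingNode:
    def __init__(self, context: str, vocabulary: set[str]):
        self.context = context
        self.vocabulary = vocabulary

    def interpret(self, speech_input: str):
        # Keep only the words this node understands; anything else is
        # uninterpretable to it.
        words = [w for w in speech_input.lower().split() if w in self.vocabulary]
        return (self.context, words) if words else None

NODES = [
    ProcessingNode("media", {"play", "pause", "music", "song"}),
    ProcessingNode("climate", {"temperature", "warmer", "cooler", "fan"}),
]

def interpret_all(speech_input: str):
    # Every node attempts the same input; only contextually relevant
    # portions yield an output indicative of user intent.
    return [r for r in (n.interpret(speech_input) for n in NODES) if r]

# Example: interpret_all("play a song and make it warmer")
# -> [("media", ["play", "song"]), ("climate", ["warmer"])]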
Regarding Claim 14, Ullrich discloses the non-transitory computer-readable recording medium of claim 13, wherein the obtaining of the utterance data representing the utterance and the context information related to the utterance includes: obtaining the utterance data (Ullrich, para 0225, the method includes receiving context data related to the user's surroundings at block 1806. For example, the context subsystem 300 illustrated in FIG. 3 may receive context data related to the user's surroundings. Context data may include sensor data 110 such as audio and video data capturing body language and spoken words from a user's conversation partner); and providing the utterance data to the intent classification model to determine the intent of the utterance (Ullrich, para 0173, The context subsystem 300 may form a context prompt 138 by tokenizing received context data. The context prompt 138 may indicate an inference of the user's conversation intent based on data such as historical speech patterns and known device identities; OR, Ullrich, para 0183, the method includes generating a context prompt at block 1310. For example, the context subsystem 300 illustrated in FIG. 3 may generate a context prompt. The context prompt may be a string of tokens representing an inference of the user's intent with regard to their multimedia artifacts). Ullrich does not specifically disclose obtaining the context information related to the utterance in response to failing to determine the intent of the utterance.

However, Last, in the same field of endeavor, discloses obtaining the context information related to the utterance in response to failing to determine the intent of the utterance (Last, para 0030, According to a third aspect, there is provided a computer-implemented method for determining a user intent from a speech input to effect a user intended action, the method comprising: … attempting to interpret, by each processing node, the same speech input based on the subset of words directly relevant to its associated context, to extract therefrom an output indicative of user intent, whereby each processing node is unable to interpret any portion of the same speech input containing a word outside of its subset of words, whereby a portion of the same speech input relating to the particular context of a first of the nodes is interpretable to the first processing node but is not interpretable to a second of the processing nodes; OR, Last, para 0031, According to a fourth aspect, there is provided herein a processing node of a computing system for determining a user intent from a speech input to effect a user intended action, wherein the processing node is capable of understanding only a subset of words directly relevant to a particular context associated with the processing node, wherein the processing node is configured to attempt to interpret the speech input based on the subset of words directly relevant to the associated context, to extract therefrom an output indicative of user intent, whereby each processing node is unable to interpret any portion of the same speech input containing a word outside of its subset of words …). Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of Last in the method of Ullrich, because this would enable implementation of different forms of intent recognition across various contexts, one such context being the currently more prevalent online ‘chatbots’, which typically interface with the user through text input typed on a physical or virtual keyboard or provided via some other text input mechanism, rather than through voice input (Last, para 0002).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MULUGETA T. DUGDA, whose telephone number is (703) 756-1106. The examiner can normally be reached Mon - Fri, 4:30 am - 7:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D. Shah, can be reached at 571-270-1650. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MULUGETA TUJI DUGDA/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

03/18/2026

Prosecution Timeline

Aug 02, 2024
Application Filed
Mar 18, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12597424
METHOD AND APPARATUS FOR DETERMINING SKILL FIELD OF DIALOGUE TEXT
2y 5m to grant · Granted Apr 07, 2026
Patent 12592244
REDUCED-BANDWIDTH SPEECH ENHANCEMENT WITH BANDWIDTH EXTENSION
2y 5m to grant · Granted Mar 31, 2026
Patent 12579366
DEVELOPMENT PLATFORM FOR FACILITATING THE OPTIMIZATION OF NATURAL-LANGUAGE-UNDERSTANDING SYSTEMS
2y 5m to grant · Granted Mar 17, 2026
Patent 12573417
A COMPUTER-IMPLEMENTED METHOD OF PROVIDING DATA FOR AN AUTOMATED BABY CRY ASSESSMENT
2y 5m to grant · Granted Mar 10, 2026
Patent 12567419
VOICEPRINT DRIFT DETECTION AND UPDATE
2y 5m to grant · Granted Mar 03, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+18.8%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 49 resolved cases by this examiner. Grant probability derived from career allow rate.
