Prosecution Insights
Last updated: April 19, 2026
Application No. 18/081,541

DEVICE SENSOR INFORMATION AS CONTEXT FOR INTERACTIVE CHATBOT

Status: Non-Final OA (§103)
Filed: Dec 14, 2022
Examiner: GUERRA-ERAZO, EDGAR X
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 4 (Non-Final)
Grant Probability: 84% (Favorable)
Estimated OA Rounds: 4-5
Estimated Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (above average; 671 granted / 796 resolved; +22.3% vs TC average)
Interview Lift: +15.1% (strong; resolved cases with vs. without an interview)
Typical Timeline: 2y 10m average prosecution; 13 applications currently pending
Career History: 809 total applications across all art units
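The headline allow rate can be reproduced from the counts shown above; this is a minimal sketch (variable names are illustrative, not from any analytics API):

```python
# Reproduce the examiner's headline allow rate from the counts above.
granted = 671
resolved = 796

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 84.3%, displayed as 84%

# The "+22.3% vs TC avg" delta implies a Tech Center average near 62%.
implied_tc_avg = allow_rate - 0.223
print(f"Implied TC average: {implied_tc_avg:.1%}")
```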

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§103: 34.3% (-5.7% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 796 resolved cases.
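As a quick consistency check on the deltas above (a sketch; the dictionaries are mine, not the tool's), each statute rate minus its negative gap should recover the Tech Center average estimate:

```python
# Sanity-check the statute table: rate - gap should give the
# Tech Center average estimate for each statute (all values in %).
rates = {"§101": 22.1, "§103": 34.3, "§102": 17.9, "§112": 6.3}
gaps = {"§101": -17.9, "§103": -5.7, "§102": -22.1, "§112": -33.7}

for statute, rate in rates.items():
    tc_avg = rate - gaps[statute]  # subtracting a negative gap adds it back
    print(f"{statute}: implied TC average = {tc_avg:.1f}%")
```

Every implied baseline works out to 40.0%, so the four displayed deltas are internally consistent with a single estimated Tech Center average.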

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

Claims 1-20 and 23-24 are pending. Claims 1, 23, and 24 are independent. Claims 2-20 depend from claim 1. Claims 21 and 22 are cancelled. This application was published as U.S. 2024/0205174.

Response to Arguments

Applicant's arguments with respect to the rejections of claim 1 under 35 U.S.C. §103 have been fully considered and are persuasive. In Applicant's arguments received 9 Dec 2025, Applicant is correct that Baeuml et al. (US 2023/0074406, hereinafter Baeuml) is an invalid reference. It is noted, however, that Baeuml was cited in three previous office actions (3 Apr 2025, 10 Jul 2025, and 8 Oct 2025), and that three previous interviews (12 Jun 2025, 27 Aug 2025, and 16 Sep 2025) were conducted, without Applicant raising the invalidity of the reference until the response of 9 Dec 2025. Applicant is reminded to thoroughly review each office action and provide a complete response in reply, at least for the sake of compact prosecution. In the interest of fairness and accountability from the Office, the 35 U.S.C. §103 rejection has been withdrawn.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6-7, 10-18, and 23-24 are rejected under 35 U.S.C.
103 as being unpatentable over Marshall et al. (US 2024/0003950), hereinafter Marshall, in view of Heckman et al. (Ting: A device to alert homeowners of scintillations which preclude electrical fires, Oct 2018), hereinafter Heckman, in further view of Neumann et al. (US 2024/0071598), hereinafter Neumann.

With regards to claim 1, Marshall teaches:

A method implemented by one or more processors, [Fig 2, item 204c] the method comprising:

receiving non-acoustic sensor data from a non-acoustic sensor [Fig 2, item 204, Par [0091], where Marshall teaches a client device (204) that contains a non-acoustic sensor that receives "electrical activity occurring on the electrical wires 202 of the branch circuit and capture the sensed electricity as waveform data"]

determining whether the received non-acoustic sensor data satisfies one or more conditions; and [Marshall Fig 2, item 214a, Par [0110] states "module 214a evaluates whether the number of peaks in the transient exceeds a predefined threshold and whether the rise time is greater than a predefined threshold. If so, the module 214a identifies the transient as a potential electrical discharge", where an electrical discharge is a condition]

in response to determining that the received non-acoustic sensor data satisfies the one or more conditions: processing the received non-acoustic sensor data to generate a natural language description for the non-acoustic sensor data, [Fig 2, Par [0186], where Marshall states processing data "collected from sensor device 214 and analyzed by transient analysis module 214b, and a decision tree algorithm identified by alert generation module 214b in response to the hazard information, to initiate an automated conversation with the customer via an application installed on the customer's computing device (e.g., a mobile device or smartphone)." A decision tree algorithm is one of the "AI-based classification techniques" which are used to "classify incoming electrical discharge activity pattern data using the library of known patterns" and provide "specific make(s) and model(s) of devices that are well-known hazards", which are descriptions for the non-acoustic sensor data. (Par [0168]) Descriptions are used by the virtual assistant "to generate prompts for display to the customer in order to solicit responses from the customer and ask the customer to perform certain actions as part of traversing the decision tree to reach a resolution", where a computer providing prompts to a customer is natural language processing of the description for the non-acoustic sensor data]

processing, using a large language model (LLM), the generated natural language description for the non-acoustic sensor data, as input, to generate an LLM output, [Par [0186] states "generative natural language model (such as ChatGPT™ available from OpenAI, Inc.) to conduct the conversation with the customer", where ChatGPT™ uses an LLM to generate an LLM output]

With regards to claim 1, Marshall fails to teach: of a client device;

With regards to claim 1, Heckman teaches: of a client device; [Heckman teaches a "smart plug-like technology that will detect and alert homeowners to the presence of damaged and arcing wires", which is a client device. It would be obvious to one of ordinary skill in the art to combine the teachings of Marshall with the teachings of Heckman. The motivation to combine the teachings of Marshall with Heckman is that Heckman is a common inventor of both teachings and Heckman teaches the physical embodiment of the invention of Marshall]

With regards to claim 1, Marshall in view of Heckman fails to teach: generating, based on the LLM output, a natural language statement that is responsive to the received non-acoustic sensor data, and causing the natural language statement to be rendered at the client device via an interactive chatbot that is installed at the client device or that is otherwise accessible via the client device

With regards to claim 1, Neumann teaches:

generating, based on the LLM output, a natural language statement that is responsive to the received non-acoustic sensor data, and [Neumann Fig 1 teaches data source (112) includes a chatbot where a "chatbot may include a question, response, statement, or the like input by user, question, response, statement, or the like generated for user … [and] chatbot may include generative artificial intelligence (AI), large language model (LLM), or the like." (Par [003]) where the chatbot generates a natural language statement from the LLM. Neumann Fig 1 teaches receiving non-acoustic sensor data where "one or more sensors such as, without limitation, wearable device, motion sensor, or other sensors or devices described herein may provide foods that may be used as subsequent input data or training data for one or more generative machine learning models described herein" (Par [0083]), wherein the sensor is used for the LLM model]

causing the natural language statement to be rendered at the client device via an interactive chatbot that is installed at the client device or that is otherwise accessible via the client device [Neumann Fig 4 teaches chatbot system (400), which causes a response (416), such as natural language statements, to be rendered on the client device (404). (See Par [0108-110]) It would be obvious to one of ordinary skill in the art at the effective filing date of Applicant's invention to combine the processing of non-acoustic sensor data and generation of a natural language description as taught by Marshall in view of Heckman with the processing of the natural language description with an LLM as taught by Neumann. The motivation to combine the teachings of Neumann with the teachings of Marshall in view of Heckman is that Neumann teaches "computing device may be configured to retrain one or more generative machine learning models based on feedback or update training data of one or more generative machine learning models by integrating feedback into the original training data. In such embodiment, iterative feedback loop may allow machine learning module to adapt to the user's preferences" (Par [0083]), which improves the user experience in the invention as taught by Marshall in view of Heckman]

With regards to claim 6, Marshall in view of Heckman and Neumann teaches: All the limitations of claim 1

wherein processing the received non-acoustic sensor data to generate the natural language description responsive to the non-acoustic sensor data comprises: determining, based on the received non-acoustic sensor data, a client device state of the client device or an environment state of an environment of the client device, and [Marshall Par [0091] states "single sensor is capable of seeing electrical discharge signals throughout the full electrical distribution system 220, including the other branch circuit", which describes the electrical state of the power distribution system, which is an environment state of an environment of the client device]

generating the natural language description to reflect the client device state or the environment state of the client device.
[Marshall Par [0186] states "virtual assistant application can utilize the decision tree to generate prompts for display to the customer in order to solicit responses from the customer and ask the customer to perform certain actions as part of traversing the decision tree to reach a resolution", which is a natural language description from ChatGPT™ that reflects the electrical state, which is an environment state of the client device]

With regards to claim 7, Marshall in view of Heckman and Neumann teaches: All the limitations of claim 1

wherein determining whether the non-acoustic received sensor data satisfies the one or more conditions comprises: determining whether a numerical value, in the received non-acoustic sensor data detected by the non-acoustic sensor, satisfies a threshold value; determining that the received non-acoustic sensor data satisfies the one or more conditions based on determining that the numerical value satisfies the threshold value; and [Marshall Fig 2, item 214a, Par [0110] describes Methods A, B, and C, which determine whether a numerical value in the received non-acoustic sensor data detected by the non-acoustic sensor satisfies a threshold value]

determining that the received non-acoustic sensor data does not satisfy the one or more conditions based on determining that the numerical value does not satisfy the threshold value. [Marshall Fig 7A, Par [0111] states "most of the waveform data is general electrical background noise generated by items such as appliances and electromagnetic signals in the air", which is sensor data that does not satisfy the one or more conditions based on determining that the numerical value does not satisfy the threshold value]

With regards to claim 10, Marshall in view of Heckman and Neumann teaches: All the limitations of claim 1

wherein the one or more conditions are specific to a current configuration, for one or more adjustable settings, of the interactive chatbot and for the client device. [Marshall Par [0186] teaches that the chatbot utilizes a "decision tree to generate prompts for display to the customer in order to solicit responses from the customer and ask the customer to perform certain actions as part of traversing the decision tree to reach a resolution", which corresponds to one or more conditions that are specific to a current configuration of the interactive chatbot and for the client device. The decision tree (Fig 30A-D, item 3000, Par [0185]) is an algorithm and has one or more adjustable settings]

With regards to claim 11, Marshall in view of Heckman and Neumann teaches: All the limitations of claim 10

wherein the one or more conditions are specific to a type, service, or function of the one or more adjustable settings of the interactive chatbot. [Marshall Par [0186] teaches that the decision tree corresponds to one or more conditions that are specific to a type, service, or function of the one or more adjustable settings of the interactive chatbot, and that if the decision tree is not able to resolve the issue the chatbot can "automatically connect a live service representative to the conversation to assist the customer"]

With regards to claim 12, Marshall in view of Heckman and Neumann teaches: All the limitations of claim 1

further comprising: in response to determining that the received non-acoustic sensor data does not satisfy the one or more conditions: bypassing processing of the received non-acoustic sensor data to generate the natural language description, and/or bypassing processing, using the LLM, the generated natural language description. [Marshall Fig 7A, Par [0111] states "most of the waveform data is general electrical background noise generated by items such as appliances and electromagnetic signals in the air", which is sensor data that does not satisfy the one or more conditions and does not alert the customer of a hazard per the decision tree (Par [0185-56]), which bypasses processing, using the LLM, the generated natural language description]

With regards to claim 13, Marshall in view of Heckman and Neumann teaches: All the limitations of claim 12

further comprising: in response to determining that the received non-acoustic sensor data does not satisfy the one or more conditions: discarding the received non-acoustic sensor data without performing any further processing of the received non-acoustic sensor data. [Marshall teaches "One or more sensor devices, coupled to a circuit, sense a multiple voltage cycle waveform generated by electrical activity on the circuit. A computing device coupled to the sensor devices detects a pattern of electrical discharge activity occurring in the multiple voltage cycle waveform" (Par [0027]) and "if the detected pattern does not match any of the known arcing patterns, module 214a can continue by rejecting or discarding the detected pattern as a false positive hazard even" (Par [0173])]

With regards to claim 14, Marshall in view of Heckman and Neumann teaches: All the limitations of claim 1

wherein causing the natural language statement to be rendered at the client device via the interactive chatbot comprises: causing the natural language statement to be visually rendered via a graphical interface of the interactive chatbot. [Neumann Fig 1 teaches "one or more data sources 112 may include an application residing on user device 120 ... [where] A user interface may include a graphical user interface (GUI)" (Par [0023])]

With regards to claim 15, Marshall in view of Heckman and Neumann teaches: All the limitations of claim 1

further comprising: receiving audio data from an acoustic sensor of the client device; [Neumann teaches "voice user interface (VUI)" (Par [0023]) and Fig 4 teaches "one or both of submission 412 and response 416 are audio-based communication" (Par [0108]), which are data received from an acoustic sensor]

performing speech recognition, based on the audio data, to generate recognized natural language content recognized from a spoken utterance captured by the audio data; and [Neumann Fig 4 teaches a chatbot, which is an LLM (Par [0030]), that receives audio-based communication for submissions and responses (Par [0108]), where the LLM generates natural language content from spoken utterances submitted by the user through the user device]

processing, using the LLM and along with the generated natural language description for the non-acoustic sensor data, the recognized natural language content to generate the LLM output. [Marshall states "generative natural language model (such as ChatGPT™ available from OpenAI, Inc.) to conduct the conversation with the customer" (Par [0186]), where ChatGPT™ uses an LLM to generate an LLM output, and the LLM also produces a natural language description from sensor data (Par [0186]), as previously discussed]

With regards to claim 16, Marshall, Heckman, and Neumann teaches: All the limitations of claim 15

wherein processing the generated natural language description for the non-acoustic sensor data along with the recognized content using the LLM is in response to the audio data and the non-acoustic sensor data being received in a same human-to-computer dialog and/or being received within a threshold period of time of one another.
[Neumann Fig 4 teaches a chatbot interacting with the user via interface (404), where a chatbot is "an artificial intelligence (AI) program designed to simulate human conversation or interaction through text, voice-based or image-based communication" (Par [0030]), which is a human-to-computer dialog]

With regards to claim 17, Marshall, Heckman, and Neumann teaches: All the limitations of claim 15

wherein processing the generated natural language description for the non-acoustic sensor data along with the recognized content using the LLM comprises: priming the LLM by processing, using the LLM, the generated natural language description for the non-acoustic sensor data; and [Marshall Par [0186] teaches "generat[ing] prompts for display to the customer in order to solicit responses from the customer", which is priming the LLM before the conversation with the customer]

processing, using the LLM and after priming the LLM, the recognized natural language content to generate the LLM output. [Marshall Par [0186] teaches "utilize a generative natural language model (such as ChatGPT™ available from OpenAI, Inc.) to conduct the conversation with the customer"]

With regards to claim 18, Marshall in view of Heckman and Neumann teaches: All the limitations of claim 1

further comprising: processing, using the LLM and along with the generated natural language description for the non-acoustic sensor data, [Marshall Fig 2, Par [0186] states processing data "collected from sensor device 214 and analyzed by transient analysis module 214b, and a decision tree algorithm identified by alert generation module 214b in response to the hazard information, to initiate an automated conversation with the customer via an application installed on the customer's computing device (e.g., a mobile device or smartphone)." where a "generative natural language model (such as ChatGPT™ available from OpenAI, Inc.)" is used "to conduct the conversation with the customer", where ChatGPT™ uses an LLM and generates a natural language description for the non-acoustic sensor data]

context data for an ongoing human-to-computer dialog between a user of the client device and the interactive chatbot. [Neumann teaches a context vector (Par [0090]) and a chatbot or "LLM may include a transformer model of an attention mechanism. Attention mechanisms, as described above, may provide context for any position in the input sequence" (Par [0091]), which is context data for the chatbot]

With regards to claim 23, Marshall teaches:

One or more non-transitory computer-readable media comprising instructions that, when executed by one or more processors of one or more electronic devices, are to cause the one or more electronic devices to perform a method for generating a natural language statement responsive to non-acoustic sensor data, the method implemented by one or more processors, and the method comprising: [Marshall teaches "implementation can be as a computer program product, i.e., a computer program tangibly embodied in a machine-readable storage device, for execution by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, and/or multiple computer" (Par [0190])]

receiving non-acoustic sensor data from a non-acoustic sensor; [Marshall Fig 2, item 204, Par [0091], where Marshall teaches a client device (204) that contains a non-acoustic sensor that receives "electrical activity occurring on the electrical wires 202 of the branch circuit and capture the sensed electricity as waveform data"]

determining whether the received non-acoustic sensor data satisfies one or more conditions; and [Marshall Fig 2, item 214a, Par [0110] states "module 214a evaluates whether the number of peaks in the transient exceeds a predefined threshold and whether the rise time is greater than a predefined threshold.
If so, the module 214a identifies the transient as a potential electrical discharge", where an electrical discharge is a condition]

in response to determining that the received non-acoustic sensor data satisfies the one or more conditions: processing the received non-acoustic sensor data to generate a natural language description for the non-acoustic sensor data, [Marshall Fig 2, Par [0186], where Marshall states processing data "collected from sensor device 214 and analyzed by transient analysis module 214b, and a decision tree algorithm identified by alert generation module 214b in response to the hazard information, to initiate an automated conversation with the customer via an application installed on the customer's computing device (e.g., a mobile device or smartphone)." A decision tree algorithm is one of the "AI-based classification techniques" which are used to "classify incoming electrical discharge activity pattern data using the library of known patterns" and provide "specific make(s) and model(s) of devices that are well-known hazards", which are descriptions for the non-acoustic sensor data. (Par [0168]) Descriptions are used by the virtual assistant "to generate prompts for display to the customer in order to solicit responses from the customer and ask the customer to perform certain actions as part of traversing the decision tree to reach a resolution", where a computer providing prompts to a customer is natural language processing of the description for the non-acoustic sensor data]

processing, using a large language model (LLM), the generated natural language description for the non-acoustic sensor data, as input, to generate an LLM output, generating, based on the LLM output, a natural language statement that is responsive to the received non-acoustic sensor data, and [Marshall Par [0186] states "generative natural language model (such as ChatGPT™ available from OpenAI, Inc.) to conduct the conversation with the customer", where ChatGPT™ uses an LLM to generate an LLM output]

With regards to claim 23, Marshall fails to teach: of a client device;

With regards to claim 23, Heckman teaches: of a client device; [Heckman teaches a "smart plug-like technology that will detect and alert homeowners to the presence of damaged and arcing wires", which is a client device. It would be obvious to one of ordinary skill in the art to combine the teachings of Marshall with the teachings of Heckman. The motivation to combine the teachings of Marshall with Heckman is that Heckman is a common inventor of both teachings and Heckman teaches the physical embodiment of the invention of Marshall]

With regards to claim 23, Marshall in view of Heckman fails to teach: generating, based on the LLM output, a natural language statement that is responsive to the received non-acoustic sensor data, and causing the natural language statement to be rendered at the client device via an interactive chatbot that is installed at the client device or that is otherwise accessible via the client device

With regards to claim 23, Neumann teaches:

generating, based on the LLM output, a natural language statement that is responsive to the received non-acoustic sensor data, and [Neumann Fig 1 teaches data source (112) includes a chatbot where a "chatbot may include a question, response, statement, or the like input by user, question, response, statement, or the like generated for user … [and] chatbot may include generative artificial intelligence (AI), large language model (LLM), or the like." (Par [003]) where the chatbot generates a natural language statement from the LLM. Neumann Fig 1 teaches receiving non-acoustic sensor data where "one or more sensors such as, without limitation, wearable device, motion sensor, or other sensors or devices described herein may provide foods that may be used as subsequent input data or training data for one or more generative machine learning models described herein" (Par [0083]), wherein the sensor is used for the LLM model]

causing the natural language statement to be rendered at the client device via an interactive chatbot that is installed at the client device or that is otherwise accessible via the client device [Neumann Fig 4 teaches chatbot system (400), which causes a response (416), such as natural language statements, to be rendered on the client device (404). (See Par [0108-110]) It would be obvious to one of ordinary skill in the art at the effective filing date of Applicant's invention to combine the processing of non-acoustic sensor data and generation of a natural language description as taught by Marshall in view of Heckman with the processing of the natural language description with an LLM as taught by Neumann. The motivation to combine the teachings of Neumann with the teachings of Marshall in view of Heckman is that Neumann teaches "computing device may be configured to retrain one or more generative machine learning models based on feedback or update training data of one or more generative machine learning models by integrating feedback into the original training data. In such embodiment, iterative feedback loop may allow machine learning module to adapt to the user's preferences" (Par [0083]), which improves the user experience in the invention as taught by Marshall in view of Heckman]

With regards to claim 24, Marshall teaches:

A system comprising: one or more processors; and [Marshall teaches "Method steps can be performed by one or more special-purpose processors executing a computer program to perform functions of the technology by operating on input data and/or generating output data" (Par [0191])]

one or more non-transitory computer-readable media comprising instructions that, when executed by the one or more processors, are to cause one or more electronic devices to perform a method for generating a natural language statement responsive to non-acoustic sensor data, [Marshall teaches "implementation can be as a computer program product, i.e., a computer program tangibly embodied in a machine-readable storage device, for execution by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, and/or multiple computers" (Par [0190])]

the method implemented by one or more processors, and the method comprising: receiving non-acoustic sensor data from a non-acoustic sensor; [Marshall Fig 2, item 204, Par [0091], where Marshall teaches a client device (204) that contains a non-acoustic sensor that receives "electrical activity occurring on the electrical wires 202 of the branch circuit and capture the sensed electricity as waveform data"]

determining whether the received non-acoustic sensor data satisfies one or more conditions; and [Marshall Fig 2, item 214a, Par [0110] states "module 214a evaluates whether the number of peaks in the transient exceeds a predefined threshold and whether the rise time is greater than a predefined threshold.
If so, the module 214a identifies the transient as a potential electrical discharge", where an electrical discharge is a condition]

in response to determining that the received non-acoustic sensor data satisfies the one or more conditions: processing the received non-acoustic sensor data to generate a natural language description for the non-acoustic sensor data, [Fig 2, Par [0186], where Marshall states processing data "collected from sensor device 214 and analyzed by transient analysis module 214b, and a decision tree algorithm identified by alert generation module 214b in response to the hazard information, to initiate an automated conversation with the customer via an application installed on the customer's computing device (e.g., a mobile device or smartphone)." A decision tree algorithm is one of the "AI-based classification techniques" which are used to "classify incoming electrical discharge activity pattern data using the library of known patterns" and provide "specific make(s) and model(s) of devices that are well-known hazards", which are descriptions for the non-acoustic sensor data. (Par [0168]) Descriptions are used by the virtual assistant "to generate prompts for display to the customer in order to solicit responses from the customer and ask the customer to perform certain actions as part of traversing the decision tree to reach a resolution", where a computer providing prompts to a customer is natural language processing of the description for the non-acoustic sensor data]

processing, using a large language model (LLM), the generated natural language description for the non-acoustic sensor data, as input, to generate an LLM output, [Par [0186] states "generative natural language model (such as ChatGPT™ available from OpenAI, Inc.) to conduct the conversation with the customer", where ChatGPT™ uses an LLM to generate an LLM output]

With regards to claim 24, Marshall fails to teach: of a client device;

With regards to claim 24, Heckman teaches: of a client device; [Heckman teaches a "smart plug-like technology that will detect and alert homeowners to the presence of damaged and arcing wires", which is a client device. It would be obvious to one of ordinary skill in the art to combine the teachings of Marshall with the teachings of Heckman. The motivation to combine the teachings of Marshall with Heckman is that Heckman is a common inventor of both teachings and Heckman teaches the physical embodiment of the invention of Marshall]

With regards to claim 24, Marshall in view of Heckman fails to teach: generating, based on the LLM output, a natural language statement that is responsive to the received non-acoustic sensor data, and causing the natural language statement to be rendered at the client device via an interactive chatbot that is installed at the client device or that is otherwise accessible via the client device.

With regards to claim 24, Neumann teaches:

generating, based on the LLM output, a natural language statement that is responsive to the received non-acoustic sensor data, and [Neumann Fig 1 teaches data source (112) includes a chatbot where a "chatbot may include a question, response, statement, or the like input by user, question, response, statement, or the like generated for user … [and] chatbot may include generative artificial intelligence (AI), large language model (LLM), or the like." (Par [003]) where the chatbot generates a natural language statement from the LLM. Neumann Fig 1 teaches receiving non-acoustic sensor data where "one or more sensors such as, without limitation, wearable device, motion sensor, or other sensors or devices described herein may provide foods that may be used as subsequent input data or training data for one or more generative machine learning models described herein" (Par [0083]), wherein the sensor is used for the LLM model]

causing the natural language statement to be rendered at the client device via an interactive chatbot that is installed at the client device or that is otherwise accessible via the client device [Neumann Fig 4 teaches chatbot system (400), which causes a response (416), such as natural language statements, to be rendered on the client device (404). (See Par [0108-110]) It would be obvious to one of ordinary skill in the art at the effective filing date of Applicant's invention to combine the processing of non-acoustic sensor data and generation of a natural language description as taught by Marshall in view of Heckman with the processing of the natural language description with an LLM as taught by Neumann. The motivation to combine the teachings of Neumann with the teachings of Marshall in view of Heckman is that Neumann teaches "computing device may be configured to retrain one or more generative machine learning models based on feedback or update training data of one or more generative machine learning models by integrating feedback into the original training data. In such embodiment, iterative feedback loop may allow machine learning module to adapt to the user's preferences" (Par [0083]), which improves the user experience in the invention as taught by Marshall in view of Heckman]

Claims 2-5 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Marshall et al. (US 2024/0003950) in view of Heckman et al.
(Ting: A device to alert homeowners of scintillations which precede electrical fires, Oct 2018) and Neumann et al. (US 2024/0071598), in further view of Peshkov et al. (US 2024/0135619, hereinafter Peshkov).

With regards to claim 2, Marshall in view of Heckman and Neumann teaches: All the limitations of claim 1

With regards to claim 2, Marshall in view of Heckman and Neumann fails to teach: further comprising: tuning, based on the LLM output, a voice of a virtual character that visually represents the interactive chatbot.

With regard to claim 2, Peshkov teaches: further comprising: tuning, based on the LLM output, a voice of a virtual character that visually represents the interactive chatbot. [Peshkov Par [0072] teaches chatbot (Fig 1, item 104) that generates a response through large language model (LLM) output and creates avatar models, or virtual characters, that communicate with the user, where the avatar models visually represent the interactive chatbot. The chatbot can also be represented with an automated voice system (104, Par [0073]) that can tune an audio message or voice to go along with a virtual character such as a woman’s face. (Par [0051]) It would be obvious to one of ordinary skill in the art at the effective filing date of applicant’s invention to combine the voice and virtual character as taught by Peshkov with the interactive chatbot as taught by Marshall in view of Heckman and Neumann.
The motivation to combine the inventions of Marshall in view of Heckman and Neumann with the invention of Peshkov is that “avatar communication systems and methods can improve communication between machines (e.g., a chatbot or AI assistant) and human users” (Peshkov, Par [0021])]

With regards to claim 3, Marshall, Heckman, Neumann, and Peshkov teaches: All the limitations of claim 2 wherein causing the natural language statement to be rendered at the client device via the chatbot comprises: causing the natural language statement to be audibly rendered, in the tuned voice of the virtual character, via an audible interface of the client device. [Peshkov Par [0072] teaches chatbot (Fig 1, item 104) that generates a response through large language model (LLM) output and creates avatar models that communicate with the user, where the avatar models visually represent the interactive chatbot, which communicates with user device (106) via an audible interface of the client device. (Fig 1, Par [0054])]

With regards to claim 4, Marshall, Heckman and Neumann teaches: All the limitations of claim 1

With regards to claim 4, Marshall in view of Heckman and Neumann fails to teach: further comprising: modifying, based on the LLM output, a visual appearance of a graphical interface of the interactive chatbot, wherein modifying the visual appearance of the graphical interface of the interactive chatbot comprises: modifying a character visual appearance of a virtual character displayed at the graphical interface of the interactive chatbot; and/or modifying a background of the graphical interface of the interactive chatbot.
With regard to claim 4, Peshkov teaches: further comprising: modifying, based on the LLM output, a visual appearance of a graphical interface of the interactive chatbot, wherein modifying the visual appearance of the graphical interface of the interactive chatbot comprises: modifying a character visual appearance of a virtual character displayed at the graphical interface of the interactive chatbot; and/or modifying a background of the graphical interface of the interactive chatbot. [Peshkov Fig 1, item 102, Par [0045] states “intermediary system 102 can be configured to provide a user interface that enables a user to create or modify an avatar model. For example, intermediary system 102 can support an avatar editor (which may provide functionality tailored to editing, constructing, or building avatars) or an augmented reality design platform (e.g., a low-code or no-code augmented reality design platform)” where an avatar is the visual appearance of a virtual character displayed at the graphical interface of the interactive chatbot. It would be obvious to one of ordinary skill in the art at the effective filing date of applicant’s invention to combine the modification of a virtual character as taught by Peshkov with the interactive chatbot as taught by Marshall in view of Heckman and Neumann.
The motivation to combine the inventions of Marshall in view of Heckman and Neumann with the invention of Peshkov is that “avatar communication systems and methods can improve communication between machines (e.g., a chatbot or AI assistant) and human users” (Peshkov, Par [0021])]

With regards to claim 5, Marshall, Heckman, Neumann and Peshkov teaches: All the limitations of claim 4 wherein modifying the visual appearance of the graphical interface of the interactive chatbot comprises modifying the character visual appearance of the virtual character, and wherein modifying the character visual appearance of the virtual character comprises: controlling, based on the LLM output, a facial expression, a gesture, [Peshkov Par [0052] states “automatic response system 104 could provide a confirmation message (e.g., an audio message) and expression data corresponding to a cheerful smile” where a smile is a facial expression and also a gesture, as it is nonverbal communication.] and/or a movement of the virtual character.
[Peshkov Par [0073] states “avatar's expressions and body movements can also be controlled by the chat bot running on the remote system, which may use natural language processing and machine learning algorithms.”]

With regards to claim 8, Marshall, Heckman and Neumann teaches: All the limitations of claim 1 wherein determining whether the received non-acoustic sensor data satisfies the one or more conditions comprises: determining that the received non-acoustic sensor data satisfies the one or more conditions based on determining that the received non-acoustic sensor data is of the type for which the particular chatbot is responsive; and [Marshall Par [0015] teaches a “sensor device for detecting electrical discharges that precede electrical fires in electrical wiring,” which is a type of non-acoustic sensor data the chatbot is responsive to because of the risk of electrical fires] determining that the received non-acoustic sensor data fails to satisfy the one or more conditions based on determining that the received non-acoustic sensor data is not of the type for which the particular chatbot is responsive.
[Marshall Fig 7A, Par [0111] states “most of the waveform data is general electrical background noise generated by items such as appliances and electromagnetic signals in the air” where electrical background noise is non-acoustic sensor data of the type for which the chatbot is not responsive]

With regards to claim 8, Marshall, Heckman and Neumann fails to teach: virtual character; determining, based on a particular virtual character being a currently active virtual character for the interactive chatbot, whether the received non-acoustic sensor data is of a type for which the particular virtual character is responsive;

With regards to claim 8, Peshkov teaches: virtual character; [Peshkov Par [0072] teaches creating avatars for non-player characters, which are virtual characters] determining, based on a particular virtual character being a currently active virtual character for the interactive chatbot, whether the received non-acoustic sensor data is of a type for which the particular virtual character is responsive; [Peshkov Par [0072] teaches chatbot (Fig 1, item 104) that generates a response through large language model (LLM) output and creates avatar models that communicate with the user, where the avatar models are based on a particular virtual character for the interactive chatbot and constitute a currently active virtual character that is responsive to users as well as sensor data types. It would be obvious to one of ordinary skill in the art at the effective filing date of applicant’s invention to combine the virtual character as taught by Peshkov with the interactive chatbot as taught by Marshall in view of Heckman and Neumann.
The motivation to combine the inventions of Marshall in view of Heckman and Neumann with the invention of Peshkov is that “avatar communication systems and methods can improve communication between machines (e.g., a chatbot or AI assistant) and human users” (Peshkov, Par [0021])]

With regards to claim 9, Marshall, Heckman and Neumann teaches: All the limitations of claim 1 wherein determining whether the received non-acoustic sensor data satisfies the one or more conditions comprises: determining that the received non-acoustic sensor data satisfies the one or more conditions based on determining that the received non-acoustic sensor data includes content for which the particular chatbot is responsive; and [Marshall Par [0015] teaches a “sensor device for detecting electrical discharges that precede electrical fires in electrical wiring,” which includes content of non-acoustic sensor data the chatbot is responsive to because the content was analyzed by analysis module 214b to determine the risk of electrical fires] determining that the received non-acoustic sensor data fails to satisfy the one or more conditions based on determining that the received non-acoustic sensor data does not include content for which the particular chatbot is responsive.
[Marshall Fig 7A, Par [0111] states “most of the waveform data is general electrical background noise generated by items such as appliances and electromagnetic signals in the air” where electrical background noise is non-acoustic sensor data that does not include content for which the chatbot is responsive]

With regards to claim 9, Marshall, Heckman and Neumann fails to teach: virtual character; determining, based on a particular virtual character being a currently active virtual character for the interactive chatbot, whether the received non-acoustic sensor data includes content for which the particular virtual character is responsive;

With regards to claim 9, Peshkov teaches: virtual character; [Peshkov Par [0072] teaches creating avatars for non-player characters, which are virtual characters] determining, based on a particular virtual character being a currently active virtual character for the interactive chatbot, whether the received non-acoustic sensor data includes content for which the particular virtual character is responsive; [Peshkov Par [0072] teaches chatbot (Fig 1, item 104) that generates a response through large language model (LLM) output and creates avatar models that communicate with the user, where the avatar models are based on a particular virtual character for the interactive chatbot and constitute a currently active virtual character that is responsive to users as well as sensor data content. It would be obvious to one of ordinary skill in the art at the effective filing date of applicant’s invention to combine the virtual character as taught by Peshkov with the interactive chatbot as taught by Marshall in view of Heckman and Neumann.
The motivation to combine the inventions of Marshall in view of Heckman and Neumann with the invention of Peshkov is that “avatar communication systems and methods can improve communication between machines (e.g., a chatbot or AI assistant) and human users” (Peshkov, Par [0021])]

Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Marshall et al. (US 2024/0003950) in view of Heckman et al. (Ting: A device to alert homeowners of scintillations which precede electrical fires, Oct 2018) and Neumann et al. (US 2024/0071598), in further view of Taylor et al. (US 2020/0382449, hereinafter Taylor).

With regards to claim 19, Marshall, Heckman, and Neumann teaches: All the limitations of claim 18 wherein the context data includes a current utterance from the user in the ongoing human-to-computer dialog, [Neumann Fig 4, user submission or utterance (412), (Par [0108-0110])]

With regards to claim 19, Marshall, Heckman, and Neumann fails to teach: a prior utterance from the user in the ongoing human-to-computer dialog, a current response from the interactive chatbot in the ongoing human-to-computer dialog, and/or a prior response from the interactive chatbot in the ongoing human-to-computer dialog.

With regards to claim 19, Taylor teaches: a prior utterance from the user in the ongoing human-to-computer dialog, [Taylor Fig 2 teaches previous utterance (138) is provided for context] a current response from the interactive chatbot in the ongoing human-to-computer dialog, [Taylor Fig 2, item 134] and/or a prior response from the interactive chatbot in the ongoing human-to-computer dialog. [Taylor Fig 2, item 140. It would be obvious to one of ordinary skill in the art at the effective filing date of applicant’s invention to combine the contextual feedback for a chatbot as taught by Taylor with the interactive chatbot as taught by Marshall in view of Heckman and Neumann.
The motivation to combine the inventions of Marshall in view of Heckman and Neumann with the invention of Taylor is that Taylor teaches temporal context and also teaches that “shelf life or expiration criteria may be location (or geographic position) information (such as a current geographic location), instead of temporal information” (Par [0059]), which increases the capabilities of the invention of Marshall, Heckman, and Neumann to provide better context to the chatbot]

With regards to claim 20, Marshall, Heckman, and Neumann teaches: All the limitations of claim 18

With regards to claim 20, Marshall, Heckman, and Neumann fails to teach: wherein the context data includes a current date, a current time, and/or a current day of the week.

With regards to claim 20, Taylor teaches: wherein the context data includes a current date, a current time, and/or a current day of the week. [Taylor teaches a “timestamp for each concept that will be fed back as context information.” (Par [0062]) It would be obvious to one of ordinary skill in the art at the effective filing date of applicant’s invention to combine the contextual feedback for a chatbot as taught by Taylor with the interactive chatbot as taught by Marshall in view of Heckman and Neumann. The motivation to combine the inventions of Marshall in view of Heckman and Neumann with the invention of Taylor is that Taylor teaches temporal context and also teaches that “shelf life or expiration criteria may be location (or geographic position) information (such as a current geographic location), instead of temporal information” (Par [0059]), which increases the capabilities of the invention of Marshall, Heckman, and Neumann to provide better context to the chatbot]

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Joseph J. Yamamoto, whose telephone number is (571) 272-4020. The examiner can normally be reached M-F 1000-1800 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JOSEPH J. YAMAMOTO
Examiner, Art Unit 2656

/BHAVESH M MEHTA/
Supervisory Patent Examiner, Art Unit 2656
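For orientation only, the claimed flow that the rejection maps against the combined references (receive non-acoustic sensor data, test it against the chatbot's conditions per claims 8-9, then generate and render a natural language statement via an LLM per claims 1 and 24) can be sketched in Python. Every name below (`SensorReading`, `RESPONSIVE_KINDS`, `llm_generate`, `handle_reading`) is a hypothetical illustration, not code from Marshall, Heckman, Neumann, or the application itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    kind: str      # e.g. "arc_fault" (the electrical-discharge sensing in Marshall/Heckman)
    payload: str   # content of the reading

# Types of non-acoustic sensor data the chatbot is configured to respond to;
# general electrical background noise (Marshall Par [0111]) would not qualify.
RESPONSIVE_KINDS = {"arc_fault"}

def satisfies_conditions(reading: SensorReading) -> bool:
    # Claims 8-9: check both the *type* and the *content* of the data.
    return reading.kind in RESPONSIVE_KINDS and bool(reading.payload)

def llm_generate(prompt: str) -> str:
    # Stand-in for an LLM call; a real system would invoke a model here.
    return f"Warning: {prompt}. Please have the wiring inspected."

def handle_reading(reading: SensorReading) -> Optional[str]:
    # Claim 1/24 flow: condition check -> LLM output -> natural-language
    # statement to be rendered via the interactive chatbot.
    if not satisfies_conditions(reading):
        return None  # e.g. background electromagnetic noise is ignored
    return llm_generate(reading.payload)

print(handle_reading(SensorReading("arc_fault", "arcing detected on circuit 4")))
print(handle_reading(SensorReading("background_noise", "ambient EM activity")))
```

The point of the sketch is only that the condition-checking step (claims 8-9) gates the LLM generation step (claims 1, 24), which is the structure the examiner distributes across Marshall/Heckman (sensing), Neumann (LLM chatbot), and Peshkov (virtual character).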

Prosecution Timeline

Dec 14, 2022
Application Filed
Mar 31, 2025
Non-Final Rejection — §103
Jun 12, 2025
Applicant Interview (Telephonic)
Jun 12, 2025
Examiner Interview Summary
Jun 19, 2025
Response Filed
Jul 07, 2025
Final Rejection — §103
Aug 27, 2025
Applicant Interview (Telephonic)
Aug 28, 2025
Examiner Interview Summary
Sep 16, 2025
Examiner Interview (Telephonic)
Sep 16, 2025
Examiner Interview Summary
Oct 04, 2025
Non-Final Rejection — §103
Dec 09, 2025
Response Filed
Mar 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602198
SEARCH AND KNOWLEDGE BASE QUESTION ANSWERING FOR A VOICE USER INTERFACE
2y 5m to grant Granted Apr 14, 2026
Patent 12591746
LANGUAGE MODEL TUNING IN CONVERSATIONAL ARTIFICIAL INTELLIGENCE SYSTEMS AND APPLICATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12572565
SEMANTIC CONTENT CLUSTERING BASED ON USER INTERACTIONS FOR CONTENT MODERATION
2y 5m to grant Granted Mar 10, 2026
Patent 12542134
TRAINING AND USING A TRANSCRIPT GENERATION MODEL ON A MULTI-SPEAKER AUDIO STREAM
2y 5m to grant Granted Feb 03, 2026
Patent 12536373
TOKEN OPTIMIZATION IN GENERATIVE LARGE LANGUAGE MODEL LEARNING (LLM) INTERACTIONS
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+15.1%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 796 resolved cases by this examiner. Grant probability derived from career allow rate.
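The projection figures above follow directly from the examiner's career data. As a check (assuming a simple additive interview lift, which appears to be this page's model), the arithmetic can be reproduced:

```python
# Career figures shown above for this examiner.
granted, resolved = 671, 796
allow_rate = granted / resolved          # ~0.843

interview_lift = 15.1                    # percentage points, per the dashboard

base_pct = round(allow_rate * 100)       # 84 -> "Grant Probability 84%"
with_interview = min(100, round(allow_rate * 100 + interview_lift))  # 99

print(base_pct, with_interview)          # 84 99
```

This matches the displayed 84% career allow rate and the 99% "With Interview" figure; the cap at 100 is a safeguard, since an additive lift could otherwise exceed a valid probability.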
