Prosecution Insights
Last updated: April 19, 2026
Application No. 18/207,616

AUGMENTING ARTIFICIAL INTELLIGENCE PROMPT DESIGN WITH EMOTIONAL CONTEXT

Non-Final OA: §101, §103
Filed
Jun 08, 2023
Examiner
SPRATT, BEAU D
Art Unit
2143
Tech Center
2100 — Computer Architecture & Software
Assignee
Microsoft Technology Licensing, LLC
OA Round
1 (Non-Final)
79%
Grant Probability
Favorable
1-2
OA Rounds
3y 1m
To Grant
99%
With Interview

Examiner Intelligence

Grants 79% — above average
79%
Career Allow Rate
342 granted / 432 resolved
+24.2% vs TC avg
Strong +27% interview lift
+26.6%
Interview Lift (with vs. without interview, across resolved cases with interview)
Typical timeline
3y 1m
Avg Prosecution
37 currently pending
Career history
469
Total Applications
across all art units

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)
Tech Center average values are estimates • Based on career data from 432 resolved cases

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented in the case.

CRM Statement

Review of the applicant's specification indicates that the medium recited in claim 16 is not a propagated signal, but it is still recommended that the applicant amend the language to recite "a non-transitory computer-readable storage medium." See specification PGPUB ¶128: "As used herein, the term "computer-readable medium" can include transitory propagating signals. In contrast, the term "computer-readable storage medium" excludes transitory propagating signals."

Information Disclosure Statement

The information disclosure statements submitted on 07/11/2023 and 10/22/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 ("2019 PEG"). Claims 1, 9 and 16 are analyzed as follows.

Step 1: The claims are directed to a method, a system, and a computer-readable medium, and accordingly fall within the statutory categories.

Step 2A, Prong 1: The claims recite the abstract idea limitations "determining an emotional state of the user based on the sensor data;" and "receive an emotion from the emotion service;". These limitations encompass mental concepts (acts of evaluation) such as inferring emotion. Mental processes are concepts performed in the human mind, including an observation, evaluation, judgment, or opinion (see MPEP § 2106.04(a)(2)). Thus, these steps fall within the "mental processes" grouping of abstract ideas. The specification also provides example determinations of emotion, including using an ML model and a service that uses an ML model. See USPGPUB ¶35. The remaining limitations, such as "receiving an original prompt from a user;", "receiving sensor data associated with the user;", "generating an augmented prompt by augmenting the original prompt based on the emotional state;" and "inputting the augmented prompt to a generative artificial intelligence (AI); receiving a response from the generative AI; and outputting the response for presentation to the user.", are recited at too generic or high a level to be treated as part of the judicial exception, given the available descriptions and MPEP comparisons.

Step 2A, Prong 2: The judicial exceptions recited in these claims are not integrated into a practical application. Merely invoking "a generative artificial intelligence", "sensor data", "a processor", "storage" or "memory" does not yield eligibility. The claims remain in line with mental concepts, and claims 1, 9 and 16 are not specific to a practical application. The additional elements, such as processors and instructions, do not include specialized hardware. See MPEP § 2106.05(f). Claims 1, 9 and 16 do not limit the invention to a particular field, and even doing so may not be sufficient to overcome the abstract idea rejection. Merely applying a model to a field or data, without an advancement in that field or new hardware, is ineligible. MPEP § 2106.05(h).

Step 2B: The claims do not contain significantly more than their judicial exceptions. Processors, memory and other hardware appear in their standard forms in the field. These additional elements are well-understood, routine, and conventional activity; see MPEP 2106.05(d)(II). The claims lack any particular "how" or algorithm that solves a problem in the field in a novel way. The claims would need to recite their processes with enough specificity that they could not be performed with simple mathematics or mental processes, or recite more substantial structure than conventional devices (e.g., non-textbook implementations).

Claims 2-8, 10-15 and 17-20 merely narrow the previously recited abstract idea limitations with more abstract concepts and/or routine fundamental processes. For the reasons described above with respect to claims 1 and 9, the judicial exception is not meaningfully integrated into a practical application, nor significantly more than the abstract idea. The Step 1 and Step 2A (Prong 1 and Prong 2) analysis remains the same as the independent-claim analysis above. See the specification for more practical application concepts, as none are seen in claims 2-8, 10-15 and 17-20. With respect to Step 2B, these claims disclose limitations similar to those described for the independent claims above and do not provide anything significantly more than mathematical or mental concepts. Claims 2-8, 10-15 and 17-20 recite the additional elements of "wherein the sensor data includes audio data, video data, physiological data, and cognitive data. using a machine-learning model to classify the sensor data into emotion categories. wherein the emotional state includes one or more emotion categories. wherein the emotion state includes an emotion vector that indicates degrees of the emotion categories. translating the emotional state of the user into at least a token; and adding the token to the original prompt. wherein the token includes a word, a highlight, a punctuation, an emoji, a metadata, and/or an emotion vector. using a machine-learning model to translate the emotional state to a token and adding the token to the original prompt. wherein the sensor data includes a plurality of sensing modalities. wherein the emotion includes a plurality of emotion categories associated with the plurality of sensing modalities. wherein the emotion includes a plurality of emotion vectors associated with the plurality of sensing modalities. wherein the additional token includes the plurality of emotion vectors. the original tokens are associated with first timestamps; the sensor data is associated with second timestamps; and the emotion is associated with third timestamps. inserting the additional token in a particular position among the original tokens based on the first timestamps, second timestamps, and/or the third timestamps. wherein the original prompt is blank. wherein the instructions further cause the processor to send a sequence of augmented prompts to the generative AI at regular intervals. wherein the emotion includes a number representing an emotion category. wherein the instructions further cause the processor to append a word that correlates with the emotion to the original prompt to generate the augmented prompt."

These elements are more abstract concepts, generic applications to a field of use, or well-understood, routine, conventional activity (see MPEP § 2106.05(d)) and cannot simply be appended to qualify as significantly more or as a practical application. What type of application, or what structure of components beyond generic machine learning, remains unspecified in these claims. Therefore claims 2-8, 10-15 and 17-20 also recite abstract ideas that do not integrate into a practical application or amount to significantly more than the judicial exception, and are rejected under 35 U.S.C. § 101.
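For orientation, the claim 1 flow the examiner analyzes above can be pictured as a short pipeline. The following is an illustrative sketch only, not code from the application or any cited reference; all names (EmotionalState, determine_emotional_state, augment_prompt, run) are invented, and the emotion classifier is stubbed where the specification describes an ML model (USPGPUB ¶35).

# Hypothetical sketch of the claim 1 flow; every name here is invented.
from dataclasses import dataclass

@dataclass
class EmotionalState:
    categories: list[str]  # e.g., ["frustrated"]

def determine_emotional_state(sensor_data: dict) -> EmotionalState:
    # Stand-in for the ML-based emotion determination the specification
    # describes (USPGPUB ¶35); trivially stubbed for illustration.
    return EmotionalState(categories=["neutral"])

def augment_prompt(original: str, state: EmotionalState) -> str:
    # "generating an augmented prompt by augmenting the original prompt
    # based on the emotional state" (annotation format is invented)
    return f"{original} [user emotion: {', '.join(state.categories)}]"

def run(original_prompt: str, sensor_data: dict, generative_ai) -> str:
    state = determine_emotional_state(sensor_data)      # the step the OA treats as a mental process
    augmented = augment_prompt(original_prompt, state)
    response = generative_ai(augmented)                 # "inputting the augmented prompt to a generative AI"
    return response                                     # "outputting the response for presentation"

# Example with a dummy model:
print(run("tell me a story", {}, lambda prompt: f"(model reply to: {prompt})"))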
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Gelfenbeyn et al. (US 20230351118 A1, hereinafter Gelfenbeyn) in view of el Kaliouby et al. (US 11073899 B2, hereinafter el Kaliouby).

As to independent claim 1, Gelfenbeyn teaches a computer-implemented method, comprising: receiving an original prompt from a user; [text/voice conversation from user ¶27-28, prompts ¶68, ¶44 "generate, based on the voice messages, requests and provide the request to the AI character model 202 to generate the model output 216"] receiving sensor data associated with the user; [voice, camera, sensors from user input ¶44, ¶106] generating an augmented prompt by augmenting the original prompt based on the emotional state; [adds emotional state before sending to an LLM (augments) ¶28 "For example, prior to sending a request to the LLM, the platform may classify and filter the user questions and messages to change words based on the personalities of AI characters, emotional states of AI characters, emotional states of users, context of a conversation, scene and environment of the conversation, and so forth"] inputting the augmented prompt to a generative artificial intelligence (AI); [the LLM is a generative AI that receives user requests ¶28 "targeted requests to the LLMs will result in optimized performance"] receiving a response from the generative AI; and [LLM response ¶28 "adjust the response formed by the LLM by changing words and adding fillers based on the personality, role, and emotional state of the AI character."] outputting the response for presentation to the user. [present response ¶7 "The client-side computing device may present the response to the user."]

Gelfenbeyn does not specifically teach determining an emotional state of the user based on the sensor data. However, el Kaliouby teaches determining an emotional state of the user based on the sensor data; [determines cognitive and emotional state based on camera data Col. 18 ln. 4-36, Col. 8 ln. 45-48 "To facilitate the identification of cognitive states, cameras on devices can track eye movements to make an estimation and/or determination of when the message has been read."] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn by incorporating the determining of an emotional state of the user based on the sensor data disclosed by el Kaliouby, because both techniques address the same field of machine learning, and incorporating el Kaliouby into Gelfenbeyn enhances analytics, allowing the effectiveness of a media presentation or another stimulus to be evaluated and compared to the effectiveness of other media presentations. [el Kaliouby Col. 7 ln. 25-60]

As to dependent claim 2, Gelfenbeyn and el Kaliouby teach the method of claim 1 above, which is incorporated; Gelfenbeyn and el Kaliouby further teach wherein the sensor data includes audio data, video data, physiological data, and cognitive data. [el Kaliouby video, audio, physiological heart rate Col. 13 ln. 12-32, cognitive overload Col. 7 ln. 61 - Col. 8 ln. 15]

As to dependent claim 3, Gelfenbeyn and el Kaliouby teach the method of claim 1 above, which is incorporated; Gelfenbeyn and el Kaliouby further teach using a machine-learning model to classify the sensor data into emotion categories. [el Kaliouby classifier/SVM (ML model) classifies into categories Col. 18 ln. 4-36 "SVM can build a model that assigns new data into one of the two categories"]

As to dependent claim 4, Gelfenbeyn and el Kaliouby teach the method of claim 1 above, which is incorporated; Gelfenbeyn and el Kaliouby further teach wherein the emotional state includes one or more emotion categories. [el Kaliouby emotional states and categories Col. 18 ln. 4-36 "emotional states can include amusement, fear, anger, disgust, surprise, and sadness"]

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Gelfenbeyn in view of el Kaliouby, as applied in the rejection of claims 1 and 4 above, and further in view of Modi et al. (US 10818312 B2, hereinafter Modi).

As to dependent claim 5, Gelfenbeyn and el Kaliouby teach the method of claim 4 above, which is incorporated; Gelfenbeyn and el Kaliouby do not specifically teach wherein the emotion state includes an emotion vector that indicates degrees of the emotion categories. However, Modi teaches wherein the emotion state includes an emotion vector that indicates degrees of the emotion categories. [Modi emotion vector Col. 5 ln. 3-8 "emotion vector representing a probability distribution over six emotions"] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn and el Kaliouby by incorporating the emotion state including an emotion vector that indicates degrees of the emotion categories, as disclosed by Modi, because all techniques address the same field of machine learning, and incorporating Modi into Gelfenbeyn and el Kaliouby provides a more intelligent solution with emotional relevance without sacrificing logic or grammar. [Modi Col. 10 ln. 20-36]
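Modi's "emotion vector representing a probability distribution over six emotions" can be made concrete with a small sketch. This is a hypothetical illustration, not Modi's implementation; the six category names come from Modi's list quoted in the claim 11 rejection below, and the numeric values are invented.

# Hedged sketch: an emotion vector as a probability distribution over
# six emotion categories. Values are illustrative, not from any reference.
EMOTIONS = ["anger", "surprise", "joy", "sadness", "fear", "disgust"]

emotion_vector = [0.05, 0.10, 0.60, 0.10, 0.05, 0.10]  # sums to 1.0
assert abs(sum(emotion_vector) - 1.0) < 1e-9

# The "degrees of the emotion categories" recited in claim 5:
degrees = dict(zip(EMOTIONS, emotion_vector))
print(degrees)  # {'anger': 0.05, 'surprise': 0.1, 'joy': 0.6, ...}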
Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Gelfenbeyn in view of el Kaliouby, as applied in the rejection of claims 1 and 4 above, and further in view of Manfredi et al. (US 20080096533 A1, hereinafter Manfredi).

As to dependent claim 6, Gelfenbeyn and el Kaliouby teach the method of claim 1 above, which is incorporated; Gelfenbeyn and el Kaliouby do not specifically teach wherein augmenting the original prompt comprises: translating the emotional state of the user into at least a token; and adding the token to the original prompt. However, Manfredi teaches translating the emotional state of the user into at least a token; [outputs a code or emotional tag (token) ¶85 "Neural network outputs are emotional codes, which are interpreted by the other modules. In one example, in case the network chooses to show happiness, it will transmit to flow manager 42 a happy tag followed by indications in percentage scale of its intensity at that precise moment"] and adding the token to the original prompt. [adds the tag to the message ¶143 "Thus, the message "thanks" is forwarded to the expert system along with a tag indicating that the emotion is "happy.""] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn and el Kaliouby by incorporating the translating of the emotional state of the user into at least a token and adding the token to the original prompt, as taught by Manfredi, because all techniques address the same field of machine learning, and incorporating Manfredi into Gelfenbeyn and el Kaliouby provides more realistic responses from AI with varying moods for improved user experiences. [Manfredi ¶3-5]

As to dependent claim 7, Gelfenbeyn, el Kaliouby and Manfredi teach the method of claim 6 above, which is incorporated; Gelfenbeyn, el Kaliouby and Manfredi further teach wherein the token includes a word, a highlight, a punctuation, an emoji, a metadata, and/or an emotion vector. [Manfredi word/metadata (tag) ¶85, symbols (emoji) ¶260]

As to dependent claim 8, Gelfenbeyn and el Kaliouby teach the method of claim 1 above, which is incorporated; Gelfenbeyn and el Kaliouby do not specifically teach using a machine-learning model to translate the emotional state to a token and adding the token to the original prompt. However, Manfredi teaches using a machine-learning model to translate the emotional state to a token and adding the token to the original prompt. [Manfredi neural network outputs the codes (token) and adds them as a tag ¶85, ¶143 "Neural network outputs are emotional codes, which are interpreted by the other modules."] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn and el Kaliouby by incorporating the use of a machine-learning model to translate the emotional state to a token and the adding of the token to the original prompt, as taught by Manfredi, because all techniques address the same field of machine learning, and incorporating Manfredi into Gelfenbeyn and el Kaliouby enables more realistic responses from AI with varying moods for improved user experiences. [Manfredi ¶3-5]
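Claims 6-8 describe translating an emotional state into a token (per claim 7, a word, punctuation, emoji, metadata, etc.) and adding it to the original prompt. A minimal sketch follows, assuming an invented emoji mapping and echoing Manfredi's "thanks" plus happy-tag example; none of this is from the application or the references.

# Hypothetical sketch: emotional state -> token -> appended to prompt.
# The mapping and token format are invented for illustration.
EMOTION_TOKENS = {"happy": "🙂", "sad": "🙁", "angry": "!!"}

def translate_to_token(emotion: str) -> str:
    # Claim 7 allows a word, highlight, punctuation, emoji, metadata,
    # or emotion vector; fall back to a bracketed word here.
    return EMOTION_TOKENS.get(emotion, f"[{emotion}]")

def add_token(original_prompt: str, emotion: str) -> str:
    return f"{original_prompt} {translate_to_token(emotion)}"

print(add_token("thanks", "happy"))  # -> "thanks 🙂"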
Claims 9-10, 16-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gelfenbeyn in view of Manfredi.

As to independent claim 9, Gelfenbeyn teaches a system, comprising: [computer platform ¶6] a storage including instructions; and [memory ¶6] a processor for executing the instructions to: [processor and instructions ¶6] receive an original prompt including original tokens; [receives a message with words (tokens) ¶6, prompts ¶68] receive sensor data; [voice, camera, sensors from user input ¶44, ¶106] send the augmented prompt to a generative AI. [Gelfenbeyn adds context to a prompt (input) to a generative AI (LLM) ¶31 "add context to parameters used by the LLMs for analyzing and generating conversations. The context may include additional data related to the conversation, such as an intent of a dialog, an emotional state of the user, a change of an emotional state of the user, an emotional state of the AI character model, parameters of a scene associated with the AI character model, and so forth. In an example embodiment, the context can be determined based on third party data. For example, if participants of the conversation talk about the weather, the weather data can be pulled to the LLM"]

Gelfenbeyn does not specifically teach send the sensor data to an emotion service; receive an emotion from the emotion service; translate the emotion into an additional token; and generate an augmented prompt by adding the additional token to the original tokens. However, Manfredi teaches send the sensor data to an emotion service; [Manfredi sends parameters about the user to a right brain engine (emotion service) including a server ¶141, ¶82 "Right Brain engine is able to directly act, for example, on words to be used, on tone of voice or on expressions to be used to communicate emotions (this last case if the user is interacting through a 3D model). These emotions are the output of a neural network processing which receives at its input several parameters about the user"] receive an emotion from the emotion service; [Manfredi receives a code as emotion ¶85 "Neural network outputs are emotional codes, which are interpreted by the other modules. In one example, in case the network chooses to show happiness, it will transmit to flow manager 42 a happy tag followed by indications in percentage scale of its intensity at that precise moment"] translate the emotion into an additional token; [Manfredi creates a tag ¶85 "in case the network chooses to show happiness, it will transmit to flow manager 42 a happy tag followed by indications in percentage scale of its intensity at that precise moment"] and generate an augmented prompt by adding the additional token to the original tokens. [Manfredi adds the tag to the message ¶143 "Thus, the message "thanks" is forwarded to the expert system along with a tag indicating that the emotion is "happy.""] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn by incorporating the sending of the sensor data to an emotion service, receiving an emotion from the emotion service, translating the emotion into an additional token, and generating an augmented prompt by adding the additional token to the original tokens, as disclosed by Manfredi, because both techniques address the same field of machine learning, and incorporating Manfredi into Gelfenbeyn enables more realistic responses from AI with varying moods for improved user experiences. [Manfredi ¶3-5]

As to dependent claim 10, Gelfenbeyn and Manfredi teach the system of claim 9 above, which is incorporated; Gelfenbeyn and Manfredi further teach wherein the sensor data includes a plurality of sensing modalities. [Gelfenbeyn voice, camera, sensors from user input ¶44, ¶106]

As to independent claim 16, Gelfenbeyn teaches a computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to: [memory, processor and instructions ¶6] receive sensor data; [voice, camera, sensors from user input ¶44, ¶106] send the augmented prompt to a generative AI. [Gelfenbeyn adds context to a prompt (input) to a generative AI (LLM) ¶31, quoted in the rejection of claim 9 above] Gelfenbeyn does not specifically teach send the sensor data to an emotion service; receive an emotion from the emotion service; and augment an original prompt based on the emotion to generate an augmented prompt. However, Manfredi teaches send the sensor data to an emotion service; [Manfredi sends parameters about the user to a right brain engine (emotion service) including a server ¶141, ¶82, quoted in the rejection of claim 9 above] receive an emotion from the emotion service; [Manfredi receives a code as emotion ¶85 "Neural network outputs are emotional codes, which are interpreted by the other modules. In one example, in case the network chooses to show happiness, it will transmit to flow manager 42 a happy tag followed by indications in percentage scale of its intensity at that precise moment"] and augment an original prompt based on the emotion to generate an augmented prompt. [Manfredi adds the tag to the message ¶143 "Thus, the message "thanks" is forwarded to the expert system along with a tag indicating that the emotion is "happy.""] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn by incorporating the sending of the sensor data to an emotion service, receiving an emotion from the emotion service, and augmenting an original prompt based on the emotion to generate an augmented prompt, as disclosed by Manfredi, because both techniques address the same field of machine learning, and incorporating Manfredi into Gelfenbeyn enables more realistic responses from AI with varying moods for improved user experiences. [Manfredi ¶3-5]

As to dependent claim 17, Gelfenbeyn and Manfredi teach the computer-readable storage medium of claim 16 above, which is incorporated; Gelfenbeyn and Manfredi further teach wherein the original prompt is blank. [Gelfenbeyn different inputs are optional (left blank) and may then be augmented ¶43-45 "interface 206 may receive a user input 210, environment parameters 212, and events 214 and generate, based on the AI character model 202, a model output 216"]

As to dependent claim 20, Gelfenbeyn and Manfredi teach the computer-readable storage medium of claim 16 above, which is incorporated; Gelfenbeyn and Manfredi further teach wherein the instructions further cause the processor to append a word that correlates with the emotion to the original prompt to generate the augmented prompt. [Manfredi adds the tag (word) to the message ¶143 "Thus, the message "thanks" is forwarded to the expert system along with a tag indicating that the emotion is "happy.""]
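The system of claim 9 and the medium of claim 16 route sensor data through an "emotion service" and splice the returned emotion into the prompt as an additional token. A hedged sketch of that round trip follows; the service is passed in as a callable, and the token format and all names are invented, not drawn from the claims or references.

# Hypothetical sketch of the claim 9 round trip through an emotion service.
from typing import Callable

def build_augmented_prompt(
    original_tokens: list[str],
    sensor_data: dict,
    emotion_service: Callable[[dict], str],
) -> str:
    emotion = emotion_service(sensor_data)     # "receive an emotion from the emotion service"
    additional_token = f"<emotion:{emotion}>"  # "translate the emotion into an additional token"
    # "generate an augmented prompt by adding the additional token to the original tokens"
    return " ".join(original_tokens + [additional_token])

def demo_service(sensor_data: dict) -> str:
    # Stub standing in for a remote classifier.
    return "happy"

print(build_augmented_prompt(["tell", "me", "a", "story"], {}, demo_service))
# -> "tell me a story <emotion:happy>"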
Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Gelfenbeyn in view of Manfredi, as applied in the rejection of claim 10 above, and further in view of Modi.

As to dependent claim 11, Gelfenbeyn and Manfredi teach the system of claim 10 above, which is incorporated; Gelfenbeyn and Manfredi do not specifically teach wherein the emotion includes a plurality of emotion categories associated with the plurality of sensing modalities. However, Modi teaches wherein the emotion includes a plurality of emotion categories associated with the plurality of sensing modalities. [Modi six emotion categories Col. 5 ln. 50-65 "representation of emotions is categorical, and uses the six emotions addressed by Equation 1, i.e., anger, surprise, joy, sadness, fear, and disgust"] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn and Manfredi by incorporating the emotion including a plurality of emotion categories associated with the plurality of sensing modalities, as disclosed by Modi, because all techniques address the same field of machine learning, and incorporating Modi into Gelfenbeyn and Manfredi provides a more intelligent solution with emotional relevance without sacrificing logic or grammar. [Modi Col. 10 ln. 20-36]

As to dependent claim 12, Gelfenbeyn and Manfredi teach the system of claim 10 above, which is incorporated; Gelfenbeyn and Manfredi do not specifically teach wherein the emotion includes a plurality of emotion vectors associated with the plurality of sensing modalities. However, Modi teaches wherein the emotion includes a plurality of emotion vectors associated with the plurality of sensing modalities. [Modi emotion vector Col. 5 ln. 3-8 "emotion vector representing a probability distribution over six emotions"] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn and Manfredi by incorporating the emotion including a plurality of emotion vectors associated with the plurality of sensing modalities, as disclosed by Modi, because all techniques address the same field of machine learning, and incorporating Modi into Gelfenbeyn and Manfredi provides a more intelligent solution with emotional relevance without sacrificing logic or grammar. [Modi Col. 10 ln. 20-36]

As to dependent claim 13, Gelfenbeyn, Manfredi and Modi teach the system of claim 12 above, which is incorporated; Gelfenbeyn, Manfredi and Modi further teach wherein the additional token includes the plurality of emotion vectors. [Modi multiple instances of emotion embedding as input Col. 4 ln. 51-60 "determine vector representation h.sub.S 358 of input dialog sequence 130/230/330. Decoder 360 may also be implemented using an RNN or a CNN. According to the exemplary implementation shown in FIG. 3, decoder 360 includes multiple instances of emotion embedding e.sub.SED provided by a Sequence-Level Decoder Model (SED) and represented by exemplary e.sub.SED 362, as well as affective sampling block 364. As further shown in FIG. 3, affective re-ranking stage 370 includes emotion classifier 372 and reversed seq2seq block 374."]
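Claims 11-13 extend the emotion to a plurality of emotion vectors, one per sensing modality, with the additional token carrying all of them. The following is illustrative data only; the modalities, values, and token structure are invented, not taken from the claims or references.

# Hypothetical per-modality emotion vectors (claims 11-13).
# Column order: anger, surprise, joy, sadness, fear, disgust.
emotion_by_modality = {
    "audio": [0.10, 0.05, 0.50, 0.20, 0.05, 0.10],
    "video": [0.05, 0.10, 0.60, 0.10, 0.05, 0.10],
}

# Claim 13: the additional token can carry the full set of vectors.
additional_token = {"type": "emotion", "vectors": emotion_by_modality}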
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Gelfenbeyn in view of Manfredi, as applied in the rejection of claim 9 above, and further in view of Eleftheriou et al. (US 20200075039 A1, hereinafter Eleftheriou).

As to dependent claim 14, Gelfenbeyn and Manfredi teach the system of claim 9 above, which is incorporated; Gelfenbeyn and Manfredi do not specifically teach the original tokens are associated with first timestamps; the sensor data is associated with second timestamps; and the emotion is associated with third timestamps. However, Eleftheriou teaches the original tokens are associated with first timestamps; the sensor data is associated with second timestamps; and the emotion is associated with third timestamps. [timestamps associated with voice (sensor data) and emotion markers and used in ML (tokens) ¶13-15 "extract timeseries of pitch, voice speed, voice volume, pure tone, and/or other characteristics of the user's voice from the voice recording; and transform these timeseries of pitch, voice speed, voice volume, pure tone, and/or other characteristics of the user's voice into timestamped instances (and magnitudes) of the target emotion exhibited by the user while reciting the story. The remote computer system can then: synchronize these timeseries of physiological biosignal data and instances of the target emotion"] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn and Manfredi by incorporating the original tokens being associated with first timestamps, the sensor data with second timestamps, and the emotion with third timestamps, as disclosed by Eleftheriou, because all techniques address the same field of machine learning, and incorporating Eleftheriou into Gelfenbeyn and Manfredi can improve analysis of data, identifying trends and avoiding over-signaling. [Eleftheriou ¶75-76]

As to dependent claim 15, Gelfenbeyn, Manfredi and Eleftheriou teach the system of claim 14 above, which is incorporated; Gelfenbeyn, Manfredi and Eleftheriou further teach inserting the additional token in a particular position among the original tokens based on the first timestamps, second timestamps, and/or the third timestamps. [Eleftheriou stories are ordered by emotion (positioning, ¶65); ¶13 "prompt the user to recount a story associated with a target emotion (e.g., happy, sad, stressed, distressed, etc.); and capture a voice recording of the user orally reciting this story. During the user's recitation of this story, the wearable device can record a timeseries of physiological biosignal data of the user via a suite of integrated sensors"]
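Claims 14-15 position the additional token among the original tokens using the first, second, and/or third timestamps. A minimal sketch follows, assuming invented tokens and timestamp values, where sorting by timestamp determines the insertion point; this is not the claimed algorithm, only one plausible reading of it.

# Hypothetical sketch: timestamp-driven token insertion (claims 14-15).
def insert_by_timestamp(
    original: list[tuple[float, str]],  # (first timestamp, original token)
    emotion_token: str,
    emotion_timestamp: float,           # third timestamp
) -> list[str]:
    # Merge the emotion token into the stream and order by timestamp,
    # so its position reflects when the emotion was detected.
    merged = sorted(original + [(emotion_timestamp, emotion_token)])
    return [token for _, token in merged]

tokens = [(0.0, "I"), (0.4, "give"), (0.8, "up")]
print(insert_by_timestamp(tokens, "<frustrated>", 0.5))
# -> ['I', 'give', '<frustrated>', 'up']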
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Gelfenbeyn in view of Manfredi, as applied in the rejection of claim 16 above, and further in view of Woloshyn (US 9118773 B2).

As to dependent claim 18, Gelfenbeyn and Manfredi teach the computer-readable storage medium of claim 16 above, which is incorporated; Gelfenbeyn and Manfredi further teach a sequence of augmented prompts. [Gelfenbeyn sequential dialog and actions input to AI (prompts are AI input) ¶74, ¶68 "generative models configured to follow sequential instructions for dialog and actions that are driven by a specific purpose or intent for AI-driven characters. FIG. 7A shows possible user inputs 702 and input impact for goals model 704"] Gelfenbeyn and Manfredi do not specifically teach wherein the instructions further cause the processor to send a sequence of augmented prompts to the generative AI at regular intervals. However, Woloshyn teaches wherein the instructions further cause the processor to send a sequence of augmented prompts to the generative AI at regular intervals. [auto-initiates the automation system at regular intervals Col. 31 ln. 39-47 "Automated Notation System procedures may be initiated and/or implemented manually, automatically, statically, dynamically, concurrently, and/or combinations thereof. Additionally, different instances and/or embodiments of the Automated Notation System procedures may be initiated at one or more different time intervals (e.g., during a specific time interval, at regular periodic intervals, at irregular periodic intervals, upon demand, etc.)."] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn and Manfredi by incorporating the instructions further causing the processor to send a sequence of augmented prompts to the generative AI at regular intervals, as disclosed by Woloshyn, because all techniques address the same field of automation, and incorporating Woloshyn into Gelfenbeyn and Manfredi provides easier ways to record and personalize notes on conversations. [Woloshyn Col. 5 ln. 50-63]

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Gelfenbeyn in view of Manfredi, as applied in the rejection of claim 16 above, and further in view of Kim et al. (US 11501794 B1).

As to dependent claim 19, Gelfenbeyn and Manfredi teach the computer-readable storage medium of claim 16 above, which is incorporated; Gelfenbeyn and Manfredi do not specifically teach wherein the emotion includes a number representing an emotion category. However, Kim teaches wherein the emotion includes a number representing an emotion category. [sentiment is a number or score representing an emotion Col. 6 ln. 17-27 "estimate a sentiment score and/or sentiment category (e.g., emotion) of the user 5 based on the second input audio data"] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning models disclosed by Gelfenbeyn and Manfredi by incorporating the emotion including a number representing an emotion category, as disclosed by Kim, because all techniques address the same field of automation, and incorporating Kim into Gelfenbeyn and Manfredi improves user interaction with systems and the accuracy of emotion recognition. [Kim Col. 3 ln. 1-7, ABST]

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Shukla et al. (US 20190206402 A1) teaches user devices that collect and send sensor data of a user (see ¶49).

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEAU SPRATT, whose telephone number is (571) 272-9919. The examiner can normally be reached M-F 8:30-5 PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at 571-212-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEAU D SPRATT/ Primary Examiner, Art Unit 2143
/JENNIFER N WELCH/ Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Jun 08, 2023
Application Filed
Feb 20, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595715
Cementing Lab Data Validation Based on Machine Learning
2y 5m to grant Granted Apr 07, 2026
Patent 12596955
REWARD FEEDBACK FOR LEARNING CONTROL POLICIES USING NATURAL LANGUAGE AND VISION DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12596956
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR PRESENTING REACTION-ADAPTIVE EXPLANATION OF AUTOMATIC OPERATIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12561464
CATALYST 4 CONNECTIONS
2y 5m to grant Granted Feb 24, 2026
Patent 12561606
TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.6%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
