Prosecution Insights
Last updated: April 19, 2026
Application No. 18/777,414

DYNAMIC PRESENTATION OF DATA DURING A CALL OR A CHAT USING ARTIFICIAL INTELLIGENCE

Non-Final OA: §101, §103
Filed: Jul 18, 2024
Examiner: PULLIAS, JESSE SCOTT
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: The Toronto-Dominion Bank
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 83%, above average (873 granted / 1052 resolved; +21.0% vs TC avg)
Interview Lift: +13.0%, moderate; based on resolved cases with interview
Typical Timeline: 2y 8m average prosecution; 47 applications currently pending
Career History: 1099 total applications across all art units

Statute-Specific Performance

§101: 15.0% (-25.0% vs TC avg)
§103: 50.4% (+10.4% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1052 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This office action is in response to application 18/777,414, which was filed 07/18/24. In a preliminary amendment filed 12/28/25, Applicant amended claims 7 and 14, as well as the Abstract. Claims 1-20 are pending in the application and have been considered.

Response to Preliminary Amendment/Arguments

The amendments of 12/28/25 to the Abstract are acknowledged. Applicant’s preliminary amendment to claims 7 and 14 made 12/28/25 is acknowledged. On page 8 of the accompanying Remarks, Applicant asserts that no new matter is presented. The examiner has considered this assertion and, upon reviewing the original specification, agrees. In particular, Applicant’s amendments add the text “wherein an AI agent performs an action related to the parameter”. It is noted that Applicant’s original specification and claims do not specifically mention the term “agent” or “AI agent”. However, upon reviewing the original specification, Applicant’s “AI model”, in addition to making predictions, also takes actions such as offering a promotional product based on an identified conversation tone (see para [0156], pages 45-46). The AI model here is performing an action related to the parameter, and is fairly considered an “AI agent” in the sense that it is a thing that takes an action.

Missing Oath/Declaration

Applicant’s attention is directed to the notice of 07/31/24 informing Applicant that a properly executed oath or declaration for the inventor has not been received. Applicant’s assistance in submitting the properly executed oath or declaration for the inventor is respectfully requested in order to avoid delays should the application otherwise be found in condition for allowance.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claims 15-20 are directed to a “computer-readable storage medium” comprising “instructions”. On page 52, para [0170], the specification discusses a computer-readable storage medium, stating "A computer program may be embodied on a computer readable medium, such as a storage medium. For example, a computer program may reside in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disk read-only memory (CD-ROM), or any other form of storage medium known in the art." Thus, the specification weighs against eligibility of the claimed “medium” because the defined scope of the medium is not limited (i.e., the specification only gives examples of media types in an open-ended list which includes “any other form of storage medium known in the art”). Because the scope is open-ended, the claims as a whole include non-statutory medium types, e.g., carrier waves, which do not fall into a category of patent-eligible subject matter.

Abstract Idea Analysis (NOT A REJECTION)

The instant claims 1-20 are directed to analyzing an ongoing conversation and presenting a parameter by matching conversation content to groups of conditions in a table and determining content that matches the group of conditions.
In addition to utilizing a trained AI model including a neural network capability, the claims are not practically performable as a mental process because they require presenting the parameter that is mapped to the group of conditions within the table via a graphical user interface (GUI) of the computing device with which the ongoing communication session is being held, which cannot be performed as a mental process. This presentation of the parameter is also not insignificant extra-solution activity; it is a critical part of the solution, as exemplified by, e.g., Applicant’s Fig. 7B, where a home loan offer is displayed in the GUI, see [0105]. The claims are therefore considered to be not directed to an abstract idea without significantly more. This is solely for clarity of the record and is NOT a rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-8, 11-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (US 20200311204 A1) in view of Vasylyev (US 20240412720 A1).

Consider claim 1, Gupta discloses an apparatus (apparatus, [0001]), comprising: a memory configured to store a table which contains parameters that are mapped to groups of conditions (Intent-Business Opportunity Value Map 414, Fig. 4, which maps groups of intents to business opportunities, [0036-0037]; stored in memory of computing system, [0031], Fig. 1); and a processor, wherein the processor and memory are communicably coupled (processor executing program tangibly embodied in a machine-readable storage device, which in the Fig. 1 computers is memory of the computing system, [0067], [0031], Fig. 1), and configured to: receive conversation content from an ongoing communication session with a computing device (computing architecture 200 provides a virtual assistant with which a user interacts via a chatroom that facilitates a conversation including user queries and VA replies, [0032], Fig. 2, ongoing conversation shown in the left pane of the Fig. 8 screenshot, [0066]), obtain previous conversation content from a database (chats from chat logs database 112A, Fig. 1, [0031], including historical chat data 404, [0036], Fig. 4, and previous conversation content shown in the left pane of Fig. 8, [0066]), implement a trained artificial intelligence (AI) model including a neural network capability to match the conversation content to the groups of conditions within the table (Long Short-Term Memory based neural network is trained to predict intents over chat sessions, [0048], by matching conversations to intent & intent variations in table 410, [0036], Fig. 4; this matches user queries from the conversation to intents and intent variations, i.e., groups of conditions), execute the trained AI model on the conversation content and the previous conversation content to determine content that matches a group of conditions within the table (neural network matches user queries from conversations to intent & intent variations in table 410, [0036], [0048], Fig. 4), and present a parameter that is mapped to the group of conditions within the table via a graphical user interface (GUI) of the computing device at a point in time during the ongoing communication session (monitoring dashboard displays visual indicators, e.g., an “untapped wallet” metric based on the mapped business opportunity value, [0064], [0066]).

Gupta does not specifically mention a computing device associated with a profile; and conversation content of a profile. Vasylyev discloses a computing device associated with a profile (system memory 118 of assistant system 2 stores a user profile, [0342], Fig. 1); and conversation content of a profile (interaction history, which includes conversation histories, [0102]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gupta such that the computing device is associated with a profile, and by storing conversation content of a profile, in order to personalize generated conversational responses based on user-specific information, as suggested by Vasylyev ([0012]). Doing so would have led to predictable results of improved user conversation experience, as suggested by Vasylyev ([0009]). The references cited are analogous art in the same field of natural language processing.

Consider claim 8, Gupta discloses a method (method, [0001]) comprising: receiving conversation content from an ongoing communication session with a computing device (computing architecture 200 provides a virtual assistant with which a user interacts via a chatroom that facilitates a conversation including user queries and VA replies, [0032], Fig. 2, ongoing conversation shown in the left pane of the Fig. 8 screenshot, [0066]); obtaining previous conversation content from a database (chats from chat logs database 112A, Fig. 1, [0031], including historical chat data 404, [0036], Fig. 4, and previous conversation content shown in the left pane of Fig. 8, [0066]); implementing a trained artificial intelligence (AI) model including a neural network capability to match the conversation content to the groups of conditions within a table (Long Short-Term Memory based neural network is trained to predict intents over chat sessions, [0048], by matching conversations to intent & intent variations in table 410, [0036], Fig. 4; this matches user queries from the conversation to intents and intent variations, i.e., groups of conditions); executing the trained AI model on the conversation content and the previous conversation content to determine content that matches a group of conditions within the table (neural network matches user queries from conversations to intent & intent variations in table 410, [0036], [0048], Fig. 4); and presenting a parameter that is mapped to the group of conditions within the table via a graphical user interface (GUI) of the computing device at a point in time during the ongoing communication session (monitoring dashboard displays visual indicators, e.g., an “untapped wallet” metric based on the mapped business opportunity value, [0064], [0066]). Gupta does not specifically mention a computing device associated with a profile; and conversation content of a profile. Vasylyev discloses a computing device associated with a profile (system memory 118 of assistant system 2 stores a user profile, [0342], Fig. 1); and conversation content of a profile (interaction history, which includes conversation histories, [0102]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gupta such that the computing device is associated with a profile, and by storing conversation content of a profile, for reasons similar to those for claim 1.

Consider claim 15, Gupta discloses a computer-readable storage medium comprising instructions which when executed by a computer (processor executing program tangibly embodied in a machine-readable storage device, [0067], [0031], Fig. 1) cause a processor to perform: receiving conversation content from an ongoing communication session with a computing device (computing architecture 200 provides a virtual assistant with which a user interacts via a chatroom that facilitates a conversation including user queries and VA replies, [0032], Fig. 2, ongoing conversation shown in the left pane of the Fig. 8 screenshot, [0066]); obtaining previous conversation content from a database (chats from chat logs database 112A, Fig. 1, [0031], including historical chat data 404, [0036], Fig. 4, and previous conversation content shown in the left pane of Fig. 8, [0066]); implementing a trained artificial intelligence (AI) model including a neural network capability to match the conversation content to the groups of conditions within a table (Long Short-Term Memory based neural network is trained to predict intents over chat sessions, [0048], by matching conversations to intent & intent variations in table 410, [0036], Fig. 4; this matches user queries from the conversation to intents and intent variations, i.e., groups of conditions); executing the trained AI model on the conversation content and the previous conversation content to determine content that matches a group of conditions within the table (neural network matches user queries from conversations to intent & intent variations in table 410, [0036], [0048], Fig. 4); and presenting a parameter that is mapped to the group of conditions within the table via a graphical user interface (GUI) of the computing device at a point in time during the ongoing communication session (monitoring dashboard displays visual indicators, e.g., an “untapped wallet” metric based on the mapped business opportunity value, [0064], [0066]). Gupta does not specifically mention a computing device associated with a profile; and conversation content of a profile. Vasylyev discloses a computing device associated with a profile (system memory 118 of assistant system 2 stores a user profile, [0342], Fig. 1); and conversation content of a profile (interaction history, which includes conversation histories, [0102]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gupta such that the computing device is associated with a profile, and by storing conversation content of a profile, for reasons similar to those for claim 1.

Consider claim 4, Gupta discloses the processor is configured to implement a second trained AI model configured to determine a tone of a conversation, and execute the trained second AI model on the conversation content to determine a current tone of the ongoing communication session (a trained emotional state classifier predicts an emotional state of the user during the conversation based on the textual messages, i.e., the tone of the messages, [0011], [0012]).

Consider claim 5, Gupta discloses the processor is configured to output the parameter based on the current tone of the ongoing communication session (generating a display signal including a status indicator based on the user experience score, predicted based on tone of the user messages, [0011], [0012], [0019]).
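For orientation, the core limitation shared by independent claims 1, 8, and 15 (matching conversation content to a group of conditions in a table, then surfacing the mapped parameter) can be sketched roughly as follows. This is a minimal illustration only; the names, the toy table, and the set-based matching are assumptions for readability, not taken from the application or from Gupta's table 410:

```python
# Hypothetical sketch of "table of parameters mapped to groups of conditions".
# ConditionGroup and match_parameter are illustrative names, not from the claims.
from dataclasses import dataclass


@dataclass
class ConditionGroup:
    conditions: frozenset  # conditions that must all be met (e.g. extracted intents)
    parameter: str         # parameter mapped to this group (e.g. an offer to present)


def match_parameter(table, extracted_conditions):
    """Return the parameter whose group of conditions is satisfied by the
    conditions a trained model extracted from the ongoing conversation."""
    for group in table:
        if group.conditions <= extracted_conditions:  # all conditions present
            return group.parameter
    return None


table = [
    ConditionGroup(frozenset({"home", "loan"}), "home-loan offer"),
    ConditionGroup(frozenset({"card", "travel"}), "travel-card offer"),
]

# Conditions an AI model might have extracted from the chat so far:
print(match_parameter(table, {"home", "loan", "rates"}))  # home-loan offer
```

In the claims, the matching itself is done by the trained neural network; the table lookup above only illustrates the parameter-to-condition-group mapping that both the application and Gupta's intent map describe.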
Consider claim 6, Gupta discloses the processor is configured to generate a model feedback record which includes at least one of the conversation content, previous conversation content, an identifier of the parameter, and an indication of whether the parameter was accepted, and retrain the trained AI model based on the model feedback record (feedback signal used to re-train the machine learning models of the VA, [0034]).

Consider claim 7, Gupta discloses the processor is configured to output a description of the parameter to a second graphical user interface (GUI) with a visual indicator which indicates the parameter is being output via the GUI (visual indicators are output with textual descriptions in second window pane, i.e., “second GUI”, Fig. 8, [0066]), wherein an AI agent performs an action related to the parameter (virtual assistant such as a chatbot, e.g., AI agent, performs one or more VA replies 204B, [0032], which are “related to” the visual indicators since they are derived from the ongoing conversation, [0064], [0066]).

Consider claim 11, Gupta discloses the implementing a second trained AI model configured to determine a tone of a conversation, and executing the trained second AI model on the conversation content to determine a current tone of the ongoing communication session (a trained emotional state classifier predicts an emotional state of the user during the conversation based on the textual messages, i.e., the tone of the messages, [0011], [0012]).

Consider claim 12, Gupta discloses outputting the parameter based on the current tone of the ongoing communication session (generating a display signal including a status indicator based on the user experience score, predicted based on tone of the user messages, [0011], [0012], [0019]).
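The tone-dependent output of claims 4-5 and 11-12 amounts to gating the matched parameter on a second model's tone prediction. A minimal sketch, with hypothetical names and the tone assumed to be a simple label rather than Gupta's emotional-state score:

```python
# Illustrative only: a second model's tone label decides whether the matched
# parameter is output. Function and label names are assumptions.
def output_parameter(parameter, tone):
    """Suppress the offer when the detected conversation tone is negative."""
    if tone in {"angry", "frustrated"}:
        return None  # hold the promotion for a more receptive moment
    return parameter


print(output_parameter("home-loan offer", "neutral"))  # home-loan offer
print(output_parameter("home-loan offer", "angry"))    # None
```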
Consider claim 13, Gupta discloses generating a model feedback record which includes at least one of the conversation content, previous conversation content, an identifier of the parameter, and an indication of whether the parameter was accepted, and retraining the trained AI model based on the model feedback record (feedback signal used to re-train the machine learning models of the VA, [0034]).

Consider claim 14, Gupta discloses outputting a description of the parameter to a second graphical user interface (GUI) with a visual indicator which indicates the parameter is being output via the GUI (visual indicators are output with textual descriptions in second window pane, i.e., “second GUI”, Fig. 8, [0066]), where an AI agent performs an action related to the parameter (virtual assistant such as a chatbot, e.g., AI agent, performs one or more VA replies 204B, [0032], which are “related to” the visual indicators since they are derived from the ongoing conversation, [0064], [0066]).

Consider claim 18, Gupta discloses the processor performs implementing a second trained AI model configured to determine a tone of a conversation, and executing the trained second AI model on the conversation content to determine a current tone of the ongoing communication session (a trained emotional state classifier predicts an emotional state of the user during the conversation based on the textual messages, i.e., the tone of the messages, [0011], [0012]).

Consider claim 19, Gupta discloses the processor performs outputting the parameter based on the current tone of the ongoing communication session (generating a display signal including a status indicator based on the user experience score, predicted based on tone of the user messages, [0011], [0012], [0019]).
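The model feedback record of claims 6, 13, and 20 is essentially a training example capturing what was shown and whether it was accepted. A hedged sketch of such a record; the field names paraphrase the claim language and are not drawn from any cited reference:

```python
# Hypothetical shape for the claimed "model feedback record"; a retraining
# job would consume records like this (claims 6/13/20).
from dataclasses import dataclass, asdict


@dataclass
class ModelFeedbackRecord:
    conversation_content: str           # current session text
    previous_conversation_content: str  # prior sessions from the database
    parameter_id: str                   # identifier of the presented parameter
    accepted: bool                      # whether the parameter was accepted


record = ModelFeedbackRecord(
    conversation_content="customer asks about mortgage rates",
    previous_conversation_content="earlier chat about savings accounts",
    parameter_id="home-loan-offer",
    accepted=True,
)
print(asdict(record)["accepted"])  # True
```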
Consider claim 20, Gupta discloses the processor performs generating a model feedback record which includes at least one of the conversation content, previous conversation content, an identifier of the parameter, and an indication of whether the parameter was accepted, and retraining the trained AI model based on the model feedback record (feedback signal used to re-train the machine learning models of the VA, [0034]).

Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (US 20200311204 A1) in view of Vasylyev (US 20240412720 A1), in further view of Sivasubramanian et al. (US 20210158234 A1).

Consider claim 2, Gupta and Vasylyev do not, but Sivasubramanian discloses the ongoing communication session comprises a telephone call conducted via a software application (call routed to agents conducted over VOIP, for which a software application is inherent, [0166], [0167]), and the processor is configured to receive speech from the telephone call that is converted to text (calls are transcribed to text, [0073]), and output the parameter during the telephone call via the software application (displaying a “next best action” in the dashboard for the call center agent, [0053]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gupta and Vasylyev such that the ongoing communication session comprises a telephone call conducted via a software application, and the processor is configured to receive speech from the telephone call that is converted to text, and output the parameter during the telephone call via the software application in order to improve speed and accuracy of communication tools, as suggested by Sivasubramanian ([0002]). Doing so would have led to predictable results of improved data analytics, as suggested by Sivasubramanian ([0002]). The references cited are analogous art in the same field of natural language processing.
Consider claim 9, Gupta and Vasylyev do not, but Sivasubramanian discloses the ongoing communication session comprises a telephone call conducted via a software application (call routed to agents conducted over VOIP, for which a software application is inherent, [0166], [0167]), and receiving speech from the telephone call that is converted to text (calls are transcribed to text, [0073]), and outputting the parameter during the telephone call via the software application (displaying a “next best action” in the dashboard for the call center agent, [0053]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gupta and Vasylyev such that the ongoing communication session comprises a telephone call conducted via a software application, and receiving speech from the telephone call that is converted to text, and outputting the parameter during the telephone call via the software application, for reasons similar to those for claim 2.

Consider claim 16, Gupta and Vasylyev do not, but Sivasubramanian discloses the ongoing communication session comprises a telephone call conducted via a software application (call routed to agents conducted over VOIP, for which a software application is inherent, [0166], [0167]), and the processor performs receiving speech from the telephone call that is converted to text (calls are transcribed to text, [0073]), and outputting the parameter during the telephone call via the software application (displaying a “next best action” in the dashboard for the call center agent, [0053]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gupta and Vasylyev such that the ongoing communication session comprises a telephone call conducted via a software application, and receiving speech from the telephone call that is converted to text, and outputting the parameter during the telephone call via the software application, for reasons similar to those for claim 2.

Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. (US 20200311204 A1) in view of Vasylyev (US 20240412720 A1), in further view of Cruz Huertas et al. (US 20180225279 A1).

Consider claim 3, Gupta discloses the processor is configured to execute the trained AI model on the conversation content, previous conversation content, and at least one future correspondence (processing historical data and text from ongoing VA chat to predict future intents via intent classifier, [0046], Fig. 5). Gupta and Vasylyev do not specifically mention determining unwanted content to be removed from the at least one future correspondence, and in response, deleting the unwanted content from the at least one future correspondence to generate a modified at least one future correspondence. Cruz Huertas discloses determining unwanted content to be removed from the at least one future correspondence, and in response, deleting the unwanted content from the at least one future correspondence to generate a modified at least one future correspondence (modifying a proposed message by removing content that does not contextually fit the messaging session, [0103]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gupta and Vasylyev by determining unwanted content to be removed from the at least one future correspondence, and in response, deleting the unwanted content from the at least one future correspondence to generate a modified at least one future correspondence, in order to ensure messages are contextually appropriate, as suggested by Cruz Huertas ([0001]), leading to predictable results of avoiding innocent jokes which would be poorly received, as suggested by Cruz Huertas ([0001]). The references cited are analogous art in the same field of natural language processing.

Consider claim 10, Gupta discloses executing the trained AI model on the conversation content, previous conversation content, and at least one future correspondence (processing historical data and text from ongoing VA chat to predict future intents via intent classifier, [0046], Fig. 5). Gupta and Vasylyev do not specifically mention determining unwanted content to be removed from the at least one future correspondence, and in response, deleting the unwanted content from the at least one future correspondence to generate a modified at least one future correspondence. Cruz Huertas discloses determining unwanted content to be removed from the at least one future correspondence, and in response, deleting the unwanted content from the at least one future correspondence to generate a modified at least one future correspondence (modifying a proposed message by removing content that does not contextually fit the messaging session, [0103]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gupta and Vasylyev by determining unwanted content to be removed from the at least one future correspondence, and in response, deleting the unwanted content from the at least one future correspondence to generate a modified at least one future correspondence, for reasons similar to those for claim 3.

Consider claim 17, Gupta discloses the processor performs executing the trained AI model on the conversation content, previous conversation content, and at least one future correspondence (processing historical data and text from ongoing VA chat to predict future intents via intent classifier, [0046], Fig. 5). Gupta and Vasylyev do not specifically mention determining unwanted content to be removed from the at least one future correspondence, and in response, deleting the unwanted content from the at least one future correspondence to generate a modified at least one future correspondence. Cruz Huertas discloses determining unwanted content to be removed from the at least one future correspondence, and in response, deleting the unwanted content from the at least one future correspondence to generate a modified at least one future correspondence (modifying a proposed message by removing content that does not contextually fit the messaging session, [0103]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Gupta and Vasylyev by determining unwanted content to be removed from the at least one future correspondence, and in response, deleting the unwanted content from the at least one future correspondence to generate a modified at least one future correspondence, for reasons similar to those for claim 3.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20120089394 A1 (Teodosiu) discloses visual display of semantic information during a call.
US 20170004205 A1 (Jain) discloses utilizing semantic hierarchies to process free-form text.
US 20240187524 (Koneru) discloses handling customer conversations at a contact center.
US 20220060580 (Dunn) discloses analyzing intent to route calls to a device or agent in a contact center.
US 20200175118 (Mahajan) discloses dynamically expanding natural language processing agent capability.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jesse Pullias, whose telephone number is 571/270-5135. The examiner can normally be reached M-F, 8:00 AM - 4:30 PM. The examiner’s fax number is 571/270-6135. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at 571/272-7516.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jesse S Pullias/
Primary Examiner, Art Unit 2655
03/27/26

Prosecution Timeline

Jul 18, 2024
Application Filed
Dec 28, 2025
Response after Non-Final Action
Mar 27, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596885
Automatically Labeling Items using a Machine-Trained Language Model
2y 5m to grant • Granted Apr 07, 2026
Patent 12573378
SPEECH TENDENCY CLASSIFICATION
2y 5m to grant • Granted Mar 10, 2026
Patent 12572740
MULTI-LANGUAGE DOCUMENT FIELD EXTRACTION
2y 5m to grant • Granted Mar 10, 2026
Patent 12566929
COMBINING DATA SELECTION AND REWARD FUNCTIONS FOR TUNING LARGE LANGUAGE MODELS USING REINFORCEMENT LEARNING
2y 5m to grant • Granted Mar 03, 2026
Patent 12536389
TRANSLATION SYSTEM
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 96% (+13.0%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
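The projection arithmetic appears straightforward. Assuming the grant probability is simply the career allow rate and the interview lift is additive, as the +13.0% notation suggests, the displayed figures reproduce as:

```python
# Assumption: grant probability = career allow rate, and the interview
# projection adds the +13.0% lift directly. Figures from the panels above.
granted, resolved = 873, 1052
career_allow_rate = granted / resolved      # ~0.830
with_interview = career_allow_rate + 0.13   # ~0.960
print(round(career_allow_rate * 100), round(with_interview * 100))  # 83 96
```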
