DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This Office Action is in response to the Amendment filed on 01/29/2026.
3. Claims 1-20 are pending. All the pending claims are examined herein.
Response to Arguments
4. Applicant's arguments filed 01/29/2026 have been fully considered but they are not persuasive. The Applicant generally argues that the language of each independent claim is not disclosed by Taheri (the prior art of record), and in particular argues that “Nothing in Taheri discloses or suggests generating a first summary or generating a second summary using customer input data and agent input data obtained in the chat session, as in the claims. As such, Applicant submits that Taheri fails to disclose and enable generating summaries, as in the claims.”
The Examiner respectfully disagrees. Taheri, as set forth in the rejection below, teaches the language of each independent claim. Furthermore, Taheri discloses generating a first summary or generating a second summary using customer input data and agent input data obtained in the chat session, as in the claims. For example, in some embodiments ([0123]), the executing may include receiving a history of the conversation between the chatbot and the user, including identifiers of user dialogue and chatbot dialogue, and generating a prompt based on execution of the LLM on the history of the conversation and the one or more credit card documents. In [0076], the host application 420 may combine the query with the response from the user interface and generate a conversation state submitted to the LLM 422; for example, each conversation may include a history of communications between the user and the chatbot until that point. In [0083], the dialog manager 520 may extract a history of the conversation from the chat session with the user, including all communication in the chat window 512 up to that point. In [0119], the method may include receiving a history of the conversation between the chatbot and the user, including identifiers of user dialogue and chatbot dialogue, and generating the next output based on execution of the LLM on the history of the conversation. Furthermore, see claim 4 of the prior art, that is, the processor is configured to receive a history of the conversation between the chatbot and the user that includes identifiers of user dialogue and chatbot dialogue, and generate the next prompt based on execution of the LLM on the history of the conversation. Also see Figs. 5A and 5B.
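The conversation-state mechanism cited above (Taheri [0076], [0083]: each turn of user and chatbot dialogue is accumulated into a history that is combined with the new query before being submitted to the LLM) can be sketched as follows. This is an illustrative sketch only; the function names and message format are hypothetical and not taken from the reference.

```python
# Illustrative sketch of the conversation-state mechanism described in
# Taheri [0076] and [0083]: each identified dialogue turn is appended to
# a history, and the full history is combined with the new chat input to
# form the input submitted to the LLM. All names here are hypothetical.

def update_conversation_state(history, speaker, text):
    """Append one identified dialogue turn to the conversation state."""
    return history + [{"speaker": speaker, "text": text}]

def build_llm_input(history, new_query):
    """Combine the accumulated history with the new chat input."""
    transcript = "\n".join(f"{t['speaker']}: {t['text']}" for t in history)
    return f"{transcript}\nuser: {new_query}"

state = []
state = update_conversation_state(state, "user", "What is my card's APR?")
state = update_conversation_state(state, "chatbot", "Which card do you hold?")
prompt = build_llm_input(state, "The rewards card.")
```

The conversation state grows with each submitted query, matching Taheri [0070], where the state is initially empty and accumulates every communication from the current session.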
The arguments are not persuasive, and the rejection is maintained.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
5. Claims 1-3, 5, 10, 11, and 13-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Taheri (US 20250117595 A1).
Taheri is directed to DYNAMIC PROMPTING BASED ON CONVERSATION CONTEXT.
As per claim 1, Taheri discloses a computer-implemented method comprising:
receiving, by a computer, customer input data from a client device and agent input data from an agent device during a chat session (an example operation may include one or more of receiving a sequence of inputs from a user during a conversation between the user and a chatbot within a chat window of a software application (Abstract); also see a chat session between a user and a chatbot (agent), and Figs. 4A-4B);
for a first state of the chat session, executing, by the computer, a large language model on the customer input data and the agent input data at the first state to generate a first summary (executing a large language model (LLM) on each input from the user to determine a next prompt to output via the chatbot (Abstract). [0012] A further example embodiment provides a method that includes one or more of receiving a sequence of inputs from a user during a conversation between the user and a chatbot within a chat window of a software application, executing a large language model (LLM) on each input from the user to determine a next prompt to output via the chatbot, respectively, wherein each execution of the LLM includes a new chat input from the user and a most-recent state of the conversation between the user and the chatbot within the chat window, and displaying the next prompt within a chat window on a user device. Also see Figs. 4A-4B); and
updating, by the computer, an agent chat interface of the agent device based on the first summary (each execution of the LLM includes a new chat input from the user and a most-recent state of the conversation between the user and the chatbot within the chat window, and displaying the next prompt within a chat window on a user device. Abstract, [0069]); and
for a second state of the chat session, executing, by the computer, the large language model on the customer input data and the agent input data at the second state to generate a second summary ([0011] execute a large language model (LLM) on each input from the user to determine a next prompt to output via the chatbot, respectively, wherein each execution of the LLM includes a new chat input from the user and a most-recent state of the conversation between the user and the chatbot within the chat window, and display the next prompt output by the chatbot within the chat window on a user device. [0072] Here, the LLM 422 may determine a goal/next goal of the conversation based on execution of the LLM on the new chat input and the most recent state of the conversation between the user and the chatbot. The LLM 422 can generate an additional response to be output by the chatbot based on execution of the LLM 422 on the next goal of the conversation. Also see [0012]-[0013], Figs. 4A-4B); and
updating, by the computer, the agent chat interface based on the second summary ([0013] wherein each execution of the LLM includes a new chat input from the user and a most-recent state of the conversation between the user and the chatbot within the chat window, and displaying the next prompt within a chat window on a user device. Also see [0069], [0072], and Figs. 4A-4B).
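The claimed two-state flow mapped above (a first summary generated at the first state, the agent interface updated, and a second summary generated at the second state over the accumulated inputs) can be sketched as follows. The sketch is illustrative only: `summarize` is a stand-in for the LLM execution, and all names are hypothetical rather than drawn from the claims or the reference.

```python
# Hypothetical sketch of the claimed per-state flow: at each state of the
# chat session, the combined customer and agent input up to that point is
# passed to a summarization step, and the agent chat interface is
# refreshed with the result. `summarize` stands in for an LLM call.

def summarize(customer_input, agent_input):
    # Placeholder for executing a large language model on both inputs.
    return f"summary of {len(customer_input) + len(agent_input)} messages"

def run_state(interface, customer_input, agent_input):
    summary = summarize(customer_input, agent_input)
    interface["summary"] = summary  # update the agent chat interface
    return summary

interface = {}
first = run_state(interface, ["hi"], ["hello"])                        # first state
second = run_state(interface, ["hi", "issue"], ["hello", "details"])   # second state
```

As in the mapped claim language, the second execution operates on the inputs as they exist at the second state, so the interface ends up reflecting the most recent summary.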
As per claim 2, Taheri further discloses the computer-implemented method of claim 1, wherein executing the large language model on the customer input data and the agent input data at the second state to generate a second summary comprises: executing, by the computer, the large language model on the first summary, the customer input data, and the agent input data at the second state to generate the second summary ([0069] In response, the host application 420 may trigger execution of an LLM 422 on each input from the user to determine a next piece of text content to output via the chatbot, respectively, wherein each execution of the LLM 422 includes a new chat input from the user and a most-recent state of the conversation between the user and the chatbot 414 within the chat window 412. The LLM 422 may transfer the next output/response to the host application 420, which outputs the next output via the chatbot 414 within the chat window 412 on the user device 410).
As per claim 3, Taheri further discloses the computer-implemented method of claim 1, further comprising:
determining, by the computer, that a transition has occurred between the first state of the chat session and the second state of the chat session, the transition associated with an indication that the agent chat interface was updated based on an intervening chat session ([0118] Referring to FIG. 8B, in 811, the method may include receiving a sequence of inputs from a user during a conversation between the user and a chatbot within a chat window of a software application. In 812, the method may include executing a large language model (LLM) on each input from the user to determine a next output to output via the chatbot, respectively, wherein each execution of the LLM includes a new chat input from the user and a most-recent state of the conversation between the user and the chatbot within the chat window. In 813, the method may include displaying the next output within a chat window on a user device with a description of the identified benefit obtained);
wherein, when executing the large language model during the second state of the chat session, the large language model is executed based on determining that the transition has occurred between the first state of the chat session and the second state of the chat session ([0082] FIGS. 5A-5E are diagrams illustrating a process of generating chat content based on a dynamic conversation state according to example embodiments; see a chat session or conversation state transition in Figs. 5A-5E. [0120] In some embodiments, the method may further include determining a next goal of the conversation based on execution of the LLM on the new chat input and the most recent state of the conversation between the user and the chatbot. In some embodiments, the method may further include generating an additional output to be output by the chatbot based on execution of the LLM on the next goal of the conversation. In some embodiments, the method may further include training the LLM based on execution of the LLM on a corpus of documents from the database, which are associated with user historical conversations).
As per claim 5, Taheri further discloses the computer-implemented method of claim 1, wherein the customer input data corresponding to the first state is associated with a first message (see Fig. 5A; see the different conversation states associated with corresponding messages or prompts), the computer-implemented method further comprising:
determining, by the computer and during the second state, that the customer input data corresponding to the second state is associated with a second message ([0120] In some embodiments, the method may further include determining a next goal of the conversation based on execution of the LLM on the new chat input and the most recent state of the conversation between the user and the chatbot. In some embodiments, the method may further include generating an additional output to be output by the chatbot based on execution of the LLM on the next goal of the conversation. In some embodiments, the method may further include training the LLM based on execution of the LLM on a corpus of documents from the database, which are associated with user historical conversations),
wherein, when executing the large language model during the second state of the chat session, the large language model is executed based on receiving the customer input data that is associated with the second message ([0019] In some embodiments, the method may include displaying a first output by the chatbot via the chat window, receiving a first natural language input via the chat window in response to the first output, and generating a second output by the chatbot via the chat window based on execution of the LLM on the first output and the first natural language input. In some embodiments, the method may include receiving an additional natural language input from the chat window and generating an additional output by the chatbot based on execution of the LLM on the second output and the additional natural language input. In some embodiments, the method may include receiving a history of the conversation between the chatbot and the user, including identifiers of user dialogue and chatbot dialogue, and generating the next output based on execution of the LLM on the history of the conversation).
As per claim 10, Taheri further discloses the computer-implemented method according to claim 1, further comprising:
parsing, by the computer, a form file into a plurality of fields, each field comprising a field name ([0111] The LLM 722 may identify specific benefits sections within the broader credit card documentation. Using parsing functionality, the instant solution may then extract just this relevant content 744. This snippet is especially useful for users wanting specific, pinpointed information without the bulk of full documentation); and
identifying, by the computer, an instance of the field name of at least one field in at least one of the first summary or the second summary ([0164] operational data may be identified and illustrated herein within modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set or may be distributed over different locations, including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network).
As per claim 11, Taheri further discloses the computer-implemented method according to claim 1, wherein receiving the customer input data from the client device comprises:
receiving, by the computer, customer input data from a plurality of client devices that are associated with distinct customers, the plurality of client devices corresponding to a plurality of chat sessions that are simultaneously performed, and updating, by the computer, the agent chat interface for the agent device based on the plurality of chat sessions ([0006] A further example embodiment provides a method that includes one or more of receiving an input from a user during a conversation that includes a plurality of prompts between the user and a chatbot within a chat window of a software application, converting text content within the received input into a vector, executing a large language model (LLM) on the vector and a database of vectorized responses to identify a vectorized response to output from among the plurality of vectorized responses within the database, converting the vectorized response into a text response, and displaying the text response output by the chatbot within the chat window of the software application. [0081] In one embodiment, the current solution offers a chatbot 414 that can engage with users regarding payment card inquiries. When a user poses a question about a payment card, the processor utilizes the LLM 422, which in turn taps into a stored database of payment card information to construct a relevant response. On receiving a natural language query, the processor extracts relevant documents from the database and feeds both the query and these documents into the LLM to construct a more informed response. [0126] Referring to FIG. 8D, in 831, the method may include receiving an input from a user during a conversation that includes a plurality of prompts between the user and a chatbot within a chat window of a software application).
As per claim 13, Taheri further discloses the computer-implemented method of claim 1, further comprising:
determining, by the computer, a plurality of messages associated with the customer input data and the agent input data corresponding to the first state of the chat session ([0072] In some embodiments, the LLM 422 may be configured to try to obtain the most information in the shortest number of rounds of communication. Here, the LLM 422 may determine a goal/next goal of the conversation based on execution of the LLM on the new chat input and the most recent state of the conversation between the user and the chatbot. The LLM 422 can generate an additional response to be output by the chatbot based on execution of the LLM 422 on the next goal of the conversation); and
updating, for the first state of the chat session and by the computer, a message buffer comprising a predetermined number of messages from among the plurality of messages corresponding to the first state of the chat session ([0070] Each execution of the LLM 422 may include a new conversation state input to the LLM 422 from the chat window 412. For example, the conversation state may include a history of all communications from the current session with the user performed via the chat window 412. Initially, the conversation state may be empty. Each time the user submits a query, the conversation state grows. [0072] In some embodiments, the LLM 422 may be configured to try to obtain the most information in the shortest number of rounds of communication. Here, the LLM 422 may determine a goal/next goal of the conversation based on execution of the LLM on the new chat input and the most recent state of the conversation between the user and the chatbot. The LLM 422 can generate an additional response to be output by the chatbot based on execution of the LLM 422 on the next goal of the conversation. Also see [0012], [0069]-[0070]);
wherein the computer applies the large language model on the customer input data and the agent input data corresponding to the predetermined number of messages in the message buffer to generate the first summary ([0126] Referring to FIG. 8D, in 831, the method may include receiving an input from a user during a conversation that includes a plurality of prompts between the user and a chatbot within a chat window of a software application. In 832, the method may include converting text content within the received input into a vector. In 833, the method may include executing a large language model (LLM) on the vector and a database of vectorized responses to identify a vectorized response to output from among the plurality of vectorized responses within the database. In 834, the method may include converting the vectorized response into a text response. Also see [0135], [0137], [0138] and [0144]).
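The "message buffer comprising a predetermined number of messages" recited in claim 13 can be sketched with a bounded buffer that retains only the most recent messages. The buffer size and message format below are illustrative assumptions, not taken from the claims or the reference.

```python
from collections import deque

# Sketch of a bounded message buffer as recited in claim 13: only a
# predetermined number of the most recent messages are retained, and
# the LLM would be applied to the buffer contents to generate the
# summary. BUFFER_SIZE and the message strings are illustrative.

BUFFER_SIZE = 3
buffer = deque(maxlen=BUFFER_SIZE)  # oldest message is dropped automatically

for msg in ["m1", "m2", "m3", "m4", "m5"]:
    buffer.append(msg)

# Only the last BUFFER_SIZE messages remain available for summarization.
```

A fixed-size buffer of this kind bounds the input supplied to the model regardless of how long the chat session grows.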
As per claim 14, Taheri further discloses the computer-implemented method of claim 13, further comprising:
determining, by the computer, a plurality of messages associated with the customer input data and the agent input data corresponding to the first state of the chat session and the second state of the chat session ([0021] FIGS. 5A-5E are diagrams illustrating a process of generating chat content based on a dynamic conversation state according to example embodiments); and
updating, by the computer, a message buffer comprising a predetermined number of messages from among the plurality of messages corresponding to the first state of the chat session or the second state of the chat session ([0083] In the example embodiments, the dialog manager 520 may extract a history of the conversation from the chat session with the user, including all communication in the chat window 512 up to that point. The history of the conversation may also be referred to herein as a “conversation state.” The conversation state may include the chat content within the chat window 512 that has been output already. Thus, the conversation state may dynamically evolve each time a new communication is added to the chat window 512 by either the user or the chatbot 514 via the LLM 522);
wherein the computer applies the large language model on the customer input data and the agent input data corresponding to the predetermined number of messages in the message buffer to generate the second summary ([0077] For example, the input to the LLM 422 may include the query 432 submitted by the user via the chat window 412 shown in FIG. 4B. [0078] In response, the LLM 422 may generate a response 434, output via the chatbot 414 within the chat window 412. Here, the LLM 422 may use the text content from the query 432 to derive the response 434. However, as the conversation continues, the size of the data submitted to the LLM 422 may increase. For example, each additional response and query may be accumulated/aggregated with the current conversation state sent to the LLM 422 to generate a response. Thus, the LLM 422 continues to receive a larger data set. For example, the LLM 422 may receive the query 436, the response 434, and the previous query (i.e., the query 432) as inputs and output the response 438 in response. also see [0072]).
As per claim 15, Taheri further discloses the computer-implemented method of claim 1, wherein executing the large language model on the customer input data and the agent input data at the first state to generate the first summary comprises:
generating, by the computer, one or more outputs associated with the first summary by executing the large language model on a prompt response, the customer input data, and the agent input data ([0082] FIGS. 5A-5E are diagrams illustrating a process of generating chat content based on a dynamic conversation state according to example embodiments. For example, FIG. 5A illustrates a process 500 of a dialog manager 520 executing a conversation between a chatbot and a user via a chat window 512 on a user device 510. In this example, the dialog manager 520 manages communications between an LLM 522 and a chatbot 514 displayed on the chat window 512 of the user device 510. For example, content from the chat window 512 may be retrieved by the dialog manager 520 (e.g., via a software application, etc.) and input into the LLM 522. In response, the LLM 522 may generate a response); and
obtaining, by the computer, the one or more outputs from the large language model, the output associated with the first summary, wherein the prompt response includes input text indicating one or more conditions for the large language model to process the customer input data and the agent input data based on the one or more conditions ([0105] FIGS. 7A-7B illustrate a process of generating and sending a confirmation letter to a user device according to example embodiments. For example, FIG. 7A illustrates a process 700 of generating an electronic message 730 based on execution of an LLM 722 on a conversation state 716 from a chat window 714 displayed on a user interface 712 of a user device 710. In this example, a user and a chatbot are conversing via the chat window 714. The responses output by the chatbot may be generated by the LLM 722. Also see [0115, 0118-0121]).
As per claims 16-19, Taheri further discloses the computer system (for example, the computer system of Fig. 9). The limitations of claims 16-19 correspond to the limitations of method claims 1 and 3-5, respectively. Thus, the system claims are rejected under citations similar to those given for method claims 1 and 3-5, respectively.
As per claim 20, Taheri further discloses a non-transitory machine-readable storage medium (storage medium 914, Fig. 9) having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform the method steps of claim 1. Thus, the storage medium claim is rejected under citations similar to those given for method claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Taheri in view of Vendrow (US 20230297765 A1).
As per claim 12, Taheri discloses the computer-implemented method according to claim 1.
Although Taheri discloses the chat session, Taheri does not disclose generating a transcript of the chat session. That is, Taheri fails to disclose generating a transcript of the chat session based on the customer input data and the agent input data, wherein the first summary is generated based on a first portion of the transcript at the first state, and wherein the second summary is generated based on a second portion of the transcript at the second state.
Vendrow is directed to summarizing meeting content with accuracy control ([0050] In some implementations, the conference management system 150 may include services configured to analyze meeting content and generate an intelligent meeting summary for users. Meeting content may include but is not limited to audio and video recordings and live streams of the meeting, generated transcripts of the meeting, recorded chat sessions during the meeting, as well as any documents or other content presented during the meeting).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the teaching of Vendrow with Taheri because keeping a session transcript provides a clear record of the session, increases accountability by documenting action items, and improves communication by ensuring everyone is on the same page.
Therefore, it would have been obvious to combine Vendrow with Taheri to obtain the invention as specified in claim 12.
Allowable Subject Matter
7. Claims 4 and 6-9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
8. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU whose telephone number is (571) 272-4051; the email address is Tadesse.hailu@USPTO.GOV. The examiner can normally be reached Monday-Friday, 9:30-5:30 (Eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bashore, William L., can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TADESSE HAILU/Primary Examiner, Art Unit 2174