Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The instant application, having application number 19/044,212, filed on February 3, 2025, has claims 1-19 pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/03/2025 has been received. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 USC 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to a process (an act, or series of acts or steps), a machine (a concrete thing, consisting of parts, or of certain devices and combination of devices), and a manufacture (an article produced from raw or prepared materials by giving these materials new forms, qualities, properties, or combinations, whether by hand labor or by machinery). Thus, each of the claims falls within one of the four statutory categories (Step 1). However, the claims recite inputting and generating a semantic feature, which is an abstract idea falling within the mental processes grouping.
The limitation of “electronically inputting, in an application program interface of a chat application, a first prompt assigned to a user, the first prompt yielding a plurality of possible responses from the language model based on content of the document under review; generating an example set comprising text from example documents representative of each of the plurality of possible responses; and electronically inputting, before the first prompt in an application program interface of a chat application, a fabricated history of a conversation between the user and the language model, the fabricated history comprising the example set and a plurality of possible responses assigned to the language model,” as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components (Step 2A Prong 1). That is, other than reciting “electronically” and “language model,” nothing in the claim element precludes the steps from practically being performed in the human mind. For example, but for the “electronically” and “language model” language, “inputting” and “generating,” in the context of this claim, encompass the user mentally, with the aid of pen and paper, keeping track of a chat to generate an example set and writing down possible responses. If a claim limitation, under its broadest reasonable interpretation, covers mental processes but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
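For context, the claimed “fabricated history” corresponds to the few-shot prompting pattern, in which example exchanges are prepended to the chat transcript before the real prompt so the model treats them as prior conversation turns. A minimal sketch, assuming a generic chat-message structure; the function and example data below are illustrative, not the applicant's implementation:

```python
# Illustrative sketch: example prompt/response pairs are prepended to the
# real first prompt so the language model sees them as prior turns.

def build_fabricated_history(example_set, possible_responses):
    """Interleave example-document text (assigned to the user) with the
    corresponding possible responses (assigned to the language model)."""
    history = []
    for example_text, response in zip(example_set, possible_responses):
        history.append({"role": "user", "content": example_text})
        history.append({"role": "assistant", "content": response})
    return history

examples = ["Contract A: ... indemnification clause ...",
            "Contract B: no indemnification language."]
responses = ["Yes - the semantic feature is present.",
             "No - the semantic feature is absent."]

messages = build_fabricated_history(examples, responses)
# The real first prompt about the document under review follows the history.
messages.append({"role": "user",
                 "content": "Document under review: ... Does it contain "
                            "an indemnification clause?"})
```

The point of the structure is that each possible response appears in the transcript attributed to the model itself, which is what the claim calls responses “assigned to the language model.”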
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional element of “electronically inputting, before the first prompt in an application program interface of a chat application.” This limitation amounts to a data-gathering step, which is considered insignificant extra-solution activity (see MPEP 2106.05(g)). The inputting step is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The insignificant extra-solution activity identified above, which includes the data-gathering steps, is recognized by the courts as well-understood, routine, and conventional activity when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II)(i), receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). The claims are not patent eligible.
Claim 2 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 2 recites the same abstract idea of claim 1. The claim recites the additional limitations of “wherein the first prompt comprises text from the document under review and a first query directed to identifying the semantic feature of interest in the document under review”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more. Same rationale applies to claims 3-4.
Claim 5 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 5 recites the same abstract idea of claim 1. The claim recites the additional limitations of “wherein the plurality of possible responses include: a positive response, indicating the semantic feature of interest is contained in the text from the corresponding example document; and a negative response, indicating the semantic feature of interest is not contained in the text from the corresponding example document.”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more.
Claim 6 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 6 recites the same abstract idea of claim 1. The claim recites the additional limitations of “wherein the documents are legal contracts”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more. Same rationale applies to claim 7.
Claim 8 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 8 recites the same abstract idea of claim 1. The claim recites the additional limitations of “wherein at least one of the example set and the fabricated history is stored in a database accessible by the chat application”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more.
Claim 9 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 9 recites the same abstract idea of claim 1. The claim recites the additional limitations of “repeating the step of electronically inputting the first prompt assigned to a user, creating a new first prompt assigned to the user with new text from one of the document under review and a new document under review, wherein the first prompt is replaced by the new first prompt and wherein the new first prompt follows the fabricated history”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more.
As per claim 10, the limitation of “transmitting, to the language model, a fabricated history of a conversation between a user and the language model, the fabricated history comprising: a first prompt assigned to a user, content of the first prompt comprising text from a first document and a first query directed to identifying the semantic feature of interest in the first document; a first response assigned to the language model, content of the first response responsive to the first query; a second prompt assigned to the user, content of the second prompt comprising text from a second document and a second query directed to identifying the semantic feature of interest in the second document; and a second response assigned to the language model, content of the second response responsive to the second query; wherein the first and second queries are the same and wherein the content of the first response differs from content of the second response; transmitting, to the language model, a third prompt assigned to the user, content of the third prompt comprising text from a third document and a third query directed to identifying the semantic feature of interest in the third document, wherein the third query is the same as the first and second queries; and receiving, from the language model, a third response to the third query.” amounts to nothing more than collecting, analyzing, and comparing information. Courts have found similar concepts to be abstract. See, e.g., Electric Power Group v. Alstom, 830 F.3d 1350 (Fed. Cir. 2016); Content Extraction and Transmission LLC v. Wells Fargo Bank, 776 F.3d 1343 (Fed. Cir. 2014). Accordingly, claim 10 recites an abstract idea.
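The transmit/receive sequence recited in claim 10 can be sketched as follows, assuming a generic chat-completion interface; `send_to_model`, `make_prompt`, and the message structure are hypothetical stand-ins for illustration, not any particular vendor's API:

```python
# Illustrative sketch of the claim-10 sequence: a fabricated two-turn
# history in which the same query is posed against two documents and
# draws differing responses, followed by a live third prompt that
# reuses the same query against a third document.

QUERY = "Does this document contain the semantic feature of interest?"

def make_prompt(document_text):
    # Each prompt pairs document text with the (identical) query.
    return {"role": "user", "content": f"{document_text}\n\n{QUERY}"}

fabricated_history = [
    make_prompt("Text of first document ..."),
    {"role": "assistant", "content": "Yes, the feature is present."},
    make_prompt("Text of second document ..."),
    {"role": "assistant", "content": "No, the feature is absent."},
]

def classify(third_document_text, send_to_model):
    # send_to_model is a placeholder for any chat-completion call that
    # accepts a message list and returns the model's reply text.
    messages = fabricated_history + [make_prompt(third_document_text)]
    return send_to_model(messages)  # the "third response"
```

Because the first and second fabricated responses differ while their queries are identical, the history demonstrates both possible answers to the model before the third query is ever sent.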
The claim does not include additional elements that integrate the abstract idea into a practical application.
The only additional element recited is the use of a generic “large language model” and generic computer components implicitly required to perform the transmitting and receiving. The claim does not recite any specific improvement to computer functionality, model architecture, training method, or data handling. The claim does not recite additional elements that amount to significantly more than the abstract idea itself.
The use of a large language model constitutes well-understood, routine, and conventional computer activity. The mere inclusion of “fabricated conversation history” is simply a form of data formatting, which courts have found does not constitute “significantly more.” See Alice, supra.
Claim 11 is dependent on claim 10 and includes all the limitations of claim 10. Therefore, claim 11 recites the same abstract idea of claim 10. The claim recites the additional limitations of “wherein the first and second responses represent all possible responses to the third query.”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more.
Claim 12 is dependent on claim 10 and includes all the limitations of claim 10. Therefore, claim 12 recites the same abstract idea of claim 10. The claim recites the additional limitations of “wherein the fabricated history includes one or more additional prompts assigned to the user and one or more additional responses assigned to the language model, each of the one or more additional prompts assigned to the user comprising text from one of a plurality of documents and a query directed to identifying the semantic feature of interest in each of the plurality of documents, wherein the first, second, and plurality of additional responses represent all possible responses to the third query.”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more.
Claim 13 is dependent on claim 10 and includes all the limitations of claim 10. Therefore, claim 13 recites the same abstract idea of claim 10. The claim recites the additional limitations of “wherein the first, second, and third documents are different.”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more. Same rationale applies to claim 14.
Claim 15 is dependent on claim 10 and includes all the limitations of claim 10. Therefore, claim 15 recites the same abstract idea of claim 10. The claim recites the additional limitations of “transmitting a new third prompt assigned to the user with new text from the new third document, wherein the third prompt is replaced by the new third prompt and wherein the new third prompt follows the fabricated history.”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more.
Claim 16 is dependent on claim 10 and includes all the limitations of claim 10. Therefore, claim 16 recites the same abstract idea of claim 10. The claim recites the additional limitations of “wherein all possible responses to the first and second queries are: positive, indicating that the corresponding first or second document contains the semantic feature of interest; and negative, indicating the corresponding first or second document does not contain the semantic feature of interest; wherein one of the first and second responses is positive and the other of the first and second responses is negative.”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more.
Claim 17 is dependent on claim 10 and includes all the limitations of claim 10. Therefore, claim 17 recites the same abstract idea of claim 10. The claim recites the additional limitations of “transmitting, to the language model, instructions defining a format for responses generated by the language model.”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more. Same rationale applies to claim 18.
Claim 19 is dependent on claim 10 and includes all the limitations of claim 10. Therefore, claim 19 recites the same abstract idea of claim 10. The claim recites the additional limitations of “querying a database to retrieve the fabricated history of a conversation, wherein the fabricated history is associated with the semantic feature of interest.”, which is further elaborating on the abstract idea, and therefore it does not amount to significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 are rejected under 35 USC 103(a) as being unpatentable over Hernandez et al. (US 20250265413 A1) (hereinafter Hernandez) in view of Taheri (US 20250117595 A1) (hereinafter Taheri).
As per claim 1, Hernandez discloses electronically inputting, in an application program interface of a chat application, a first prompt assigned to a user [At operation 202, customer input data is received from a client device and agent input data is received from an agent device during a chat session., paragraph 40], the first prompt yielding a plurality of possible responses from the language model based on content of the document under review [In some embodiments, the client device generates the customer input data to initiate a chat session. For example, in the case where a customer requires assistance (e.g., in connection with a purchase they made, a product or service they purchased, and/or the like), the customer may navigate to a website using the client device to initiate the chat session. At this time, the customer may provide input via the input device associated with the client device, the input representing, for example, customer identifiers, transaction identifiers, product identifiers (e.g., a serial number), service identifiers (e.g., an order confirmation number), and/or the like, paragraph 42, (it is understood that the issuance of any identifier or reference number to a customer signifies that the associated documents have been submitted and are already under review.)]; generating an example set comprising text from example documents representative of each of the plurality of possible responses [The agent may then provide input that is received by the automated context monitoring system, the input associated with a prompt response (e.g., information provided by the agent after receiving the prompt)., paragraph 55]. 
However, Hernandez does not disclose electronically inputting, before the first prompt in an application program interface of a chat application, a fabricated history of a conversation between the user and the language model, the fabricated history comprising the example set and a plurality of possible responses assigned to the language model. On the other hand, Taheri discloses electronically inputting, before the first prompt in an application program interface of a chat application, a fabricated history of a conversation between the user and the language model, the fabricated history comprising the example set and a plurality of possible responses assigned to the language model [the training may include training the LLM model to understand a correlation between a product identifier and a credit card benefit based execution of the LLM on the sequence of prompts and the sequence of prompts. In some embodiments, the executing may include receiving a history of the conversation between the chatbot and the user, including identifiers of user dialogue and chatbot dialogue, and generating a prompt based on execution of the LLM on the history of the conversation and the one or more credit card documents, paragraph 123]. Both Hernandez and Taheri are in the field of endeavor of automated context monitoring across multiple chat sessions.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the automated context monitoring across multiple chat sessions while customers communicate with agent-advisors, as taught by Hernandez, with the dynamic prompting during a conversation between a user and a chatbot, as disclosed by Taheri, to receive a sequence of inputs from a user in a conversation with a chatbot within a chat window of a software application, execute a large language model (LLM) on the inputs from the user to determine a prompt to output through the chatbot, and display the prompt output by the chatbot, so that the chatbot can quickly and accurately gauge a user's familiarity level and respond with effective prompts.
As per claim 2, Hernandez discloses wherein the first prompt comprises text from the document under review and a first query directed to identifying the semantic feature of interest in the document under review [Fig. 5B, Agent’s response to the user’s confirmation number].
As per claim 3, Hernandez discloses a plurality of prompts assigned to the user, each prompt assigned to the user including text from one of the example documents and the first query directed to identifying the semantic feature in the one of the example documents; wherein each prompt assigned to the user is followed by a corresponding one of the plurality of possible responses assigned to the language model [a GUI associated with a third region 505c and a fourth region 505d are shown where the third region 505c and the fourth region 505d are associated with a chat session involving another client device which may be similar to the client device 502 of FIG. 5A. The third region 505c and the fourth region 505d may be updated by the agent device 504 similar to how the first region 505a and the second region 505b are updated based on communication of data associated with one or more messages with another client device., paragraph 109].
As per claim 4, Hernandez discloses wherein the example documents used to generate the example set are different and are different from the document under review [Fig. 5B, 505a is different from 505c and therefore their summaries].
As per claim 5, Taheri discloses wherein the plurality of possible responses include: a positive response, indicating the semantic feature of interest is contained in the text from the corresponding example document; and a negative response, indicating the semantic feature of interest is not contained in the text from the corresponding example document [Fig. 5E, item 532, the conversation state includes the user response 532 from the user, which includes a natural language input such as a description, a query, an answer, or the like provided by the user, paragraph 94].
As per claim 6, Hernandez discloses wherein the documents are legal contracts [Fig. 6F, user needed help with registration, can be interpreted as registering any legal documents].
As per claim 7, Hernandez discloses wherein the semantic feature of interest is a term or condition of the legal contracts [Fig. 6F, user needed help with registration, can be interpreted as registering any legal documents].
As per claim 8, Hernandez discloses wherein at least one of the example set and the fabricated history is stored in a database accessible by the chat application [Fig. 5B, Agent’s response to the user’s confirmation number].
As per claim 9, Taheri discloses repeating the step of electronically inputting the first prompt assigned to a user, creating a new first prompt assigned to the user with new text from one of the document under review and a new document under review, wherein the first prompt is replaced by the new first prompt and wherein the new first prompt follows the fabricated history [Returning again to FIG. 6A, the vector 644 may be input to a vector-to-word model 626, which converts the vector 644 to text and outputs the text via the chatbot 616 in the chat window 614 on the user interface 612 of the user device 610. This process may be iteratively repeated. By keeping a database of vectors, the LLM 624 can operate on vector content rather than text content, making the LLM 624 more efficient due to less processing time., paragraph 100].
Claim 16 is rejected under 35 USC 103(a) as being unpatentable over Taheri in view of Everest (US 20250124001 A1) (hereinafter Everest).
As per claim 16, the rejection of claim 10, set forth below, is incorporated herein. However, Taheri does not disclose wherein all possible responses to the first and second queries are: positive, indicating that the corresponding first or second document contains the semantic feature of interest; and negative, indicating the corresponding first or second document does not contain the semantic feature of interest; wherein one of the first and second responses is positive and the other of the first and second responses is negative. On the other hand, Everest discloses wherein all possible responses to the first and second queries are: positive, indicating that the corresponding first or second document contains the semantic feature of interest; and negative, indicating the corresponding first or second document does not contain the semantic feature of interest; wherein one of the first and second responses is positive and the other of the first and second responses is negative [statistical correlations and/or mathematical associations may include probabilistic formulas or relationships indicating a positive and/or negative association between at least an extracted word and/or a given semantic meaning; positive or negative indication may include an indication that a given document is or is not indicating a category semantic meaning. Whether a phrase, sentence, word, or other textual element in a document or corpus of documents constitutes a positive or negative indicator may be determined, in an embodiment, by mathematical associations between detected words, comparisons to phrases and/or words indicating positive and/or negative indicators that are stored in memory, paragraph 125]. Both Taheri and Everest are in the field of user-specific outputs of machine learning models.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the dynamic prompting during a conversation between a user and a chatbot, as disclosed by Taheri, with the data ingestion for user-specific outputs of one or more machine learning models, as taught by Everest, to create a user-specific output as a function of the educational module.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 10-15 and 17-19 are rejected under 35 USC 102(a)(1) as being anticipated by Taheri (US 20250117595 A1) (hereinafter Taheri).
As per claim 10, Taheri discloses transmitting, to the language model, a fabricated history of a conversation between a user and the language model, the fabricated history [The user's current location may be received by the LLM 522 from the user device 510, paragraph 94] comprising: a first prompt assigned to a user, content of the first prompt comprising text from a first document and a first query directed to identifying the semantic feature of interest in the first document [Fig. 5E, item 532, the conversation state includes the user response 532 from the user, which includes a natural language input such as a description, a query, an answer, or the like provided by the user, paragraph 94, (it is understood that item 532 is interpreted as the first prompt)]; a first response assigned to the language model, content of the first response responsive to the first query [Fig. 5C, item 514]; a second prompt assigned to the user, content of the second prompt comprising text from a second document and a second query directed to identifying the semantic feature of interest in the second document [Fig. 5E, item 542, interpreted as a second user’s prompt]; and a second response assigned to the language model, content of the second response responsive to the second query [Fig. 5E, item 516, second response to the user’s second prompt]; wherein the first and second queries are the same [Fig. 5E, item 532] and wherein the content of the first response differs from content of the second response [Fig. 5E, two different responses to user’s query 532]; transmitting, to the language model, a third prompt assigned to the user, content of the third prompt comprising text from a third document and a third query directed to identifying the semantic feature of interest in the third document, wherein the third query is the same as the first and second queries [In 802, the method may include conversing with a user via a chatbot within a chat window of a software application, wherein the conversing comprises receiving a query from the user about a payment card during a chat session between the user and the chatbot, paragraph 113]; and receiving, from the language model, a third response to the third query [displaying the generated chatbot response via the chatbot within the chat window of the software application during the chat session, paragraph 114].
As per claim 11, Taheri discloses wherein the first and second responses represent all possible responses to the third query [The response will also become part of the conversation state. The output 516 and any response may be added to the conversation state 530 to generate a second conversation state, paragraph 86].
As per claim 12, Taheri discloses wherein the fabricated history includes one or more additional prompts assigned to the user and one or more additional responses assigned to the language model, each of the one or more additional prompts assigned to the user comprising text from one of a plurality of documents and a query directed to identifying the semantic feature of interest in each of the plurality of documents, wherein the first, second, and plurality of additional responses represent all possible responses to the third query [The LLM detects if a product identifier is correlated with any relevant credit card offers, promos, or cashback from the stored documentation. For example, the product identifier may be “MacBook.” This correlation is achieved through a combination of keyword matching, context recognition, and potentially semantic understanding, paragraph 109].
As per claim 13, Taheri discloses wherein the first, second, and third documents are different [Fig. 6B, the LLM 624 may perform a similarity analysis within vector space using cosine similarity or the like. The cosine similarity function measures the similarity between two vectors within the vector space/product space. It is measured by determining the cosine of the angle between the two vectors and whether they are pointing in the same direction. In the example of FIG. 6B, the LLM 624 compares the vector 642 to each of a plurality of vectors (vectorized responses) stored within the database 630 and selects a vector 644 that most closely matches the vector 642 in vector space as the response vector, paragraph 99].
As per claim 14, Taheri discloses wherein the first, second, and third documents are legal contracts and wherein the semantic feature of interest is a condition of the legal contracts [Recognizing the relationship between specific products (e.g., card types) and their associated benefits is crucial. The LLM is trained to deduce these connections based on conversational prompts and responses. The current solution tracks the conversation's history, combining both user and chatbot interactions. This history guides the LLM to generate more contextually relevant prompts, paragraph 55].
As per claim 15, Taheri discloses transmitting a new third prompt assigned to the user with new text from the new third document, wherein the third prompt is replaced by the new third prompt and wherein the new third prompt follows the fabricated history [the converting may include converting previous responses from the user and previous outputs by the chatbot within the chat window into the vector and identifying the vectorized response based on an aggregation of the received input, the previous responses from the user, and the previous outputs by the chatbot, paragraph 128].
As per claim 17, Taheri discloses transmitting, to the language model, instructions defining a format for responses generated by the language model [When the LLM identifies a user's credit card benefits, it can format this information into an interactive, detailed digital document, paragraph 109].
As per claim 18, Taheri discloses wherein the first and second responses are formatted according to the instructions assigned to the language model [if the user inquires about credit card benefits, the processor identifies the context and uses the LLM 722 to extract appropriate data from documentation 744. Subsequently, a tailored letter of confirmation 746 is formulated. This letter is then converted to a format suitable for transmission (e.g., PDF), encapsulated in an electronic message, and sent to the user's device 710. When the LLM identifies a user's credit card benefits, it can format this information into an interactive, detailed digital document, such as a hyperlinked PDF or an interactive web page, for example, paragraph 109].
As per claim 19, Taheri discloses querying a database to retrieve the fabricated history of a conversation, wherein the fabricated history is associated with the semantic feature of interest [The user's current location may be received by the LLM 522 from the user device 510, paragraph 94].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOOSHA ARJOMANDI, whose telephone number is (571)272-9784. The examiner can normally be reached at (571)272-9784.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sanjiv Shah can be reached on (571)272-4098. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
January 7, 2026
/NOOSHA ARJOMANDI/ Primary Examiner, Art Unit 2166