DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-5, 7-9, and 11-22 are pending in this application.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-5, 7-9, and 11-22 have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's arguments filed 12/10/2025 have been fully considered but they are not persuasive.
Applicant’s representative argues on pages 7-8 that Bhargava does not disclose “extract wallet data which includes an identifier of at least one card type from a mobile wallet within the software application installed on the source device.” In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., that the chatbot can converse with a source device of a user through a software application that can identify payment cards stored on the source device through an automated process, which prevents the need for a user to manually enter this information and ensures that the cards are accurate and current) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The examiner notes that applicant’s amendment no longer recites the argued claim language. The claim now recites “extract an identifier of at least one card type from the software application installed on the source device.” The software application is the same software application within whose chat window the chat session with the source device is conducted. As applicant has stated on page 8, Bhargava discloses in fig. 3, [0074]-[0082], conversation input by the user and the conversation agent, i.e., the requested information in the form of the brand “Samsung”, the location of purchase “Arizona”, and payment cards in the form of “Bankamericard” and “Chase Platinum”, is used to determine the payment network, which includes payment cards such as those operated by MasterCard, Visa, Discover, American Express, BOA, etc. It is clear that “Bankamericard” and “Chase Platinum” are identifiers of the payment card type that were extracted within the software application (chat session).
Therefore, Bhargava discloses the claimed invention.
Applicant’s representative argues on pages 8-9 that Bhargava does not disclose “determine a goal of the chat session based on a conversation state extracted from the chat window during the chat session…generate a chatbot response that prompts for attributes of the goal based on the execution of the LLM on the goal and the digital document”. Applicant’s representative asserts that Bhargava does not disclose the recited claim language because there is no need to determine a “goal” of the chat session, since the conversation agent is trained to perform only one task (select an appropriate installment payment option). The examiner respectfully disagrees.
The examiner notes that Bhargava discloses in Fig. 1, [0056], examples of a user swiping a payment card in the POS terminal, where a pre-selected installment payment option is applied to the payment transaction, which does not require a chatbot conversational agent to determine the user’s need for assistance (goals). However, it is also noted that Bhargava discloses utilizing a chatbot conversational agent to provide assistance to the user in order to determine how the chatbot conversational agent can provide the assistance the user is requesting. Bhargava discloses in [0076] that the conversational agent “Chatbot” assists the user in an interaction channel. The user is seeking assistance on installment payment options for making an intended purchase using a payment card. Fig. 3, [0077]-[0081] discloses that the conversational agent has a conversation with the user in which the user provides a first input, i.e., an input specifying her requirement of identifying a suitable installment payment option for purchase of a refrigerator. After identifying that the user requires assistance with installment payment options, the Chatbot follows up with the user for the additional information needed to provide a response to the user as assistance with the installment payment options. Fig. 3, item 322, [0104] discloses the installment payment options after reviewing the payment option offers in the database of relevant payment options identified for the user. Therefore, it is clear that Bhargava discloses a chatbot which determines the user’s goal, matches the database document with the product the user is buying (object) along with the card payment (card type), and generates a chatbot response which displays the options in the database document.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-9, 11-20 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Bhargava (US 2019/0340595) in view of Schoenmackers (US 2018/0268818).
Re Claim 1, Bhargava discloses an apparatus comprising:
a memory ([0141], memory); and
a processor coupled to the memory, the processor configured to ([0141], processor with the memory):
conduct a chat session with a source device via a chatbot within a chat window of a software application, and extract a description of an object from the chat window ([0039]-[0040], use natural language and AI logic to extract information such as type of product to be purchased, a brand name and brand model),
extract an identifier of at least one card type from the software application installed on the source device (fig. 3, [0074]-[0082], conversation input by the user and the conversation agent, i.e., the requested information in the form of the brand “Samsung”, the location of purchase “Arizona”, and payment cards in the form of “Bankamericard” and “Chase Platinum”, is used to determine the payment network, which includes payment cards such as those operated by MasterCard, Visa, Discover, American Express, BOA, etc.),
determine a goal of the chat session based on a conversation state extracted from the chat window during the chat session (Fig. 3, [0076]-[0081]: at 302, the Chatbot asks the user how they are doing; at 304, the user responds that they are planning to buy a refrigerator and are looking for the best payment option that suits them; at 306, the Chatbot identifies that the user requires assistance in identifying the best payment option and follows up with where they want to buy it from and how they would like to pay for it, which is the goal),
determine a digital document matches the object and the at least one card type based on the extracted description of the object, at least one card type (fig. 3, [0076]-[0081], extracts information from the textual chat interaction to retrieve information of the brand Samsung and the payment cards Bankamericard and Chase Platinum for the type of product to be purchased), and the digital document ([0092]-[0097], Fig. 5, the database containing relevant payment options is identified for the user based on the conversation with the chatbot), and
display a chatbot response via the chatbot during the chat session (fig. 3, [0103]-[0104], shows payment option offers for the user to choose and respond to an appropriate payment plan).
While Bhargava discloses machine learning models trained to interpret natural language user input, Bhargava does not disclose, however Schoenmackers discloses, a large language model (LLM) ([0102], train a machine-learned model for conversation using language models [0071]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Bhargava’s chatbot with Schoenmackers’s training of a machine-learned language model. One of ordinary skill in the art would have been motivated to incorporate the teachings with one another in order to better assist a user by simulating a human agent.
Re Claim 2, Bhargava discloses receive a natural language input via the chat window, extract a corpus of documents from a database, and input the natural language input and the corpus of documents to the LLM to generate the chatbot response (fig. 3, [0076]-[0081], extracts information from the textual chat interaction to retrieve information of the brand Samsung and the payment cards Bankamericard and Chase Platinum for the type of product to be purchased; [0103]-[0104], shows payment option offers for the user to choose and respond to an appropriate payment plan).
Re Claim 3, Bhargava discloses generate the chatbot response based on execution of the LLM on additional card data stored within at least one database file (fig. 3, [0076]-[0081], extracts information from the textual chat interaction to retrieve information of the brand Samsung and the payment cards Bankamericard and Chase Platinum, and provides the offers between the two payment cards).
Re Claim 4, Bhargava discloses generate a query based on execution of the LLM on conversation content from the chat window and display the query via the chatbot within the chat window (fig. 3, [0103]-[0104], shows payment option offers for the user to choose and respond to an appropriate payment plan).
Re Claim 5, Bhargava discloses receive a response to the query, and further execute the LLM on the query and the response to generate the chatbot response (fig. 3, [0103]-[0104], shows payment option offers for the user to choose and respond to an appropriate payment plan).
Re Claim 7, Bhargava discloses the database which is associated with the at least one card type (fig. 3, [0059], machine learning models trained to interpret natural language user input; [0076]-[0081], extracts information from the textual chat interaction to retrieve information of the brand Samsung and the payment cards Bankamericard and Chase Platinum for the type of product to be purchased).
One of ordinary skill in the art would have been motivated to make the proposed modification to Bhargava for the same reasons identified in the rejection of claim 1. In addition, Schoenmackers discloses train the LLM ([0102], train a machine-learned model for conversation using language models [0071]) based on execution of the LLM on a corpus of documents from the database ([0021], the machine-learned model is trained repeatedly with utterances of users over a time period; [0038], the AIF may ask questions to the user to clarify what the user is looking for).
Re Claim 8, Bhargava discloses wherein the processor is further configured to transmit an application programming interface (API) call to an artificial intelligence (AI) engine of the LLM with an identifier of the LLM and natural language content from the description of the object ([0059], API call to retrieve virtual agent logic with machine learning models trained to interpret natural language user input and provide appropriate response in natural language form. [0087], AI logic to extract information such as a type of product intended to be purchased).
Re Claim 14, Bhargava discloses receive a response to the query, determine a next query based on execution of the LLM on the query and the response, and display the next query via the chatbot within the chat window (fig. 3, [0103]-[0104], shows payment option offers for the user to choose and respond to an appropriate payment plan).
Re Claim 22, Bhargava discloses wherein the processor is further configured to determine a next goal of the chat session based on an updated conversation state extracted from the chat window (fig. 3, [0106] at 326, the user asks queries regarding how much more they will be paying if they use the BOA card instead of Chase; the chatbot has now determined that the user wants to find out how much more they will be paying by using the BOA card instead of Chase), generate a next chatbot response based on the next goal and the updated conversation state, and output the next chatbot response via the chatbot (fig. 3, [0106] at 326, the chatbot generates and displays the response that the user will end up paying 25 dollars more over a period of 12 hours if the user uses the BOA card).
With respect to claims 9, 11-13, and 15-20, they are similar to claims 1-5 and 7-8 and are therefore rejected for the same reasons stated above.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Bhargava in view of Schoenmackers and in view of Otsuka (US 2021/0232948).
Re Claim 21, Bhargava discloses in fig. 3, [0076]-[0081], extract information in the textual chat interaction to retrieve information of a type of product intended to be purchased (a refrigerator).
While Bhargava discloses information related to the user’s need for assistance, Bhargava and Schoenmackers do not disclose, however Otsuka discloses, extract a name of the object from the chat window, and determine the digital document matches based on the name of the object ([0157]-[0160], the input question reads “I want to know the fee for Plan A” or “I want to know the fee when a special discount is applied”; the chatbot is able to confirm the user’s intention and then identify and retrieve the relevant documents related to the question).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Bhargava and Schoenmackers’s chatbot with Otsuka’s identification of relevant documents. One of ordinary skill in the art would have been motivated to incorporate the teachings with one another in order to identify which documents are relevant to the user’s intention in the question that was input.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HO T SHIU whose telephone number is (571)270-3810. The examiner can normally be reached Mon-Fri (9:00am - 5:00pm).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nicholas Taylor can be reached at 571-272-3089. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HO T SHIU/Examiner, Art Unit 2443
/NICHOLAS R TAYLOR/Supervisory Patent Examiner, Art Unit 2443