Detailed Action
Status of Claims
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Action is in reply to the Amendment filed on 11/14/2025.
Claim 19 has been cancelled, claim 21 has been newly entered, and claims 2-3, 12, and 17 have been amended.
Claims 1-18 and 20-21 are currently pending and have been examined.
Election
Applicant’s election without traverse of Group I in the reply filed on 4/2/2025 is acknowledged.
Priority
Applicant’s claim of priority to provisional application No. 63/524,499 is acknowledged. The provisional application does not provide support for at least a determination of a type of clickable prompt button being engaged as type-1 or type-2 and the corresponding claimed steps.
The claims are therefore afforded an effective filing date of 11/22/2023.
Claim Objections
Claim 18 is objected to for the following informality: “the computer-implemented method of claim 17” should read “the system of claim 17.” Appropriate correction is required.
Claim Rejections – 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 and 20-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
First, it is determined whether the claims are directed to a statutory category of invention. In the instant case, claims 1-10 are directed to a process, claims 11-18 are directed to a machine, and claims 20-21 are directed to an article of manufacture. Therefore, claims 1-18 and 20-21 are directed to statutory subject matter under Step 1 as described in MPEP 2106 (Step 1: YES).
The claims are then analyzed to determine whether the claims are directed to a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong One of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong Two of Step 2A).
Claims 1, 11, and 20 recite at least the following limitations that are believed to recite an abstract idea:
detecting a customer engagement with an element displayed;
determining a type of element being engaged;
upon determining that the type of element being engaged by the customer is a type-1 element:
transmitting, to an interactive procedure, systemic context information and customer context information;
processing, by the procedure, the systemic context information and customer context information to generate first parametric output data; and
transmitting the first parametric output data responsive to the element being engaged by the customer;
upon determining that the type of element being engaged by the customer is a type-2 element:
transmitting, to the interactive procedure, the systemic context information and the customer context information;
requesting customer response data, via a chat;
receiving, at the procedure, the customer response data;
processing, by the procedure, the systemic context information to generate second parametric output data;
processing, by at least one third party, the customer context information to generate outside source data; and
transmitting the second parametric output data and the outside source data responsive to the element being engaged by the customer.
The above limitations recite the concept of context-based search support. These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas enumerated in MPEP 2106, in that they recite commercial interactions, e.g., sales activities/behaviors, and managing personal behavior or relationships or interactions between people, e.g., following rules or instructions. Accordingly, under Prong One of Step 2A, claims 1-18 and 20-21 recite an abstract idea (Step 2A, Prong One: YES).
Prong Two of Step 2A is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
In this instance, the claims recite the additional elements of:
the method being computer-implemented
a clickable prompt button embedded on a web page
a computing device
a large language model (LLM) at a remote server
a pop-up chat interface
a third party server
A system comprising: a controller comprising: a memory; and a processor communicatively coupled to the memory, the memory storing instructions executable by the processor
A non-transitory computer storage medium comprising computer program instructions stored thereon, the computer program instructions when executed by one or more processors cause the one or more processors to perform steps
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
In addition, the recitations are recited at a high level of generality and also do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
The dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. For example, claims 2-8, 10, 13-18, and 21 are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above. As for claim 9, this claim is similar to the independent claims except that it recites the further additional element of steps being performed automatically. This additional element is recited at a high level of generality and also does not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. Therefore, the dependent claims do not create an integration for the same reasons.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same.
In Step 2A, several additional elements were identified as additional limitations:
the method being computer-implemented
a clickable prompt button embedded on a web page
a computing device
a large language model (LLM) at a remote server
a pop-up chat interface
a third party server
A system comprising: a controller comprising: a memory; and a processor communicatively coupled to the memory, the memory storing instructions executable by the processor
A non-transitory computer storage medium comprising computer program instructions stored thereon, the computer program instructions when executed by one or more processors cause the one or more processors to perform steps
These additional limitations, including the limitations in the dependent claims, do not amount to an inventive concept because they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. Therefore, the claims lack one or more limitations which amount to an inventive concept in the claims.
For these reasons, the claims are rejected under 35 U.S.C. 101.
Claim Interpretation
With reference to subsection II of MPEP 2111.04, it is noted that “the broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met.” MPEP 2143.03 further notes that “language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation,” with a contingent limitation “rais[ing] a question as to its limiting effect.”
In the pending claims, such contingent limitations include the steps of:
“upon determining that the type of clickable prompt button being engaged by the customer is a type-1 clickable prompt button” and “upon determining that the type of clickable prompt button being engaged by the customer is a type-2 clickable prompt button” in Claim 1; and those limitations which depend thereon.
In the interest of compact prosecution, art has nonetheless been applied to the contingent limitations of the method claims.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim Rejections – 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1-18 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Petricek et al (US 11055305 B1), hereinafter Petricek, in view of Devaux et al (US 20240370691 A1), hereinafter Devaux.
Regarding Claim 1, Petricek discloses a computer-implemented method, comprising:
detecting a customer engagement with a clickable prompt button [e.g. buttons 238, 260, 270] embedded on a web page displayed at a computing device (Petricek: “The user can respond …by inputting text at the message input box 236 and hitting the send message button 238.” Col. 8, lines 30-35 – “he recommended question 270b has been selected in the chat view 234 shown in FIG. 8 (e.g., by user input” Col. 11, lines 15-20 – “Selecting the more information button 260 …260 has been selected (e.g., by user input)” Col. 10, lines 11-20 – “provide user interfaces in the form of webpages, web services, and the like for facilitating searching of item records, refinement of result sets, interaction with chat bots” Col. 4, lines 55-60 – See Figures 7-8.);
determining a type of clickable prompt button being engaged (Petricek: “The user can respond …by inputting text at the message input box 236 and hitting the send message button 238.” Col. 8, lines 30-35 – “selection of the refinement button 235a” Col. 8, lines 10-15);
upon determining that the type of clickable prompt button being engaged by the customer is a type-1 [more information button] clickable prompt button:
transmitting, to an interactive bot at a remote server, from the computing device, systemic context information [popular questions] and customer context information [questions the user asked] (Petricek: “FIG. 8 illustrates the user interface 200 presented via the user device after the more information button 260 has been selected (e.g., by user input).” Col. 10, lines 15-20 – “the item description bot 264 may generate the recommended questions 270 based on questions the user asked recently on other items …popular questions, …top rated questions, … and/or any other suitable information pertaining to questions about items.” Col. 10, lines 50-65 – The bots operate on a server computer: Col. 4, line 52-Col. 5, line 5; See Figure 1);
processing, by the bot at the remote server, the systemic context information and customer context information to generate first parametric output data [questions 270] (Petricek: “the set of recommended questions 270 is …generated dynamically in response to the user requesting the question interface view” Col. 11, lines 10-15 – “the item description bot 264 may generate the recommended questions 270 based on questions the user asked recently on other items …popular questions, …top rated questions, … and/or any other suitable information pertaining to questions about items” Col. 10, lines 50-65); and
transmitting, from the remote server, the first parametric output data to the computing device responsive to the clickable prompt button being engaged by the customer (Petricek: “The next message from the item description bot 264 (e.g., message 266 b) may include a set of recommended questions 270.” Col. 10, lines 40-45 – “the recommended question 270 b and/or other elements presented in the chat view 234 may be selectable” Col. 11, lines 35-40 – See Figure 8.);
upon determining that the type of clickable prompt button being engaged by the customer is a type-2 [“send” or “question” button] clickable prompt button:
transmitting, to the interactive bot at the remote server, from the computing device, the systemic context information and the customer context information (Petricek: “a record of the user's selection of the recommended question 270 b (e.g., “Are these waterproof?”), is represented by a user comment 248 c in the chat view 234” Col. 11, lines 20-25 – “The item description bot 264 may also present additional recommended questions 270 c and 270 d. The additional recommended questions 270 c and 270 d can be generated based on existing questions previously posed by customers, the other recommended questions 270, the answers 272, all the same information the first question was based on” Col. 11, lines 40-50 – “the item description bot 264 may generate the recommended questions 270 based on questions the user asked recently on other items …popular questions, …top rated questions, … and/or any other suitable information pertaining to questions about items.” Col. 10, lines 50-65);
requesting, from the computing device, customer response data [feedback, text input], via a pop-up chat interface (Petricek: “The additional recommended questions 270 c and 270 d can be generated based on … interaction with the answer and any implicit feedback (dwell time, thumbs up/down)” Col. 11, lines 45-55 – “the user may also input her own questions using the message input box 236. For example, the user can input text at the message input box 236 that represents the question and may hit the send message button 238 to pass the text to the service provider. Once received, the service provider can parse the text to determine whether a question is present in the text.” Col. 11, lines 55-65 – With reference to Figure 14, the “question interface…chat window” Col. 8, lines 20-25, can be a pop-up.);
receiving, at the bot at the remote server, from the computing device, the customer response data (Petricek: “The additional recommended questions 270 c and 270 d can be generated based on … interaction with the answer and any implicit feedback (dwell time, thumbs up/down)” Col. 11, lines 45-55 – “the user may also input her own questions using the message input box 236. For example, the user can input text at the message input box 236 that represents the question and may hit the send message button 238 to pass the text to the service provider. Once received, the service provider can parse the text to determine whether a question is present in the text.” Col. 11, lines 55-65);
processing, by the bot at the remote server, the systemic context information to generate second parametric output data [additional questions] (Petricek: “The item description bot 264 may also present additional recommended questions 270 c and 270 d. The additional recommended questions 270 c and 270 d can be generated based on existing questions previously posed by customers, the other recommended questions 270, the answers 272, all the same information the first question was based on, interaction with the answer and any implicit feedback (dwell time, thumbs up/down), sentiment and positivity/negativity of the answer” Col. 11, lines 44-55 – “Within the question chat window 262, the user is enabled to interact with a chat bot referred to herein as the item description bot 264. … The item description bot 264 presents prompts or questions to the user within the chat view 234.” Col. 10, lines 25-35);
processing, by at least one source server, the customer context information to generate outside source data (Petricek: “the item information database 1638 may store records corresponding to these items. This may include information that is used to generate an item description page (e.g., product title, price, description, images, ratings, reviews, questions and answers, etc.)” Col. 15, lines 25-35 – “the response can be automatically generated based on questions posed by other users and answers by other users, product experts, sellers, etc.” Col. 16, lines 55-60 – “generating, based at least in part on a question about the item received via the second user interface view, an answer to the question for presentation” Col. 17, lines 30-40); and
transmitting, to the computing device, the second parametric output data and the outside source data responsive to the clickable prompt button being engaged by the customer (Petricek: “responsive to selection of the recommended question 270 b, an answer 272 a has been presented in the question interface view 208 … the corresponding answer 272 a are selected because other customers have indicated that they are helpful (e.g., by a process of up-voting or down-voting answers with respect to helpfulness, and, in some examples, the answers as well).” Col. 11, lines 20-35 – “The item description bot 264 may also present additional recommended questions 270 c and 270 d.” Col. 11, lines 40-50 – See Figures 9-10).
While Petricek teaches that the bot may be an artificial intelligence module (Col. 8, lines 25-35; Col. 10, lines 25-35), it does not specifically teach that the bot is a large language model (LLM), and that the source server is a third party server.
However, Devaux teaches a system for searching [Abstract], including that:
the bot is a large language model (LLM) (Devaux: “provide chat-based travel searching functions for devices 116 with assistive chat functions from LLM engine 120, including generation of structured travel search requests from unstructured travel search requests. LLM engine 120 can be based on any large language model platform such as ChatGPT from OpenAI.” [0060]); and
the source server is a third party server (Devaux: “the structured travel query is sent to external sources for fulfillment.” [0107] – “The specification can allow for the enrichment of the second set of parameters with external data, such as expenses at the destination and country-specific information.” [0195]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Petricek would continue to teach an interactive bot at a remote server, except that now it would also teach that the bot is a large language model (LLM), and that the source server is a third party server, according to the teachings of Devaux. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in improved search efficiency for a user (Devaux: [0174]).
Regarding Claim 2, Petricek/Devaux teach the computer-implemented method of claim 1, further comprising: in the case where the clickable prompt button is a type-2 clickable prompt button:
requesting, from the computing device, further customer response data, via the pop-up chat interface (Petricek: “When the result set has been narrowed, the refinement bot can suggest certain items from the narrowed result set for review by the user. At this point, the user can continue to narrow the result set or select an option to learn more about one of the other items.” Col. 2, lines 60-65 – “The additional recommended questions 270 c and 270 d can be generated based on … interaction with the answer and any implicit feedback (dwell time, thumbs up/down)” Col. 11, lines 45-55 – “the user may also input her own questions using the message input box 236. For example, the user can input text at the message input box 236 that represents the question and may hit the send message button 238 to pass the text to the service provider. Once received, the service provider can parse the text to determine whether a question is present in the text.” Col. 11, lines 55-65 – See also Figure 10. It is recognized that the conversation can continue.);
receiving, at the remote server, the further customer response data (Petricek: “The additional recommended questions 270 c and 270 d can be generated based on … interaction with the answer and any implicit feedback (dwell time, thumbs up/down)” Col. 11, lines 45-55 – “the user may also input her own questions using the message input box 236. For example, the user can input text at the message input box 236 that represents the question and may hit the send message button 238 to pass the text to the service provider. Once received, the service provider can parse the text to determine whether a question is present in the text.” Col. 11, lines 55-65);
transmitting, from the remote server, the further customer response data to the at least one source server (Petricek: “the item information database 1638 may store records corresponding to these items. This may include information that is used to generate an item description page (e.g., product title, price, description, images, ratings, reviews, questions and answers, etc.)” Col. 15, lines 25-35 – “the response can be automatically generated based on questions posed by other users and answers by other users, product experts, sellers, etc.” Col. 16, lines 55-60 – “generating, based at least in part on a question about the item received via the second user interface view, an answer to the question for presentation” Col. 17, lines 30-40);
processing, by the at least one source server, the further customer response data to generate further outside source data (Petricek: “the item information database 1638 may store records corresponding to these items. This may include information that is used to generate an item description page (e.g., product title, price, description, images, ratings, reviews, questions and answers, etc.)” Col. 15, lines 25-35 – “the response can be automatically generated based on questions posed by other users and answers by other users, product experts, sellers, etc.” Col. 16, lines 55-60 – “generating, based at least in part on a question about the item received via the second user interface view, an answer to the question for presentation” Col. 17, lines 30-40);
transmitting the further outside source data from the at least one source server to the remote server (Petricek: “responsive to selection of the recommended question 270 b, an answer 272 a has been presented in the question interface view 208 … the corresponding answer 272 a are selected because other customers have indicated that they are helpful (e.g., by a process of up-voting or down-voting answers with respect to helpfulness, and, in some examples, the answers as well).” Col. 11, lines 20-35 – “The item description bot 264 may also present additional recommended questions 270 c and 270 d.” Col. 11, lines 40-50 – See Figures 9-10); and
transmitting, from the remote server to the computing device, the further outside source data (Petricek: “The item description bot 264 may also present additional recommended questions 270 e and 270 f. ” Col. 12, lines 15-25 – See Figure 10).
While Petricek does not specifically teach that the source server is the third party server, Devaux teaches that the source server is the third party server (Devaux: “the structured travel query is sent to external sources for fulfillment.” [0107] – “The specification can allow for the enrichment of the second set of parameters with external data, such as expenses at the destination and country-specific information.” [0195]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Petricek with Devaux for the reasons identified above with respect to claim 1.
Regarding Claim 3, Petricek/Devaux teach the computer-implemented method of claim 1, wherein the customer context information comprises at least one of: a past browsing history of the customer, a current browsing history of a customer, prior clicks of a customer, a past purchase history, past search history, customer physical location, customer cart contents, a customer profile on file, or any suitable combination thereof (Petricek: “FIG. 8 illustrates the user interface 200 presented via the user device after the more information button 260 has been selected (e.g., by user input).” Col. 10, lines 15-20 – “the item description bot 264 may generate the recommended questions 270 based on questions the user asked recently on other items …popular questions, …top rated questions, … and/or any other suitable information pertaining to questions about items.” Col. 10, lines 50-65).
Regarding Claim 4, Petricek/Devaux teach the computer-implemented method of claim 1, wherein the type-1 and type-2 clickable prompt buttons comprise one or more attributes including:
an object identifier attribute (Petricek: “generate the recommended questions 270 based on questions the user asked recently on other items (in the same product category), questions the user has asked in search box, questions the user has asked customer service representatives, questions the user has viewed or interacted with on product detail page (of similar product)” Col. 10, lines 50-55– “Each recommended item 252 may include an item image 256, an item description 258, and more information button ” Col. 9, line 60- Col. 10, line 5),
a display name attribute (Petricek: “Each recommended item 252 may include an item image 256, an item description 258, and more information button ” Col. 9, line 60- Col. 10, line 5 – “The item description bot 264 may also present additional recommended questions 270 c and 270 d.” Col. 10, lines 50-55 – See Figures 7-10, which illustrate visual labels/indicators on each type of button.),
a customer message attribute (Petricek: “the question interface view may include a chat window for chatting with the item description bot in a conversational format. The user can ask her own questions about the particular item by inputting them in the chat window. A set of recommended questions can also be generated and presented to the user in the chat window” Col. 3, lines 55-65 – “The item description bot 264 may also present additional recommended questions 270 c and 270 d. ” Col. 11, lines 40-50 – See Figures 9-10),
a system message attribute (Petricek: “the question interface view may include a chat window for chatting with the item description bot in a conversational format. The user can ask her own questions about the particular item by inputting them in the chat window. A set of recommended questions can also be generated and presented to the user in the chat window” Col. 3, lines 55-65 – See Figures 9-10, where the bot/system provides a variety of messages about the conversation/chat elements.).
Regarding Claim 5, Petricek/Devaux teach the computer-implemented method of claim 4, wherein the object identifier attribute associates a single displayed item on the web page with a single type-1 or type-2 clickable prompt button (Petricek: “generate the recommended questions 270 based on questions the user asked recently on other items (in the same product category), questions the user has asked in search box, questions the user has asked customer service representatives, questions the user has viewed or interacted with on product detail page (of similar product)” Col. 10, lines 50-55– “Each recommended item 252 may include an item image 256, an item description 258, and more information button ” Col. 9, line 60- Col. 10, line 5).
Regarding Claim 6, Petricek/Devaux teach the computer-implemented method of claim 4, wherein the display name attribute is a label displayed on an exterior face of a single type-1 or type-2 clickable prompt button (Petricek: “Each recommended item 252 may include an item image 256, an item description 258, and more information button ” Col. 9, line 60- Col. 10, line 5 – “The item description bot 264 may also present additional recommended questions 270 c and 270 d.” Col. 10, lines 50-55 – See Figures 7-10, which illustrate visual labels/indicators on each type of button.).
Regarding Claim 7, Petricek/Devaux teach the computer-implemented method of claim 4, wherein the customer message attribute is a message that is transmitted to the bot to provide the bot with context indicating that a customer has clicked on a type-1 or type-2 clickable prompt button and requires a response (Petricek: “the question interface view may include a chat window for chatting with the item description bot in a conversational format. The user can ask her own questions about the particular item by inputting them in the chat window. A set of recommended questions can also be generated and presented to the user in the chat window” Col. 3, lines 55-65 – “The item description bot 264 may also present additional recommended questions 270 c and 270 d. ” Col. 11, lines 40-50 – See Figures 9-10).
While Petricek does not specifically teach that the bot is the LLM, Devaux teaches that the bot is a large language model (LLM) (Devaux: “provide chat-based travel searching functions for devices 116 with assistive chat functions from LLM engine 120, including generation of structured travel search requests from unstructured travel search requests. LLM engine 120 can be based on any large language model platform such as ChatGPT from OpenAI.” [0060]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Petricek with Devaux for the reasons identified above with respect to claim 1.
Regarding Claim 8, Petricek/Devaux teach the computer-implemented method of claim 7, wherein the customer message is included in a chat history between the customer and the bot displayed in the pop-up chat window (Petricek: “the question interface view may include a chat window for chatting with the item description bot in a conversational format. The user can ask her own questions about the particular item by inputting them in the chat window. A set of recommended questions can also be generated and presented to the user in the chat window” Col. 3, lines 55-65 – “The item description bot 264 may also present additional recommended questions 270 c and 270 d.” Col. 11, lines 40-50 – See Figures 9-10).
While Petricek does not specifically teach that the bot is the LLM, Devaux teaches that the bot is a large language model (LLM) (Devaux: “provide chat-based travel searching functions for devices 116 with assistive chat functions from LLM engine 120, including generation of structured travel search requests from unstructured travel search requests. LLM engine 120 can be based on any large language model platform such as ChatGPT from OpenAI.” [0060]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Petricek with Devaux for the reasons identified above with respect to claim 1.
Regarding Claim 9, Petricek/Devaux teach the computer-implemented method of claim 1, further comprising: in the case where the clickable prompt button is a type-1 clickable prompt button: automatically displaying, on a display of the computing device, a pre-programmed response to the customer (Petricek: “the questions and the answers may be sourced from customer-provided information … the questions and answers may be stored in association with a record of the recommended item 252. In some examples, the set of recommended questions 270 is pre-generated,” Col. 11, lines 5-15).
Regarding Claim 10, Petricek/Devaux teach the computer-implemented method of claim 9, wherein the pre-programmed response pertains to a displayed item displayed in association with the type-1 clickable prompt button (Petricek: “the questions and the answers may be sourced from customer-provided information … the questions and answers may be stored in association with a record of the recommended item 252. In some examples, the set of recommended questions 270 is pre-generated,” Col. 11, lines 5-15 – “to answer the questions, the item description bot may rely on customer reviews, customer questions and answers relating to the item, and other data associated with an item record.” Col. 3, lines 1-5 – See Figures 8-10).
Regarding Claims 11-15 and 17-18, the limitations of claims 11-15 and 17-18 are closely parallel to the limitations of claims 1-5 and 7-8, with the additional limitation of a system comprising: a controller comprising: a memory; and a processor communicatively coupled to the memory, the memory storing instructions executable by the processor (Petricek: Col. 14), and are rejected on the same basis.
Regarding Claim 16, Petricek/Devaux teach the system of claim 15, wherein the display name attribute is a label displayed on an exterior face of a single type-1 or type-2 clickable prompt button (Petricek: “Each recommended item 252 may include an item image 256, an item description 258, and more information button” Col. 9, line 60 - Col. 10, line 5 – “The item description bot 264 may also present additional recommended questions 270 c and 270 d.” Col. 10, lines 50-55 – See Figures 7-10, which illustrate visual labels/indicators on each type of button).
Regarding Claims 20-21, the limitations of claims 20-21 are closely parallel to the limitations of claims 1-2, with the additional limitation of a non-transitory computer storage medium comprising computer program instructions stored thereon, the computer program instructions when executed by one or more processors cause the one or more processors to perform steps (Petricek: Col. 14), and are rejected on the same basis.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Wang et al. (US 20240289861 A1) teaches customized query systems related to a product, using an LLM.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J SULLIVAN whose telephone number is (571)272-9736. The examiner can normally be reached Mon - Fri 8-5 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein can be reached on (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.J.S./Examiner, Art Unit 3689
/MARISSA THEIN/Supervisory Patent Examiner, Art Unit 3689