DETAILED ACTION
Status of the Application
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is a non-final Office action in response to the communications filed on 12/13/2024. Claims 1, 3, 5, 7-9, 11-14, 16-19 and 21-26 are currently pending and have been considered below.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/13/2024 has been entered.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 18-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 18 recites “the computer system”. There is insufficient antecedent basis for this limitation in the claim. Therefore, the claim is indefinite. Claim 19 is also rejected by virtue of its dependency from claim 18, as it does not remedy the issue identified above.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3, 5, 7-9, 11-14, 16-19 and 21-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Specifically, claims 1, 3, 5, 7-9, 11-14, 16-19 and 21-26 are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without integrating the exception into a practical application and without providing significantly more.
As per Step 1 of the subject matter eligibility analysis, Claims 1, 3, 5, 7-9, 11-14, 16-19 and 21-26 are directed to a method (i.e., process) which is one of the four statutory categories of invention.
As per Step 2A-Prong 1 of the subject matter eligibility analysis, Claim 1 is directed specifically to the abstract idea of providing a product recommendation to a user by generating questions, responses and recommendations for a set of products based on data that includes at least one of manufacturer data, data from sellers of one or more of the set of products, and data from consumer reviews; receiving input from the user as user answers to the generated questions, the user answers being one of a potential answer generated or a natural language response; dynamically generating, optimizing, organizing or reorganizing, and sequencing or re-sequencing questions, responses and recommendations during execution of the method by the user based on the received input from the user; and recommending a product from the particular set of products to the user. This abstract idea includes mental processes (observing and evaluating questions and answers to form a judgment and opinion on product recommendations), and certain methods of organizing human activity based on commercial and legal interactions (marketing and advertising products to users based on questions and answers (survey)), and based on managing personal behavior (social interaction and following rules and instructions to facilitate questions and answers to and from a user to make a decision on a product recommendation).
Claim 18 is directed to the abstract idea of selling a recommended product to a user by providing a product recommendation to the user by generating the questions, responses and recommendations for a set of products based on data that includes at least one of manufacturer data, data from sellers of one or more of the set of products, and data from consumer reviews; generating questions and responses for a set of products; receiving input from the user as user answers to the generated questions, the user answers being one of a potential answer generated or a natural language response; dynamically generating, optimizing, organizing or reorganizing, and sequencing or re-sequencing questions and responses as the user answers the generated questions; recommending a product to the user; and directing the user to a seller of the recommended product. This abstract idea includes mental processes (observing and evaluating questions and answers to form a judgment and opinion on product recommendations), and certain methods of organizing human activity based on commercial and legal interactions (sales, marketing and advertising products to users based on questions and answers (survey)), and based on managing personal behavior (social interaction and following rules and instructions to facilitate questions and answers to and from a user to make a decision on a product recommendation). Claims 3, 5, 7-9, 11-14, 16-17, 19, and 21-26 are directed to the abstract ideas of claims 1 and 18 with further details on the parameters/attributes of those abstract ideas, which include mental processes and certain methods of organizing human activity for reasons similar to those provided above for claims 1 and 18. After considering all claim elements, both individually and as an ordered combination, it has been determined that the claims do not amount to significantly more than the abstract idea itself.
As per Step 2A-Prong 2 of the subject matter eligibility analysis, while claims 1, 3, 5, 7-9, 11-14, 16-19 and 21-26 recite additional limitations which are hardware or software elements, such as a computer system, automatically {performing functions using a generic computer/processor}, where at least some of the data is obtained via web scraping and optionally extracted using natural language processing, artificial intelligence, machine learning, reinforcement learning, a neural network, {products found} online, computerized {product recommendation}, a computerized product recommendation engine, and wherein at least one of the questions, responses and/or recommendations are output as vocal or audible signals and/or the user's answers are input by voice, these limitations are not enough to integrate the abstract idea into a practical application, since these elements are merely invoked as tools to apply the instructions of the abstract idea in a particular technological environment, and mere application of an abstract idea in a particular technological environment and merely limiting the use of an abstract idea to a particular technological field do not integrate an abstract idea into a practical application (MPEP 2106.05(f) and (h)). The claims do not amount to a "practical application" of the abstract idea because they neither (1) recite any improvements to another technology or technical field; (2) recite any improvements to the functioning of the computer itself; (3) apply the judicial exception with, or by use of, a particular machine; (4) effect a transformation or reduction of a particular article to a different state or thing; nor (5) provide other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment.
As per Step 2B of the subject matter eligibility analysis, while claims 1, 3, 5, 7-9, 11-14, 16-19 and 21-26 recite additional limitations which are hardware or software elements, such as a computer system, automatically {performing functions using a generic computer/processor}, where at least some of the data is obtained via web scraping and optionally extracted using natural language processing, artificial intelligence, machine learning, reinforcement learning, a neural network, {products found} online, computerized {product recommendation}, a computerized product recommendation engine, and wherein at least one of the questions, responses and/or recommendations are output as vocal or audible signals and/or the user's answers are input by voice, these limitations are not enough to qualify as “significantly more” when recited in the claims along with the abstract idea, since these elements are merely invoked as tools to apply the instructions of the abstract idea in a particular technological environment, and mere application of an abstract idea in a particular technological environment and merely limiting the use of an abstract idea to a particular technological field do not provide significantly more to an abstract idea (MPEP 2106.05(f) and (h)).
The claims do not amount to "significantly more" than the abstract idea because they neither (1) recite any improvements to another technology or technical field; (2) recite any improvements to the functioning of the computer itself; (3) apply the judicial exception with, or by use of, a particular machine; (4) effect a transformation or reduction of a particular article to a different state or thing; (5) add a specific limitation other than what is well-understood, routine and conventional in the field; (6) add unconventional steps that confine the claim to a particular useful application; nor (7) provide other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment.
Therefore, since there are no limitations in claims 1, 3, 5, 7-9, 11-14, 16-19 and 21-26 that transform the exception into a patent eligible application such that the claims amount to significantly more than the exception itself, and since viewing the limitations as an ordered combination adds nothing that is not already present when the elements are considered individually, the claims are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3, 5, 7-9, 11-13, 16-18, 21-24 and 26 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Adams et al. (US 2018/0005293 A1).
As per Claim 1, Adams teaches a method for providing a product recommendation to a user via a computer system (Abstract, regarding item recommendation; fig. 1 and para. 0057-0065, regarding the computer system of the invention), the method comprising:
automatically generating, by the computer system, questions, responses and recommendations for a set of products based on data that includes “at least one” of manufacturer data, data from sellers of one or more of the set of products, and data from consumer reviews, where at least some of the data is obtained via web scraping and extracted using natural language processing (Figs. 5-18, para. 0020, “the dialog progresses through a series of product recommendations generated by the graph database using forward chaining logic and a series of questions generated using backward chaining logic, until a specific product recommendation is determined”; para. 0030, “the forward chaining steps for product recommendations and the backward chaining steps for determining a next question may be performed in real time, while a user is in a question-and-answer dialog, using runtime components of the system, including a graph database and other components. For example, in a system where there are about 50 facts relevant to a line of products, the system can rapidly return recommendations and next questions to maintain a dialog”; para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation [i.e., extracted using NLP], such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's [i.e., manufacturer/seller] product line that fit that description.”; also see para. 0072, 0078-0095; see fig. 1 and para. 0057-0065, regarding the computer system of the invention; see specifically para. 0081 for web scraping, “The XPS admin tools 418 may, as noted above, ingest data from customer data sources 422 (such as product data, metadata, links to additional information, etc.) and data from third party data sources 520 (which may contain further information about products, such as reviews on third party websites, as well as other information.”);
receiving input from the user as user answers to the automatically generated questions, the user answers being one of a potential answer generated by the computer system or a natural language response (Figs. 5-18, para. 0076, “the XPS business logic layer 408 initiates a prompt to a user in the user interface 404, usually a question that starts the dialog with a consumer. The consumer responds with an answer, such as by typing (or speaking, in the case of a UI that enables speech recognition) an answer to the question.”; also see at least fig. 8, where the computer generated question provides two answer options to choose from, “man” or “woman”); and
dynamically generating, optimizing, organizing or reorganizing, and sequencing or re-sequencing questions, responses and recommendations during execution of the method by the user based on the received input from the user (para. 0072, “Specifically, the program modules 42 may present a first question via a user interface; receive a response to the first question via the user interface; determine a second question using the received response to the first question and a weighted collection of attributes corresponding to a plurality of items; present the determined second question via the user interface; receive a response to the second question via the user interface; determine at least one recommended item from the plurality of items based on the response to the first question, the response to the second question, and the weighted collection of attributes corresponding to the plurality of items; and present the determined at least one recommended item via the user interface.”; para. 0085, “the platform 400 has the information to determine what sets of products might satisfy a consumer's wants or needs and what questions might be posed to narrow down the field to a most relevant set of products.”; figs. 5-18 and para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation [i.e., extracted using NLP], such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's [i.e., manufacturer/seller] product line that fit that description.”; also see para. 0078-0095);
recommending a product from the particular set of products to the user (Abstract, Figs. 5-18, para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation, such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's product line that fit that description.”).
As per Claim 3, Adams teaches a method as provided below for claim 21. Adams further teaches building the interaction graph/tree by initially associating question answers with sets of compatible products and limiting the sets of compatible products as a user answers the questions (figs. 5-18 and para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation, such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's product line that fit that description.”).
As per Claim 5, Adams teaches a method as provided above for claim 1. Adams further teaches automatically generating the questions, responses and recommendations based on “at least one of” data harvested by the computer system, crowd-sourced suggestions, and personality based questions, answers, responses and recommendations (figs. 5-18 and para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation, such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's product line that fit that description.”; also see para. 0072, 0078-0095).
As per Claim 7, Adams teaches a method as provided above for claim 1. Adams further teaches optimizing the generation and/or organization of the questions, potential answers, responses and/or recommendations according to a metric (para. 0005-0006, 0077-0078, regarding generating a confidence score based on the answer provided by the user to a first question and selecting the second question accordingly; para. 0072, “Specifically, the program modules 42 may present a first question via a user interface; receive a response to the first question via the user interface; determine a second question using the received response to the first question and a weighted collection of attributes corresponding to a plurality of items; present the determined second question via the user interface; receive a response to the second question via the user interface; determine at least one recommended item from the plurality of items based on the response to the first question, the response to the second question, and the weighted collection of attributes corresponding to the plurality of items; and present the determined at least one recommended item via the user interface.”; para. 0077, “The graph database servers 424 weights the quality of a match on various criteria (such as reflected in a series of questions and answers in a dialog) and accumulates those into a confidence score that weighs based on criteria already identified and on the extent to which the available criteria are being considered. For example, if during a session the platform 400 is only looking at three criteria among ten that are possible, then the graph database 424 would tend to infer a relatively low confidence that a good match has been found for the consumer, as compared to a situation (such as later in the same session), where eight of ten possible criteria have been identified through the dialog. Based on the criteria available at that point in the session, the graph database servers 424 rank products numerically and send the results back to the XPS servers 412, which can retrieve information relating to the identified products, such as from a content server 428 that contains content relating to the products of the implementer, such as images, text, and the like that enable display of the products. The XPS servers 412 can serve the content for the highest scoring products to the web browser 402, so that the consumer can begin to see products that could be of interest while the dialog session continues.”; Figs. 5-18, para. 0076, “the XPS business logic layer 408 initiates a prompt to a user in the user interface 404, usually a question that starts the dialog with a consumer. The consumer responds with an answer, such as by typing (or speaking, in the case of a UI that enables speech recognition) an answer to the question.”; also see at least fig. 8, where the computer generated question provides two answer options to choose from, “man” or “woman”; also see para. 0092-0093, 0100, regarding using weights (i.e., a metric)).
As per Claim 8, Adams teaches a method as provided above for claim 7. Adams further teaches wherein the metric includes at least one of customer satisfaction, conversions, revenue, profit, customer engagement, customer entertainment, purchase amount, customer retention rate, customer loyalty, other traditional sales metrics, user feedback, and virality (para. 0005-0006, 0077-0078, regarding generating a confidence score based on the answer provided by the user (i.e., user feedback/customer engagement) to a first question and selecting the second question accordingly; para. 0085, “the platform 400 has the information to determine what sets of products might satisfy a consumer's wants or needs and what questions might be posed to narrow down the field to a most relevant set of products.”).
As per Claim 9, Adams teaches a method as provided above for claim 7. Adams further teaches understanding and influencing the user based on information supplied by the user and/or other users, including answers to product questions, personality questions and psychological questions (para. 0085, “the platform 400 has the information to determine what sets of products might satisfy a consumer's wants or needs and what questions might be posed to narrow down the field to a most relevant set of products.”; figs. 5-18 and para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation [i.e., extracted using NLP], such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's [i.e., manufacturer/seller] product line that fit that description.”; also see para. 0072, 0078-0095).
As per Claim 11, Adams teaches a method as provided above for claim 1. Adams further teaches generating and optimizing the questions, responses and recommendations to learn features about a user and tagging the user with features to optimize future interactions with the current user and/or other users, and/or to help make recommendations for the current and/or other users with similar features (para. 0072, “Specifically, the program modules 42 may present a first question via a user interface; receive a response to the first question via the user interface; determine a second question using the received response to the first question and a weighted collection of attributes corresponding to a plurality of items; present the determined second question via the user interface; receive a response to the second question via the user interface; determine at least one recommended item from the plurality of items based on the response to the first question, the response to the second question, and the weighted collection of attributes corresponding to the plurality of items; and present the determined at least one recommended item via the user interface.”; para. 0085, “the platform 400 has the information to determine what sets of products might satisfy a consumer's wants or needs and what questions might be posed to narrow down the field to a most relevant set of products.”; figs. 5-18 and para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation [i.e., extracted using NLP], such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's [i.e., manufacturer/seller] product line that fit that description.”; also see para. 0078-0095).
As per Claim 12, Adams teaches a method as provided above for claim 7. Adams further teaches wherein an algorithm for optimizing the metric uses at least a portion of information about the user from a current and any prior interaction, including information transmitted in natural language, whether written or vocal/audible, wherein the information includes “at least one of” user's behavior, needs, preferences, tastes, background, demographics, purchasing activity, answers, input and feedback (para. 0005-0006, 0077-0078, regarding generating a confidence score based on the answer provided by the user (i.e., user feedback/customer engagement) to a first question and selecting the second question accordingly; para. 0072, “Specifically, the program modules 42 may present a first question via a user interface; receive a response to the first question via the user interface; determine a second question using the received response to the first question and a weighted collection of attributes corresponding to a plurality of items; present the determined second question via the user interface; receive a response to the second question via the user interface; determine at least one recommended item from the plurality of items based on the response to the first question, the response to the second question, and the weighted collection of attributes corresponding to the plurality of items; and present the determined at least one recommended item via the user interface.”; para. 0077, “The graph database servers 424 weights the quality of a match on various criteria (such as reflected in a series of questions and answers in a dialog) and accumulates those into a confidence score that weighs based on criteria already identified and on the extent to which the available criteria are being considered. For example, if during a session the platform 400 is only looking at three criteria among ten that are possible, then the graph database 424 would tend to infer a relatively low confidence that a good match has been found for the consumer, as compared to a situation (such as later in the same session), where eight of ten possible criteria have been identified through the dialog. Based on the criteria available at that point in the session, the graph database servers 424 rank products numerically and send the results back to the XPS servers 412, which can retrieve information relating to the identified products, such as from a content server 428 that contains content relating to the products of the implementer, such as images, text, and the like that enable display of the products. The XPS servers 412 can serve the content for the highest scoring products to the web browser 402, so that the consumer can begin to see products that could be of interest while the dialog session continues.”; also see para. 0092-0093, 0100, regarding using weights (i.e., a metric); para. 0085, “the platform 400 has the information to determine what sets of products might satisfy a consumer's wants or needs and what questions might be posed to narrow down the field to a most relevant set of products.”; para. 0074, “According to another embodiment, the implementation user interface 404 may be implemented in the context of a short message service (SMS), chatbot, messaging application, messaging service, messaging platform, intelligent personal assistant, and/or voice platform.”, i.e., vocal/audible in a voice platform and text/written in other messaging platforms).
As per Claim 13, Adams teaches a method as provided above for claim 7. Adams further teaches wherein an algorithm for optimizing the metric can use “one or more of” an amount of time that elapsed before the user made a decision; a trajectory of a finger, cursor or mouse of the user before the user made the decision; user's vocal or audible information; demographic information about the user; personal or personality information about the user; information sent by a referring website; crowd sourced data; information about a product or products being sold and information regarding similarity between the user and other users including user features, purchasing preferences, behaviors and tendencies (Note that the term “can use” makes the claim optional; regardless, Adams teaches these limitations as follows:
para. 0005-0006, 0077-0078, regarding generating a confidence score based on the answer provided by the user (i.e., user feedback/customer engagement) to a first question and selecting the second question accordingly; para. 0072, “Specifically, the program modules 42 may present a first question via a user interface; receive a response to the first question via the user interface; determine a second question using the received response to the first question and a weighted collection of attributes corresponding to a plurality of items; present the determined second question via the user interface; receive a response to the second question via the user interface; determine at least one recommended item from the plurality of items based on the response to the first question, the response to the second question, and the weighted collection of attributes corresponding to the plurality of items; and present the determined at least one recommended item via the user interface.”; para. 0077, “The graph database servers 424 weights the quality of a match on various criteria (such as reflected in a series of questions and answers in a dialog) and accumulates those into a confidence score that weighs based on criteria already identified and on the extent to which the available criteria are being considered. For example, if during a session the platform 400 is only looking at three criteria among ten that are possible, then the graph database 424 would tend to infer a relatively low confidence that a good match has been found for the consumer, as compared to a situation (such as later in the same session), where eight of ten possible criteria have been identified through the dialog. Based on the criteria available at that point in the session, the graph database servers 424 rank products numerically and send the results back to the XPS servers 412, which can retrieve information relating to the identified products, such as from a content server 428 that contains content relating to the products of the implementer, such as images, text, and the like that enable display of the products. The XPS servers 412 can serve the content for the highest scoring products to the web browser 402, so that the consumer can begin to see products that could be of interest while the dialog session continues.”; also see para. 0092-0093, 0100, regarding using weights (i.e., a metric); para. 0085, “the platform 400 has the information to determine what sets of products might satisfy a consumer's wants or needs and what questions might be posed to narrow down the field to a most relevant set of products.”; para. 0074, “According to another embodiment, the implementation user interface 404 may be implemented in the context of a short message service (SMS), chatbot, messaging application, messaging service, messaging platform, intelligent personal assistant, and/or voice platform.”, i.e., vocal/audible in a voice platform and text/written in other messaging platforms).
As per Claim 16, Adams teaches a method as provided above for claim 1. Adams further teaches generating the questions, the responses, and the recommendations or optimizing and reorganizing the questions, the responses and the recommendations based on updated data received by the computer system (para. 0005-0006, 0077-0078, regarding generating a confidence score based on the answer provided by the user (i.e., user feedback/customer engagement) to a first question and selecting the second question accordingly; para. 0072, “Specifically, the program modules 42 may present a first question via a user interface; receive a response to the first question via the user interface; determine a second question using the received response to the first question and a weighted collection of attributes corresponding to a plurality of items; present the determined second question via the user interface; receive a response to the second question via the user interface; determine at least one recommended item from the plurality of items based on the response to the first question, the response to the second question, and the weighted collection of attributes corresponding to the plurality of items; and present the determined at least one recommended item via the user interface.”; para. 0085, “the platform 400 has the information to determine what sets of products might satisfy a consumer's wants or needs and what questions might be posed to narrow down the field to a most relevant set of products.”; figs. 5-18 and para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation [i.e., extracted using NLP], such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's [i.e., manufacturer/seller] product line that fit that description.”; Figs. 5-18, para. 0076, “the XPS business logic layer 408 initiates a prompt to a user in the user interface 404, usually a question that starts the dialog with a consumer. The consumer responds with an answer, such as by typing (or speaking, in the case of a UI that enables speech recognition) an answer to the question.”; also see at least fig. 8 where the computer generated question provides 2 answer options to choose from “man” or “woman”; also see para. 0078-0095).
As per Claim 17, Adams teaches a method as provided above for claim 16. Adams further teaches wherein the updated data includes “at least one of” personal information about a user; user's answers to questions; information known about the product or found online, including one or more of availability, price, shipping speeds, ratings, reviews, and manufacturer specifications; and experiences of other users that have used the product finder and guide for the set of products (para. 0005-0006, 0077-0078, regarding generating a confidence score based on the answer provided by the user (i.e., user feedback/customer engagement) to a first question and selecting the second question accordingly; para. 0072, “Specifically, the program modules 42 may present a first question via a user interface; receive a response to the first question via the user interface; determine a second question using the received response to the first question and a weighted collection of attributes corresponding to a plurality of items; present the determined second question via the user interface; receive a response to the second question via the user interface; determine at least one recommended item from the plurality of items based on the response to the first question, the response to the second question, and the weighted collection of attributes corresponding to the plurality of items; and present the determined at least one recommended item via the user interface.”; para. 0085, “the platform 400 has the information to determine what sets of products might satisfy a consumer's wants or needs and what questions might be posed to narrow down the field to a most relevant set of products.”; figs. 5-18 and para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation [i.e., extracted using NLP], such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's [i.e., manufacturer/seller] product line that fit that description.”; also see para. 0078-0095).
As per Claim 18, Adams teaches a method for selling a recommended product to a user by providing a computerized product recommendation to the user via a computerized product recommendation engine (Abstract, regarding item recommendation, fig. 1 and para. 0057-0065, regarding the computer system of the invention), the method comprising:
automatically generating, by the computer system, the questions, responses and recommendations for a set of products based on data that includes at least one of manufacturer data, data from sellers of one or more of the set of products, and data from consumer reviews, where at least some of the data is obtained via web scraping and extracted using natural language processing (Figs. 5-18, para. 0020, “the dialog progresses through a series of product recommendations generated by the graph database using forward chaining logic and a series of questions generated using backward chaining logic, until a specific product recommendation is determined”; para. 0030, “the forward chaining steps for product recommendations and the backward chaining steps for determining a next question may be performed in real time, while a user is in a question-and-answer dialog, using runtime components of the system, including a graph database and other components. For example, in a system where there are about 50 facts relevant to a line of products, the system can rapidly return recommendations and next questions to maintain a dialog”; para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation [i.e., extracted using NLP], such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's [i.e., manufacturer/seller] product line that fit that description.”; also see para. 0072, 0078-0095; fig. 1 and para. 0057-0065, regarding the computer system of the invention; see specifically para. 0081 for web scraping “The XPS admin tools 418 may, as noted above, ingest data from customer data sources 422 (such as product data, metadata, links to additional information, etc.) and data from third party data sources 520 (which may contain further information about products, such as reviews on third party websites, as well as other information).”);
automatically generating, by the computerized product recommendation engine, questions and responses for a set of products (Figs. 5-18, para. 0005-0006, 0077-0078, regarding generating a confidence score based on the answer provided by the user (i.e., user feedback/customer engagement) to a first question and selecting the second question accordingly; para. 0020, “the dialog progresses through a series of product recommendations generated by the graph database using forward chaining logic and a series of questions generated using backward chaining logic, until a specific product recommendation is determined”; para. 0030, “the forward chaining steps for product recommendations and the backward chaining steps for determining a next question may be performed in real time, while a user is in a question-and-answer dialog, using runtime components of the system, including a graph database and other components. For example, in a system where there are about 50 facts relevant to a line of products, the system can rapidly return recommendations and next questions to maintain a dialog”; also see para. 0072-0095; fig. 1 and para. 0057-0065, regarding the computer system of the invention);
receiving input from the user as user answers to the automatically generated questions, user answers being one of a potential answer generated by the computer system or a natural language response (Figs. 5-18, para. 0076, “the XPS business logic layer 408 initiates a prompt to a user in the user interface 404, usually a question that starts the dialog with a consumer. The consumer responds with an answer, such as by typing (or speaking, in the case of a UI that enables speech recognition) an answer to the question.”; also see at least fig. 8 where the computer generated question provides 2 answer options to choose from “man” or “woman”); and
dynamically generating, optimizing, organizing or reorganizing, and sequencing or resequencing questions and responses as the user answers the questions generated by the computerized product recommendation engine (para. 0005-0006, 0077-0078, regarding generating a confidence score based on the answer provided by the user (i.e., user feedback/customer engagement) to a first question and selecting the second question accordingly; para. 0072, “Specifically, the program modules 42 may present a first question via a user interface; receive a response to the first question via the user interface; determine a second question using the received response to the first question and a weighted collection of attributes corresponding to a plurality of items; present the determined second question via the user interface; receive a response to the second question via the user interface; determine at least one recommended item from the plurality of items based on the response to the first question, the response to the second question, and the weighted collection of attributes corresponding to the plurality of items; and present the determined at least one recommended item via the user interface.”; para. 0085, “the platform 400 has the information to determine what sets of products might satisfy a consumer's wants or needs and what questions might be posed to narrow down the field to a most relevant set of products.”; figs. 5-18 and para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation [i.e., extracted using NLP], such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's [i.e., manufacturer/seller] product line that fit that description.”; Figs. 5-18, para. 0076, “the XPS business logic layer 408 initiates a prompt to a user in the user interface 404, usually a question that starts the dialog with a consumer. The consumer responds with an answer, such as by typing (or speaking, in the case of a UI that enables speech recognition) an answer to the question.”; also see at least fig. 8 where the computer generated question provides 2 answer options to choose from “man” or “woman”; also see para. 0078-0095);
recommending a product to the user (Abstract, Figs. 5-18, para. 0077, “The graph database servers 424 use a series of questions and responses to derive a product recommendation, such as determining, after a few well-targeted questions, that a consumer is looking for a red women's jacket for hiking, after which the platform 400 can present product options within an implementer's product line that fit that description.”); and
directing a user to a seller of the recommended product (Figs. 7-18, para. 0014, 0086, regarding the customer visiting the merchant’s website; para. 0072, 0077, 0085, regarding after a series of questions and answers providing the customer a recommended product {on the merchant’s website see figs. 7-18}).
As per Claim 21, Adams teaches a method as provided above for claim 1. Adams further teaches automatically generating, by the computer system, an interaction tree/graph comprising the automatically generated questions, potential answers, the responses and the recommendations for a set of products (Figs. 5-18, para. 0020, “the dialog progresses through a series of product recommendations generated by the graph database using forward chaining logic and a series of questions generated using backward chaining logic, until a specific product recommendation is determined”; para. 0030, “the forward chaining steps for product recommendations and the backward chaining steps for determining a next question may be performed in real time, while a user is in a question-and-answer dialog, using runtime components of the system, including a graph database and other components. For example, in a system where there are about 50 facts relevant to a line of products, the system can rapidly return recommendations and next questions to maintain a dialog”; fig. 1 and para. 0057-0065, regarding the computer system of the invention; Figs. 5-18, para. 0076, “the XPS business logic layer 408 initiates a prompt to a user in the user interface 404, usually a question that starts the dialog with a consumer. The consumer responds with an answer, such as by typing (or speaking, in the case of a UI that enables speech recognition) an answer to the question.”; also see at least fig. 8 where the computer generated question provides 2 answer options to choose from “man” or “woman”).
As per Claim 22, Adams teaches a method as provided above for claim 1. Adams further teaches automatically generating, optimizing, organizing or reorganizing, and sequencing or re-sequencing, by the computer system, the questions, the responses and the recommendations for the set of products by one or more of artificial intelligence, machine learning, reinforcement learning, neural networks and natural language processing (para. 0025, “Aspects of the invention are directed to a platform, with various components, methods, systems, services, modules and the like, that enables a company managing a product line, such as having one or more branded consumer products, to enable each of its consumers to engage in a dialog, managed by computer-implemented artificial intelligence, that results in a relevant, personalized product recommendation that meets the needs of the consumer. The dialog may simulate an interaction with a live salesperson, including allowing the customer to enter unstructured responses to a series of queries, where the queries are automatically generated”; para. 0076, “The runtime components 414 of the XPS servers 412 relay the query-response information for parsing to an artificial intelligence-based question analysis facility 430. Once the question-response information is parsed by the question analysis facility 430, the parsed information is returned to the runtime component 414 of the XPS server 412 and relayed to one or more graph database servers 424, referred to herein in some cases as XPS GraphDb Servers, which make inferences about topics of interest to the consumer and inferences about products that may be of interest to the consumer.”; para. 0082, “The platform 400, as described above, thus operates at runtime to manage a dialog session by taking a consumer through a series of questions that are automatically configured, using machine intelligence, to guide the customer to a suitable product within a set of products that have various features and attributes that may meet the needs of the consumer.”; para. 0081, “The admin components 414 may also provide documents and other training information for machine learning, such as via an interface 532 (e.g., an API or other interface or communication link as noted above), such as to an intelligent question and answer facility 530”; also see figs. 5-18 and para. 0020, 0030, regarding graph database relating to question and answers; Figs. 5-18, para. 0076, “the XPS business logic layer 408 initiates a prompt to a user in the user interface 404, usually a question that starts the dialog with a consumer. The consumer responds with an answer, such as by typing (or speaking, in the case of a UI that enables speech recognition) an answer to the question.”; also see at least fig. 8 where the computer generated question provides 2 answer options to choose from “man” or “woman”).
As per Claim 23, Adams teaches a method as provided above for claim 1. Adams further teaches wherein at least one of the questions, responses and recommendations are output as vocal or audible signals and the user's answers are input by voice (para. 0072, “Specifically, the program modules 42 may present a first question via a user interface; receive a response to the first question via the user interface;…”; para. 0074, “According to another embodiment, the implementation user interface 404 may be implemented in the context of a short message service (SMS), chatbot, messaging application, messaging service, messaging platform, intelligent personal assistant, and/or voice platform.” i.e., vocal/audible in voice platform and text/written in other messaging platforms).
As per Claim 24, Adams teaches a method as provided above for claim 1. Adams further teaches using NLP tools to incorporate the input from the user into the questions, responses and recommendations during execution of the method by the user (para. 0110 and 0116, regarding NLP; para. 0005-0006, 0077-0078, regarding generating a confidence score based on the answer provided by the user (i.e., user feedback/customer engagement) to a first question and selecting the second question accordingly; Figs. 5-18, para. 0020, “the dialog progresses through a series of product recommendations generated by the graph database using forward chaining logic and a series of questions generated using backward chaining logic, until a specific product recommendation is determined”; para. 0030, “the forward chaining steps for product recommendations and the backwar