Prosecution Insights
Last updated: April 19, 2026
Application No. 17/742,090

SYSTEMS AND METHODS RELATING TO ARTIFICIAL INTELLIGENCE LONG-TAIL GROWTH THROUGH GIG CUSTOMER SERVICE LEVERAGE

Non-Final OA (§101, §103)

Filed: May 11, 2022
Examiner: SCHNEIDER, JOSHUA D
Art Unit: 3626
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Genesys Cloud Services Inc.
OA Round: 5 (Non-Final)

Grant Probability: 36% (At Risk)
OA Rounds: 5-6
To Grant: 3y 10m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 36% (41 granted / 113 resolved; -15.7% vs TC avg) — grants only 36% of cases
Interview Lift: +50.5% across resolved cases with interview — a strong lift
Avg Prosecution: 3y 10m (typical timeline); 29 applications currently pending
Career History: 142 total applications across all art units

Statute-Specific Performance

§101: 28.8% (-11.2% vs TC avg)
§103: 37.0% (-3.0% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 15.6% (-24.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 113 resolved cases
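The figures above are simple ratios. A minimal sketch of the arithmetic follows; the 40% Tech Center average is back-solved from the stated per-statute deltas and is only an estimate, not source data:

```python
# Reproducing the dashboard arithmetic. The Tech Center average below is
# back-solved from the stated deltas (e.g. 28.8 - (-11.2) = 40.0), so it is
# an assumption rather than a figure reported by the source.
granted, resolved = 41, 113

career_allow_rate = round(100 * granted / resolved)   # the "36%" shown above

# Per-statute overcome rates vs. the estimated Tech Center average.
examiner_rates = {"101": 28.8, "103": 37.0, "102": 13.9, "112": 15.6}
tc_average_estimate = 40.0
deltas = {statute: round(rate - tc_average_estimate, 1)
          for statute, rate in examiner_rates.items()}
# Matches the "-11.2% vs TC avg" style figures in the table above.
```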

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-7, 11-17, and 21-23 are pending. Claims 1 and 11 are amended. Claims 21-23 are added. Claims 8-10 and 18-20 were previously cancelled.

Response to Arguments

Applicant's arguments with respect to the Section 101 rejection have been fully considered, but they are not persuasive. Applicant argues that the limitations set forth in the Office Action differ from the recitations of the pending claims because the claims do not recite a "second predefined confidence threshold." The artifact word “second” has been removed from the updated rejection, though it is noted that this language in no way alters the substance or merit of the rejection. Applicant also argues that the claims are not directed to a certain method of organizing human activity and/or to a mental process. The arguments on pages 11-12 present an analysis of the claims based on a list of specific examples from the MPEP. However, the arguments fail to address the actual rejection as written or identify any error in it. As noted in the rejection, the claim as a whole is directed to “Generating and Validating Expert Answers for Training an Answer System,” which is an abstract idea because it is a method of organizing human activity and a mental process, as set forth on pages 5 and 6 of the previous Office Action. References to training a machine learning model only emphasize the failure to address the rejection as written, as machine learning is addressed under Step 2A, Prong 2. This failure to address the rejection as written is further exemplified in Applicant’s arguments regarding the Step 2A, Prong 2, and Step 2B analyses in the rejection. With regards to Step 2A, Prong 2, Applicant fails to address the rejection as written.
The use of human expert analysis, when applied in a generic training step, does not amount to a technological improvement; rather, it emphasizes that the claims are directed to a human-organized activity, not to any technological improvement or technological solution. While Applicant discusses the Berkheimer Memo at length, that memo is not relevant, as nothing in the rejection was asserted to be well-understood, routine, or conventional. Similarly, with respect to Applicant’s arguments regarding Ex parte Desjardins, Applicant requests that the directives of that decision be followed. Those directives have been followed, and the analysis in the rejection as written clearly follows them. As Applicant fails to address the rejection as written, and that analysis is complete and correct, the arguments are not persuasive. Applicant's arguments filed with respect to the Section 103 rejection have been fully considered, but they are not persuasive. Applicant argues that Barborak et al. does not teach or suggest transferring an interaction package to at least one evaluator for validation that the expert answer is accurate, and automatically training a machine learning model of the long-tail bot in response to successful validation, by the at least one evaluator, that the expert answer accurately answers the user question. Barborak et al. is clearly directed to a method for enhancing the accuracy of a question-answer system (see paragraphs [0003]-[0008]). The accuracy-enhanced data is used to improve/train a machine learning model (see paragraph [0051], “Finally, at 180, the obtained missing piece of data is added into the question-answer system. Again, the missing piece of data may any item of data, a fact, a syntactical relationship, a grammatical relationship, a logical rule, a taxonomy rule, a grammatical rule, or any other information that would increase a determined score for a piece of evidence that may support or refute a candidate answer to the question.
The missing piece of data may be input into the corpus, algorithm, process, logical rule, or any other location or combination thereof wherein the data may affect the resulting score for a piece of evidence.” and paragraph [0088], “The QA system 210 can then look for expansions to the original question that would have produced this passage. The QA system 210 finds several expansions: quiz show is related to game show, Mexico is related to Mexican, 60′s is related to 1969. These could be used as training data to train a more effective question expander if they are actually implied by the clue.”). As such, this argument is not persuasive. Nevertheless, the rejection has been updated to address the amended claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 11-17, and 21-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Representative claim 1 recites “receive a user question from an interaction between a user and a chatbot; analyze the user question …to determine whether an intent of the user question matches an answer in a frequently asked questions (FAQ) knowledgebase of the system; analyze the user question … to determine whether the intent of the user question matches an answer in an expert answered questions (EAQ) knowledgebase of the system as a function of a confidence recognition match …, … trained on answers that have been provided by individuals designated as experts and that are assigned at least one rating that satisfies a respective quality threshold level; transfer real time control of the interaction to a primary subject matter expert qualified to provide answers to be used … in response to a determination that there is no match between the user question and an answer in the EAQ knowledgebase of the system with a respective confidence value that exceeds a predefined confidence threshold; receive an expert answer to the user question from the primary subject matter expert; transfer an interaction package to at least one evaluator for validation that the expert answer is accurate, wherein the interaction package comprises the user question and the expert answer to the user question; and …”. Therefore, the claim as a whole is directed to “Generating and Validating Expert Answers for Training an Answer System”, which is an abstract idea because it is a method of organizing human activity including commercial or legal interactions (including agreements in the form of marketing or sales activities or behaviors; business relations) and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
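For orientation, the routing recited in representative claim 1 can be sketched in code. Everything here (function names, the 0.8 threshold, the stub signatures) is an illustrative assumption, not taken from the application:

```python
# Hypothetical sketch of the short-tail / long-tail routing recited in
# claim 1. The threshold value and all names are assumptions.
CONFIDENCE_THRESHOLD = 0.8

def route_question(question, faq_kb, eaq_kb, match_intent, ask_expert):
    """Route a user question through the claimed FAQ -> EAQ -> expert flow."""
    # Short-tail bot: check for a direct FAQ intent match.
    answer, confidence = match_intent(question, faq_kb)
    if answer is not None:
        return answer, "faq"
    # Long-tail bot: check the expert-answered (EAQ) knowledgebase, but only
    # accept a match whose confidence exceeds the predefined threshold.
    answer, confidence = match_intent(question, eaq_kb)
    if answer is not None and confidence > CONFIDENCE_THRESHOLD:
        return answer, "eaq"
    # No confident match: transfer real-time control to a subject matter
    # expert and package the exchange for evaluator validation.
    expert_answer = ask_expert(question)
    interaction_package = {"question": question, "answer": expert_answer}
    return interaction_package, "pending_validation"
```

Only after an evaluator validates the packaged answer would the claimed system train the long-tail bot's model on the question-answer pair.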
“Generating and Validating Expert Answers for Training an Answer System” is considered to be a method of organizing human activity because validating answers to questions is a common business practice for preparing FAQ sections of documents and webpages, preparing customer service scripts, and creating issue flagging databases or knowledge bases for sales and customer service agents. “Generating and Validating Expert Answers for Training an Answer System” may also be considered to be a mental process. In particular, analyzing the user question … to determine whether an intent of the user question matches an answer in a frequently asked questions (FAQ) knowledgebase of the system; analyzing the user question … to determine whether the intent of the user question matches an answer in an expert answered questions (EAQ) knowledgebase of the system; transferring real-time control of the interaction to a primary subject matter expert qualified to provide answers … in response to a determination that there is no match between the user question and an answer in the EAQ knowledgebase of the system with a respective confidence value that exceeds a predefined confidence threshold; receiving an expert answer to the user question from the primary subject matter expert; and transferring an interaction package to at least one evaluator for validation, wherein the interaction package comprises the user question and the expert answer to the user question, are all processes that may be performed in the human mind by two humans, including an expert and an evaluator of the answers provided by the expert. That is, reading the written materials that make up a knowledgebase, determining matching or missing answers, providing such answers, and evaluating the provided answers are human mental processes that are performed in order to evaluate FAQs and other written materials. As such, the claims are directed to an abstract idea.
This judicial exception is not integrated into a practical application. In particular, claim 1 recites the following additional element(s): a short-tail bot and a long-tail bot to analyze questions, and automatically training a machine learning model of the long-tail bot based on the user question and the expert answer in response to successful validation by the at least one evaluator that the expert answer accurately answers the user question. Such additional elements, individually or in combination, do not integrate the exception into a practical application. The recitations of generic and commercially available software elements amount to merely reciting the words “apply it” (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). That is, the short-tail bot and the long-tail bot are high-level recitations of commercially available software elements. The data taken from those commercially available software elements is analyzed by humans, and data from the humans is then provided unchanged to other commercially available software elements. As such, those recitations of additional elements do no more than generally link the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). The use of real-time data is asserted to be an additional element, but it is not an additional element. The transfer of an interaction to a specific human is a human-organized activity, and the claims do not recite any technology necessary for such a transfer. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claim 1 is directed to an abstract idea.
Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements, individually and in combination, are merely being used to apply the abstract idea to a technological environment. That is, the chatbot, the short-tail bot, and the long-tail bot are high-level recitations of commercially available software elements. There are no improvements to those software elements, as the asserted improvement is a human expert and a human evaluator providing better data to those software elements. As such, the claims do not address a technical problem or provide a technical solution. Accordingly, claim 1 is ineligible.

Claim 11 recites substantially similar features to those recited in representative claim 1 and is rejected for substantially the same reasons. Dependent claims 2-7, 12-17, and 21-23 merely further limit the abstract idea and are thereby considered to be ineligible.

Dependent claims 2 and 12 further limit the abstract idea of “Generating and Validating Expert Answers for Training an Answer System” by introducing the element of the at least one evaluator comprising a secondary subject matter expert, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 2 and 12 are also non-statutory subject matter.
Dependent claims 3 and 13 further limit the abstract idea of “Generating and Validating Expert Answers for Training an Answer System” by introducing the element of transmit a response to the user question to the user via the chatbot, wherein the response includes the expert answer, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 3 and 13 are also non-statutory subject matter.

Dependent claims 4 and 14 further limit the abstract idea of “Generating and Validating Expert Answers for Training an Answer System” by introducing the element of receive a user rating of a quality of the expert answer from the user, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 4 and 14 are also non-statutory subject matter.

Dependent claims 5 and 15 further limit the abstract idea of “Generating and Validating Expert Answers for Training an Answer System” by introducing the element of to automatically train the machine learning model of the long-tail bot in response to successful validation by the at least one evaluator and receipt of a favorable user rating of the quality of the expert answer from the user, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 5 and 15 are also non-statutory subject matter.
Dependent claims 6 and 16 further limit the abstract idea of “Generating and Validating Expert Answers for Training an Answer System” by introducing the element of to transmit a matching answer to the user question via the chatbot in response to a determination that the intent of the user question matches one of an answer in the FAQ knowledgebase of the system or the EAQ knowledgebase of the system, which does not include meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 6 and 16 are also non-statutory subject matter.

Dependent claims 7 and 17 further limit the abstract idea of “Generating and Validating Expert Answers for Training an Answer System” by introducing the element of add a question-answer pair to the EAQ knowledgebase of the system in response to successful validation of the expert answer by the at least one evaluator, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 7 and 17 are also non-statutory subject matter.

Dependent claim 21 further limits the abstract idea of “Generating and Validating Expert Answers for Training an Answer System” by introducing the element of automatically train the machine learning model in response to a determination that the expert answer satisfies defined ethical criteria, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claim 21 is also non-statutory subject matter.
Dependent claim 22 further limits the abstract idea of “Generating and Validating Expert Answers for Training an Answer System” by introducing the element of to automatically train the machine learning model in response to a determination that the expert answer satisfies defined criteria related to succinctness and clarity, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claim 22 is also non-statutory subject matter.

Dependent claim 23 further limits the abstract idea of “Generating and Validating Expert Answers for Training an Answer System” by introducing the element of to withhold the expert answer from being transmitted in response to the user question until the expert answer is validated by the evaluator, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claim 23 is also non-statutory subject matter.

Dependent claims 2-7, 12-17, and 21-23 are also not integrated into a practical application. The dependent claims recite no new additional elements not previously recited in the independent claims. As such, the additional elements individually or in combination do not integrate the exception into a practical application; rather, the recitation of any additional element amounts to merely reciting the words “apply it” (or an equivalent) with the judicial exception, merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)).
The dependent claims also do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a computing system is merely being used to apply the abstract idea to a technological environment. That is, the claims provide no practical limits or improvements to any technology. Accordingly, dependent claims 2-7, 12-17, and 21-23 are also ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 11-17, 22, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2021/0097096 to Osmon et al. in view of U.S. Patent Application Publication No. 2013/0017524 to Barborak et al.

With regards to claims 1 and 11, Osmon et al.
teaches: at least one processor; and at least one memory comprising a plurality of instructions stored therein that, in response to execution by the at least one processor, causes the system to (paragraph [0022]): receive a user question from an interaction between a user and a chatbot (paragraph [0017], “Real time customer assistance can be provided using tools including “chatbots” or other similar real time customer assistance systems, that use artificial intelligence (AI) and/or machine learning (ML) to determine responses to, and interaction with, a customer. Such a chatbot can provide an automated answering service that determines the best answers to questions and provide those answers to customer support agents within existing communication applications. Chatbots typically are able to respond to “conversational” queries, meaning queries posed by users in non-technical language.”); analyze the user question with a short-tail bot to determine whether an intent of the user question matches an answer in a frequently asked questions (FAQ) knowledgebase of the system (paragraph [0025], “When a user query is received, chatbot 140 parses the user query to identify any available alternatives to words used in the user query. Then, chatbot 140 accesses the response database to determine if a response to the user query exists in the response database and can be used to respond to the user query automatically. 
In most cases, a user query corresponding to a response in the response database is a short-tail query.”); analyze the user question with a long-tail bot to determine whether the intent of the user question matches an answer in an expert answered questions (EAQ) knowledgebase of the system (paragraph [0025], “Chatbot 140 may forward received long-tail queries to response orchestrator 110 for assistance in response.”; paragraph [0026], “Response orchestrator 110 is a software routine executing on a computing device provided by the application provider, such as an application server. In general, response orchestrator 110 interfaces with chatbot 140 to process long-tail queries for a user of user application 150.”) as a function of a confidence recognition match of the short-tail bot (paragraph [0025], “If no response to the user query can be located, chatbot 140 may determine that the user query is a long-tail query, of the sort that may be more suited to response using knowledge engine 130. Chatbot 140 may forward received long-tail queries to response orchestrator 110 for assistance in response.”; paragraph [0026], “Response orchestrator 110 is a software routine executing on a computing device provided by the application provider, such as an application server. 
In general, response orchestrator 110 interfaces with chatbot 140 to process long-tail queries for a user of user application 150.”), …; transfer real time control of the interaction to a primary subject matter expert qualified to provide answers to be used to train the long-tail bot in response to a determination that there is no match between the user question and an answer in the EAQ knowledgebase of the system (paragraph [0019], “As a result, in current systems, neither knowledge graphs nor chatbots may be able to effectively allow users to respond to all, or even most, long-tail queries, which frequently results in users requesting help from live support agents.”; paragraph [0059], “In some examples of method 400, the application server may further determine, based on the output of the natural language model, that the natural language utterance cannot identify the natural language utterance and generate a crowdsourcing job to obtain additional training data for the natural language model. The crowdsourcing job may be forwarded to a different component of the computing device executing the application server, or to a human operator of the computing device. In general, if the natural language model cannot identify the natural language utterance, it may mean that the natural language utterance relates to information not currently stored in the knowledge graph. 
By generating a crowdsourcing job, additional data may be added to the knowledge graph in order to improve both the knowledge graph itself and the functionality of the application server.”) with a respective confidence value that exceeds a … predefined confidence threshold (paragraph [0047], “In this example, natural language model identifies a single node corresponding to the natural language utterance, however, in some cases, natural language model 120 may, instead of identifying a single node identifier, output a plurality of node identifiers paired with confidence values associated with the node identifiers.”; paragraph [0052], “In such a case, the application server may apply a confidence threshold to identify all node identifiers which may be suitably related to the natural language utterance to include in a response.”; paragraph [0059], “In some examples of method 400, the application server may further determine, based on the output of the natural language model, that the natural language utterance cannot identify the natural language utterance and generate a crowdsourcing job to obtain additional training data for the natural language model. The crowdsourcing job may be forwarded to a different component of the computing device executing the application server, or to a human operator of the computing device”; it is noted that any outside human capable of answering a question is interpreted as an expert relative to the bot, as the bot is only as knowledgeable as the information stored in its associated knowledgebase. This interpretation is in keeping with paragraphs [0103]-[0106] of the specification); receive an expert answer to the user question from the primary subject matter expert (paragraph [0059], “The crowdsourcing job may be forwarded to a different component of the computing device executing the application server, or to a human operator of the computing device. 
In general, if the natural language model cannot identify the natural language utterance, it may mean that the natural language utterance relates to information not currently stored in the knowledge graph. By generating a crowdsourcing job, additional data may be added to the knowledge graph in order to improve both the knowledge graph itself and the functionality of the application server.”); ….; and automatically train a machine learning model of the long-tail bot based on the user question and the expert answer in response to successful validation by the at least one evaluator that the expert answer accurately answers the user question (paragraph [0059], “In some examples of method 400, the application server may further determine, based on the output of the natural language model, that the natural language utterance cannot identify the natural language utterance and generate a crowdsourcing job to obtain additional training data for the natural language model. The crowdsourcing job may be forwarded to a different component of the computing device executing the application server, or to a human operator of the computing device. In general, if the natural language model cannot identify the natural language utterance, it may mean that the natural language utterance relates to information not currently stored in the knowledge graph. By generating a crowdsourcing job, additional data may be added to the knowledge graph in order to improve both the knowledge graph itself and the functionality of the application server.”). Osmon et al. fails to explicitly teach transferring an interaction package for validation that the expert answer is accurate. However, Barborak et al.
teaches in which the long-tail bot is trained on answers (paragraph [0123], “All of the above schemes and methods enable a QA system to expand its knowledge base as can be contained in a database, correct its processes for future questions using external sources, such as human users on a network or other QA systems, and generally increase its accuracy.”) that have been provided by individuals designated as experts (paragraph [0120], “According to FIG. 8, one embodiment herein permits specifically identified outside sources to supply answers 885 if they demonstrate a sufficient amount of accuracy in relation to a known trusted source 880, such as a human expert.”) and that are assigned at least one rating that satisfies a respective quality threshold level (paragraph [0120], “According to FIG. 8, one embodiment herein permits specifically identified outside sources to supply answers 885 if they demonstrate a sufficient amount of accuracy in relation to a known trusted source 880, such as a human expert. According to this embodiment, a trusted source 880 provides a response for several examples. The QA system 210 can generally trust respondents 270 that often agree with the trusted source 880 on those examples.”; paragraph [0121], “Accuracy in any of the embodiments can be determined by tracking the success rate of the outside source or sources 272 in relation to the metrics and comparisons indicated in the above embodiments. 
This can be done by keeping track of responses in relation to the above embodiments for definite duration or an infinite duration, and it can be done internally by the QA system 210, or by another tracking system or method known in the art.”); transferring real-time control of the interaction to a primary subject matter expert qualified to provide answers (paragraph [0039], “One aspect of the QA system is to be able to discover and pose follow-on inquiries to a user (or an external expert community) that, if answered, will improve the ability of the QA system to understand and evaluate supporting evidence for questions. Additionally, the acquired common-sense knowledge can be applied either off-line or during a live question answering session.”) to be used to train the long-tail bot in response to determining that there is no match between the user question and an answer in the EAQ knowledgebase of the system with a respective confidence value that exceeds a predefined confidence threshold (paragraph [0047], “A failure may result either from an inability to generate a candidate answer with a confidence score above a threshold value or if the QA system cannot correctly interpret the question. Additionally, a failure may also result from an individual piece of evidence receiving a score below a threshold value.”; paragraph [0048], “At 140, the failure is used to determine a missing piece of information. The missing piece of data/information may be data/information that would enable the QA system to improve a score for a piece of evidence, for example a passage, wherein the score for the piece of evidence is used in a confidence score for a candidate answer.”); receive an expert answer to the user question from the primary subject matter expert (paragraph [0049], “Next, at 150, a follow-on inquiry is output to obtain the missing piece of information. 
The inquiry may be directed to outside sources that can include a variety of users in an expert community who may be human users or may be other electronic systems capable of providing a response, such as other QA systems. A follow-on inquiry may involve, for example, keyword matching, expansion of the original question, and/or a request for lexical semantic relationships. For example, the QA system might request a clarification of what sense a word is being used, or what type of information is being requested in the question.”; paragraph [0050], “At 160, the QA system receives a response to the follow-on inquiry. The response is returned by a human user, expert community or other QA system. At 170, the response to the follow-on inquiry is validated to confirm the missing piece of data.”); transfer an interaction package to at least one evaluator for validation that the expert answer is accurate (paragraph [0007], “According to an embodiment herein, a method for enhancing the accuracy of a question-answer system is disclosed.”; paragraph [0075], “According to embodiments herein, the QA system 210 can receive a number of responses, for example, from a crowd sourcing environment or an external expert community. These responses can be filtered to make sure that the responses are from actual humans, as opposed to computerized answering systems, and that the responses are above a certain level of accuracy either based upon the reputation of the responder, correlation with the known correct answer, or agreement of the responses with one another.”), wherein the interaction package comprises the user question and the expert answer to the user question (paragraph [0050], “At 160, the QA system receives a response to the follow-on inquiry. The response is returned by a human user, expert community or other QA system. At 170, the response to the follow-on inquiry is validated to confirm the missing piece of data. 
The validation may include validation that the response is supported by a threshold number of experts, humans, or QA systems.”; paragraph [0120], “According to FIG. 8, one embodiment herein permits specifically identified outside sources to supply answers 885 if they demonstrate a sufficient amount of accuracy in relation to a known trusted source 880, such as a human expert. According to this embodiment, a trusted source 880 provides a response for several examples. The QA system 210 can generally trust respondents 270 that often agree with the trusted source 880 on those examples.”); and automatically train a machine learning model of the long-tail bot based on the user question and the expert answer in response to successful validation (claim 7, “validating that said response is accurate prior to adding said response to said corpus of data” and “selecting responses from first outside sources, said first outside sources having a previously established level of accuracy in providing previous responses to previous inquiries”) by the at least one evaluator that the expert answer accurately answers the user question (paragraph [0051], “Finally, at 180, the obtained missing piece of data is added into the question-answer system. Again, the missing piece of data may [be] any item of data, a fact, a syntactical relationship, a grammatical relationship, a logical rule, a taxonomy rule, a grammatical rule, or any other information that would increase a determined score for a piece of evidence that may support or refute a candidate answer to the question. The missing piece of data may be input into the corpus, algorithm, process, logical rule, or any other location or combination thereof wherein the data may affect the resulting score for a piece of evidence.”). This part of Barborak et al. is applicable to the system of Osmon et al. as they both share characteristics and capabilities, namely, they are directed to question answering systems.
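The mapped flow, as the rejection reads it onto the claims, is: no knowledgebase candidate clears a predefined confidence threshold, the question escalates to a subject matter expert, and the expert answer is used for training only after a quorum of evaluators validates it. A minimal Python sketch of that flow follows; the threshold value, the two-approval quorum, and all function names are illustrative assumptions, not anything fixed by the claims or the cited references:

```python
CONFIDENCE_THRESHOLD = 0.75   # the claimed "predefined confidence threshold"; value assumed
REQUIRED_APPROVALS = 2        # Barborak's "threshold number of experts"; value assumed

def best_match(candidates):
    """candidates: list of (answer, confidence) pairs from the knowledgebase.
    Returns the top answer, or None when no candidate clears the threshold,
    i.e. the failure case that triggers escalation to an expert."""
    above = [(answer, conf) for answer, conf in candidates
             if conf >= CONFIDENCE_THRESHOLD]
    return max(above, key=lambda pair: pair[1])[0] if above else None

def validate_and_train(question, expert_answer, approvals, corpus):
    """Append the (question, expert answer) pair to the training corpus only
    after enough evaluators approve it; returns True if training data was added."""
    if sum(approvals) >= REQUIRED_APPROVALS:
        corpus.append((question, expert_answer))
        return True
    return False
```

The escalation signal here is simply a `None` return; a real system would enqueue the question for a live expert at that point.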
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Osmon et al. to include the expert validation as taught by Barborak et al. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Osmon et al. in order to utilize failures in system responses to enhance the accuracy of a QA system (see paragraphs [0003]-[0004] of Barborak et al.). With regards to claims 2 and 12, Osmon et al. fails to explicitly teach using an evaluator for validation. However, Barborak et al. teaches the at least one evaluator comprises a secondary subject matter expert (paragraph [0050], “At 160, the QA system receives a response to the follow-on inquiry. The response is returned by a human user, expert community or other QA system. At 170, the response to the follow-on inquiry is validated to confirm the missing piece of data. The validation may include validation that the response is supported by a threshold number of experts, humans, or QA systems.”, where the threshold number of experts includes a secondary expert). This part of Barborak et al. is applicable to the system of Osmon et al. as they both share characteristics and capabilities, namely, they are directed to question answering systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Osmon et al. to include the expert validation as taught by Barborak et al. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Osmon et al. in order to utilize failures in system responses to enhance the accuracy of a QA system (see paragraphs [0003]-[0004] of Barborak et al.). With regards to claims 3 and 13, Osmon et al. 
teaches: transmit a response to the user question to the user via the chatbot, wherein the response includes the expert answer (paragraph [0031], “Upon receipt of the node identifiers from natural language model 120, response orchestrator 110 obtains data from the corresponding nodes of the knowledge graph, and uses the obtained data to generate a response to the initial user query. This response is then transmitted to chatbot 140 for display to the user.”; paragraph [0063], “The crowdsourcing job may be forwarded to a different component of the computing device executing the application server, or to a human operator of the computing device. In general, if the natural language model cannot identify the natural language utterance, it may mean that the natural language utterance relates to information not currently stored in the knowledge graph. By generating a crowdsourcing job, additional data may be added to the knowledge graph in order to improve both the knowledge graph itself and the functionality of the application server.”). With regards to claims 4 and 14, Osmon et al. fails to explicitly teach, but Barborak et al. teaches receive a user rating of a quality of the expert answer from the user (paragraph [0050], “At 160, the QA system receives a response to the follow-on inquiry. The response is returned by a human user, expert community or other QA system. At 170, the response to the follow-on inquiry is validated to confirm the missing piece of data. The validation may include validation that the response is supported by a threshold number of experts, humans, or QA systems.”). This part of Barborak et al. is applicable to the system of Osmon et al. as they both share characteristics and capabilities, namely, they are directed to question answering systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Osmon et al. 
to include the expert validation as taught by Barborak et al. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Osmon et al. in order to utilize failures in system responses to enhance the accuracy of a QA system (see paragraphs [0003]-[0004] of Barborak et al.). With regards to claims 5 and 15, Osmon et al. fails to explicitly teach, but Barborak et al. teaches that to automatically train the machine learning model of the long-tail bot comprises to automatically train the machine learning model of the long-tail bot in response to successful validation by the at least one evaluator and receipt of a favorable user rating of the quality of the expert answer from the user (paragraph [0050], “At 160, the QA system receives a response to the follow-on inquiry. The response is returned by a human user, expert community or other QA system. At 170, the response to the follow-on inquiry is validated to confirm the missing piece of data. The validation may include validation that the response is supported by a threshold number of experts, humans, or QA systems.”). This part of Barborak et al. is applicable to the system of Osmon et al. as they both share characteristics and capabilities, namely, they are directed to question answering systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Osmon et al. to include the expert validation as taught by Barborak et al. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Osmon et al. in order to utilize failures in system responses to enhance the accuracy of a QA system (see paragraphs [0003]-[0004] of Barborak et al.). With regards to claims 6 and 16, Osmon et al.
teaches: to transmit a matching answer to the user question via the chatbot in response to a determination that the intent of the user question matches one of an answer in the FAQ knowledgebase of the system or the EAQ knowledgebase of the system (paragraph [0033], “Cognitive conversational systems use machine learning models to understand and classify the intent of a user's question. They then apply additional context or extracted information to determine the best response for a user. A key aspect of a cognitive based system is providing feedback on a correct or incorrect answer.”; paragraph [0059], “The crowdsourcing job may be forwarded to a different component of the computing device executing the application server, or to a human operator of the computing device. In general, if the natural language model cannot identify the natural language utterance, it may mean that the natural language utterance relates to information not currently stored in the knowledge graph. By generating a crowdsourcing job, additional data may be added to the knowledge graph in order to improve both the knowledge graph itself and the functionality of the application server.”). With regards to claims 7 and 17, Osmon et al. fails to explicitly teach, but Barborak et al. teaches add a question-answer pair to the EAQ knowledgebase of the system in response to successful validation of the expert answer by the at least one evaluator (paragraph [0050], “As a result, software providers typically provide other real time customer assistance services to users to provide answers to questions that may be regarding a node other than the one a user is currently interacting with. Real time customer assistance can [be] provided using tools including “chatbots” or other similar real time customer assistance systems, that use artificial intelligence (AI) and/or machine learning (ML) to determine responses to, and interaction with, a customer.
Such a chatbot can provide an automated answering service that determines the best answers to questions and provide those answers to customer support agents within existing communication applications. Chatbots typically are able to respond to “conversational” queries, meaning queries posed by users in non-technical language.”). This part of Barborak et al. is applicable to the system of Osmon et al. as they both share characteristics and capabilities, namely, they are directed to question answering systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Osmon et al. to include the expert validation as taught by Barborak et al. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Osmon et al. in order to utilize failures in system responses to enhance the accuracy of a QA system (see paragraphs [0003]-[0004] of Barborak et al.). With regards to claim 22, Osmon et al. fails to explicitly teach, but Barborak et al. teaches that to automatically train the machine learning model of the long-tail bot comprises to automatically train the machine learning model in response to a determination that the expert answer satisfies defined criteria related to succinctness and clarity (paragraph [0051], “Finally, at 180, the obtained missing piece of data is added into the question-answer system. Again, the missing piece of data may [be] any item of data, a fact, a syntactical relationship, a grammatical relationship, a logical rule, a taxonomy rule, a grammatical rule, or any other information that would increase a determined score for a piece of evidence that may support or refute a candidate answer to the question.
The missing piece of data may be input into the corpus, algorithm, process, logical rule, or any other location or combination thereof wherein the data may affect the resulting score for a piece of evidence.”; paragraph [0070], “Additionally, the responses can be filtered to make sure that the responses are above a certain level of accuracy (based upon the reputation of the responder, correlation with the known correct answer, agreement of the responses with one another, etc.). Then, once the responses are filtered, the high-quality, human-based responses can be utilized to increase of the knowledge base of the question answer system, and/or also used to generate additional rules to help the question answer system score and rank the candidate answers 285.”, where the identification of missing information is related to succinctness and clarity). This part of Barborak et al. is applicable to the system of Osmon et al. as they both share characteristics and capabilities, namely, they are directed to question answering systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Osmon et al. to include the criteria related to succinctness and clarity as taught by Barborak et al. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Osmon et al. in order to utilize failures in system responses to enhance the accuracy of a QA system (see paragraphs [0003]-[0004] of Barborak et al.). With regards to claim 23, Osmon et al. fails to explicitly teach, but Barborak et al. 
teaches that the plurality of instructions causes the system to withhold the expert answer (paragraph [0047], “A failure may result either from an inability to generate a candidate answer with a confidence score above a threshold value”, where a candidate answer is otherwise provided as in paragraph [0069]) from being transmitted in response to the user question until the expert answer is validated by the evaluator (paragraph [0047], “Next, at 130, a failure in a question answering process is determined. The QA system generates one or more candidate answers to the question with associated confidence scores based on results from scoring processes/algorithms for pieces of evidence extracted from a corpus of data. A failure may result either from an inability to generate a candidate answer with a confidence score above a threshold value or if the QA system cannot correctly interpret the question. Additionally, a failure may also result from an individual piece of evidence receiving a score below a threshold value.”; paragraph [0050], “At 160, the QA system receives a response to the follow-on inquiry. The response is returned by a human user, expert community or other QA system. At 170, the response to the follow-on inquiry is validated to confirm the missing piece of data. The validation may include validation that the response is supported by a threshold number of experts, humans, or QA systems.”). This part of Barborak et al. is applicable to the system of Osmon et al. as they both share characteristics and capabilities, namely, they are directed to question answering systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Osmon et al. to include the withholding of answers that fail to meet a threshold for response as taught by Barborak et al.
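The claim-23 behavior mapped above, withholding an expert answer from the user until an evaluator validates it, can be sketched as a small gate object. The class and method names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class PendingAnswer:
    question: str
    expert_answer: str
    validated: bool = False

class AnswerGate:
    """Holds expert answers back from the user until an evaluator
    validates them, per the behavior recited in claim 23."""

    def __init__(self):
        self._pending = {}

    def submit(self, question, expert_answer):
        # Expert answer arrives but is not yet releasable to the user.
        self._pending[question] = PendingAnswer(question, expert_answer)

    def validate(self, question):
        # An evaluator confirms the expert answer is accurate.
        self._pending[question].validated = True

    def respond(self, question):
        """Return the expert answer only once validated; otherwise None,
        i.e. the answer is withheld from transmission."""
        entry = self._pending.get(question)
        return entry.expert_answer if entry and entry.validated else None
```

Keying the pending store on the question text is a simplification; a production system would key on an interaction or ticket identifier.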
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Osmon et al. in order to utilize failures in system responses to enhance the accuracy of a QA system (see paragraphs [0003]-[0004] of Barborak et al.). Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2021/0097096 to Osmon et al. in view of U.S. Patent Application Publication No. 2013/0017524 to Barborak et al. as applied to claims , further in view of U.S. Patent Application Publication No. 2020/0380310 to Weider et al. With regards to claim 21, Osmon et al. fails to explicitly teach, but Weider et al. teaches that to automatically train the machine learning model of the long-tail bot comprises to automatically train the machine learning model in response to a determination that the expert answer satisfies defined ethical criteria (paragraph [0060], “In a fully automated implementation, the features may be automatically and intelligently identified by the system. For example, method 600 may examine the dataset and determine if the dataset includes any features in a list of common features that are known to have ethical implications if the data distribution is not balanced. For example, the common features may include gender, race, sexual orientation, and age. In an example, the bias detection tool may examine the contents of the dataset and/or the type of ML model for which the dataset may be used to determine what feature(s) may be most appropriate for identifying bias and/or data imbalance.”; paragraph [0070], “By providing semi or fully data imbalance detection and correction, the methods and systems may quickly and efficiently identify, eliminate or reduce bias. This can improve efficiency of the training process, while ensuring they comply with ethical, fairness, regulatory and policy standards.”). This part of Weider et al. is applicable to the system of Osmon et al.
as they both share characteristics and capabilities, namely, they are directed to improving training data for learning systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Osmon et al. to include the ethical data review as taught by Weider et al. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Osmon et al. in order to reduce bias in a data set to prevent harmful responses (see paragraphs [0002]-[0003] of Weider et al.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Joshua D Schneider whose telephone number is (571)270-7120. The examiner can normally be reached on Monday - Friday, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux, can be reached at (571)272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /J.D.S./Examiner, Art Unit 3626 /JESSICA LEMIEUX/Supervisory Patent Examiner, Art Unit 3626
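Read together, the rejected dependent claims condition automatic training of the long-tail bot on a chain of gates: evaluator validation, a favorable user rating (claims 5 and 15), succinctness and clarity criteria (claim 22), and ethical criteria (claim 21). A minimal sketch of that composite gating follows; the 1-5 rating scale, the word-count proxy for succinctness, the term blocklist, and every threshold are illustrative assumptions, since the claims leave the criteria undefined:

```python
def is_succinct_and_clear(answer, max_words=120):
    """Stand-in for the claim-22 criteria; the real criteria are unspecified,
    so a simple word-count bound is assumed here."""
    words = answer.split()
    return 0 < len(words) <= max_words

def passes_ethical_criteria(answer, blocklist=("offensive-term",)):
    """Stand-in for the claim-21 criteria (e.g. bias or harm screening);
    the blocklist contents are placeholders."""
    lowered = answer.lower()
    return not any(term in lowered for term in blocklist)

def should_train(validated, user_rating, answer, favorable_min=4):
    """Train the long-tail bot only when every gate passes: evaluator
    validation, a favorable user rating, and both criteria checks."""
    return (validated
            and user_rating >= favorable_min
            and is_succinct_and_clear(answer)
            and passes_ethical_criteria(answer))
```

A real implementation would replace the two stand-in predicates with whatever the specification defines; the point of the sketch is only that training is conjunctively gated.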

Prosecution Timeline

May 11, 2022
Application Filed
Mar 09, 2024
Non-Final Rejection — §101, §103
Jul 15, 2024
Response Filed
Oct 24, 2024
Final Rejection — §101, §103
Jan 29, 2025
Request for Continued Examination
Jan 30, 2025
Response after Non-Final Action
Mar 21, 2025
Non-Final Rejection — §101, §103
Jun 17, 2025
Response Filed
Sep 20, 2025
Final Rejection — §101, §103
Dec 23, 2025
Request for Continued Examination
Jan 29, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536604
SYSTEM AND METHOD FOR CONTINUOUS BIOMETRIC MONITORING
2y 5m to grant Granted Jan 27, 2026
Patent 12482043
SYSTEMS AND METHODS FOR DEVELOPING GUESTS OF A SERVICE BUSINESS
2y 5m to grant Granted Nov 25, 2025
Patent 12475472
METHOD FOR MANAGING GENUINE FABRIC WITH BLOCKCHAIN DATA
2y 5m to grant Granted Nov 18, 2025
Patent 12469596
METHOD AND SYSTEM FOR COORDINATING USER ASSISTANCE
2y 5m to grant Granted Nov 11, 2025
Patent 12462264
SYSTEMS AND METHODS FOR APPLYING AN IDENTIFICATION PATTERN TO AN ELECTRONIC DEVICE
2y 5m to grant Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
36%
Grant Probability
87%
With Interview (+50.5%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 113 resolved cases by this examiner. Grant probability derived from career allow rate.
