Prosecution Insights
Last updated: April 19, 2026
Application No. 18/934,015

AUTOMATIC QUALITY ASSURANCE FOR INFORMATION RETRIEVAL AND INTENT DETECTION

Non-Final OA (§101, §103)
Filed: Oct 31, 2024
Examiner: GOLDBERG, IVAN R
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Forethought Technologies Inc.
OA Round: 1 (Non-Final)
Grant Probability: 35% (At Risk)
OA Rounds: 1-2
To Grant: 4y 8m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 35% (128 granted / 365 resolved; -16.9% vs TC avg). Grants only 35% of cases.
Interview Lift: +36.9% for resolved cases with interview (strong +37% lift vs. without)
Avg Prosecution: 4y 8m typical timeline (57 currently pending)
Total Applications: 422 career applications across all art units

Statute-Specific Performance

§101: 27.7% (-12.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 3.4% (-36.6% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 365 resolved cases
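As a rough consistency check, the headline figures above can be reproduced from the raw counts shown on this page (128 granted of 365 resolved; 72% grant probability with interview). This is a minimal sketch assuming "interview lift" is simply the with-interview grant probability minus the career allow rate, and that each statute's "vs TC avg" delta is the examiner's rate minus the Tech Center average:

```python
# Reproduce the examiner-statistics headline figures from the raw counts
# shown on this page. Assumption: "interview lift" = with-interview grant
# probability minus the career allow rate.
granted, resolved = 128, 365

career_allow_rate = granted / resolved               # ~0.3507, shown as "35%"
with_interview = 0.72                                # 72% grant probability with interview
interview_lift = with_interview - career_allow_rate  # ~0.369, shown as "+36.9%"

# Each statute-specific delta implies a Tech Center average, e.g. a 27.7%
# §101 rate at -12.3% vs TC avg implies a TC average near 40.0%.
tc_avg_101 = 0.277 - (-0.123)

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Interview lift: {interview_lift:+.1%}")
print(f"Implied TC avg for §101: {tc_avg_101:.1%}")
```

The same subtraction applied to the §102, §103, and §112 rows recovers the implied Tech Center averages for those statutes as well.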

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

The following is a Non-Final, first Office Action responsive to Applicant’s communication of 10/31/2024, in which applicant filed the application. Claims 1-32 are pending in the instant application and have been rejected below.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 386(c) as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994). The disclosure of the prior-filed applications, Application Nos. 63/501,163; 63/484,016; 63/155,449; 63/403,054; 17/682,537; 18/347,524; and 18/347,527, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. “Test mode” and/or evaluation questions for answers/helpfulness/accuracy found in independent claims 1, 2, 12, and 19 are not disclosed in these earlier applications.
The priority date is believed to be 10/31/23, based on provisional 63/594,726 disclosing the “Summarizer.”

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 3/4/25 is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are, in claims 1, 2, 12, and 19: “classifier”; “autonomous artificial intelligence (AI) chatbot agent using a large language model”; and “evaluation engine” in claim 1. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

[0085] as published states “computer program instructions stored on memory units to implement analytics functions 266, AI/ML training engines 278, and trained models and classifiers 280.” Accordingly, the “classifier” and “engine” are interpreted as corresponding to the structure of “computer program instructions stored on memory” to implement the limitations recited. [0077] as published states “An Artificial Intelligence (AI) augmented customer support module 140 may be implemented in different ways, such as being executed on its own server, being operated on the cloud, or executing on a server of the customer support application. The AI augmented customer support module in one implementation includes dedicated AI ASIC processor and memory.” [0080] as published states “FIG.
2A illustrates an example of functional modules in accordance with an implementation. AI/ML services may include an agent information assistant (an “Assist Module”) 205 to generate information to assist a human agent to respond to a customer question”; [0229] as published states “an autonomous AI chatbot 4704 that interacts with a large language model 4706”; [0230] as published states “the large language model serves as the autonomous AI chatbot agent, but more generally an Autonomous AI chatbot agent may use a large language model to enhance its capabilities.” Accordingly, the “autonomous artificial intelligence (AI) chatbot agent using a large language model” is interpreted as corresponding to the structure of “computer program instructions stored on memory” to implement the limitations recited.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-32 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e. an abstract idea) without reciting significantly more.
Step One - First, pursuant to step 1 in MPEP 2106.03, claim 1 is directed to an apparatus, which is a statutory category.

Step 2A, Prong One - MPEP 2106.04 - Claim 1 recites: “An apparatus for responding to a customer service ticket, comprising: … to detect a topic of a customer question and associated intent based on a taxonomy of topics tickets; … using a … language model to generate an answer to solve a customer question; and an evaluation … accessing at least one … language model to implement to evaluate a true solve rate based on a measurement of helpfulness and accuracy of responses … to evaluate questions during a test mode.” As drafted, this is, under its broadest reasonable interpretation, directed to the abstract idea groupings of “certain methods of organizing human activity” (business relations, or “managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)”) and “mathematical relationships”: here we have a customer question, a topic is determined based on a taxonomy/grouping of topics, an answer is generated to solve the customer question, and a language model is accessed to estimate a true solve rate based on a measurement of helpfulness and accuracy of responses to evaluate questions during testing; the mathematical relationships include the “true solve rate based on measurement of helpfulness and accuracy” [see claim 8 as well – percentage of useful answers]. Accordingly, claim 1 is directed to an abstract idea because it is for helping determine a topic of a question from a user, presenting to the user an answer for their question, and estimating a true solve rate (i.e. a percentage – see [0281]: if a thousand conversations are handled by an AI chatbot, suppose 90% were evaluated as being useful to customers; the true solve rate would be 90%).

Step 2A, Prong Two - MPEP 2106.04 - This judicial exception is not integrated into a practical application.
In particular, claim 1 recites additional elements that are: An apparatus for responding to a customer service ticket, comprising: a classifier trained to detect a topic of a customer question and associated intent based on a taxonomy of topics tickets; an autonomous artificial intelligence (AI) chatbot agent using a large language model to generate an answer to solve a customer question; and an evaluation engine accessing at least one large language model to implement to evaluate a true solve rate based on a measurement of helpfulness and accuracy of responses by the AI chatbot to evaluate questions during a test mode. (The additional elements involve, based on claim interpretation, a computer executing stored instructions, use of a large language model, and “training” on detecting topics; MPEP 2106.05f “apply it [abstract idea] on a computer” applies – this merely uses a computer as a tool to perform an abstract idea; see also MPEP 2106.05h field of use for the combination of computer executing stored instructions, chatbot, and large language model; see also July 2024 Subject Matter Eligibility Update, Example 47, claim 2; Example 48, claim 1; the “machine learning model” and training are “mere instructions to implement abstract idea on a computer” at MPEP 2106.05f and “field of use” (MPEP 2106.05h).) These elements, such as the “classifier trained,” amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)) and, individually or in combination, are considered “field of use” (MPEP 2106.05h). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claim also fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, and/or an additional element that applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. See 84 Fed. Reg. 55. The claim is directed to an abstract idea.

Step 2B in MPEP 2106.05 - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a computer system are MPEP 2106.05(f) (Mere Instructions to Apply an Exception – “Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” Alice Corp., 134 S. Ct. at 235) and MPEP 2106.05h (field of use). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. The claim is not patent eligible.
Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself.

Independent claim 2 is directed to an apparatus at step 1, which is a statutory category. Claim 2 recites similar limitations as claim 1 and is rejected for the same reasons at step 2a, prong one; step 2a, prong two; and step 2b. Claim 2 further recites “reasoning logic to evaluate the sequence of answers,” which is considered narrowing the abstract idea in giving better answers to questions from a customer; to the extent it is “logic by a computer,” it is MPEP 2106.05f (apply it [abstract idea] on a computer) at step 2a, prong two and step 2b. In addition, claim 2 recites “triggering an information retrieval pipeline to access an information resource”; the “pipeline,” as best understood, is considered to be an additional element in that it is querying another computer/database; this is considered to be executed by a computer and, in combination/individually at step 2a, prong two and step 2b, is considered MPEP 2106.05f (apply it [abstract idea] on a computer) and field of use (MPEP 2106.05h).

Independent claim 12 is directed to a system at step 1, which is a statutory category. Claim 12 recites similar limitations as claims 1 and 2, and is rejected for the same reasons at step 2a, prong one; step 2a, prong two; and step 2b. Claim 12 further has “implement a workflow for a topic” (specific text/description for situations) [0239] as published, and a “natural language workflow policy” [which can be “refund”] with a “description of available software tools and available API calls” to generate an “interactive workflow” to solve the customer question. This portion appears to be just the description of the available software and API calls, which is part of a possible answer.
If the claim is amended to positively recite the available software tools and API calls being used and having an interactive workflow, the disclosure [0228] as published gives the example of a workflow being for “issuing a refund,” further supporting that these limitations are “apply it [abstract idea – business relations, or following instructions] on a computer” (MPEP 2106.05f).

Independent claim 19 is directed to a system at step 1, which is a statutory category. Claim 19 recites similar limitations as claims 1 and 2 [pipeline] and 12 [description of software tools and API calls], and is rejected for the same reasons at step 2a, prong one; step 2a, prong two; and step 2b.

Claims 3, 24 narrow the abstract idea by stating that evaluation questions are determined based on “titles of content” of the information resource (e.g. [0277, 0286] as published – examples of “information resources” are “historic tickets or knowledge-based articles”); to the extent the generation is “by the computer [engine],” this is “mere instructions to implement abstract idea on a computer” at MPEP 2106.05f and field of use (MPEP 2106.05h).

Claims 4, 25 narrow the abstract idea by stating that evaluation questions are determined based on “summaries” of the information resource (e.g. [0277, 0286] as published – examples of “information resources” are “historic tickets or knowledge-based articles”); to the extent the generation is “by the computer [engine],” this is “mere instructions to implement abstract idea on a computer” at MPEP 2106.05f and field of use (MPEP 2106.05h).

Claims 5, 26 narrow the abstract idea for similar reasons, where evaluation questions are based on question-answer pairs in historic tickets.

Claims 6, 27 narrow the abstract idea by stating that evaluation questions are “synthetic” based on content of the information resource (e.g.
[0277, 0286] as published – examples of “information resources” are “historic tickets or knowledge-based articles”); the “synthetic” refers to the fact that they are akin to test questions generated by the computer [see [0298] as published: “if an article on neck creams has a title “Neck creams for dry skin” a synthetic question, generated by an LLM, could be “Are their neck creams for my dry skin?” If such a synthetic question is processed by the AI chatbot 5708 and LLM 5710, it should generate an answer paraphrasing the original article and article title.”] To the extent the generation is “by the computer [engine],” this is “mere instructions to implement abstract idea on a computer” at MPEP 2106.05f and field of use (MPEP 2106.05h).

Claims 7, 28 narrow the abstract idea by stating that evaluation of overall conversation resolution is from a series of questions and answers. This narrows the abstract idea to analyze the teaching instructions over a sequence of questions and answers.

Claims 8, 29 narrow the abstract idea by generating a true solve rate that is a percentage of useful answers based on helpfulness and factual accuracy.

Claims 9, 30 further narrow the abstract idea by stating that the information resource comprises knowledge base articles. The additional elements here are a “database” and “knowledge base,” which are viewed as MPEP 2106.05f (apply it [abstract idea] on a computer) and, in combination with claim 1 and the database, are viewed as MPEP 2106.05h (field of use).

Claims 10, 31 are rejected for similar reasons, except here it is “historical customer support tickets” instead of “articles”; the additional element of a database is the same as in claims 9, 30.

Claims 11, 15, 32 are rejected for similar reasons as claims 7, 28, as they also evaluate over a sequence/series of questions.

Claims 13, 20 are rejected for narrowing the claim for detecting intent in questions and evaluating the accuracy.
Claims 14, 21 narrow the abstract idea by evaluating the “appropriateness” of the workflow implemented. To the extent this is “by computer,” this is MPEP 2106.05f (apply it [abstract idea] on a computer).

Claim 15 narrows the abstract idea by having a series of questions; to the extent an AI chatbot is used, this is rejected for the same reasons as in the independent claims.

Claims 16, 22 narrow the abstract idea by stating the workflow policy is written in at least one natural language sentence.

Claims 17, 23 have additional elements of retrieving information from the large language model and sending information to it – conversation information, workflow policy, and applicable software tools. The claim at this time does not say what the language model does with the “conversation information”; it appears the workflow policy and software tools were already used in the same manner in claim 12 (“large language model is prompted with a natural language workflow policy and a description of available software tools”).

Claim 18 narrows the abstract idea by stating rules in the form of “guard rail” prompts for the model ([0236] as published gives examples – “a guard rail could include guard rail prompts to remind the large language model know it is an AI customer service chatbot, it must respond truthfully to customer questions, it must follow the workflow policy provided”); this is also viewed as “apply it [abstract idea] on a computer” (MPEP 2106.05f) in the sense of providing the rules as programming for the large language model, so the output is more correct/desirable/truthful. At this time, there are no more details in claim 18.

Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. For more information on 101 rejections, see MPEP 2106.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5 and 7-11 are rejected under 35 U.S.C. 103 as being unpatentable over Tiwari (US 2021/0133224) and Yu et al., “Generate rather than retrieve: Large language models are strong context generators,” 2023, published at the International Conference on Learning Representations (ICLR) 2023, arXiv preprint arXiv:2209.10063, pages 1-10.

Concerning claim 1, Tiwari discloses: An apparatus for responding to a customer service ticket (Tiwari – see par 35 - Run time processing system 300 also includes a message processing module 308 that receives and manages the processing of messages (e.g., questions) from one or more users.
A user identification module 310 determines the identity of a particular user and an intent classification module 312 applies an intent classification model to a received message; see par 82 - the systems and methods gather different sources of data from the knowledge base (the tags added for each article, the user messages that are labeled by analysts to mark the correct answers, and also click data of the users to map their queries to the correct article). All these give the data in the format (User Query Article ID); systems and methods use each of these examples as a positive example and query the knowledge base to get the closest articles to the query as negative examples.), comprising: a classifier trained (Tiwari – see par 28 - As shown in FIG. 1, a runtime 104 processes various questions 114 and requests from any number of users 116 via any type of interface 118. see par 29 - Run time 104 also includes vector space intent classification to a best category 120 which attempts to classify the intent of a particular question 114. see par 76 - For each of the categories, the systems and methods automatically create an intent with all the tags of all the articles in the category added to the intent. The systems and methods then train an intent classification engine; see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) to detect a topic of a customer question and associated intent based on a taxonomy of topics (Tiwari – see par 36-37, FIG. 4 - method 400 accesses multiple data sources or knowledge bases and creates a conversational bot that can answer questions related to the data received from the multiple data sources or knowledge bases. 
For example, parser 404 may separate a document or other data item into multiple chapters, categories, subcategories, text pieces, and the like. Taxonomy mapping 406 includes mapping the parsed document to a node in a predefined taxonomy of topics, such as car functions or parts. see par 40 - FIG. 5 is a process diagram depicting an embodiment of a method 500 for processing messages (e.g., questions) received from one or more users. A user message 502 is received from a user or a system associated with a user. The user message 502 may also be referred to as a “question”, “query”, and the like. A bot 504 receives the user message 502 and an identity of the user is identified 506. In some embodiments, a cookie-based method is used to give a unique user identifier to each user, and the unique user identifier identifies each user. The method 500 continues as an intent classification model is applied 508 to identify the category for the knowledge base (e.g., the category associated with the user message 502).); an autonomous artificial intelligence (AI) chatbot agent (Tiwari - see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) using a … language model to generate an answer to solve a customer question (Tiwari – see par 23 – conversational interface that includes an ability to interact with a computing system in natural language and in a conversational way; see par 51 - Utterance generation is an important problem in Question-Answering, Information Retrieval, and Conversational AI Assistants.
Chatbots and conversational interfaces are being adopted for various conversational automation use cases such as website assistants, customer service automation and IT and enterprise service automation; This approach enables customers (e.g., system administrators) to easily configure flows and build chatbots very quickly. To solve this problem, the described systems and methods integrate dialog acts into the knowledge base decision trees. Dialog Acts are types of speech acts that serve common actions with respect to navigating a decision tree). Yu discloses: an autonomous artificial intelligence (AI) chatbot agent using a “large language model” to generate an answer to solve a customer question (Yu page 2, 1st paragraph - we show that generated contextual documents contain the correct answer more often than the top retrieved documents. We believe this is because large language models generate contextual documents by performing deep token-level cross-attention between all the question and document contents, resulting in generated documents that are more specific to the question than retrieved documents.) Tiwari and Yu disclose: an evaluation engine (Tiwari - see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) accessing at least one large language model (See Yu above – page 2; page 1, Abstract - We call our method generate-then-read (GENREAD), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer.) 
to implement to evaluate a true solve rate based on a measurement of helpfulness and accuracy of responses by the AI chatbot to evaluate questions during a test mode (Tiwari ‘224 – see par 26 - the knowledge base corpus 106 is accessed from multiple sources. These data sources may be normalized into a common format, such as CSV or JSON and mapped to certain fields in an index, as discussed herein. An example type of document may expect title, description, tags, category, and subcategory fields. see par 39 - In some embodiments, if a message does not return the right answer from the bot, then the systems and methods relabel the correct intent and knowledge base article for the message. The systems and methods may also retrain the intent classification and knowledge base ranking algorithm to return the correct answer. An automated testing process 424 is used to measure the accuracy of the bot. In some implementations, if a message response is not the correct article, the systems and methods may add more utterances to improve the likelihood of returning the correct article for the message; see also Yu page 5, section 4 - To evaluate the model performance, we use exact match (EM) score for evaluating open-domain QA (Zhu et al., 2021). An answer is considered correct if and only if its normalized form has a match in the acceptable answer list). Both Tiwari and Yu are analogous art as they are directed to answering questions (see Tiwari Abstract; Yu Abstract). Tiwari discloses an AI assistant and chatbot where the ability is to interact with the system in a “natural language and conversational way” (See par 23, 51). Tiwari further discloses having “a large pool 710 of paraphrases” for selection of candidate paraphrases (See par 61) and knowledge bases with “large articles” that can have summaries (See par 47). Yu improves upon Tiwari by disclosing using a “large language model”.
One of ordinary skill in the art would be motivated to further include explicitly having a “large language model” to efficiently improve upon the chatbot, knowledge base with “large articles” (See par 47) and “large pool of paraphrases” (See par 61) in Tiwari. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the conversational bot that answers questions using knowledge bases in Tiwari (See abstract, par 36) to further use a large language model as disclosed in Yu, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success. Concerning independent claim 2, Tiwari and Yu disclose: An apparatus for responding to a customer service ticket (Tiwari – see par 35 - Run time processing system 300 also includes a message processing module 308 that receives and manages the processing of messages (e.g., questions) from one or more users. A user identification module 310 determines the identity of a particular user and an intent classification module 312 applies an intent classification model to a received message), comprising: a classifier trained (Tiwari [same as claim 1] – see par 28-29 - classify the intent of a particular question 114, 76 - The systems and methods then train an intent classification engine; see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) to detect a topic of a customer question and associated intent based on a taxonomy of topics (Tiwari [same as cl.
1] – see par 36-37, FIG. 4 ; par 40); an autonomous artificial intelligence (AI) chatbot agent (Tiwari - see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) using a large language model (Yu [same as cl. 1] - page 2, 1st paragraph) to generate an answer to solve a customer information question (Tiwari – [same as cl.1 ] - see par 23, 51) by triggering an information retrieval … to access an information resource to answer the question (Tiwari – see par 25 – any number of data sources 102 represent a corpus of data associated with a particular topic, product, service, issue, and the like; Example data sources 102 include a knowledge base corpus 106, categories that are entered or extracted 108, categories made as intents 110, and words of importance made as utterances 112. The knowledge base corpus 106 includes, for example, operating manuals, user manuals, frequently asked questions and answers, articles, product support documents, catalogs, and the like; see par 27 - The categories made as intents 110, as discussed herein, may be used in combination with the categories entered or extracted 108. An automated utterance generation process (discussed herein) is part of extracting words and phrases of importance from each document. see par 38 - In some embodiments, a vector index 412 includes sentences in the knowledge base 410 embedded into vectors. An entity extraction 414 extracts meaningful entities from the knowledge base 410 and adds those entries to the index. An elastic search index 416 is an information retrieval index that uses an inverted word-document index. The indexes 412, 414, and 416 are used during run time to quickly identify answers to user questions and other user messages. 
For example, the indexes 412, 414, and 416 may be stored in knowledge base 410). While Tiwari discloses having a number of documents, articles, manuals, and using a number of indexes in its knowledge base, it is unclear if this is considered “pipeline” as claimed. Yu discloses: by triggering an information “retrieval pipeline” to access an information resource to answer the question (NPL-Yu – see page 2, 3rd paragraph - In contrast to the retrieve-then-read pipeline, our method is essentially a generate-then-read pipeline. Specifically, it first prompts a large language model to generate contextual documents based on a given question, and then reads the generated document to produce the final answer; page 2, 4th paragraph - We propose a novel clustering-based prompting approach to generate multiple diverse contextual documents that increases the likelihood of covering the correct answer; see page 5, Section 3.2.2 “increase knowledge coverage in generated documents, we propose a novel clustering-based prompt method. It first clusters the representations of a set of documents into K classes (K = 2 in Figure 1). Next, it randomly selects n question-document pairs (n = 5 in Figure 1) from each cluster. Lastly, a large language model presents the different n question-document pairs as in-context demonstrations for generating documents to a given question. In this way, large language models are based on different distributions of examples, hence resulting in generated documents covering different perspectives). 
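Yu's clustering-based prompting (page 5, §3.2.2) can be sketched as follows. The embedding and K-means steps are elided and the clusters of question-document pairs are assumed precomputed, so `build_cluster_prompts` and its prompt template are illustrative assumptions, not Yu's actual code:

```python
import random

def build_cluster_prompts(clusters, question, n=5, seed=0):
    """For each cluster of (question, document) pairs, randomly sample n pairs
    as in-context demonstrations and append the target question, yielding one
    prompt per cluster so an LLM generates K diverse contextual documents."""
    rng = random.Random(seed)
    prompts = []
    for pairs in clusters:
        demos = rng.sample(pairs, min(n, len(pairs)))
        body = "\n\n".join(f"Question: {q}\nDocument: {d}" for q, d in demos)
        prompts.append(f"{body}\n\nQuestion: {question}\nDocument:")
    return prompts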
Tiwari and Yu disclose: an evaluation engine (Tiwari - see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) accessing at least one large language model (See Yu above – page 2; page 1, Abstract - large language model) to generate evaluation questions and evaluate answers, wherein in a test mode the evaluation engine generates a sequence of evaluation questions classified by the classifier and answered by the AI chatbot (Tiwari – see par 35 - Run time processing system 300 also includes a message processing module 308 that receives and manages the processing of messages (e.g., questions) from one or more users. A user identification module 310 determines the identity of a particular user and an intent classification module 312 applies an intent classification model to a received message; see par 40 - FIG. 5 is a process diagram depicting an embodiment of a method 500 for processing messages (e.g., questions) received from one or more users; The method 500 continues as an intent classification model is applied 508 to identify the category for the knowledge base (e.g., the category associated with the user message 502)), with the evaluation engine implementing reasoning logic to evaluate the sequence of answers generated by the AI chatbot agent (Tiwari –see par 39 - A bot is then created 420 and trained 422 using various models associated with intent identification and knowledge base ranking. In some embodiments, if a message does not return the right answer from the bot, then the systems and methods relabel the correct intent and knowledge base article for the message. The systems and methods may also retrain the intent classification and knowledge base ranking algorithm to return the correct answer. 
An automated testing process 424 is used to measure the accuracy of the bot. Blind testing 426 is performed to evaluate the accuracy of the bot without knowing the test set; see par 85 - the user can provide a decision tree (if the user picks Apple, respond with this. If the user picks Orange, respond with that). This approach enables customers (e.g., system administrators) to easily configure flows and build chatbots very quickly. Dialog Acts are types of speech acts that serve common actions with respect to navigating a decision tree. The following are examples of dialog acts: [0086] Affirm—user has agreed to what the bot asked (typically a Yes/No question) [0087-0094] Negate—user has disagreed to what the bot asked (typically a Yes/No question). Yu also discloses having a “sequence” for the questions and answers being evaluated: Yu – See FIG. 1 – overall framework – question-document pairs from each embedding cluster; read documents to predict an answer; See page 5, section 3.2.2 – Clustering-based prompts – large language model presents different n question-document pairs as in-context demonstrations for generating documents to a given question; Q is set of questions in training split; generate document d for each question Q; See page 7, Section 4.2.1 – shorten training time; using 10 documents; see page 9, section 4.3.2 – improvement in open-domain QA performance is due to the fact that correct answers are included more frequently in the generated text; Recall@K is the most commonly used metric in existing works to measure the retrieval performance, which computes the percentage of top-K retrieved or generated documents that contain any possible answer at least once; To improve coverage, we propose GENREAD with clustering, where we include examples in the prompt from different clusters of the training data to elicit more diverse generations.) It would have been obvious to combine Tiwari and Yu for the same reasons as claim 1 above.
In addition, Tiwari discloses a number of documents, articles, manuals, and indexes in its knowledge base (See par 25, 27) and performing testing, training, and different searches (See par 38, FIG. 4-5). Yu improves upon Tiwari by disclosing “pipeline” for the information retrieval (See page 2, 3rd paragraph). Concerning claim 3, Tiwari and Yu disclose: The apparatus of claim 2, wherein the evaluation engine generates evaluation questions from titles of content of the information resource (Tiwari – see par 26 - knowledge base corpus 106 is accessed from multiple sources. An example type of document may expect title, description, tags, category, and subcategory fields; see par 39 - A bot is then created 420 and trained 422 using various models associated with intent identification and knowledge base ranking. An automated testing process 424 is used to measure the accuracy of the bot. Blind testing 426 is performed to evaluate the accuracy of the bot without knowing the test set. If the blind testing 426 results are not satisfactory, the knowledge base weights are tuned 428 by changing the knowledge base ranking model weights to improve accuracy and relevancy. In some embodiments, the systems and methods include a machine learning algorithm that optimizes and tunes the weights for various feature scores (e.g., text score, vector score, title similarity, utterance similarity, etc.) to combine those score features. If, after the tuning 428, the results are not satisfactory, method 400 adds utterances and tags to the articles 430 to improve accuracy; see par 69 - given an article in a knowledge base consisting of a title and a description, the goal of the utterance generation process is to generate different utterances that potentially correspond to users' utterances with that particular knowledge base article).
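The Affirm/Negate dialog acts Tiwari describes for navigating a decision tree (pars 85-94) amount to mapping a free-form user reply onto a small act inventory. A toy keyword-based sketch follows; the keyword sets are illustrative assumptions, and Tiwari's actual classifier is trained on utterances rather than rule-based:

```python
import re

AFFIRM = {"yes", "yeah", "yep", "sure", "correct", "okay", "ok"}
NEGATE = {"no", "nope", "nah", "never", "incorrect"}

def classify_dialog_act(utterance: str) -> str:
    """Map a free-form reply to a dialog act (Affirm/Negate/Other) used
    to choose the next branch of a knowledge base decision tree."""
    tokens = set(re.sub(r"[^\w\s]", " ", utterance.lower()).split())
    if tokens & AFFIRM:
        return "Affirm"
    if tokens & NEGATE:
        return "Negate"
    return "Other"
```

A reply like "Yes, please" would route the bot down the affirmative branch of a Yes/No node.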
Concerning claim 4, Tiwari and Yu disclose: The apparatus of claim 2, wherein the evaluation engine generates evaluation questions from summaries of the content of the information resource ([0277, 0286] as published – information resource examples are “historic ticket or knowledge-based articles” Tiwari – see par 47 - For example, during the index time, if the systems and methods find the article to be too large, they automatically create a summary of the paragraph using an extractive summarizer. The summarizer picks the salient sentences from the large number of sentences and creates a summary. The summary is then stored back to the index. During the query time, if the systems and methods find the description to be too large, and if summarization is enabled in the bot, the systems and methods return just the summary from the index; see par 50 - Using relevant utterances as features in answering questions has shown to improve both the precision and recall for retrieving the right answer by a conversational bot. Therefore, utterance generation has become an important problem with the goal of generating relevant utterances (e.g., sentences or phrases) from a knowledge base article that consists of a title and a description; see par 50 - The systems and methods discussed herein 1) use extractive summarization to extract important sentences from the description, 2) use multiple paraphrasing techniques to generate a diverse set of paraphrases of the title and summary sentences, and 3) select good candidate paraphrases with the help of a candidate selection algorithm; see par 69 - the method proposed for utterance generation uses paraphrase generation and extractive summarization techniques to generate utterances. Paraphrase generation is used to generate multiple paraphrases of the title of an article, whereas extractive summarization is used to select the relevant sentences from the description of the article.).
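The extractive summarizer Tiwari invokes at index time (par 47) "picks the salient sentences ... and creates a summary." A minimal frequency-based sketch of that idea follows; the salience heuristic (average corpus word frequency per sentence) is an assumption, since Tiwari does not specify the scoring measure:

```python
import re
from collections import Counter

def extractive_summary(text: str, k: int = 2) -> str:
    """Pick the k most salient sentences from an article description,
    scoring each sentence by the frequency of its words across the whole
    text, and return them in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freqs = Counter(w for s in sentences for w in re.findall(r"\w+", s.lower()))
    def score(s):
        words = re.findall(r"\w+", s.lower())
        return sum(freqs[w] for w in words) / (len(words) or 1)
    top = sorted(sentences, key=score, reverse=True)[:k]
    return " ".join(s for s in sentences if s in top)  # keep original order
```

The resulting summary would then be stored back to the index and returned at query time in place of an oversized description.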
Concerning claim 5, Tiwari and Yu disclose: The apparatus of claim 2, where evaluation engine generates evaluation questions from question-answer pairs in historic customer tickets (Tiwari – see par 25 - The knowledge base corpus 106 includes, for example, operating manuals, user manuals, frequently asked questions and answers; see par 39 - For each category of articles [disclosing questions], the method automatically creates intents and adds important phrases from the articles as utterances for the intents [disclosing answers]. A bot is then created 420 and trained 422 using various models associated with intent identification and knowledge base ranking. In some embodiments, if a message does not return the right answer from the bot, then the systems and methods relabel the correct intent and knowledge base article for the message; An automated testing process 424 is used to measure the accuracy of the bot. Blind testing 426 is performed to evaluate the accuracy of the bot without knowing the test set. If the blind testing 426 results are not satisfactory, the knowledge base weights are tuned 428 by changing the knowledge base ranking [i.e. answers for each question] model weights to improve accuracy and relevancy; see par 50 - Using relevant utterances as features in answering questions has shown to improve both the precision and recall for retrieving the right answer by a conversational bot; see par 51 - Utterance generation is an important problem in Question-Answering, Information Retrieval, and Conversational AI Assistants; see par 82 - the systems and methods gather different sources of data from the knowledge base (the tags added for each article, the user messages that are labeled by analysts to mark the correct answers, and also click data of the users to map their queries to the correct article).
All these give the data in the format (User Query Article ID); systems and methods use each of these examples as a positive example and query the knowledge base to get the closest articles to the query as negative examples see also Yu – see FIG. 1 – leverages question-document pairs from each cluster; see page 5, section 3.2.2, 1st paragraph - randomly selects n question-document pairs (n = 5 in Figure 1) from each cluster. Lastly, a large language model presents the different n question-document pairs as in-context demonstrations for generating documents to a given question; 4th paragraph - By conditioning on different sampled in-context demonstrations collected from different clusters, the large language model has been biased for different perspectives. Although these different perspectives exist in a latent manner, we empirically show it works well in practice, by comparing it with sampling methods, diverse human prompts (Figure 2 and Table 2) and randomly sampling n pairs from the entire dataset). It would have been obvious to combine Tiwari and Yu for the same reasons as claim 1 and claim 2 above. Concerning claim 7, Tiwari and Yu disclose: The apparatus of claim 2, wherein the evaluation engine evaluates overall conversation resolution over a series of evaluation questions and answers (Applicant’s specification [0279] gives example of “there may be a series of 1 to M different LLM evaluation engines 5722 each accessing an LLM and providing prompts, APIs, and tools for each respective LLM to implement a different verification test. As a few examples, different LLM engines may evaluate factual accuracy and helpfulness.” Tiwari – see par 29 - A vector space similarity search is performed by converting a query to a vector using sentence embedding during run time and comparing the query vector to the document vectors in the index (computed offline) to find the most relevant document to the query. 
see par 33, 39 - An automated testing module 220 measures the accuracy of a particular bot and works with a tuning module 222 and a tagging module 224 to improve the accuracy and relevancy of the bot. see par 52 - It is important that a conversational assistant understands various paraphrases and utterances that could be used in asking the same question. Using relevant utterances as features in a question-answering system has shown to improve the accuracy both in terms of precision and recall to retrieve the right answer. see par 42 - The method 500 continues by determining 520 whether a relevance score for each article is above a confidence threshold level. In some embodiments, the confidence threshold level is determined by a precision/recall accuracy measure. For example, for a set of messages (for various thresholds), the number of correct responses from the bot are measured. Based on the number of correct responses, the right confidence threshold is determined; see par 79 - the bot may not have all the information to answer the question. The systems and methods described herein allow the bot to navigate the system (based on entities) to find the right answer by asking the right question. The systems then get the entities/metadata from the query and also get them for each article. If they all match, then the method continues. However, if they don't match, the systems and methods find the difference in the entities and get the priority from the missing entities and generate a question for the entity. The systems and methods then prompt the user with this question. The process is repeated until a valid article is identified; Yu discloses the limitations based on broadest reasonable interpretation in light of the specification – see page 5, Section 4, Experiments - An answer is considered correct if and only if its normalized form has a match in the acceptable answer list.
We also employ Recall@K (R@K) as an intermediate evaluation metric, measured as the percentage of top-K retrieved or generated documents that contain the answer; see page 9, Section 4.3.2 - The improvement in open-domain QA performance is due to the fact that correct answers are included more frequently in the generated text; Recall@K is the most commonly used metric in existing works to measure the retrieval performance, which computes the percentage of top-K retrieved or generated documents that contain any possible answer at least once). It would have been obvious to combine Tiwari and Yu for the same reasons as claim 1 above. Concerning claim 8, Tiwari and Yu disclose: The apparatus of claim 2, further comprising a true solve rate detector to generate a true solve rate that is an estimate of a percentage of useful answers generated by the autonomous AI chatbot based at least in part on helpfulness and factual accuracy (Tiwari – see par 29 - A vector space similarity search is performed by converting a query to a vector using sentence embedding during run time and comparing the query vector to the document vectors in the index (computed offline) to find the most relevant document to the query. see par 33 - An automated testing module 220 measures the accuracy of a particular bot and works with a tuning module 222 and a tagging module 224 to improve the accuracy and relevancy of the bot. See par 42 - The method 500 continues by determining 520 whether a relevance score for each article is above a confidence threshold level. In some embodiments, the confidence threshold level is determined by a precision/recall accuracy measure. For example, for a set of messages (for various thresholds), the number of correct responses from the bot are measured.
Based on the number of correct responses, the right confidence threshold is determined; see par 52 - It is important that a conversational assistant understands various paraphrases and utterances that could be used in asking the same question. Using relevant utterances as features in a question-answering system has shown to improve the accuracy both in terms of precision and recall to retrieve the right answer; see par 39 - The systems and methods may also retrain the intent classification and knowledge base ranking algorithm to return the correct answer. An automated testing process 424 is used to measure the accuracy of the bot. Blind testing 426 is performed to evaluate the accuracy of the bot without knowing the test set. If the blind testing 426 results are not satisfactory, the knowledge base weights are tuned 428 by changing the knowledge base ranking model weights to improve accuracy and relevancy. In some embodiments, the systems and methods include a machine learning algorithm that optimizes and tunes the weights for various feature scores (e.g., text score, vector score, title similarity, utterance similarity, etc.) to combine those score features; Yu discloses the limitations based on broadest reasonable interpretation in light of the specification – see page 5, Section 4, Experiments - An answer is considered correct if and only if its normalized form has a match in the acceptable answer list. We also employ Recall@K (R@K) as an intermediate evaluation metric, measured as the percentage of top-K retrieved or generated documents that contain the answer; see page 9, Section 4.3.2 - The improvement in open-domain QA performance is due to the fact that correct answers are included more frequently in the generated text; Recall@K is the most commonly used metric in existing works to measure the retrieval performance, which computes the percentage of top-K retrieved or generated documents that contain any possible answer at least once).
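Claim 8's "true solve rate" and Yu's Recall@K, both cited above, reduce to simple counting once per-answer judgments exist. A hedged sketch follows: the `helpful`/`accurate` flags would in practice come from evaluation engines or the blind-testing process, and plain substring matching stands in for Yu's answer-containment check:

```python
def true_solve_rate(evaluations):
    """Percentage of answers judged both helpful and factually accurate,
    per claim 8's framing of an estimate of useful answers."""
    if not evaluations:
        return 0.0
    solved = sum(1 for e in evaluations if e["helpful"] and e["accurate"])
    return 100.0 * solved / len(evaluations)

def recall_at_k(ranked_docs, acceptable_answers, k):
    """Yu's Recall@K: do any of the top-K documents contain any
    acceptable answer at least once?"""
    return any(a.lower() in d.lower()
               for d in ranked_docs[:k] for a in acceptable_answers)
```

For instance, two true solves out of four evaluated answers yields a true solve rate of 50.0.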
It would have been obvious to combine Tiwari and Yu for the same reasons as claim 1 above. Concerning claim 9, Tiwari and Yu disclose: The apparatus of claim 2, further comprising a database of knowledge base articles wherein the information resource comprises a knowledge base of articles and the evaluation engine generates evaluation questions from the knowledge base of articles (Tiwari – see par 25 - the knowledge base corpus 106 is accessed from websites, databases, and any other data sources; see par 39 - A bot is then created 420 and trained 422 using various models associated with intent identification and knowledge base ranking. In some embodiments, if a message does not return the right answer from the bot, then the systems and methods relabel the correct intent and knowledge base article for the message. The systems and methods may also retrain the intent classification and knowledge base ranking algorithm to return the correct answer. An automated testing process 424 is used to measure the accuracy of the bot. Blind testing 426 is performed to evaluate the accuracy of the bot without knowing the test set. If the blind testing 426 results are not satisfactory, the knowledge base weights are tuned 428 by changing the knowledge base ranking model weights to improve accuracy and relevancy. see par 73 - The systems and methods also perform question generation (using syntactic rules based on dependency parsing) from the summary sentences. For example, from the sentence “If you want to disconnect your phone and use it again later, simply touch Disconnect on the Bluetooth settings screen”, the systems and methods generate relevant questions such as “How can I disconnect my phone?”, “How do I disconnect my phone?”, and “What is the procedure to disconnect my phone?”.
see par 79 - This is accomplished by getting the right articles for the query based on Elastic Search+Vector Search+Re-ranker. The systems then get the entities/metadata from the query and also get them for each article. If they all match, then the method continues. However, if they don't match, the systems and methods find the difference in the entities and get the priority from the missing entities and generate a question for the entity. The systems and methods then prompt the user with this question. The process is repeated until a valid article is identified). Concerning claim 10, Tiwari and Yu disclose: The apparatus of claim 2, further comprising a database of historical customer support tickets, wherein the evaluation engine generates questions based at least in part on questions in historic customer support tickets (Tiwari – see par 25 - The knowledge base corpus 106 includes, for example, operating manuals, user manuals, frequently asked questions and answers; see par 39 - Method 400 continues with an automatic creation of intents 418. For each category of articles, the method automatically creates intents and adds important phrases from the articles as utterances for the intents. A bot is then created 420 and trained 422 using various models associated with intent identification and knowledge base ranking; In some embodiments, if a message does not return the right answer from the bot, then the systems and methods relabel the correct intent and knowledge base article for the message. The systems and methods may also retrain the intent classification and knowledge base ranking algorithm to return the correct answer. An automated testing process 424 is used to measure the accuracy of the bot. Blind testing 426 is performed to evaluate the accuracy of the bot without knowing the test set. 
If the blind testing 426 results are not satisfactory, the knowledge base weights are tuned 428 by changing the knowledge base ranking model weights to improve accuracy and relevancy; see par 82 - the systems and methods gather different sources of data from the knowledge base (the tags added for each article, the user messages that are labeled by analysts to mark the correct answers, and also click data of the users to map their queries to the correct article). All these give the data in the format (User Query Article ID). The systems and methods use each of these examples as a positive example and query the knowledge base to get the closest articles to the query as negative examples.). Concerning claim 11, Tiwari and Yu disclose: The apparatus of claim 2, wherein the evaluation engine evaluates the AI chatbot over a sequence of questions corresponding to a conversation with a customer (Tiwari – see par 36 - method 400 accesses multiple data sources or knowledge bases and creates a conversational bot that can answer questions related to the data received from the multiple data sources or knowledge bases; see par 73 - The systems and methods also perform question generation (using syntactic rules based on dependency parsing) from the summary sentences. For example, from the sentence “If you want to disconnect your phone and use it again later, simply touch Disconnect on the Bluetooth settings screen”, the systems and methods generate relevant questions such as “How can I disconnect my phone?”, “How do I disconnect my phone?”, and “What is the procedure to disconnect my phone?”). Claims 12-26, and 28-32 are rejected under 35 U.S.C. 103 as being unpatentable over Tiwari (US 2021/0133224) and Yu, et al, “Generate rather than retrieve: Large language models are strong context generators,” 2023, Published at International Conference on Learning Representations (ICLR) 2023, arXiv preprint arXiv:2209.10063, pages 1-10, and further in view of Koneru (US 2022/0343901). 
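Tiwari's rule-based question generation (par 73, quoted above for claims 9-11) turns an action phrase extracted from a summary sentence into several paraphrased questions. A template sketch of that example follows; the dependency-parsing extraction step is elided, and the templates and pronoun swap are illustrative assumptions rather than Tiwari's actual syntactic rules:

```python
def generate_questions(action_phrase: str) -> list[str]:
    """Generate paraphrased evaluation questions for an action phrase
    extracted from a summary sentence, mimicking Tiwari's par 73 examples
    (e.g., 'disconnect your phone' -> 'How can I disconnect my phone?')."""
    phrase = action_phrase.replace("your", "my")  # shift to the asker's voice
    return [
        f"How can I {phrase}?",
        f"How do I {phrase}?",
        f"What is the procedure to {phrase}?",
    ]
```

Questions produced this way could then serve as utterances for intents or as evaluation questions against the knowledge base article they came from.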
Concerning independent claim 12, Tiwari and Yu disclose: An apparatus for responding to a customer service ticket (Tiwari – see par 35 - Run time processing system 300 also includes a message processing module 308 that receives and manages the processing of messages (e.g., questions) from one or more users. A user identification module 310 determines the identity of a particular user and an intent classification module 312 applies an intent classification model to a received message), comprising: a classifier trained (Tiwari [same as claim 1] – see par 28-29 - classify the intent of a particular question 114, 76 - The systems and methods then train an intent classification engine; see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) to detect a topic of a customer question and associated intents based on a taxonomy of topics (Tiwari [same as cl. 1] – see par 36-37, FIG. 4; par 40); an autonomous artificial intelligence (AI) chatbot agent (Tiwari - see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) to implement a workflow for at least one detected topic (Applicant’s specification [0150] as published states “each workflow corresponds to a custom “intent.” An example of an intent is a refund request, or a reset password request, or a very granular intent such as a very specific customer question”) Tiwari – see par 24 - The described systems and methods can access the indexed data to provide an answer to the question.
A similar approach is used for any type of data associated with any product, service, topic, issue, and the like. see par 25 - Any number of data sources 102 represent a corpus of data associated with a particular topic, product, service, issue, and the like. see par 44 - the answer a user is expecting is only a portion of a particular article or document. In these situations, the described systems and methods may highlight just the portion that is of interest to the user, rather than providing an entire section of data that contains additional details not necessary to answer the user's question; see par 85 – dialog acts for conversing with user; par 86 – classify user utterances to dialog acts (disclosing answering a specific question) in which the large language model (Yu [same as cl. 1] - page 2, 1st paragraph) Tiwari discloses “the user can provide a decision tree (if the user picks Apple, respond with this. If the user picks Orange, respond with that). This approach enables customers (e.g., system administrators) to easily configure flows and build chatbots very quickly. In the naive version of the system, the user has to exactly match what was configured by the customer. However, the end users should be able to say something similar and still be able to navigate the decision tree. To solve this problem, the described systems and methods integrate dialog acts into the knowledge base decision trees.” (See par 85). Koneru discloses: is prompted with a “natural language workflow policy and a description of available software tools and available Application Programming Interface (API) calls” (Koneru – See par 49 - the utterance may be tagged by the conversation designer using a comment or annotation, designating the utterance as requiring a certain “service”.
This certain service may then be converted to a “service node” by the developer, requiring some action to be taken, e.g., plugging to an external source using an API; see par 50 - designer can add business logic and/or rules, which can be converted into bot action nodes which include, for example, script nodes or service nodes, within the editable dialog tasks (disclosing available software tools); See par 130, FIG. 6F - the designer can provide additional business logic or rules in natural language text such as a request for API calls and other logic as shown at reference numeral 630 – “use the phone number to call API1, retrieve credit cards and their offers from the API1. Check the offers applicable to this phone number”; See FIG. 12A-12B, par 168 - FIG. 12A illustrates a sample conversation between a bot and a user. The user initiates the conversation with the intent to book a flight and the bot presents a series of prompts to fulfill the intent; par 169 – MessageID, SceneID for dialog task, such as “I want to Book flight”; See par 171 – FIG. 12A-D, use conversation designed in conversation tool 300 (FIG. 3)) to generate an interactive workflow to solve the customer question (Applicant’s specification [0150] as published states “each workflow corresponds to a custom “intent.” An example of an intent is a refund request, or a reset password request, or a very granular intent such as a very specific customer question” Koneru– See par 101, FIG. 5A - At step 525, API calls and other validations using business or other logic in a script node or a service node (e.g., a node that includes the API) can be added by the development tool. For example, the development tool can add the API calls between any of the created nodes, e.g., between an entity node and a message node. For example, the logic can be provided within a service node, or can be a script node created by the developer or the development tool. 
see par 120 - bot messages are messages sent by the bot to users as a greeting, information, answer to a user query, or request for input. See par 130, FIG. 6F - the designer can provide additional business logic or rules in natural language text such as a request for API calls and other logic as shown at reference numeral 630 – “use the phone number to call API1, retrieve credit cards and their offers from the API1. Check the offers applicable to this phone number…”). Tiwari, Yu, and Koneru disclose: an evaluation engine (Tiwari - see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102 ) accessing at least one large language model (See Yu above – page 2; page 1, Abstract - large language model) to generate evaluation questions and evaluate answers, wherein in a test mode the evaluation engine generates a sequence of evaluation questions classified by the classifier and answered by the AI chatbot (Tiwari – see par 35 - Run time processing system 300 also includes a message processing module 308 that receives and manages the processing of messages (e.g., questions) from one or more users. A user identification module 310 determines the identity of a particular user and an intent classification module 312 applies an intent classification model to a received message; see par 40 - FIG. 
5 is a process diagram depicting an embodiment of a method 500 for processing messages (e.g., questions) received from one or more users; The method 500 continues as an intent classification model is applied 508 to identify the category for the knowledge base (e.g., the category associated with the user message 502), with the evaluation engine implementing reasoning logic to evaluate the sequence of answers generated by the AI chatbot agent (Tiwari –see par 39 - A bot is then created 420 and trained 422 using various models associated with intent identification and knowledge base ranking. In some embodiments, if a message does not return the right answer from the bot, then the systems and methods relabel the correct intent and knowledge base article for the message. The systems and methods may also retrain the intent classification and knowledge base ranking algorithm to return the correct answer. An automated testing process 424 is used to measure the accuracy of the bot. Blind testing 426 is performed to evaluate the accuracy of the bot without knowing the test set; see par 85 - the user can provide a decision tree (if the user picks Apple, respond with this. If the user picks Orange, respond with that). This approach enables customers (e.g., system administrators) to easily configure flows and build chatbots very quickly. Dialog Acts are types of speech acts that serve common actions with respect to navigating a decision tree. The following are examples of dialog acts: [0086] Affirm—user has agreed to what the bot asked (typically a Yes/No question) [0087-0094] Negate—user has disagreed to what the bot asked (typically a Yes/No question). Yu and Koneru also disclose having a “sequence” for the questions and answers being evaluated: Yu – See FIG. 
1 – overall framework – question-document pairs from each embedding cluster; read documents to predict an answer; See page 5, section 3.2.2 – Clustering-based prompts – large language model presents different n question-document pairs as in-context demonstrations for generating documents to a given question; Q is set of questions in training split; generate document d for each question Q; See page 7, Section 4.2.1 – shorten training time; using 10 documents; see page 9, section 4.3.2 – improvement in open-domain QA performance is due to the fact that correct answers are included more frequently in the generated text; Recall@K is the most commonly used metric in existing works to measure the retrieval performance, which computes the percentage of top-K retrieved or generated documents that contain any possible answer at least once; To improve coverage, we propose GENREAD with clustering, where we include examples in the prompt from different clusters of the training data to elicit more diverse generations; See also Koneru – see par 37 - the Intelligent Design and Development Platform can incorporate artificial intelligence (AI) capabilities, e.g., natural language processing (NLP), machine learning processing, and rules engines, with the integration of other functionality to design and develop very granular conversations in bot applications; see par 130 - A logic can then be converted into a service node or script node in the developer tool. For example, a script node can include code written by the developer to execute some action requested by the designer; whereas, a service node can include APIs in order to retrieve information from an external source, which can then be passed to the script node or other node for execution; see par 156 - selectable entity extractions 622 may include “Evaluate unused text and text used for extracting entities from previous utterances”; the selectable entity extractions 622 can be used for additional training in AI processes.) 
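The Recall@K metric that Yu describes above (the percentage of questions for which any acceptable answer string appears in the top-K retrieved or generated documents) can be sketched in a few lines. This is an illustrative sketch only; the function names are hypothetical and do not come from any cited reference:

```python
def recall_at_k(ranked_docs, answers, k):
    """Per-question hit: 1 if any of the top-k documents contains
    any acceptable answer string at least once, else 0."""
    return int(any(ans.lower() in doc.lower()
                   for doc in ranked_docs[:k]
                   for ans in answers))

def corpus_recall_at_k(examples, k):
    """Percentage of questions whose top-k documents contain a
    correct answer at least once."""
    hits = [recall_at_k(docs, answers, k) for docs, answers in examples]
    return 100.0 * sum(hits) / len(hits)
```

A substring match is a common simplification; real evaluations typically normalize text (casing, punctuation, articles) before matching.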
It would have been obvious to combine Tiwari and Yu for the same reasons as claim 1 and claim 2 above. In addition, Tiwari, Yu, and Koneru are analogous art as they are directed to answering questions using chatbots/Question answering (QA) (see Tiwari Abstract, par 51; Yu Abstract; Koneru Abstract, par 79). Tiwari discloses “the user can provide a decision tree (if the user picks Apple, respond with this. If the user picks Orange, respond with that). This approach enables customers (e.g., system administrators) to easily configure flows and build chatbots very quickly. In the naive version of the system, the user has to exactly match what was configured by the customer. However, the end users should be able to say something similar and still be able to navigate the decision tree. To solve this problem, the described systems and methods integrate dialog acts into the knowledge base decision trees.” (See par 85). Tiwari discloses communicating with data sources to answer questions (See par 25). Yu discloses using a “large language model”. Koneru improves upon Tiwari and Yu by disclosing having a natural language workflow policy/text for how business logic will work when a bot is conversing with a user (See par 130, 168-169, 171). One of ordinary skill in the art would be motivated to further include having “natural language” policy/logic to efficiently improve upon the chatbot, knowledge base with “large articles” (See par 47) and “dialog acts” placed in knowledge base decision trees in Tiwari and the “large language model” for analyzing text for answers in Yu. 
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the conversational bot that answers questions using knowledge bases in Tiwari (See abstract, par 36) to further use a large language model as disclosed in Yu, to further include natural language workflow policy along with APIs and software that is available as part of the answer as disclosed in Koneru, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success. Concerning independent claim 19, Tiwari, Yu, and Koneru disclose: An apparatus for responding to a customer service ticket (Tiwari – see par 35 - Run time processing system 300 also includes a message processing module 308 that receives and manages the processing of messages (e.g., questions) from one or more users. A user identification module 310 determines the identity of a particular user and an intent classification module 312 applies an intent classification model to a received message) comprising: a classifier trained (Tiwari [same as claim 1, 12]– see par 28-29 - classify the intent of a particular question 114, 76 - The systems and methods then train an intent classification engine; see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) to detect a topic of a customer question based on a taxonomy of topics (Tiwari [same as cl. 1] – see par 36-37, FIG. 
4 ; par 40); an autonomous artificial intelligence (AI) chatbot agent (Tiwari - see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) using a large language model (Yu [same as cl. 1] - page 2, 1st paragraph) to generate an answer to solve a customer information question for a first set of topics (Tiwari – [same as cl.1 ] - see par 23, 51) by triggering an information retrieval pipeline (for pipeline - NPL-Yu [same as claim 2]– see page 2, 3rd paragraph - In contrast to the retrieve-then-read pipeline, our method is essentially a generate-then-read pipeline; see page 5, Section 3.2.2 “increase knowledge coverage in generated documents... It first clusters the representations of a set of documents into K classes (K = 2 in Figure 1). Next, it randomly selects n question-document pairs (n = 5 in Figure 1) from each cluster. Lastly, a large language model presents the different n question-document pairs as in-context demonstrations for generating documents to a given question. In this way, large language models are based on different distributions of examples, hence resulting in generated documents covering different perspectives) to access an information resource to answer the customer question (Tiwari [same as cl. 2] – see par 25 – any number of data sources 102 represent a corpus of data associated with a particular topic, product, service, issue, and the like; Example data sources 102 include a knowledge base corpus 106, ... 
The knowledge base corpus 106 includes, for example, operating manuals, user manuals, frequently asked questions and answers, articles, product support documents, catalogs, and the like; see par 27; see par 38); the autonomous artificial intelligence (AI) chatbot agent (Tiwari - see par 105 - programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1100, and are executed by processor(s) 1102) for a second set of topics implementing a workflow for at least one detected topic (Tiwari – see par 24 - The described systems and methods can access the indexed data to provide an answer to the question. A similar approach is used for any type of data associated with any product, service, topic, issue, and the like [disclosing multiple topics]. see par 25 - Any number of data sources 102 represent a corpus of data associated with a particular topic, product, service, issue, and the like. see par 44 - the answer a user is expecting is only a portion of a particular article or document. In these situations, the described systems and methods may highlight just the portion that is of interest to the user, rather than providing an entire section of data that contains additional details not necessary to answer the user's question; see par 85 – dialog acts for conversing with user; par 86 – classify user utterances to dialog acts (disclosing answer a specific question) in which the large language model (Yu [same as cl. 1] - page 2, 1st paragraph) is prompted with a natural language workflow policy and a description of available software tools and available Application Programming Interface (API) calls to generate an interactive workflow to solve the customer question (Koneru [as cl. 12]– See par 101 – API calls… using business logic; par 130, FIG. 
6 - the designer can provide additional business logic or rules in natural language text such as a request for API calls and other logic as shown at reference numeral 630 – “use the phone number to call API1, retrieve credit cards and their offers from the API1. Check the offers applicable to this phone number; See FIG. 12A-12B, par 168 - FIG. 12A illustrates a sample conversation between a bot and a user. The user initiates the conversation with the intent to book a flight and the bot presents a series of prompts to fulfill the intent; par 169 – MessageID, SceneID for dialog task, such as “I want to Book flight”; See par 171 – FIG. 12A-D, use conversation designed in conversation tool 300 (FIG. 3))); and an evaluation engine accessing at least one large language model to generate evaluation questions and evaluate answers, wherein in a test mode the evaluation engine generates a sequence of evaluation questions classified by the classifier and answered by the AI chatbot, with the evaluation engine implementing reasoning logic to evaluate the sequence of answers generated by the AI chatbot agent (Tiwari – [same as cl. 12] – par 105 – for computing device executing program from storage; Yu – page 2 – large language model; Tiwari – see par 35 - Run time processing system 300 also includes a message processing module 308 that receives and manages the processing of messages (e.g., questions) from one or more users. A user identification module 310 determines the identity of a particular user and an intent classification module 312 applies an intent classification model to a received message; see par 39-40, 85-94; for “sequence” – Yu – FIG. 1, page 5, section 3.2.2; page 7, Section 4.2.1; page 9, section 4.3.2; Koneru par 37, 130, 156). It would have been obvious to combine Tiwari and Yu for the same reasons as claim 1, 2, and 12 above. 
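Yu's clustering-based prompting (section 3.2.2, cited in the claim 12 and 19 mappings above) samples n question-document pairs from each embedding cluster as in-context demonstrations, so the generations cover different perspectives of the training data. A minimal sketch, assuming the clusters have already been computed and using hypothetical names throughout:

```python
import random

def build_cluster_prompts(clusters, target_question, n=2, seed=0):
    """For each cluster of (question, document) training pairs, sample
    n pairs as in-context demonstrations and append the target
    question, producing one prompt per cluster."""
    rng = random.Random(seed)  # seeded so sampling is reproducible
    prompts = []
    for pairs in clusters:
        demos = rng.sample(pairs, min(n, len(pairs)))
        parts = [f"Question: {q}\nDocument: {d}" for q, d in demos]
        parts.append(f"Question: {target_question}\nDocument:")
        prompts.append("\n\n".join(parts))
    return prompts
```

Each prompt would then be sent to the large language model to generate one candidate document per cluster; the clustering step itself (embedding and k-means over question-document pairs) is omitted here.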
Concerning claims 13 and 20, Tiwari discloses: The apparatus of claim 12, wherein the evaluation engine evaluates an accuracy in which an intent is detected by the intent detector in an incoming question (Tiwari – see par 29 - Run time 104 also includes vector space intent classification to a best category 120 which attempts to classify the intent of a particular question 114. A text and vector space similarity search within a category 122 includes a text similarity search and/or a vector space similarity search. A text similarity search includes traditional information retrieval methods to search for the presence of words (or synonyms) in a given query. see par 33 - A bot training module 218 trains bots using various models to identify the intent of a message or information. An automated testing module 220 measures the accuracy of a particular bot and works with a tuning module 222 and a tagging module 224 to improve the accuracy and relevancy of the bot.). Concerning claims 14 and 21, Tiwari and Yu and Koneru disclose: The apparatus of claim 12, wherein the evaluation engine evaluates the appropriateness of the workflow implemented by the AI chatbot ([0150 as published “ a support manager may configure specific workflows. In one implementation, a support manager can configure workflows in Solve, where each workflow corresponds to a custom “intent.” An example of an intent is a refund request, or a reset password request, or a very granular intent such as a very specific customer question.” Tiwari – see par 29 - A vector space similarity search is performed by converting a query to a vector using sentence embedding during run time and comparing the query vector to the document vectors in the index (computed offline) to find the most relevant document to the query; see par 42 - The method 500 continues by determining 520 whether a relevance score for each article is above a confidence threshold level. 
In some embodiments, the confidence threshold level is determined by a precision/recall accuracy measure. For example, for a set of messages (for various thresholds), the number of correct responses from the bot are measured; see also Koneru – see par 27 - the intelligent design and development platform allows the designer to design various scenes that are representative of the actual end-user conversations with the bot. Scenes can be shared with other teams for collaborative development, and can also be presented as prototypes to the business owners for receiving feedback. The design conversations use simple text messages, carousels (e.g., a pattern used to display multiple items in a horizontally scrollable portion of a graphical user interface (GUI) and lists), with elaborate conversations using linked messages across the multiple paths as a unified flow.) It would have been obvious to combine Tiwari and Yu for the same reasons as claim 1, 2, and 12 above. Concerning claim 15, Tiwari and Yu and Koneru disclose: The apparatus of claim 12, wherein the evaluation engine evaluates the AI chatbot over a sequence of questions corresponding to a conversation with a customer (Tiwari – see par 36 - method 400 accesses multiple data sources or knowledge bases and creates a conversational bot that can answer questions related to the data received from the multiple data sources or knowledge bases; see par 73 - The systems and methods also perform question generation (using syntactic rules based on dependency parsing) from the summary sentences. For example, from the sentence “If you want to disconnect your phone and use it again later, simply touch Disconnect on the Bluetooth settings screen”, the systems and methods generate relevant questions such as “How can I disconnect my phone?”, “How do I disconnect my phone?”, and “What is the procedure to disconnect my phone?”). It would have been obvious to combine Tiwari and Yu for the same reasons as claim 1, 2, and 12 above. 
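Tiwari's question generation from summary sentences (par 73, quoted above) uses syntactic rules based on dependency parsing. The following toy stand-in swaps a regex for the dependency parse and reproduces the "disconnect my phone" example from the quote; the pattern, templates, and names are illustrative only, not Tiwari's actual method:

```python
import re

TEMPLATES = ["How can I {}?", "How do I {}?", "What is the procedure to {}?"]

def generate_questions(sentence):
    """Toy question generation: pull the action out of an
    'If you want to ...' clause (a regex stand-in for the
    dependency parse) and slot it into question templates."""
    m = re.match(r"If you want to (.+?)(?: and .+?)?,", sentence)
    if not m:
        return []
    action = m.group(1).replace("your", "my")
    return [t.format(action) for t in TEMPLATES]
```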
Concerning claims 16 and 22, Tiwari and Yu and Koneru disclose: The apparatus of claim 12, wherein the workflow policy comprises at least one natural language sentence (Koneru– See par 130, FIG. 6F - the designer can provide additional business logic or rules in natural language text such as a request for API calls and other logic as shown at reference numeral 630 – “use the phone number to call API1, retrieve credit cards and their offers from the API1. Check the offers applicable to this phone number”). It would have been obvious to combine Tiwari, Yu, and Koneru for the same reasons as claim 1, 2, and 19 above. Concerning claims 17 and 23, Tiwari and Yu and Koneru disclose: The apparatus of claim 12, wherein the large language model is prompted with conversation information associated with a customer ticket (Tiwari see par 53 - systems and methods discussed herein address the problem of utterance generation in the context of a conversational virtual assistant and question-answering; see par 79 - In a knowledge base system, some conversations or dialogs are one-shot (e.g., the user asks a question and the bot responds with an article or document). In other situations, the bot may not have all the information to answer the question. The systems and methods described herein allow the bot to navigate the system (based on entities) to find the right answer by asking the right question; see Yu for finding correct answers for open-domain questions (See page 10, page 1)), prompted with the workflow policy, and prompted with information on applicable software tools for the workflow policy (Koneru– See par 49 - the utterance may be tagged by the conversation designer using a comment or annotation, designating the utterance as requiring a certain “service”. 
This certain service may then be converted to a “service node” by the developer, requiring some action to be taken, e.g., plugging to an external source using an API; see par 50 - designer can add business logic and/or rules, which can be converted into bot action nodes which include, for example, script nodes or service nodes, within the editable dialog tasks (disclosing available software tools); See par 130, FIG. 6F - the designer can provide additional business logic or rules in natural language text such as a request for API calls and other logic as shown at reference numeral 630 – “use the phone number to call API1, retrieve credit cards and their offers from the API1. Check the offers applicable to this phone number”; See FIG. 12A-12B, par 168, 169, 171 - FIG. 12A illustrates a sample conversation between a bot and a user. The user initiates the conversation with the intent to book a flight and the bot presents a series of prompts to fulfill the intent). It would have been obvious to combine Tiwari and Yu and Koneru for the same reasons as claim 12 and claim 19 above. Concerning claim 18, Tiwari discloses looking at relevance scores and number of correct responses from a bot (See par 42) and considers words which are “not” stopwords (See par 59) and generates candidate paraphrases (See par 61). Yu discloses: The apparatus of claim 17, wherein the large language model is further prompted with guard rail prompts (Yu – see page 10, Ethics statement – “Previous work has shown various forms of bias, such as racial and gender bias, in large language models like GPT-3, even after explicit efforts to reduce toxic language (Chan, 2022); ethical solutions, future work includes… aligning language models with user intent to generate less biased contents and fewer fabricated facts; see page 21, Table 15 – showing case studies of hallucination errors). It would have been obvious to combine Tiwari and Yu for the same reasons as claim 1 above. 
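Claims 12, 17, and 18 recite prompting the large language model with a natural language workflow policy, descriptions of available tools and API calls, ticket conversation information, and guard rail prompts. A minimal sketch of assembling such a prompt, with every name and section label hypothetical (nothing here is drawn from the application or the cited references):

```python
def build_agent_prompt(policy, tools, conversation, guard_rails):
    """Assemble one LLM prompt from a natural-language workflow
    policy, descriptions of available tools/API calls, the ticket
    conversation so far, and guard-rail instructions."""
    tool_lines = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    convo = "\n".join(f"{speaker}: {text}" for speaker, text in conversation)
    return (
        f"Guard rails:\n{guard_rails}\n\n"
        f"Workflow policy:\n{policy}\n\n"
        f"Available tools and API calls:\n{tool_lines}\n\n"
        f"Conversation so far:\n{convo}\n\n"
        "Agent:"
    )
```

Placing the guard-rail instructions first is one common convention; the claims do not dictate any particular ordering.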
Yu also improves upon looking for stopwords in Tiwari by further stating that a solution for addressing user desires (e.g., less bias, fewer fabricated facts) is to align the language models with user intent. Concerning claim 24, this recites similar limitations as claim 3 above. Claim 24 is rejected over Tiwari and Yu for the same reasons. Concerning claim 25, this recites similar limitations as claim 4 above. Claim 25 is rejected over Tiwari and Yu for the same reasons. Concerning claim 26, this recites similar limitations as claim 5 above. Claim 26 is rejected over Tiwari and Yu for the same reasons. Concerning claim 28, this recites similar limitations as claim 7 above. Claim 28 is rejected over Tiwari and Yu for the same reasons. Concerning claim 29, this recites similar limitations as claim 8 above. Claim 29 is rejected over Tiwari and Yu for the same reasons. Concerning claim 30, this recites similar limitations as claim 9 above. Claim 30 is rejected over Tiwari and Yu for the same reasons. Concerning claim 31, this recites similar limitations as claim 10 above. Claim 31 is rejected over Tiwari and Yu for the same reasons. Concerning claim 32, this recites similar limitations as claim 11 above. Claim 32 is rejected over Tiwari and Yu for the same reasons. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Tiwari (US 2021/0133224) and Yu, et al, “Generate rather than retrieve: Large language models are strong context generators,” 2023, Published at International Conference on Learning Representations (ICLR) 2023, arXiv preprint arXiv:2209.10063, pages 1-10, as applied to claims 1-5, and 7-11 above, and further in view of Werner (US 2021/0406479). 
Concerning claim 6, Tiwari and Yu disclose: The apparatus of claim 2 wherein the evaluation engine generates evaluation questions that are … questions based on the content of the information resource (Tiwari – see par 29 - Run time 104 also includes vector space intent classification to a best category 120 which attempts to classify the intent of a particular question 114. see par 39, FIG. 4 – automatic creation of intents 418; For each category of articles (disclosing information resource), the method automatically creates intents and adds important phrases from the articles as utterances for the intents. A bot is then created 420 and trained 422 using various models associated with intent identification and knowledge base ranking. In some embodiments, if a message does not return the right answer from the bot, then the systems and methods relabel the correct intent (category/type/kind of question) and knowledge base article for the message. The systems and methods may also retrain the intent classification and knowledge base ranking algorithm to return the correct answer). Tiwari discloses testing articles for different labeled intents (See FIG. 4). Yu discloses conducting experiments and generating “sentences/documents” (See pages 4-5) for answering questions. Werner discloses generating “synthetic questions” as best understood: The apparatus of claim 2 wherein the evaluation engine generates evaluation questions that are “synthetic” questions based on the content of the information resource (Werner – see par 42 - FIG. 4 illustrates, by way of example, a diagram of an embodiment of a system 400 for generating synthetic question and answer pairs. The synthetic question and answer pairs can be used to supplement data from a closed domain corpus 440, such as to provide more training data. The system 400 as illustrated includes the closed domain corpus 440. Questions can be generated based on the domain corpus 440. 
The questions can be expressly from the domain corpus 440 or derived from the domain corpus 440. The questions can include one or more unstructured questions 442, structured questions 444, or semi-structured questions 446.)). Tiwari, Yu, and Werner are analogous art as they are directed to answering questions using chatbots/Question answering (QA) (see Tiwari Abstract, par 51; Yu Abstract; Werner Abstract, par 32). Tiwari discloses testing articles for different labeled intents (See FIG. 4). Yu discloses conducting experiments and generating “sentences/documents” (See pages 4-5) for answering questions. Werner improves upon Tiwari and Yu by disclosing having synthetic questions generated from a domain. One of ordinary skill in the art would be motivated to further include having “synthetic questions” generated from the domain to efficiently improve upon the chatbot, knowledge base with “large articles” (See par 47) and intent/question classification in Tiwari, where it can be used in the testing process of FIG. 4, and the “large language model” for analyzing text for answers in Yu. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the conversational bot that answers questions using knowledge bases in Tiwari (See abstract, par 36) to further use a large language model as disclosed in Yu, to further include synthetic questions generated from a domain as disclosed in Werner, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success. Claim 27 is rejected under 35 U.S.C. 
103 as being unpatentable over Tiwari (US 2021/0133224) and Yu, et al, “Generate rather than retrieve: Large language models are strong context generators,” 2023, Published at International Conference on Learning Representations (ICLR) 2023, arXiv preprint arXiv:2209.10063, pages 1-10, and Koneru (US 2022/0343901) as applied to claims 12-26 and 28-32 above, and further in view of Werner (US 2021/0406479). Concerning claim 27, this claim is the same as claim 6, but depends from claim 19. Claim 19 also uses the Koneru reference. Claim 27 is rejected for the same reasons as above in claim 6, based on the disclosure of Werner. Tiwari, Yu, Koneru, and Werner are analogous art as they are directed to answering questions using chatbots/Question answering (QA) (see Tiwari Abstract, par 51; Yu Abstract; Koneru Abstract, par 79; Werner Abstract, par 32). Tiwari discloses testing articles for different labeled intents (See FIG. 4). Yu discloses conducting experiments and generating “sentences/documents” (See pages 4-5) for answering questions. Koneru discloses automatically linking nodes within the conversation to create a fluid conversation (See par 55-56) and automatically create editable dialog tasks from respective scenes (See par 83) and “a bot utterance added in the scene and tagged as a question may be added as a bot prompt in an entity node” (See par 84). Werner improves upon Tiwari and Koneru and Yu by disclosing having synthetic questions generated from a domain. One of ordinary skill in the art would be motivated to further include having “synthetic questions” generated from the domain to efficiently improve upon the chatbot, knowledge base with “large articles” (See par 47) and intent/question classification in Tiwari, where it can be used in the testing process of FIG. 4, and the “large language model” for analyzing text for answers in Yu. 
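Werner's synthetic question-and-answer pair generation from a closed domain corpus (par 42) can be illustrated with a toy that derives questions from simple declarative sentences. The pattern below is purely illustrative and far simpler than Werner's system 400; no names come from the reference:

```python
import re

def synthesize_qa_pairs(corpus_sentences):
    """Toy synthetic QA generation from a closed-domain corpus:
    turn simple 'X is Y.' statements into (question, answer)
    pairs that can supplement training or evaluation data."""
    pairs = []
    for sent in corpus_sentences:
        m = re.match(r"(The [\w ]+?|[A-Z][\w ]+?) is (.+?)\.$", sent)
        if m:
            subject, rest = m.group(1), m.group(2)
            pairs.append((f"What is {subject.lower()}?", rest))
    return pairs
```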
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the conversational bot that answers questions using knowledge bases in Tiwari (See abstract, par 36) to further use a large language model as disclosed in Yu, to further include natural language workflow policy along with APIs and software that is available as part of the answer as disclosed in Koneru, and to further include synthetic questions generated from a domain as disclosed in Werner, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable and there is a reasonable expectation of success. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN R GOLDBERG whose telephone number is (571)270-7949. The examiner can normally be reached 8:30 AM - 4:30 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anita Coupe can be reached at 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /IVAN R GOLDBERG/Primary Examiner, Art Unit 3619

Prosecution Timeline

Oct 31, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596970
SYSTEM AND METHOD FOR INTERMODAL FACILITY MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12591826
SYSTEM FOR CREATING AND MANAGING ENTERPRISE USER WORKFLOWS
2y 5m to grant Granted Mar 31, 2026
Patent 12586020
DETERMINING IMPACTS OF WORK ITEMS ON REPOSITORIES
2y 5m to grant Granted Mar 24, 2026
Patent 12579493
SYSTEMS AND METHODS FOR CLIENT INTAKE AND MANAGEMENT USING HIERARCHICAL CONFLICT ANALYSIS
2y 5m to grant Granted Mar 17, 2026
Patent 12555055
CENTRALIZED ORCHESTRATION OF WORKFLOW COMPONENT EXECUTIONS ACROSS SOFTWARE SERVICES
2y 5m to grant Granted Feb 17, 2026
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
35%
Grant Probability
72%
With Interview (+36.9%)
4y 8m
Median Time to Grant
Low
PTA Risk
Based on 365 resolved cases by this examiner. Grant probability derived from career allow rate.
