DETAILED ACTION

This Office Action is in response to Applicant's Communication received on 12/15/2022. Claims 1-20 are presented for examination. Claims 1 and 13 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of prior-filed provisional Application No. 63/320,041, filed on 03/15/2022, is acknowledged by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

Claims 1-12 are directed to a method and Claims 13-20 are directed to a system. Thus, the claims fall within the statutory categories (process and machine) and are eligible under Step 1.

Step 2A Prong 1
Independent Claims

Claim 1 recites: a method for generating training questions comprising: identifying a structure for generating an input; formulating the input according to the structure. These limitations encompass a mental process (observing, evaluating, and judging that can be practically performed in the human mind or by a human using pen and paper), such as a user coming up with training questions, making a judgment to decide on a structure for an input, and writing down an input based on the decided-upon structure.
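For orientation only, the recited method steps can be sketched in Python with hypothetical stub "models"; the stub names, the prompt wording, and the topic are illustrative assumptions and are not part of the claims or the cited art.

```python
# Illustrative sketch only; the stub "models" and prompt wording are
# hypothetical and do not reflect the claims or any cited reference.

def identify_structure():
    # "identifying a structure for generating an input"
    return "Write a training question about {topic}."

def formulate_input(structure, topic):
    # "formulating the input according to the structure"
    return structure.format(topic=topic)

def first_model(prompt):
    # stub standing in for the first machine learning model
    topic = prompt.split("about ")[-1].rstrip(".")
    return f"What is {topic}?"

def train_second_model(outputs):
    # stub: "training" the second model on the generated output
    return {"training_data": list(outputs)}

structure = identify_structure()
inp = formulate_input(structure, "wire transfers")
output = first_model(inp)              # providing the input / receiving an output
second = train_second_model([output])  # training the second model on the output
```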
Claim 13 recites: identify a structure for generating an input; formulate the input according to the structure. These limitations encompass a mental process (observing, evaluating, and judging that can be practically performed in the human mind or by a human using pen and paper), such as a user making a judgment to decide on a structure for an input and writing down an input based on the decided-upon structure.

Accordingly, these claims recite an abstract idea that falls under the "mental process" grouping.

Step 2A Prong 2
Independent Claims
Additional elements

Claim 1: providing the input to a first machine learning model; receiving an output from the first machine learning model based on the input - these limitations amount to insignificant extra-solution activity of mere data gathering and outputting (see MPEP § 2106.05(g)).

training a second machine learning model based on the output - this is a high-level training step that merely recites the idea of training a machine learning model using some data without providing any details of the training or the model (see MPEP § 2106.05(f)).

Claim 13: a system for generating training questions comprising: a processor; and a memory, wherein the memory includes instructions that, when executed by the processor, cause the processor to - these limitations are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).

provide the input to a first machine learning model; receive an output from the first machine learning model based on the input - these limitations amount to insignificant extra-solution activity of mere data gathering and outputting (see MPEP § 2106.05(g)).
train a second machine learning model based on the output - this is a high-level training step that merely recites the idea of training a machine learning model using some data without providing any details of the training or the model (see MPEP § 2106.05(f)).

Accordingly, these additional elements do not integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. These claims are directed to the abstract idea.

Step 2B
Independent Claims
Additional elements

Claim 1: providing the input to a first machine learning model; receiving an output from the first machine learning model based on the input - these limitations amount to insignificant extra-solution activity of mere data gathering and outputting, which is well-understood, routine, and conventional activity (see MPEP § 2106.05(d), "receiving/transmitting data", "presenting offers").

training a second machine learning model based on the output - this is a high-level training step that merely recites the idea of training a machine learning model using some data without providing any details of the training or the model (see MPEP § 2106.05(f)).

Claim 13: a system for generating training questions comprising: a processor; and a memory, wherein the memory includes instructions that, when executed by the processor, cause the processor to - these limitations are recited at a high level of generality such that they amount to no more than using generic computer components to apply the judicial exception (see MPEP § 2106.05(f)).

provide the input to a first machine learning model; receive an output from the first machine learning model based on the input - these limitations amount to insignificant extra-solution activity of mere data gathering and outputting, which is well-understood, routine, and conventional activity (see MPEP § 2106.05(d), "receiving/transmitting data", "presenting offers").
train a second machine learning model based on the output - this is a high-level training step that merely recites the idea of training a machine learning model using some data without providing any details of the training or the model (see MPEP § 2106.05(f)).

Accordingly, these additional elements do not amount to significantly more than the judicial exception. As such, these claims are patent ineligible.

Step 2A Prong 1
Dependent Claims

Claims 2 and 14: the structure is a prompt structure for generating a prompt, wherein the prompt identifies a task for generating the output - these limitations merely further the mental process of deciding on a structure for an input by specifying the structure.

Claims 4 and 16: the prompt structure includes preset wording and a placeholder for entering at least an answer title or an answer content for generating the question - these limitations merely further the mental process of deciding on a structure for an input by specifying the structure.

Claims 5 and 17: the identifying of the structure includes identifying the structure based on a predicted success of the first machine learning model in generating the output - these limitations encompass a mental process, such as observing the predicted success of a model and deciding on the structure based on the predicted success.

Claims 7 and 19: filtering the output based on a predicted characteristic of the output - these limitations encompass a mental process, such as using evaluation and judgment to filter data based on certain characteristics.

Claim 9: selecting a hyperparameter for the first machine learning model for optimizing performance of the first machine learning model - this limitation encompasses a mental process, such as using evaluation and judgment to identify and select a hyperparameter for the model.
Claim 10: generating a prompt according to the structure; computing a metric for the first training question; and altering a parameter of the first machine learning model based on the metric - these limitations encompass a mental process and mathematical concepts, such as writing down a prompt based on the decided-upon structure, calculating a metric, and changing a parameter of a model based on the calculated metric.

Claim 11: the computing of the metric includes: computing the metric based on the feedback - this limitation encompasses a mental process and mathematical concepts, such as calculating the metric based on received feedback.

Claim 12: including the first training question in a second prompt for generating a second training question - this limitation encompasses a mental process, such as observing the first question and writing down a second prompt that includes the first question.

Thus, the claims recite the abstract idea.

Step 2A Prong 2
Dependent Claims
Additional elements

Claims 3 and 15: the output is a question generated based on the prompt - these limitations amount to merely limiting the data type to be manipulated (see MPEP § 2106.05(h)).

Claims 6 and 18: the first machine learning model is a generative language model - these limitations amount to merely limiting the data source or data type to be manipulated (see MPEP § 2106.05(h)).

Claims 7 and 19: the training of the second machine learning model is based on the filtered output - this is a high-level training step that merely recites the idea of training a machine learning model using filtered data without providing the details of the training or the model (see MPEP § 2106.05(f)).

Claims 8 and 20: the first machine learning model and the second machine learning model are different models - these limitations amount to merely limiting the data source or data type to be manipulated (see MPEP § 2106.05(h)).
Claim 10: providing the prompt to the first machine learning model; receiving a first training question from the first machine learning model based on the prompt - these limitations amount to insignificant extra-solution activity of mere data gathering and outputting (see MPEP § 2106.05(g)).

Claim 11: receiving feedback about the first training question - this limitation amounts to insignificant extra-solution activity of mere data gathering and outputting (see MPEP § 2106.05(g)).

Accordingly, these additional elements do not integrate the judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to the abstract idea.

Step 2B
Dependent Claims
Additional elements

Claims 3 and 15: the output is a question generated based on the prompt - these limitations amount to merely limiting the data type to be manipulated (see MPEP § 2106.05(h)).

Claims 6 and 18: the first machine learning model is a generative language model - these limitations amount to merely limiting the data source or data type to be manipulated (see MPEP § 2106.05(h)).

Claims 7 and 19: the training of the second machine learning model is based on the filtered output - this is a high-level training step that merely recites the idea of training a machine learning model using filtered data without providing the details of the training or the model (see MPEP § 2106.05(f)).

Claims 8 and 20: the first machine learning model and the second machine learning model are different models - these limitations amount to merely limiting the data source or data type to be manipulated (see MPEP § 2106.05(h)).
Claim 10: providing the prompt to the first machine learning model; receiving a first training question from the first machine learning model based on the prompt - these limitations amount to insignificant extra-solution activity of mere data gathering and outputting, which is well-understood, routine, and conventional activity (see MPEP § 2106.05(d), "receiving/transmitting data").

Claim 11: receiving feedback about the first training question - this limitation amounts to insignificant extra-solution activity of mere data gathering and outputting, which is well-understood, routine, and conventional activity (see MPEP § 2106.05(d), "receiving/transmitting data").

Accordingly, these additional elements do not amount to significantly more than the judicial exception. As such, the claims are patent ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless -

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7-8, 13-15, and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Rodrigo Cavalin et al. (US 2023/0092274 A1, hereinafter Rodrigo).
Regarding Claim 1, Rodrigo teaches a method for generating training questions ([0050] a method of generating training data examples; [0054] the example utterances generated by the trained machine learning model can include a question associated with the received topic) comprising:

identifying a structure for generating an input ([0050] a topic can be received for building a new intent; [0051] a database of chatbot training data can be searched for a candidate intent having meta-knowledge similar to the received topic; utterances associated with the candidate intent are extracted or retrieved from the database; [0053] the received topic and the extracted utterances can be input to a trained machine learning model - thus, identifying a structure/topic for generating input to the model);

formulating the input according to the structure ([0050] a topic can be received for building a new intent; [0051] a database of chatbot training data can be searched for a candidate intent having meta-knowledge similar to the received topic; utterances associated with the candidate intent are extracted or retrieved from the database; [0053] the received topic and the extracted utterances can be input to a trained machine learning model - thus, formulating input including the topic and utterances for the model);

providing the input to a first machine learning model ([0053] the received topic and the extracted utterances can be input to a trained machine learning model (i.e., first machine learning model));

receiving an output from the first machine learning model based on the input ([0053] the received topic and the extracted utterances can be input to a trained machine learning model; the trained machine learning model generates example utterances for the new intent (i.e., output)); and

training a second machine learning model based on the output ([0053] the trained machine learning model generates example utterances for the new intent (i.e., output); [0054] the example utterances
generated by the trained machine learning model can include a question associated with the received topic; an answer can be identified for responding to the question associated with the topic; training the chatbot (i.e., second machine learning model) using the new intent including the example utterances and the answer (i.e., based on the output); [0023] the chatbot can be programmed using artificial intelligence techniques such as machine learning (ML) and natural language processing (NLP); the chatbot can be trained using one or more machine learning techniques using training data, which can include intents).

As to dependent Claim 2, Rodrigo teaches all the limitations of Claim 1. Rodrigo further teaches wherein the structure is a prompt structure for generating a prompt ([0050] a topic can be received for building a new intent; [0051] a database of chatbot training data can be searched for a candidate intent having meta-knowledge similar to the received topic; utterances associated with the candidate intent are extracted or retrieved from the database; [0053] the received topic and the extracted utterances can be input to a trained machine learning model; the trained machine learning model generates example utterances for the new intent - thus, the structure/topic is for generating a prompt/extracting utterances based on the topic (i.e., prompt structure) to provide to the machine learning model), wherein the prompt identifies a task for generating the output ([0050] a topic can be received for building a new intent; [0051] a database of chatbot training data can be searched for a candidate intent having meta-knowledge similar to the received topic; utterances associated with the candidate intent are extracted or retrieved from the database; [0053] the received topic and the extracted utterances can be input to a trained machine learning model; the trained machine learning model generates example utterances for the new intent; examples of the generated example
utterances for the new intent are shown in FIG. 4 at 406; [0046] when creating a new intent "Transfer Z", the topics "Transfer" and "Z" can be used as inputs to generate samples from either Intent 1 or 2; the generated examples are shown at 406; topical metadata 408 shows topics or meta-knowledge (underlined text) that can be extracted automatically from example utterances. See fig. 4 - it shows that the prompt/extracted utterances 408 based on the topic identifies the task/transfer for generating the output/example utterances 406 for the new intent, Transfer Z).

As to dependent Claim 3, Rodrigo teaches all the limitations of Claim 2. Rodrigo further teaches wherein the output is a question generated based on the prompt ([0053] the received topic and the extracted utterances can be input to a trained machine learning model; the trained machine learning model generates example utterances for the new intent (i.e., output); [0054] the example utterances generated by the trained machine learning model can include a question associated with the received topic (i.e., a question generated based on the prompt/extracted utterances)).

As to dependent Claim 7, Rodrigo teaches all the limitations of Claim 1. Rodrigo further teaches wherein filtering the output based on a predicted characteristic of the output ([0036] the generated intents, example utterances/questions, and answers can be filtered to remove any noise; [0044] the resulting generated examples are evaluated for noise removal; those that do not meet minimal values in metrics such as perplexity can be discarded (i.e., filtering the output based on a predicted characteristic of the output)), wherein the training of the second machine learning model is based on the filtered output ([0044] the resulting generated examples are evaluated, e.g., for noise removal.
For example, those that do not meet minimal values in metrics such as perplexity can be discarded; [0053] the trained machine learning model generates example utterances for the new intent; [0054] the example utterances generated by the trained machine learning model can include a question associated with the received topic; an answer can be identified for responding to the question associated with the topic; training the chatbot (i.e., second machine learning model) using the new intent including the example utterances and the answer).

As to dependent Claim 8, Rodrigo teaches all the limitations of Claim 1. Rodrigo further teaches wherein the first machine learning model and the second machine learning model are different models ([0053] the received topic and the extracted utterances can be input to a trained machine learning model (i.e., first machine learning model); the trained machine learning model generates example utterances for the new intent; [0054] the example utterances generated by the trained machine learning model can include a question associated with the received topic; an answer can be identified for responding to the question associated with the topic; training the chatbot (i.e., second machine learning model) using the new intent including the example utterances and the answer).

Claims 13-15 and 19-20 are system claims corresponding to the method claims 1-3 and 7-8 above and are therefore rejected for the same reasons.
Rodrigo further teaches a system for generating training questions comprising: a processor and a memory, wherein the memory includes instructions that, when executed by the processor, cause the processor to perform the method ([0055] components of a system that can create new intents for chatbot training; one or more hardware processors; a memory device stores data and/or processor instructions for implementing various functionalities associated with the methods and/or systems described; the processors execute computer instructions stored in memory).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-5 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Rodrigo in view of McRitchie et al. (US 2021/0081615 A1, hereinafter McRitchie).

As to dependent Claim 4, Rodrigo teaches all the limitations of Claim 3.
Rodrigo further teaches wherein the prompt structure includes preset wording for generating the question ([0050] a topic can be received for building a new intent; [0051] a database of chatbot training data can be searched for a candidate intent having meta-knowledge similar to the received topic; utterances associated with the candidate intent are extracted or retrieved from the database; [0053] the received topic and the extracted utterances can be input to a trained machine learning model; the trained machine learning model generates example utterances for the new intent; [0046] when creating a new intent "Transfer Z", the topics "Transfer" and "Z" can be used as inputs to generate samples from either Intent 1 or 2; the generated examples are shown at 406; topical metadata 408 shows topics or meta-knowledge (underlined text) that can be extracted automatically from example utterances. See fig. 4 - thus, the prompt structure/topic includes preset wording/"transfer" in 404 for generating the questions/406).

However, Rodrigo fails to expressly teach wherein the prompt structure includes a placeholder for entering at least an answer title or an answer content.

In the same field of endeavor, McRitchie teaches wherein the prompt structure includes a placeholder for entering at least an answer title or an answer content ([0137] FIG. 5 illustrates example utterances 510 and a generalized template 520 for a bot intent; [0183] input is generated for the chatbot that has been configured with the identified bot intent; the generated input includes an indication of which template was deemed the matching template; the chatbot determines a response that depends on which template matched, such as starting a conversation in a different dialog flow state depending on the matching template. See fig. 5 - it shows placeholders for entering an answer title/content in 522, 524).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated wherein the prompt structure includes a placeholder for entering at least an answer title or an answer content, as taught by McRitchie, into Rodrigo. Doing so would be desirable because it would improve the ability of templates to match a greater range of user utterances (McRitchie [0175]) and allow the user input to be handled accurately and efficiently by the chatbot (McRitchie [0024]).

As to dependent Claim 5, Rodrigo teaches all the limitations of Claim 1. However, Rodrigo fails to expressly teach wherein the identifying of the structure includes identifying the structure based on a predicted success of the first machine learning model in generating the output.

In the same field of endeavor, McRitchie teaches wherein the identifying of the structure includes identifying the structure based on a predicted success of the first machine learning model in generating the output ([0063] the trained model/trained skill bot (equivalent to the first machine learning model) can be used to handle and respond to user utterances; [0024] classifying an intent (i.e., structure) of a user input based on matching an input utterance to a template; [0027] classification involves comparing an input utterance to a set of templates, where each template is associated with a particular chatbot (i.e., a template associated with the first machine learning model); [0046] a skill bot can be trained to infer an intent for an utterance; [0182] the candidate templates are ranked by score to identify the intent associated with the highest-scoring template as being the bot intent to which the user utterance corresponds (i.e., predicted success of the first machine learning model/skill bot associated with the template); [0183] input is generated for the chatbot that has been configured with the identified bot intent; the input generated
includes an indication of which template was deemed the matching template; the chatbot determines a response (i.e., generating output) that depends on which template matched, such as starting a conversation in a different dialog flow state depending on the matching template).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated wherein the identifying of the structure includes identifying the structure based on a predicted success of the first machine learning model in generating the output, as taught by McRitchie, into Rodrigo. Doing so would be desirable because it would improve the ability of templates to match a greater range of user utterances (McRitchie [0175]) and allow the user input to be handled accurately and efficiently by the chatbot (McRitchie [0024]).

Claims 16-17 are system claims corresponding to the method claims 4-5 above and are therefore rejected for the same reasons.

Claims 6, 10, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Rodrigo in view of Klein et al. (US 2022/0067486 A1, hereinafter Klein).

As to dependent Claim 6, Rodrigo teaches all the limitations of Claim 1. However, Rodrigo fails to expressly teach wherein the first machine learning model is a generative language model.

In the same field of endeavor, Klein teaches wherein the first machine learning model is a generative language model ([0036] the first machine learning model performing the question generation task may be implemented using a transformer decoder network, such as a generative pretrained transformer (GPT); [0042] the first machine learning model implemented using a transformer decoder network, such as a generative pretrained transformer, may be a traditional language model).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated wherein the first machine learning model is a generative language model, as taught by Klein, into Rodrigo. Doing so would be desirable because it would allow for predicting, based on one or more previous words in a word sequence, one or more subsequent words (Klein [0042]), and, given the natural sequential ordering of the language model, it would provide an efficient sampling strategy for the question generation task (Klein [0043]).

As to dependent Claim 10, Rodrigo teaches all the limitations of Claim 1. Rodrigo further teaches wherein generating a prompt according to the structure ([0050] a topic can be received for building a new intent; [0051] a database of chatbot training data can be searched for a candidate intent having meta-knowledge similar to the received topic; utterances associated with the candidate intent are extracted or retrieved from the database; [0053] the received topic and the extracted utterances can be input to a trained machine learning model - thus, generating a prompt/extracting utterances based on the structure/topic to provide to the machine learning model); providing the prompt to the first machine learning model ([0050] a topic can be received for building a new intent; [0051] a database of chatbot training data can be searched for a candidate intent having meta-knowledge similar to the received topic; utterances associated with the candidate intent are extracted or retrieved from the database; [0053] the received topic and the extracted utterances can be input to a trained machine learning model - thus, providing the extracted utterances based on the topic (i.e., prompt) to the machine learning model); receiving a first training question from the first machine learning model based on the prompt ([0053] the received topic and the extracted utterances can be input to a trained machine
learning model; the trained machine learning model generates example utterances for the new intent (i.e., based on the prompt/extracted utterances); [0054] the example utterances generated by the trained machine learning model can include a question associated with the received topic); computing a metric for the first training question ([0053] the received topic and the extracted utterances can be input to a trained machine learning model; the trained machine learning model generates example utterances for the new intent; [0054] the example utterances generated by the trained machine learning model can include a question associated with the received topic; [0044] the resulting generated examples are evaluated for noise removal; those that do not meet minimal values in metrics such as perplexity can be discarded).

However, Rodrigo fails to expressly teach wherein altering a parameter of the first machine learning model based on the metric.

In the same field of endeavor, Klein teaches wherein altering a parameter of the first machine learning model based on the metric ([0036] the first machine learning model performing the question generation task may be implemented using a transformer decoder network, such as a generative pretrained transformer (GPT); [0038] the collaborative training of the first machine learning model and the second machine learning model includes adjusting the weights (i.e., altering a parameter of the first machine learning model) applied by the first machine learning model when generating questions in order to minimize the errors (i.e., metric) present in the answers output by the second machine learning model; [0050] during each round of optimization, the weights of the first machine learning model may be adjusted).
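As a rough illustration of the kind of metric-driven parameter adjustment at issue here, the following toy Python sketch alters a single parameter of a stand-in "first model" to reduce an error metric; the function names, the metric, and the update rule are hypothetical simplifications, not Klein's actual model or training procedure.

```python
# Toy sketch only: a hypothetical stand-in for metric-driven parameter
# adjustment; not Klein's actual model, metric, or update rule.

def generate_question(length_param, word):
    # stand-in "first model": length_param controls question length
    return " ".join([word] * max(1, int(length_param)))

def error_metric(question, target_len):
    # stand-in metric: distance of the question's length from a target
    return abs(len(question.split()) - target_len)

param = 5
for _ in range(10):
    q = generate_question(param, "what")
    err = error_metric(q, 3)
    if err == 0:
        break
    # alter the parameter in the direction that reduces the metric
    param += -1 if len(q.split()) > 3 else 1
```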
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated wherein altering a parameter of the first machine learning model based on the metric, as taught by Klein, into Rodrigo. Doing so would be desirable because it would minimize the errors present in the output of the machine learning model (Klein [0002]).

Claim 18 is a system claim corresponding to the method claim 6 above and is therefore rejected for the same reasons.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Rodrigo in view of Hoang et al. (US 2022/0172021 A1, hereinafter Hoang).

As to dependent Claim 9, Rodrigo teaches all the limitations of Claim 1. However, Rodrigo fails to expressly teach wherein selecting a hyperparameter for the first machine learning model for optimizing performance of the first machine learning model.

In the same field of endeavor, Hoang teaches wherein selecting a hyperparameter for the first machine learning model for optimizing performance of the first machine learning model ([0119] performance of the above-described techniques, such as the iterative and ensemble technique for determining an overall prediction and an overall confidence score of the DNN, was evaluated; the evaluations were performed for a DNN model that is hyperparameter tuned; hyperparameter tuning is a process of choosing a set of optimal hyperparameters for the DNN model (i.e., selecting a hyperparameter for the first machine learning model for optimizing performance), where a hyperparameter is a parameter whose value is used to control the learning process of the DNN model).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated wherein selecting a hyperparameter for the first machine learning model for optimizing performance of the first machine learning model, as taught by Hoang, into Rodrigo.
Doing so would be desirable because it would address the overconfidence problem associated with machine learning models that are used in chatbot systems (Hoang [0007]).

Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Rodrigo in view of Klein, and further in view of Ramnani et al. (US 2023/0068338 A1, hereinafter Ramnani).

As to dependent Claim 11, Rodrigo and Klein teach all the limitations of Claim 10. Rodrigo further teaches wherein the computing of the metric includes: receiving feedback about the first training question ([0037] the pairs of training examples, such as questions and answers representing the new intents, can be shown to the SME in the curation graphical user interface; the SME can manually validate and correct the generated content (i.e., feedback about the training question)). However, Rodrigo and Klein fail to expressly teach wherein computing the metric based on the feedback. In the same field of endeavor, Ramnani teaches wherein computing the metric based on the feedback ([0032] a test question scoring machine learning model to calculate a score (i.e., metric) based on one or more of the user's feedback). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated wherein computing the metric based on the feedback, as taught by Ramnani, into Rodrigo and Klein. Doing so would be desirable because it would provide the ability to converse back and forth with users and to automatically generate questions tailored to specific subjects being taught or to specific users' histories (Ramnani [0002]).

As to dependent Claim 12, Rodrigo, Klein, and Ramnani teach all the limitations of Claim 11.
Rodrigo further teaches wherein including the first training question in a second prompt for generating a second training question ([0050] a topic can be received for building a new intent; [0051] a database of chatbot training data can be searched for a candidate intent having meta-knowledge similar to the received topic; utterances associated with the candidate intent are extracted or retrieved from the database; [0053] the received topic and the extracted utterances can be input to a trained machine learning model; the trained machine learning model generates example utterances for the new intent; [0054] the example utterances generated by the trained machine learning model can include a question associated with the received topic (including the first training question); [0036] the generated intents, example utterances, and answers can be stored in a repository or database - thus, the generated example utterances/first training question are stored in the database and included in a second prompt/extracted utterances for generating example utterances/a second training question for the next new intent).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 CFR § 1.111(c) to consider these references fully when responding to this action. Bobbarjung et al. (US 2019/0124020 A1) teaches: intents are the basic building blocks of a chatbot; each chatbot has one or more intents; each intent has the following components: Intent Phrases: this is an optional set of utterances/phrases that enables the intent identification engine to determine the best intent; Actions: a set of actions to be performed after the intent is triggered; an intent can be either an "entry" intent or a "follow-on/conversation" intent; the intent phrases are needed only for the entry intents; the follow-on/conversation intents are invoked based on the context of the conversation; FIG.
7 illustrates an example user interface 700 for creating intents (see [0055]-[0058]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to REJI KARTHOLY, whose telephone number is (571) 272-3432. The examiner can normally be reached Monday - Thursday from 7:30 am to 3:30 pm.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at telephone number 571-272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/REJI KARTHOLY/
Primary Examiner, Art Unit 2143

/JENNIFER N WELCH/
Supervisory Patent Examiner, Art Unit 2143