Prosecution Insights
Last updated: April 19, 2026
Application No. 18/316,983

SYSTEMS AND METHODS FOR EXTRACTING MEANINGFUL PHRASES AND A CRUX OF A CONVERSATION FROM TEXT DATA

Non-Final OA (§101, §103)
Filed: May 12, 2023
Examiner: PHAKOUSONH, DARAVANH
Art Unit: 2121
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Verizon Patent and Licensing Inc.
OA Round: 1 (Non-Final)
Grant Probability: 50% (Moderate)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (1 granted / 2 resolved; -5.0% vs TC avg)
Interview Lift: +100.0% across resolved cases with interview
Avg Prosecution: 4y 0m (33 currently pending)
Total Applications: 35 (career history, across all art units)

Statute-Specific Performance

§101: 31.2% (-8.8% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 2 resolved cases.
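The headline figures above follow from simple arithmetic on the examiner's case history. A minimal sketch (the lift formula, a percentage-point difference between allowance rates with and without an interview, is an assumption about the dashboard's methodology, not a documented definition):

```python
# Illustrative computation of the examiner statistics shown above.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift in allowance rate when an interview is held."""
    return rate_with - rate_without

# 1 granted out of 2 resolved cases -> 50% career allow rate
career = allow_rate(granted=1, resolved=2)

# A +100.0% lift is consistent with a 100% rate with interview and a
# 0% rate without, which is plausible given only 2 resolved cases.
lift = interview_lift(rate_with=100.0, rate_without=0.0)

print(career)  # 50.0
print(lift)    # 100.0
```

With only two resolved cases, both figures should be read as small-sample estimates rather than stable rates.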

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

101 Subject Matter Eligibility Analysis

Step 1: Claims 1-20 are within the four statutory categories (a process, machine, manufacture or composition of matter).

Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology.

Claims 1-7 are directed to a method consisting of a series of steps, meaning that they are directed to the statutory category of process. Claims 8-20 are directed to storage mediums and processors, which are machines.

Regarding claim 1, the following claim elements are abstract ideas: preprocessing…the taxonomy data with one or more preprocessing techniques to generate preprocessed data (This is an abstract idea of a mental process. The limitation recites reviewing text and modifying it according to linguistic rules, such as removing stop-words, correcting characters, replacing abbreviations, or lemmatizing words.
A person could read transcripts, cross out common words, normalize abbreviations (e.g., replace “acct” with “account”), correct misspellings, and group different forms of the same word together. This type of rule-based text cleaning and normalization can be performed through observation and judgement in the human mind or with the aid of pen and paper. See MPEP 2106.04(a)(2)(III).); processing…the taxonomy data…to generate intents, features of each of the intents, and a taxonomy collection (This is an abstract idea of a mental process. The limitation recites generating intents (topics), generating features associated with those intents, and organizing them into a taxonomy collection based on taxonomy data that includes terms such as “delay,” “network issue,” and “address change.” A person could review domain-specific transcripts, determine relevant topics, formulate categories for those topics, identify associated phrases, and organize them into a structured classification scheme. This type of cognitive categorization and organization of information can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); processing…the taxonomy data…to generate concepts or entities associated with the intents (This is an abstract idea of a mental process. The limitation recites generating concepts or entities from taxonomy data and associating them with previously generated intents (i.e., communicative purposes of text). A person could read conversation transcripts, recognize references to specific people, accounts, or objects, and associate those entities with the relevant topic of conversation.
This type of recognition and logical association of information can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); combining…the intents, the features, the taxonomy collection, and the concepts or the entities to generate an association collection (This is an abstract idea of a mental process. The limitation recites combining previously identified intents, features, and entities into an association collection, which amounts to organizing and linking related information. A person could manually create a table or list that associates particular topics with corresponding phrases and identified entities based on observation and judgement. This type of logical organization and association can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); processing…the preprocessed data…to generate accelerated data (This is an abstract idea of a mental process. The limitation recites further analyzing text using models such as coreference resolution, dependency parsing, and summarization to generate additional structured information (“accelerated data”). A person could read text, determine when different words refer to the same entity, understand relationships between words in a sentence, and summarize the key content. This type of linguistic analysis and interpretation can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); generating…training data based on the association collection and the accelerated data (This is an abstract idea of a mental process. The limitation recites generating training data by selecting and organizing information from previously formed associations and analyzed text data.
A person could review categorized topics, associated phrases, and structured sentence information, and choose relevant examples to use for training or reference. This type of selection and organization of information, based on observation and judgement, can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); processing…the text data, the taxonomy collection, and the association collection, to determine a crux of the text data (This is an abstract idea of a mental process. The limitation recites analyzing conversation text together with previously organized topics and associations to determine a “crux,” i.e., the central issue or main idea of the conversation. A person could read a transcript, consider relevant categories and associated terms, and determine that the main issue is, for example, removing a phone from an account. This type of evaluation and summarization of information based on observation and judgement can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: receiving, by a device, taxonomy data associated with different domains (The step of “receiving” taxonomy data is merely a generic data-gathering operation that amounts to receiving or transmitting data over a network, which is well-understood, routine, and conventional activity. This limitation does not recite any specialized mechanism for obtaining the data and therefore does not provide a meaningful limitation beyond implementing the abstract idea on a generic computer.)
; preprocessing, by the device (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).); a machine learning-based feedback model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); a machine learning interpolative-based feedback model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); machine learning accelerator models (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); a machine learning model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); performing, by the device, one or more actions based on the crux of the text data (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 2, the rejection of claim 1 is incorporated herein.
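As context for the preprocessing limitations analyzed above, the rule-based text cleaning the rejection describes (stop-word removal, abbreviation replacement, lemmatization) can be sketched in a few lines. The stop-word list, abbreviation map, and suffix rules below are illustrative assumptions, not techniques disclosed in the application:

```python
# Toy sketch of rule-based text preprocessing as characterized in the
# rejection. All word lists and rules here are hypothetical.

STOP_WORDS = {"the", "a", "an", "to", "of", "and", "is"}
ABBREVIATIONS = {"acct": "account", "addr": "address"}

def lemmatize(word: str) -> str:
    """Crude suffix-stripping stand-in for a real lemmatizer."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text: str) -> list[str]:
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    tokens = [ABBREVIATIONS.get(t, t) for t in tokens]          # expand abbreviations
    tokens = [t for t in tokens if t and t not in STOP_WORDS]   # drop stop-words
    return [lemmatize(t) for t in tokens]                       # normalize word forms

print(preprocess("Remove the phone from my acct."))
# ['remove', 'phone', 'from', 'my', 'account']
```

The sketch illustrates why the examiner treats these steps as rule-based: each transformation is a fixed lookup or pattern applied token by token.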
Further, claim 2 recites the following abstract ideas: performing a stop-word removal technique on the taxonomy data to generate the preprocessed data; performing a bad character removal technique on the taxonomy data to generate the preprocessed data; performing an abbreviation regular expression technique on the taxonomy data to generate the preprocessed data; performing a placeholder replace technique on the taxonomy data to generate the preprocessed data; performing a custom noun entity technique on the taxonomy data to generate the preprocessed data; or performing a lemmatization technique on the taxonomy data to generate the preprocessed data (These are abstract ideas of “mental processes.” The limitations recite reviewing text and modifying it according to linguistic rules, such as removing commonly occurring words, deleting unwanted characters, expanding abbreviations, replacing placeholders with actual information, identifying proper nouns, or grouping different inflected forms of the same word together. A person could read transcripts, cross out common words, correct or remove stray characters, replace abbreviations (e.g., “acct” with “account”), identify proper names, and normalize word forms through observation and judgement. This type of rule-based text cleaning and normalization can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.).

Regarding claim 3, the rejection of claim 1 is incorporated herein.
Further, claim 3 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: processing the preprocessed data, with a coreference resolution model, to generate a first portion of the accelerated data; processing the preprocessed data, with a semantic and dependency parsing model, to generate a second portion of the accelerated data; and processing the preprocessed data, with a summarization model, to generate a third portion of the accelerated data (These limitations recite using generic computer components and models to process data and generate output data. The models are described at a high level of generality and no specific improvement to computer functionality is recited. Generating output data using such models is well-understood, routine, and conventional activity. These steps merely instruct the use of generic computer tools to carry out the abstract idea and therefore do not amount to significantly more than the judicial exception.).

Regarding claim 4, the rejection of claim 1 is incorporated herein. Further, claim 4 recites the following abstract ideas: extracting parts of speech (POS) references from the association collection and the accelerated data; extracting POS sequences from the association collection and the accelerated data; performing an association check for the POS references and the POS sequences; and generating the training data based on the POS references, the POS sequences, and performing the association check for the POS references and the POS sequences (These limitations are abstract ideas of “mental processes.” They recite extracting grammatical elements (parts of speech) from previously organized linguistic data, identifying sequences in those grammatical elements, comparing the references and sequences, and using that comparison to generate training data.
A person could review structured text, identify nouns, verbs, or adjectives, observe patterns in their order, determine whether certain patterns correspond, and select examples based on that determination. This type of grammatical identification, comparison, and selection can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.).

Regarding claim 5, the rejection of claim 1 is incorporated herein. Further, claim 5 recites the following abstract ideas: generating relevant phrases and unmatched sentences based on the association collection and the accelerated data (This is an abstract idea of a mental process. The limitation recites identifying phrases that correspond to previously organized linguistic associations and identifying sentences that do not match those associations. A person could review structured text information, determine which phrases relate to known topics or entities, and separate out sentences that do not correspond. This type of classification and selection of information based on observation and judgement can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: utilizing the relevant phrases as the training data (This limitation amounts to insignificant extra-solution activity.).

Regarding claim 6, the rejection of claim 1 is incorporated herein. Further, claim 6 recites the following abstract ideas: to identify more relevant phrases in the training data relative to other phrases in the training data (This limitation recites evaluating phrases and determining which are more relevant compared to others.
A person could review a set of phrases, apply judgement based on context or importance, and select those deemed more relevant. This type of comparative evaluation and selection of information can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: training the machine learning model (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, claim 7 recites the following abstract ideas: generating embeddings for the taxonomy data with an embeddings layer of the machine learning interpolative-based feedback model (This is an abstract idea of a mathematical concept. The limitation recites generating numerical vector representations (“embeddings”) of textual data to represent relationships between words or phrases. Generating such embeddings involves mathematical calculations that convert text into numerical form and compute relationships in vector space. Mathematical calculations and transformations of data into numerical representations fall within the mathematical concept grouping of abstract ideas.); to generate the intents, the features of each of the intents, and the taxonomy collection (This is an abstract idea of a mental process. The limitation recites determining communicative purposes (intents), identifying features associated with those purposes, and organizing them into a taxonomy collection.
A person could review conversation text, determine the purpose of the conversation, identify recurring words or phrases associated with that purpose, and group them into a structured classification. This type of categorization and organization of information, based on observation and judgement, can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: receiving the taxonomy data with an input layer of the machine learning interpolative-based feedback model (This limitation recites receiving data using a generic input layer of a machine learning model. The claim does not describe a specific improvement to the input layer or to computer functionality. Receiving data through an input layer constitutes well-understood, routine, and conventional computer activity and does not amount to significantly more than the judicial exception.); processing the embeddings, with one or more dense layers of the machine learning interpolative-based feedback model (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).); outputting the intents, the features of each of the intents, and the taxonomy collection with an output layer of the machine learning interpolative-based feedback model (The step of “outputting” data using an output layer is merely a generic computer operation that amounts to transmitting or providing data. The claims do not recite any specific improvement to the output layer or to computer functionality.
Outputting data in this manner is well-understood, routine, and conventional activity and therefore does not amount to significantly more than the judicial exception.).

Regarding claim 8, the following claim elements are abstract ideas: preprocess the taxonomy data with one or more preprocessing techniques to generate preprocessed data (This is an abstract idea of a mental process. The limitation recites reviewing text and modifying it according to linguistic rules, such as removing stop-words, correcting characters, replacing abbreviations, or lemmatizing words. A person could read transcripts, cross out common words, normalize abbreviations (e.g., replace “acct” with “account”), correct misspellings, and group different forms of the same word together. This type of rule-based text cleaning and normalization can be performed through observation and judgement in the human mind or with the aid of pen and paper. See MPEP 2106.04(a)(2)(III).); process the taxonomy data…to generate intents, features of each of the intents, and a taxonomy collection (This is an abstract idea of a mental process. The limitation recites generating intents (topics), generating features associated with those intents, and organizing them into a taxonomy collection based on taxonomy data that includes terms such as “delay,” “network issue,” and “address change.” A person could review domain-specific transcripts, determine relevant topics, formulate categories for those topics, identify associated phrases, and organize them into a structured classification scheme. This type of cognitive categorization and organization of information can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); process the taxonomy data…to generate concepts or entities associated with the intents (This is an abstract idea of a mental process.
The limitation recites generating concepts or entities from taxonomy data and associating them with previously generated intents (i.e., communicative purposes of text). A person could read conversation transcripts, recognize references to specific people, accounts, or objects, and associate those entities with the relevant topic of conversation. This type of recognition and logical association of information can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); combine the intents, the features, the taxonomy collection, and the concepts or the entities to generate an association collection (This is an abstract idea of a mental process. The limitation recites combining previously identified intents, features, and entities into an association collection, which amounts to organizing and linking related information. A person could manually create a table or list that associates particular topics with corresponding phrases and identified entities based on observation and judgement. This type of logical organization and association can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); process the preprocessed data…to generate accelerated data (This is an abstract idea of a mental process. The limitation recites further analyzing text using models such as coreference resolution, dependency parsing, and summarization to generate additional structured information (“accelerated data”). A person could read text, determine when different words refer to the same entity, understand relationships between words in a sentence, and summarize the key content.
This type of linguistic analysis and interpretation can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); process the text data, the taxonomy collection, and the association collection, to determine a crux of the text data (This is an abstract idea of a mental process. The limitation recites analyzing conversation text together with previously organized topics and associations to determine a “crux,” i.e., the central issue or main idea of the conversation. A person could read a transcript, consider relevant categories and associated terms, and determine that the main issue is, for example, removing a phone from an account. This type of evaluation and summarization of information based on observation and judgement can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: one or more processors (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); receive taxonomy data associated with different domains (The step of “receiving” taxonomy data is merely a generic computer operation that amounts to obtaining or transmitting data. The claim does not recite any specific mechanism for receiving the data or any improvement to computer functionality. Receiving data in this manner is well-understood, routine, and conventional activity and therefore does not amount to significantly more than the judicial exception.); a machine learning-based feedback model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).);
a machine learning interpolative-based feedback model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); train a machine learning model with the association collection and the accelerated data to generate a trained machine learning model (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).); receive text data associated with a chatbot, a live chat, or an interactive voice response system (The step of “receiving” text data is merely a generic computer operation that amounts to obtaining or transmitting data. The claim does not recite any specific mechanism for receiving the text data or any improvement to computer functionality. Receiving data in this manner is well-understood, routine, and conventional activity and therefore does not amount to significantly more than the judicial exception.); perform one or more actions based on the crux of the text data (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 9, the rejection of claim 8 is incorporated herein. Further, claim 9 recites the following abstract ideas: generate embeddings for the taxonomy data and the classes with an embeddings layer of the machine learning-based feedback model (This is an abstract idea of a mathematical concept and mental process. The limitation recites generating numerical representations (“embeddings”) for textual data and classes, which involves mathematical calculations that convert words or phrases into numerical vectors representing relationships among them. Such transformations of information into numerical form and computation of relationships between values are mathematical calculations that can be performed in the human mind with basic tools such as pen and paper and/or a calculator.
Accordingly, this limitation falls within the mathematical concept and mental process groupings of abstract ideas.); to generate the concepts or entities (This is an abstract idea of a mental process. The limitation recites identifying or determining concepts or entities in text information. A person could review text, recognize references to specific people, places, accounts, or objects, and determine corresponding concepts or entities based on observation and judgement. This type of recognition and categorization of information can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: receive the taxonomy data with an input layer of the machine learning-based feedback model (The step of receiving data with an input layer is a generic computer operation. The claim does not recite any specific improvement to the input layer or to computer functionality. Receiving data in this manner is well-understood, routine, and conventional activity and therefore does not amount to significantly more than the judicial exception.); receive classes associated with the different domains with a domain layer of the machine learning-based feedback model (The step of receiving classes using a domain layer is a generic computer operation. The claim does not recite any specific improvement to the domain layer or to computer functionality.
Receiving data in this manner is well-understood, routine, and conventional activity and therefore does not amount to significantly more than the judicial exception.); process the embeddings, with one or more dense layers of the machine learning-based feedback model (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).); output the concepts or entities with an output layer of the machine learning-based feedback model (The step of outputting data using an output layer is a generic computer operation. The claim does not recite any specific improvement to the output layer or to computer functionality. Outputting data in this manner is well-understood, routine, and conventional activity and therefore does not amount to significantly more than the judicial exception.).

Regarding claim 10, the rejection of claim 8 is incorporated herein. Further, claim 10 recites the following abstract ideas: combine the text data, the taxonomy collection, and the association collection to determine the crux of the text data (This is an abstract idea of a mental process. The limitation recites reviewing text together with previously organized topics and associations and determining the central issue or main idea (“crux”) of the text. A person could read a conversation, consider relevant categories and associated phrases, and determine the primary issue being discussed. This type of evaluation, synthesis, and summarization of information based on observation and judgement can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.).

Regarding claim 11, the rejection of claim 8 is incorporated herein.
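The crux determination analyzed above is characterized as matching conversation text against previously organized topics and their associated phrases. A toy sketch of that characterization (the taxonomy and the overlap-based matching rule are hypothetical illustrations, not the application's disclosed method):

```python
# Toy sketch of "crux" determination as the rejection characterizes it:
# pick the intent whose associated terms best overlap the conversation.
# All intents and term sets below are hypothetical.

ASSOCIATION_COLLECTION = {
    "remove phone from account": {"remove", "phone", "account", "line"},
    "network issue": {"network", "issue", "outage", "signal"},
    "address change": {"address", "change", "move", "relocate"},
}

def determine_crux(text: str) -> str:
    """Return the intent with the largest term overlap with the text."""
    words = set(text.lower().split())
    return max(
        ASSOCIATION_COLLECTION,
        key=lambda intent: len(ASSOCIATION_COLLECTION[intent] & words),
    )

print(determine_crux("i want to remove a phone from my account"))
# remove phone from account
```

The simplicity of this sketch mirrors the examiner's point: once the associations exist, selecting the best-matching topic is an act of comparison and judgement.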
Further, claim 11 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: provide the crux of the text data for display to a user device (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).); or perform a search for a topic based on the crux of the text data (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 12, the rejection of claim 8 is incorporated herein. Further, claim 12 recites the following abstract ideas: determine a customer journey, issue, or need based on the crux of the text data (This is an abstract idea of a mental process. The limitation recites evaluating the main idea of a conversation and determining a corresponding customer issue or need. A person could read the conversation, understand its central topic, and determine the customer’s issue or need based on judgement. This can be performed in the human mind and therefore falls under the mental process grouping of abstract ideas.); or identify a category for the text data based on the crux of the text data (This is an abstract idea of a mental process. The limitation recites determining a category for the text based on its main idea. A person could read text, understand its central topic, and classify it into a category using judgement. This can be performed in the human mind and therefore falls within the mental process grouping of abstract ideas.).

Regarding claim 13, the rejection of claim 8 is incorporated herein.
Further, claim 13 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: enable a content creator to create a document based on the crux of the text data; or retrain the machine learning model based on the crux of the text data (These limitations constitute mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 14, the rejection of claim 8 is incorporated herein. Further, claim 14 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the crux of the text data is an abstractive summarization of the text data (This limitation amounts to insignificant extra-solution activity. It merely characterizes the result of the abstract analysis as a summarization of the text data and does not impose any meaningful limitation on how the summarization is performed or improve computer functionality. Accordingly, it does not amount to significantly more than the judicial exception.).

Regarding claim 15, the following claim elements are abstract ideas: preprocess the taxonomy data with one or more preprocessing techniques to generate preprocessed data (This is an abstract idea of a mental process. The limitation recites reviewing text and modifying it according to linguistic rules, such as removing stop-words, correcting characters, replacing abbreviations, or lemmatizing words. A person could read transcripts, cross out common words, normalize abbreviations (e.g., replace “acct” with “account”), correct misspellings, and group different forms of the same word together.
This type of rule-based text cleaning and normalization can be performed through observation and judgement in the human mind or with the aid of pen and paper. See MPEP 2106.04(a)(2)(III).); process the taxonomy data…to generate intents, features of each of the intents, and a taxonomy collection (This is an abstract idea of a mental process. The limitation recites generating intents (topics), generating features associated with those intents, and organizing them into a taxonomy collection based on taxonomy data that includes terms such as “delay,” “network issue,” and “address change.” A person could review domain-specific transcripts, determine relevant topics, formulate categories for those topics, identify associated phrases, and organize them into a structured classification scheme. This type of cognitive categorization and organization of information can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); process the taxonomy data…to generate concepts or entities associated with the intents (This is an abstract idea of a mental process. The limitation recites generating concepts or entities from taxonomy data and associating them with previously generated intents (i.e., communicative purposes of text). A person could read conversation transcripts, recognize references to specific people, accounts, or objects, and associate those entities with the relevant topic of conversation. This type of recognition and logical association of information can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); combine the intents, the features, the taxonomy collection, and the concepts or the entities to generate an association collection (This is an abstract idea of a mental process.
The limitation recites combining previously identified intents, features, and entities into an association collection, which amounts to organizing and linking related information. A person could manually create a table or list that associates particular topics with corresponding phrases and identified entities based on observation and judgement. This type of logical organization and association can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); process the preprocessed data…to generate accelerated data (This is an abstract idea of a mental process. The limitation recites further analyzing text using models such as coreference resolution, dependency parsing, and summarization to generate additional structured information (“accelerated data”). A person could read text, determine when different words refer to the same entity, understand relationships between words in a sentence, and summarize the key content. This type of linguistic analysis and interpretation can be performed in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas.); combine the text data, the taxonomy collection, and the association collection to determine a crux of the text data (This is an abstract idea of a mental process. The limitation recites reviewing text together with previously organized information and determining the central issue or main idea (“crux”) of the text. A person could read the conversation, consider the related topics and associations, and determine the primary issue using judgement.
This can be performed in the human mind and therefore falls within the mental process grouping of abstract ideas.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: one or more processors (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); A non-transitory computer-readable medium storing a set of instructions (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); receive taxonomy data associated with different domains (The step of “receiving” taxonomy data is merely a generic computer operation that amounts to obtaining or transmitting data. The claim does not recite any specific mechanism for receiving the data or any improvement to computer functionality. Receiving data in this manner is well-understood, routine, and conventional activity and therefore does not amount to significantly more than the judicial exception.); a machine learning-based feedback model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); a machine learning interpolative-based feedback model (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); machine learning accelerator models (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); receive text data associated with a chatbot, a live chat, or an interactive voice response system (The step of “receiving” text data is merely a generic computer operation that amounts to obtaining or transmitting data. The claim does not recite any specific mechanism for receiving the text data or any improvement to computer functionality.
Receiving data in this manner is well-understood, routine, and conventional activity and therefore does not amount to significantly more than the judicial exception.); perform one or more actions based on the crux of the text data (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 16, the rejection of claim 15 is incorporated herein. The claim recites similar limitations corresponding to claim 2. Therefore, the same subject matter analysis that was utilized for claim 2, as described above, is equally applicable to claim 16. Therefore, claim 16 is ineligible.

Regarding claim 17, the rejection of claim 15 is incorporated herein. The claim recites similar limitations corresponding to claim 3. Therefore, the same subject matter analysis that was utilized for claim 3, as described above, is equally applicable to claim 17. Therefore, claim 17 is ineligible.

Regarding claim 18, the rejection of claim 15 is incorporated herein. The claim recites similar limitations corresponding to claim 7. Therefore, the same subject matter analysis that was utilized for claim 7, as described above, is equally applicable to claim 18. Therefore, claim 18 is ineligible.

Regarding claim 19, the rejection of claim 15 is incorporated herein. The claim recites similar limitations corresponding to claim 9. Therefore, the same subject matter analysis that was utilized for claim 9, as described above, is equally applicable to claim 19. Therefore, claim 19 is ineligible.

Regarding claim 20, the rejection of claim 15 is incorporated herein. Further, claim 20 recites the following abstract ideas: determine a customer journey, issue, or need based on the crux of the text data (This is an abstract idea of a mental process. The limitation recites evaluating the main idea of a conversation and determining a corresponding customer issue or need.
A person could read the conversation, understand its central topic, and determine the customer’s issue or need based on judgement. This can be performed in the human mind and therefore falls under the mental process grouping of abstract ideas.); or identify a category for the text data based on the crux of the text data (This is an abstract idea of a mental process. The limitation recites determining a category for the text based on its main idea. A person could read text, understand its central topic, and classify it into a category using judgement. This can be performed in the human mind and therefore falls within the mental process grouping of abstract ideas.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: provide the crux of the text data for display to a user device (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).); or perform a search for a topic based on the crux of the text data (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).); enable a content creator to create a document based on the crux of the text data; or retrain the machine learning model based on the crux of the text data (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-13, 15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Narendula (Pub. No.: US 20220229986 A1 (Filed: 2022)) in view of Cavalin et al. (Pub. No.: US 20230092274 A1 (Filed: 2021)).

Regarding claim 1, Narendula teaches the following limitations: receiving, by a device, taxonomy data associated with different domains (Narendula, paragraph [0075] “For the embodiment illustrated in FIG. 4A, the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106.
The intent/entity model 108 stores associations or relationships between particular intents and particular sample utterances…It is also presently recognized that, since the meaning associated with various intents and entities is continuously evolving within different contexts (e.g., different language evolutions per domain, per cultural setting, per client, and so forth)”); preprocessing, by the device, the taxonomy data with one or more preprocessing techniques to generate preprocessed data (Narendula, [Abstract] “In particular, taxonomy lookup sources can be compiled from suitable taxonomy source data that represents relationships between various entities within a domain of a client. These taxonomy lookup sources can extract taxonomy segmentations from utterances…The taxonomy segmentations can then be leveraged by the NLU system to perform vocabulary injection to expand the number of meaning representations in the utterance meaning model and/or the understanding model” [0079] “ The vocabulary manager 118, which may be part of the vocabulary subsystem discussed below, addresses out-of-vocabulary words and symbols that were not encountered by the NLU framework 104 during vocabulary training. For example, in certain embodiments, the vocabulary manager 118 can identify and replace synonyms and domain-specific meanings of words and acronyms within utterances analyzed by the agent automation framework 100 (e.g., based on the collection of rules 114), which can improve the performance of the NLU framework 104 to properly identify intents and entities within context-specific utterances.” – teaches preprocessing of domain/taxonomy-related data by applying techniques such as synonym replacement, domain-specific meaning replacement, out-of-vocabulary handling, and extracting taxonomy segmentations (and then performing vocabulary injection). 
These are “one or more preprocessing techniques” applied to the taxonomy-related data to produce a processed form (i.e., preprocessed data) suitable for downstream NLU, intent processing.); combining, by the device, the intents, the features, the taxonomy collection, and the concepts or the entities to generate an association collection (Narendula, paragraph [0053] “ As used herein, an “intent” refers to a desire or goal of a user which may relate to an underlying purpose of a communication, such as an utterance. As used herein, an “entity” refers to an object, subject, or some other parameterization of an intent. It is noted that, for present embodiments, certain entities are treated as parameters of a corresponding intent within an intent-entity model.” [0075] “the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106. The intent/entity model 108 stores associations or relationships between particular intents and particular sample utterances.” – teaches an intent-entity model that associates particular intents with particular entities and sample utterances, wherein entities are encoded as parameters of the corresponding intent within a structured model. Narendula further teaches storing this intent-entity model, along with utterance corpora and rules, within the taxonomy-structured database of conversational artifacts. This model inherently combines intents, features (sample utterances), taxonomy structures, and entities into a unified set of associations. 
Accordingly, Narendula discloses combining intents, features, taxonomy collections, and entities to generate an association collection, as recited.); processing, by the device, the preprocessed data, with machine learning accelerator models, to generate accelerated data (Narendula, paragraph [0084] “The shared NLU annotator 127 performs semantic parsing, grammar engineering, and so forth, of the utterance 122 based on the intent/entity model 108 and returns annotated utterance trees of the utterance 122 to the NLU predictor 128” [0085] “For the illustrated embodiment, the NLU framework 104 processes a received user utterance 122 to extract intents/entities 140 based on the intent/entity model 108” [0124] “different rule-based parsers 186 and/or ML-based parsers 188 of the structure subsystem 172 of the meaning extraction subsystem 150 parse (block 325) each of the utterances 324 to generate a multiple annotated utterance tree structures 326 for each of the utterances 324.” – teaches applying machine-learning-based parsers, including semantic parsing and grammar engineering components, to process conversational utterances and generate annotated utterance tree structures and extract intent/entity representations. These machine-learning components operate on processed utterances to produce enriched and structured semantic outputs. The annotated utterance trees and extracted intent/entity representations constitute accelerated data under the broadest reasonable interpretation, as they provide enhanced semantic structure for downstream processing.
Accordingly, Narendula discloses processing preprocessed data with machine learning accelerator models to generate accelerated data, as recited.).

However, Narendula does not teach, but Narendula in view of Cavalin does teach, the following limitations: processing, by the device, the taxonomy data, with a machine learning interpolative-based feedback model, to generate intents, features of each of the intents, and a taxonomy collection (Narendula, paragraph [0053] “An understanding model may include a vocabulary model that associates certain tokens (e.g., words or phrases) with particular word vectors, an intent-entity model, an intent model, an entity model, a taxonomy model, other models, or a combination thereof.” [0075] “For the embodiment illustrated in FIG. 4A, the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106.” Cavalin, paragraph [0004] “The method can also include inputting the received topic and the extracted utterances to a trained machine learning model, the trained machine learning model generating example utterances for the new intent.” [0006] “The method can further include training the chatbot using the new intent including the example utterances and the answer.” [0038] “The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another.
The learning module 202 learns to convert example utterances from one intent to another given the meta-knowledge associated with the intents, for example, using gradient descent algorithm.” – Narendula teaches an understanding model that includes a taxonomy model and an intent-entity model, which organizes utterances into structured intent and entity relationships stored in a database alongside a corpus of utterances. These models associate particular intents with entities and sample utterances, thereby forming a structured taxonomy collection of conversational artifacts. Cavalin teaches processing conversational data using trained machine-learning models, including neural networks trained via gradient descent, to generate new intents and associated example utterances. Cavalin further teaches using the generated utterances as training data to refine the chatbot model, thereby providing a feedback-based machine-learning cycle. Accordingly, the combined teachings disclose processing taxonomy-structured conversational data with a machine-learning, similarity-driven feedback model to generate intents, features of each intent (including associated entities and vector representations), and a structured collection of intent-utterance associations corresponding to a taxonomy collection.); processing, by the device, the taxonomy data, with a machine learning-based feedback model, to generate concepts or entities associated with the intents (Narendula, paragraph [0053] “An understanding model may include a vocabulary model that associates certain tokens (e.g., words or phrases) with particular word vectors, an intent-entity model, an intent model, an entity model, a taxonomy model, other models, or a combination thereof.” [0075] “For the embodiment illustrated in FIG. 4A, the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances.
The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106.” [0004] “A computer-implemented method…can include receiving a topic for building a new intent…searching a database…extracting utterances…inputting the received topic and the extracted utterances to a trained machine learning model, the trained machine learning model generating example utterances for the new intent.” [0006] “The method can further include training the chatbot using the new intent including the example utterances and the answer.” [0038] “The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another. The learning module 202 learns to convert example utterances from one intent to another given the meta-knowledge associated with the intents, for example, using gradient descent algorithm.” – Narendula teaches taxonomy-structured conversational data that includes an intent-entity model and an entity model, where entities are explicitly defined as parameters of corresponding intents and stored within a structured taxonomy of conversational artifacts. Cavalin teaches applying a trained machine-learning model, including neural networks trained via gradient descent, to process conversational data, generate new intent-related artifacts, and use those generated artifacts as additional training data for further refinement of the model, thereby forming a machine-learning feedback cycle. 
Accordingly, the combined teachings disclose processing taxonomy-structured data with a machine-learning-based feedback model to generate entities associated with intents, as recited.); generating, by the device, training data based on the association collection and the accelerated data (Narendula, paragraph [0053] “As used herein an “intent-entity model” refers to a model that associates particular intents with particular entities and particular sample utterances, wherein entities associated with the intent may be encoded as a parameter of the intent within the sample utterances of the model.” [0084] “The shared NLU annotator 127 performs semantic parsing, grammar engineering, and so forth, of the utterance 122 based on the intent/entity model 108 and returns annotated utterance trees of the utterance 122 to the NLU predictor 128” [0085] “the NLU framework 104 processes a received user utterance 122 to extract intents/entities 140 based on the intent/entity model 108.” Cavalin, paragraph [0004] “A computer-implemented method, in an aspect, can include receiving a topic for building a new intent on which to train a chatbot. The method can also include searching a database of chatbot training data for a candidate intent having meta-knowledge similar to the received topic. The method can also include extracting utterances associated with the candidate intent. The method can also include inputting the received topic and the extracted utterances to a trained machine learning model, the trained machine learning model generating example utterances for the new intent” [0006] “The method can further include training the chatbot using the new intent including the example utterances and the answer.” – Narendula provides the association collection in the form of an intent-entity model that links intents, entities, and sample utterances, and further provides accelerated data in the form of annotated utterance trees and extracted semantic structures.
Cavalin teaches generating example utterances for new intents by supplying extracted utterances and topic information to a trained machine-learning model, and further using the generated utterances as training data for model training. The generated example utterances constitute training data. Accordingly, it would have been obvious to generate training data based on the association collection and the accelerated data by applying Cavalin’s machine learning generation framework to the structured intent-entity associations and parsed semantic representations produced by Narendula in order to produce training examples for model training.); training, by the device, a machine learning model with the training data to generate a trained machine learning model (Cavalin, paragraph [0004] “The method can also include inputting the received topic and the extracted utterances to a trained machine learning model, the trained machine learning model generating example utterances for the new intent.” [0006] “The method can further include training the chatbot using the new intent including the example utterances and the answer.” [0038] “The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another. The learning module 202 learns to convert example utterances from one intent to another given the meta-knowledge associated with the intents, for example, using gradient descent algorithm.” – Cavalin teaches training a machine learning model using generated example utterances as training data, and further discloses neural network models trained via gradient descent. The generated example utterances are used to train and update the model, thereby producing a trained machine learning model.
Accordingly, Cavalin discloses training a machine learning model with training data to generate a trained machine learning model as required by the limitation.); receiving, by the device, text data associated with a chatbot, a live chat, or an interactive voice response system (Cavalin, paragraph [0050] “ At 802, a topic can be received for building a new intent. The new intent can be used as training data for training a chatbot. For instance, the chatbot is trained to carry on a dialog with a user and/or answer questions a user may have about a subject matter or topic.” [0051] “the database can store intent and utterances associated with that intent. Utterances, for example, can include questions, for instance, asked posed by a user to a chatbot, for the chatbot to answer.” [0052] “At 806, utterances associated with the candidate intent are extracted or retrieved from the database.” [0056] “In an aspect, the system may implement a controllable text generation, which can convert texts from some topics to others.” – teaches receiving text data associated with chatbot operation, including topics provided for building new intents and user utterances posed to a chatbot. Cavalin further discloses retrieving utterances from a database and generating new example utterances through controllable text generation. 
These topics, user questions, retrieved utterances, and generated examples all constitute text data associated with a chatbot-based conversational system.); processing, by the device, the text data, the taxonomy collection, and the association collection, with the trained machine learning model, to determine a crux of the text data; and performing, by the device, one or more actions based on the crux of the text data (Narendula, paragraph [0084] “The shared NLU annotator 127 performs semantic parsing, grammar engineering, and so forth, of the utterance 122 based on the intent/entity model 108 and returns annotated utterance trees of the utterance 122 to the NLU predictor 128…to identify matching intents from the intent/entity model 108, such that the RA/BE 102 can perform one or more actions based on the identified intents.” [0053] “As used herein, an “intent” refers to a desire or goal of a user which may relate to an underlying purpose of a communication, such as an utterance. As used herein, an “entity” refers to an object, subject, or some other parameterization of an intent. 
It is noted that, for present embodiments, certain entities are treated as parameters of a corresponding intent within an intent-entity model.” Cavalin, paragraph [0040] “At 304, the method can include finding meta-knowledge associated with the received one or more topics, for example, from a database of intents and associated utterances.” [0041] “One or more techniques such as word embedding, taxonomy, or another technique can be used to find intents with similar meta-knowledge.” [0042] “At 308, a predefined threshold is used to filter out irrelevant results…The filtered results include one or more intents and sample questions (utterances) associated with those intents.” [0043] “ At 310, using a learned or trained model, the sample questions in the filtered results are converted to utterances corresponding to the received topic…The trained model can be a Seq2seq model, a rule-based model, a generative adversarial network (GAN), a graph neural network or another neural network, a word embedding model, and/or another.” [0025] “An intent is an action the user expects the chatbot to do.” – As previously mapped, Narendula discloses the text data in the form of user utterances, and further discloses a taxonomy collection and association collection in the form of an intent-entity model that links intents, entities, and sample utterances within a structured database. Cavalin discloses training a machine learning model using example utterances and meta-knowledge to generate a trained model. Narendula teaches processing received text data using annotated utterance trees to identify matching intents. Cavalin teaches applying a trained machine learning model, including neural networks and embedding-based similarity techniques, to determine meta-knowledge and corresponding intents from textual input. 
Under the broadest reasonable interpretation, determining a matching intent and associated meta-knowledge corresponds to determining a crux of the text data, as the identified intent reflects the central semantic meaning extracted from the text. Cavalin further explains that an intent represents an action the system is expected to perform. Narendula expressly discloses that, once matching intents are identified, the reasoning agent/behavior engine performs one or more actions based on the identified intents. Accordingly, the combined teachings disclose processing the text data, taxonomy collection, and association collection with a trained machine learning model to determine a crux of the text data, and performing one or more actions based on the determined crux.).

Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Narendula and Cavalin before them, to incorporate the use of taxonomy data associated with different domains, as taught by Narendula, into the intent-generation and chatbot-training framework of Cavalin. One would have been motivated to make such a combination in order to improve the domain specificity and semantic organization of the training data used to generate and refine intent models, thereby enhancing the accuracy of conversational understanding across different contexts. This would allow more precise and reliable intent modeling by supplying Cavalin’s machine-learning system with structured, domain-segmented taxonomy data.

Regarding claim 4, Narendula in view of Cavalin teaches all the elements of claim 1; claim 4 is therefore rejected for the same reasons as those presented for claim 1.
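As a purely illustrative sketch of the claim 1 flow mapped above (an association collection linking intents, features, and entities, searched against incoming text to determine a crux), the following toy example uses identifiers invented for illustration, not taken from Narendula, Cavalin, or the application:

```python
# Hypothetical sketch: a toy "association collection" of intent records
# and a crux lookup by feature overlap. All data is invented.
from dataclasses import dataclass, field

@dataclass
class IntentRecord:
    intent: str
    features: set[str] = field(default_factory=set)  # sample phrases
    entities: set[str] = field(default_factory=set)  # associated entities

ASSOCIATION_COLLECTION = [
    IntentRecord("report_delay", {"late", "delay"}, {"order"}),
    IntentRecord("change_address", {"move", "address"}, {"account"}),
]

def determine_crux(text: str) -> str:
    """Return the intent whose features best overlap the input text."""
    words = set(text.lower().split())
    best = max(ASSOCIATION_COLLECTION, key=lambda r: len(words & r.features))
    return best.intent

print(determine_crux("my order is late again"))  # -> report_delay
```

The point of the sketch is structural: the "crux" is selected by comparing text against stored intent-feature associations, which is the matching operation the rejection attributes to the combined references.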
Narendula in view of Cavalin further teaches: extracting parts of speech (POS) references from the association collection and the accelerated data; extracting POS sequences from the association collection and the accelerated data (Narendula, paragraph [0084] “The shared NLU annotator 127 performs semantic parsing, grammar engineering, and so forth, of the utterance 122 based on the intent/entity model 108 and returns annotated utterance trees of the utterance 122 to the NLU predictor 128” [0087] “the meaning extraction subsystem 150 of the NLU framework 104 receiving the intent/entity model 108, which includes sample utterances 155 for each of the various intents/entities of the model. The meaning extraction subsystem 150 generates an understanding model 157 that includes meaning representations 158 of the sample utterances 155 of the intent/entity model 108.” – Narendula teaches performing semantic parsing and grammar engineering on both stored sample utterances and processed user utterances to generate annotated utterance trees and meaning representations. These annotated tree structures include grammatical annotations generated during parsing, from which individual parts-of-speech (POS) references are extractable. Because the annotated utterance trees represent structured and ordered grammatical relationships among tokens, ordered POS sequences are likewise extractable from both the association collection (e.g., intent/entity model and understanding model) and the accelerated data (e.g., the annotated utterance trees and meaning representations) under the broadest reasonable interpretation.); performing an association check for the POS references and the POS sequences (Narendula, paragraph [0092] “the structure subsystem 172 of the meaning extraction subsystem 150 analyzes a linguistic shape of the utterance 168 using a combination of rule-based and ML-based structure parsing plugins 184.
In other words, the illustrated structure plug-ins 184 enable analysis and extraction of the syntactic and grammatical structure of the utterances 122 and 155.” [0112] “for each intent subtree of the meaning representation 162 identified in block 282, the meaning search system 152 compares (block 284) the subtree of the meaning representation 162 to the meaning representations 158 of the understanding model 157, based on the contents of the compilation model template 244, to generate corresponding intent-subtree similarity scores 285 using the tree-model comparison algorithm 272.” – Narendula teaches comparing subtrees of a processed meaning representation to stored meaning representations using a tree-model comparison algorithm to generate similarity scores. Because these meaning representations and subtrees are generated through semantic parsing and grammar engineering and include structured and ordered grammatical elements, this comparison constitutes performing an association check between the extracted POS references and POS sequences of the processed utterance and those stored in the association collection under the broadest reasonable interpretation.); and generating the training data based on the POS references, the POS sequences, and performing the association check for the POS references and the POS sequences (Cavalin, paragraph [0052] “At 806, utterances associated with the candidate intent are extracted or retrieved from the database. Examples of utterances are shown in FIG. 4 and FIG. 5.” [0053] “At 808, the received topic and the extracted utterances can be input to a trained machine learning model. 
The trained machine learning model generates example utterances for the new intent.” [0054] “In an embodiment, the method can also include training the chatbot using the new intent including the example utterances and the answer.” – Cavalin teaches extracting utterances associated with candidate intents and inputting those utterances into a trained machine learning model to generate new example utterances that are used as training data for chatbot training. When combined with Narendula, which performs semantic parsing, extracts structural grammatical representations, and compares subtrees of meaning representations to generate similarity scores, a person of ordinary skill in the art would have understood that the structured grammatical representations and comparison results would form the basis for selecting and generating the training utterances within Cavalin’s framework. Accordingly, the generated training data is based on the extracted grammatical references, structured sequences, and the results of the association check under the broadest reasonable interpretation.). Regarding claim 5, Narendula in view of Cavalin teaches all the elements of claim 1; claim 5 is therefore rejected for the same reasons as those presented for claim 1. Narendula in view of Cavalin further teaches: generating relevant phrases and unmatched sentences based on the association collection and the accelerated data; and utilizing the relevant phrases as the training data (Narendula [0092] “the structure subsystem 172 of the meaning extraction subsystem 150 analyzes a linguistic shape of the utterance 168 using a combination of rule-based and ML-based structure parsing plugins 184. 
In other words, the illustrated structure plug-ins 184 enable analysis and extraction of the syntactic and grammatical structure of the utterances” [0112] “for each intent subtree of the meaning representation 162 identified in block 282, the meaning search system 152 compares (block 284) the subtree of the meaning representation 162 to the meaning representations 158 of the understanding model 157, based on the contents of the compilation model template 244, to generate corresponding intent-subtree similarity scores 285 using the tree-model comparison algorithm 272… the extracted intents/entities 140 may only include intents/entities associated with intent subtree similarity scores greater than a predetermined threshold value” Cavalin [0052] “At 806, utterances associated with the candidate intent are extracted or retrieved from the database” [0053] “At 808, the received topic and the extracted utterances can be input to a trained machine learning model. The trained machine learning model generates example utterances for the new intent. An example of the generated examples utterances for new intent are shown in FIG. 4 at 406” [0054] “the method can also include training the chatbot using the new intent including the example utterances and the answer.” – Narendula discloses segmenting parsed meaning representations into intent subtrees and comparing those subtrees to stored meaning representations to generate a similarity score. Narendula further teaches retaining only those intents/entities associated with similarity scores above a predetermined threshold. Thus, Narendula teaches identifying utterance portions that satisfy similarity criteria while excluding portions that do not. Under the broadest reasonable interpretation, the retained intent subtrees correspond to “relevant phrases,” and the filtered portions correspond to “unmatched sentences,” because they fail to meet the similarity threshold. 
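The threshold-based partition this mapping relies on can be sketched as follows. This is an illustrative sketch only: the function name, the sample phrases, and the threshold value are hypothetical and are taken from neither Narendula nor Cavalin; the sketch merely mirrors the logic of keeping subtrees whose similarity score exceeds a predetermined threshold and filtering out the rest.

```python
# Hypothetical illustration of Narendula-style threshold filtering as mapped
# in the rejection: scores above the threshold yield "relevant phrases";
# the remainder are treated as "unmatched sentences".

SIMILARITY_THRESHOLD = 0.7  # stands in for the "predetermined threshold value"

def partition_by_similarity(scored_subtrees):
    """Split (phrase, similarity_score) pairs into retained and filtered groups."""
    relevant_phrases = [p for p, s in scored_subtrees if s > SIMILARITY_THRESHOLD]
    unmatched_sentences = [p for p, s in scored_subtrees if s <= SIMILARITY_THRESHOLD]
    return relevant_phrases, unmatched_sentences

scored = [("reset my password", 0.91), ("thanks anyway", 0.12), ("unlock account", 0.84)]
relevant, unmatched = partition_by_similarity(scored)
# relevant  -> ["reset my password", "unlock account"]
# unmatched -> ["thanks anyway"]
```

Under this reading, only the retained group would feed the training step described next.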
Cavalin teaches extracting utterances associated with a candidate intent and inputting those utterances into a trained machine learning model to generate example utterances used as training data. Accordingly, utilizing the retained relevant phrases within Cavalin’s training framework corresponds to utilizing the relevant phrases as the training data as recited in the limitation.). Regarding claim 6, Narendula in view of Cavalin teaches all the elements of claim 1; claim 6 is therefore rejected for the same reasons as those presented for claim 1. Narendula in view of Cavalin further teaches: training the machine learning model to identify more relevant phrases in the training data relative to other phrases in the training data (Cavalin, paragraph [0038] “A learning model 202, also referred to as “learn to convert samples” model can receive as input 204, pairs of intent samples. A pair of intent sample can include one or more questions or utterances (also referred to as “examples”) associated with an intent…The learning module 202 learns to convert example utterances from one intent to another given the meta-knowledge associated with the intents, for example, using gradient descent algorithm… The model uses both the intent examples (206a, 208a) and the meta-knowledge (206b, 208b) to learn the conversion from intent 1 to intent 2, and outputs or generates the EGM (learned model) as the end result.” [0054] “In an embodiment, the method can also include training the chatbot using the new intent including the example utterances and the answer.” – Cavalin discloses training a machine learning model using example utterances associated with particular intents and associated meta-knowledge, including learning via gradient descent. Cavalin further teaches training a chatbot using the new intent including example utterances. An intent represents a particular semantic meaning expressed by an utterance. 
Under BRI, the utterance portions that convey that meaning constitute the relevant phrases for that intent. By training the model on intent-labeled utterances, Cavalin necessarily trains the model to distinguish the meaning-bearing phrases aligned with a particular intent from phrases aligned with different intents. Thus, the training process identifies phrases that are more relevant to a given intent relative to other phrases in the training data.). Regarding claim 7, Narendula in view of Cavalin teaches all the elements of claim 1; claim 7 is therefore rejected for the same reasons as those presented for claim 1. Narendula in view of Cavalin further teaches: receiving the taxonomy data with an input layer of the machine learning interpolative-based feedback model (Cavalin, paragraph [0038] “A learning model 202, also referred to as “learn to convert samples” model can receive as input 204, pairs of intent samples. A pair of intent sample can include one or more questions or utterances (also referred to as “examples”) associated with an intent (e.g., 206a) and a meta-knowledge associated with the intent (e.g., 206b).” [0041] “At 306, the meta-knowledge in the input (e.g., determined at 304) is used to find intents with similar meta-knowledge. One or more techniques such as word embedding, taxonomy, or another technique can be used to find intents with similar meta-knowledge.” - Cavalin discloses a machine learning model that receives as input pairs of intent samples, including utterances and associated meta-knowledge. Cavalin further teaches using techniques such as word embedding and taxonomy to identify intents with similar meta-knowledge. These meta-knowledge and intent associations represent structured classification information describing relationships among intents, which under BRI corresponds to taxonomy data. 
Because Cavalin’s learning model explicitly receives these intent samples and associated meta-knowledge as input, this corresponds to receiving the taxonomy data at an input layer of the machine learning model, as recited.); generating embeddings for the taxonomy data with an embeddings layer of the machine learning interpolative-based feedback model (Cavalin, [0041] “At 306, the meta-knowledge in the input (e.g., determined at 304) is used to find intents with similar meta-knowledge. One or more techniques such as word embedding, taxonomy, or another technique can be used to find intents with similar meta-knowledge.” [0038] “A learning model 202, also referred to as “learn to convert samples” model can receive as input 204, pairs of intent samples… The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another.” – Cavalin teaches using word-embedding techniques in conjunction with taxonomy-associated meta-knowledge to determine similarity between intents. Cavalin further discloses implementing the learning model as a neural network, such as a sequence-to-sequence neural network, which inherently applies embedding layers to convert input text and associated metadata into vector representations for downstream processing. 
Under BRI, generating word embeddings for taxonomy-associated meta-knowledge within a neural network corresponds to generating embeddings for the taxonomy data with an embeddings layer of the machine learning interpolative-based feedback model, as recited.); processing the embeddings, with one or more dense layers of the machine learning interpolative-based feedback model, to generate the intents, the features of each of the intents, and the taxonomy collection (Cavalin, paragraph [0038] “The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another. The learning module 202 learns to convert example utterances from one intent to another given the meta-knowledge associated with the intents, for example, using gradient descent algorithm.” [0043] “At 310, using a learned or trained model, the sample questions in the filtered results are converted to utterances corresponding to the received topic. For instance, a new intent (corresponding to the received topic) and example utterances (e.g., questions) corresponding to the new intent are created based on the existing intents and utterances. The trained model can be a Seq2seq model, a rule-based model, a generative adversarial network (GAN), a graph neural network or another neural network, a word embedding model, and/or another.” – Cavalin discloses implementing the learning model as a neural network, such as a Seq2Seq neural network, which operates on word embeddings and learns to convert utterances based on associated meta-knowledge. Neural network architectures process input embeddings through one or more fully connected (dense) layers to generate output representations. Cavalin further teaches generating new intents and corresponding utterances using the learned model. 
Under BRI, processing embeddings within a neural network to produce intent-related outputs corresponds to processing the embeddings with one or more dense layers to generate intents and associated structured information, as recited in the limitation.); outputting the intents, the features of each of the intents, and the taxonomy collection with an output layer of the machine learning interpolative-based feedback model (Cavalin, paragraph [0043] “At 310, using a learned or trained model, the sample questions in the filtered results are converted to utterances corresponding to the received topic. For instance, a new intent (corresponding to the received topic) and example utterances (e.g., questions) corresponding to the new intent are created based on the existing intents and utterances. The trained model can be a Seq2seq model, a rule-based model, a generative adversarial network (GAN), a graph neural network or another neural network, a word embedding model, and/or another.” [0053] “At 808, the received topic and the extracted utterances can be input to a trained machine learning model. The trained machine learning model generates example utterances for the new intent.” – Cavalin discloses that the trained neural network model generates new intents and corresponding example utterances as output of the model. Under BRI, a neural network such as the disclosed sequence-to-sequence model necessarily includes an output layer that produces the final generated intent representations and associated utterances. Therefore, outputting the generated intents and associated features through the neural network corresponds to outputting the intents, the features of the intents, and the taxonomy collection with an output layer of the machine learning interpolative-based feedback model as recited.). 
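The input/embeddings/dense/output structure invoked in the claim 7 mapping can be illustrated with a minimal, self-contained sketch. Everything below is hypothetical, drawn from neither reference nor the application: the vocabulary, the weights, and the intent labels exist only to show an input layer receiving tokens, an embeddings layer mapping them to vectors, a dense layer transforming the pooled vector, and an output layer selecting an intent.

```python
# Hypothetical sketch of a layered intent model: token ids (input layer),
# embedding lookup (embeddings layer), a tanh-activated weighted sum
# (dense layer), and an argmax over intent scores (output layer).
import math

VOCAB = {"transfer": 0, "payment": 1, "balance": 2}   # input layer: token ids
EMBED = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]]          # embeddings layer (3 tokens x 2 dims)
DENSE_W = [[1.0, -1.0], [-1.0, 1.0]]                  # dense layer weights (2 x 2)
INTENTS = ["move_money", "check_account"]             # output layer labels

def predict_intent(tokens):
    # Input + embeddings layers: look up each known token and average the vectors.
    vecs = [EMBED[VOCAB[t]] for t in tokens if t in VOCAB]
    pooled = [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]
    # Dense layer: tanh activation over a weighted sum of the pooled vector.
    hidden = [math.tanh(sum(pooled[j] * DENSE_W[i][j] for j in range(2)))
              for i in range(2)]
    # Output layer: the highest-scoring intent label.
    return INTENTS[hidden.index(max(hidden))]

predict_intent(["transfer", "payment"])  # -> "move_money"
predict_intent(["balance"])              # -> "check_account"
```

The sketch is far simpler than a Seq2seq network, but the layer roles correspond to those the rejection attributes to Cavalin's model.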
Regarding claim 8, Narendula teaches the following claim limitations: A device, comprising: one or more processors configured to (Narendula, paragraph [0069] “ As illustrated, the computing system 80 may include various hardware components such as, but not limited to, one or more processors 82”): receive taxonomy data associated with different domains (Narendula, paragraph [0075] “For the embodiment illustrated in FIG. 4A, the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106. The intent/entity model 108 stores associations or relationships between particular intents and particular sample utterances…It is also presently recognized that, since the meaning associated with various intents and entities is continuously evolving within different contexts (e.g., different language evolutions per domain, per cultural setting, per client, and so forth)”); preprocess the taxonomy data with one or more preprocessing techniques to generate preprocessed data (Narendula, [Abstract] “In particular, taxonomy lookup sources can be compiled from suitable taxonomy source data that represents relationships between various entities within a domain of a client. 
These taxonomy lookup sources can extract taxonomy segmentations from utterances…The taxonomy segmentations can then be leveraged by the NLU system to perform vocabulary injection to expand the number of meaning representations in the utterance meaning model and/or the understanding model” [0079] “ The vocabulary manager 118, which may be part of the vocabulary subsystem discussed below, addresses out-of-vocabulary words and symbols that were not encountered by the NLU framework 104 during vocabulary training. For example, in certain embodiments, the vocabulary manager 118 can identify and replace synonyms and domain-specific meanings of words and acronyms within utterances analyzed by the agent automation framework 100 (e.g., based on the collection of rules 114), which can improve the performance of the NLU framework 104 to properly identify intents and entities within context-specific utterances.” – teaches preprocessing of domain/taxonomy-related data by applying techniques such as synonym replacement, domain-specific meaning replacement, out-of-vocabulary handling, and extracting taxonomy segmentations (and then performing vocabulary injection). These are “one or more preprocessing techniques” applied to the taxonomy-related data to produce a processed form (i.e., preprocessed data) suitable for downstream NLU, intent processing.); combine the intents, the features, the taxonomy collection, and the concepts or the entities to generate an association collection (Narendula, paragraph [0053] “ As used herein, an “intent” refers to a desire or goal of a user which may relate to an underlying purpose of a communication, such as an utterance. As used herein, an “entity” refers to an object, subject, or some other parameterization of an intent. 
It is noted that, for present embodiments, certain entities are treated as parameters of a corresponding intent within an intent-entity model.” [0075] “the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106. The intent/entity model 108 stores associations or relationships between particular intents and particular sample utterances.” – teaches an intent-entity model that associates particular intents with particular entities and sample utterances, wherein entities are encoded as parameters of the corresponding intent within a structured model. Narendula further teaches storing this intent-entity model, along with utterance corpora and rules, within the taxonomy-structured database of conversational artifacts. This model inherently combines intents, features (sample utterances), taxonomy structures, and entities into a unified set of associations. 
Accordingly, Narendula discloses combining intents, features, taxonomy collections, and entities to generate an association collection, as recited.); process the preprocessed data, with machine learning accelerator models, to generate accelerated data (Narendula, paragraph [0084] “The shared NLU annotator 127 performs semantic parsing, grammar engineering, and so forth, of the utterance 122 based on the intent/entity model 108 and returns annotated utterance trees of the utterance 122 to the NLU predictor 128” [0085] “For the illustrated embodiment, the NLU framework 104 processes a received user utterance 122 to extract intents/entities 140 based on the intent/entity model 108” [0124] “different rule-based parsers 186 and/or ML-based parsers 188 of the structure subsystem 172 of the meaning extraction subsystem 150 parse (block 325) each of the utterances 324 to generate a multiple annotated utterance tree structures 326 for each of the utterances 324.” – teaches applying machine-learning-based parsers, including semantic parsing and grammar engineering components, to process conversational utterances and generate annotated utterance tree structures and extract intent/entity representations. These machine-learning components operate on processed utterances to produce enriched and structured semantic outputs. The annotated utterance trees and extracted intent/entity representations constitute accelerated data under the broadest reasonable interpretation, as they provide enhanced semantic structure for downstream processing. 
Accordingly, Narendula discloses processing preprocessed data with machine learning accelerator models to generate accelerated data, as recited.); However, Narendula alone does not teach, but Narendula in view of Cavalin teaches, the following limitations: process the taxonomy data, with a machine learning interpolative-based feedback model, to generate intents, features of each of the intents, and a taxonomy collection (Narendula, paragraph [0053] “An understanding model may include a vocabulary model that associates certain tokens (e.g., words or phrases) with particular word vectors, an intent-entity model, an intent model, an entity model, a taxonomy model, other models, or a combination thereof.” [0075] “For the embodiment illustrated in FIG. 4A, the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106.” Cavalin, paragraph [0004] “The method can also include inputting the received topic and the extracted utterances to a trained machine learning model, the trained machine learning model generating example utterances for the new intent.” [0006] “The method can further include training the chatbot using the new intent including the example utterances and the answer.” [0038] “The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another. 
The learning module 202 learns to convert example utterances from one intent to another given the meta-knowledge associated with the intents, for example, using gradient descent algorithm.” – Narendula teaches an understanding model that includes a taxonomy model and an intent-entity model, which organizes utterances into structured intent and entity relationships stored in a database alongside a corpus of utterances. These models associate particular intents with entities and sample utterances, thereby forming a structured taxonomy collection of conversational artifacts. Cavalin teaches processing conversational data using trained machine-learning models, including neural networks trained via gradient descent, to generate new intents and associated example utterances. Cavalin further teaches using the generated utterances as training data to refine the chatbot model, thereby providing a feedback-based machine-learning cycle. Accordingly, the combined teachings disclose processing taxonomy-structured conversational data with a machine-learning, similarity-driven feedback model to generate intents, features of each intent (including associated entities and vector representations), and a structured collection of intent-utterance associations corresponding to a taxonomy collection.); process the taxonomy data, with a machine learning-based feedback model, to generate concepts or entities associated with the intents (Narendula, paragraph [0053] “An understanding model may include a vocabulary model that associates certain tokens (e.g., words or phrases) with particular word vectors, an intent-entity model, an intent model, an entity model, a taxonomy model, other models, or a combination thereof.” [0075] “For the embodiment illustrated in FIG. 4A, the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. 
The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106.” [0004] “A computer-implemented method…can include receiving a topic for building a new intent…searching a database…extracting utterances…inputting the received topic and the extracted utterances to a trained machine learning model, the trained machine learning model generating example utterances for the new intent.” [0006] “The method can further include training the chatbot using the new intent including the example utterances and the answer.” [0038] “The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another. The learning module 202 learns to convert example utterances from one intent to another given the meta-knowledge associated with the intents, for example, using gradient descent algorithm.” – Narendula teaches taxonomy-structured conversational data that includes an intent-entity model and an entity model, where entities are explicitly defined as parameters of corresponding intents and stored within a structured taxonomy of conversational artifacts. Cavalin teaches applying a trained machine-learning model, including neural networks trained via gradient descent, to process conversational data, generate new intent-related artifacts, and use those generated artifacts as additional training data for further refinement of the model, thereby forming a machine-learning feedback cycle. 
Accordingly, the combined teachings disclose processing taxonomy-structured data with a machine-learning based feedback model to generate entities associated with intents, as recited.); train a machine learning model with the training data to generate a trained machine learning model (Cavalin, paragraph [0004] “The method can also include inputting the received topic and the extracted utterances to a trained machine learning model, the trained machine learning model generating example utterances for the new intent.” [0006] “The method can further include training the chatbot using the new intent including the example utterances and the answer.” [0038] “The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another. The learning module 202 learns to convert example utterances from one intent to another given the meta-knowledge associated with the intents, for example, using gradient descent algorithm.” – Cavalin teaches training a machine learning model using generated example utterances as training data, and further discloses neural network models trained via gradient descent. The generated example utterances are used to train and update the model, thereby producing a trained machine learning model. Accordingly, Cavalin discloses training a machine learning model with training data to generate a trained machine learning model as required by the limitation.); receive text data associated with a chatbot, a live chat, or an interactive voice response system (Cavalin, paragraph [0050] “ At 802, a topic can be received for building a new intent. The new intent can be used as training data for training a chatbot. For instance, the chatbot is trained to carry on a dialog with a user and/or answer questions a user may have about a subject matter or topic.” [0051] “the database can store intent and utterances associated with that intent. 
Utterances, for example, can include questions, for instance, asked posed by a user to a chatbot, for the chatbot to answer.” [0052] “At 806, utterances associated with the candidate intent are extracted or retrieved from the database.” [0056] “In an aspect, the system may implement a controllable text generation, which can convert texts from some topics to others.” – teaches receiving text data associated with chatbot operation, including topics provided for building new intents and user utterances posed to a chatbot. Cavalin further discloses retrieving utterances from a database and generating new example utterances through controllable text generation. These topics, user questions, retrieved utterances, and generated examples all constitute text data associated with a chatbot-based conversational system.); process the text data, the taxonomy collection, and the association collection, with the trained machine learning model, to determine a crux of the text data; and perform one or more actions based on the crux of the text data (Narendula, paragraph [0084] “The shared NLU annotator 127 performs semantic parsing, grammar engineering, and so forth, of the utterance 122 based on the intent/entity model 108 and returns annotated utterance trees of the utterance 122 to the NLU predictor 128…to identify matching intents from the intent/entity model 108, such that the RA/BE 102 can perform one or more actions based on the identified intents.” [0053] “As used herein, an “intent” refers to a desire or goal of a user which may relate to an underlying purpose of a communication, such as an utterance. As used herein, an “entity” refers to an object, subject, or some other parameterization of an intent. 
It is noted that, for present embodiments, certain entities are treated as parameters of a corresponding intent within an intent-entity model.” Cavalin, paragraph [0040] “At 304, the method can include finding meta-knowledge associated with the received one or more topics, for example, from a database of intents and associated utterances.” [0041] “One or more techniques such as word embedding, taxonomy, or another technique can be used to find intents with similar meta-knowledge.” [0042] “At 308, a predefined threshold is used to filter out irrelevant results…The filtered results include one or more intents and sample questions (utterances) associated with those intents.” [0043] “ At 310, using a learned or trained model, the sample questions in the filtered results are converted to utterances corresponding to the received topic…The trained model can be a Seq2seq model, a rule-based model, a generative adversarial network (GAN), a graph neural network or another neural network, a word embedding model, and/or another.” [0025] “An intent is an action the user expects the chatbot to do.” – As previously mapped, Narendula discloses the text data in the form of user utterances, and further discloses a taxonomy collection and association collection in the form of an intent-entity model that links intents, entities, and sample utterances within a structured database. Cavalin discloses training a machine learning model using example utterances and meta-knowledge to generate a trained model. Narendula teaches processing received text data using annotated utterance trees to identify matching intents. Cavalin teaches applying a trained machine learning model, including neural networks and embedding-based similarity techniques, to determine meta-knowledge and corresponding intents from textual input. 
Under the broadest reasonable interpretation, determining a matching intent and associated meta-knowledge corresponds to determining a crux of the text data, as the identified intent reflects the central semantic meaning extracted from the text. Cavalin further explains that an intent represents an action the system is expected to perform. Narendula expressly discloses that, once matching intents are identified, the reasoning agent/behavior engine performs one or more actions based on the identified intents. Accordingly, the combined teachings disclose processing the text data, taxonomy collection, and association collection with a trained machine learning model to determine a crux of the text data, and performing one or more actions based on the determined crux.). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Narendula and Cavalin before them, to incorporate the use of taxonomy data associated with different domains, as taught by Narendula, into the intent-generation and chatbot-training framework of Cavalin. One would have been motivated to make such a combination in order to improve the domain specificity and semantic organization of the training data used to generate and refine intent models, thereby enhancing the accuracy of conversational understanding across different contexts. This would allow more precise and reliable intent modeling by supplying Cavalin’s machine-learning system with structured, domain-segmented taxonomy data. Regarding claim 9, Narendula in view of Cavalin teaches all the elements of claim 8; claim 9 is therefore rejected for the same reasons as those presented for claim 8. 
Narendula in view of Cavalin further teaches: receive the taxonomy data with an input layer of the machine learning-based feedback model (Cavalin, paragraph [0038] “A learning model 202, also referred to as “learn to convert samples” model can receive as input 204, pairs of intent samples. A pair of intent sample can include one or more questions or utterances (also referred to as “examples”) associated with an intent (e.g., 206a) and a meta-knowledge associated with the intent (e.g., 206b).” [0041] “At 306, the meta-knowledge in the input (e.g., determined at 304) is used to find intents with similar meta-knowledge. One or more techniques such as word embedding, taxonomy, or another technique can be used to find intents with similar meta-knowledge.” - Cavalin discloses a machine learning model that receives as input pairs of intent samples, including utterances and associated meta-knowledge. Cavalin further teaches using techniques such as word embeddings and taxonomy to identify intents with similar meta-knowledge; this meta-knowledge, together with the intent associations, represents structured classification information describing relationships among intents, which under BRI corresponds to taxonomy data. Because Cavalin’s learning model explicitly receives these intent samples and associated meta-knowledge as input, this corresponds to receiving the taxonomy data at an input layer of the machine learning model, as recited.); receive classes associated with the different domains with a domain layer of the machine learning-based feedback model (Cavalin, paragraph [0025] “A chatbot can include an intent classifier. An intent is an action the user expects the chatbot to do. An intent classifier can be a text classifier with N classes. Training samples include a set of text examples used to train the chatbot for handling each intent.” [0031] “Input can be received from a user, e.g., an SME, a chatbot developer, or the like 102. Input received can include a high-level meta-knowledge of a new intent. 
High-level meta-knowledge, for example, can be a topic or subject matter, for example, given as a token, e.g., a phrase and/or word.” [0046] “[I]n an embodiment, high-level knowledge such as topical cluster (also referred to as one or more topics) can be either automatically extracted from the intents or marked by users, and such high-level knowledge and the example can be used to train a model that is able to perform controllable text generation. For example, consider that “X”, “Y”, “Z” are different types of payment or banking methods. When learning to convert examples from Intent 1 to Intent 2 shown at 402, topics “transfer” and “X” are inputted together with examples from Intent 1, and topics “transfer” and “Y” are inputted together with examples from Intent 2. When creating a new intent “Transfer Z”, then topics “Transfer” and “Z” can be used as inputs to generate samples from either Intent 1 or 2.” – Cavalin teaches an intent classifier that includes N classes corresponding to different intents. Cavalin further teaches organizing and preprocessing intents based on high-level meta-knowledge or topics representing different subject-matter domains. These classes and associated meta-knowledge are received and processed by the machine learning model. Under BRI, the model component that receives and processes the domain-associated class information corresponds to a domain layer of the machine learning-based feedback model, as recited.); generate embeddings for the taxonomy data and the classes with an embeddings layer of the machine learning-based feedback model (Cavalin, paragraph [0025] “A chatbot can include an intent classifier. An intent is an action the user expects the chatbot to do. An intent classifier can be a text classifier with N classes. 
Training samples include a set of text examples used to train the chatbot for handling each intent.” [0041] “At 306, the meta-knowledge in the input (e.g., determined at 304) is used to find intents with similar meta-knowledge. One or more techniques such as word embedding, taxonomy, or another technique can be used to find intents with similar meta-knowledge.” [0043] “At 310, using a learned or trained model, the sample questions in the filtered results are converted to utterances corresponding to the received topic. For instance, a new intent (corresponding to the received topic) and example utterances (e.g., questions) corresponding to the new intent are created based on the existing intents and utterances. The trained model can be a Seq2seq model, a rule-based model, a generative adversarial network (GAN), a graph neural network or another neural network, a word embedding model, and/or another.” – Cavalin teaches representing textual meta-knowledge using word embedding techniques and implementing a word embedding model within the machine learning framework. Cavalin further teaches an intent classifier having N classes corresponding to different intents. Because these classes and associated meta-knowledge are textual elements processed by the model, generating word embeddings for such elements corresponds to generating embeddings for the taxonomy data and the classes within an embedding layer of the machine learning model under the broadest reasonable interpretation.); process the embeddings, with one or more dense layers of the machine learning-based feedback model, to generate the concepts or entities (Cavalin, paragraph [0025] “A chatbot can include an intent classifier. An intent is an action the user expects the chatbot to do. 
An intent classifier can be a text classifier with N classes.” [0038] “The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another.” [0043] “At 310, using a learned or trained model, the sample questions in the filtered results are converted to utterances corresponding to the received topic. For instance, a new intent (corresponding to the received topic) and example utterances (e.g., questions) corresponding to the new intent are created based on the existing intents and utterances. The trained model can be a Seq2seq model, a rule-based model, a generative adversarial network (GAN), a graph neural network or another neural network, a word embedding model, and/or another.” [0041] “One or more techniques such as word embedding, taxonomy, or another technique can be used to find intents with similar meta-knowledge. “ [0032] “One or more natural language processing techniques can be used to find or determine semantic similarity between terms (e.g., terms in the topic and the terms in meta-knowledge). Other techniques such as ontology of concepts can be utilized to determine similarity.” - Cavalin teaches receiving topic and meta-knowledge-based input and determining semantic similarity between terms using NLP techniques, including taxonomy and ontology based concept structures. These techniques operate on embedded representations of the input terms to identify conceptual relationships and similarities. Under BRI, processing embedded representations through neural network layers to determine semantic similarity and concept-level relationships corresponds to processing the embeddings with one or more dense layers to generate the concepts or entities, as recited.); and output the concepts or entities with an output layer of the machine learning-based feedback model (Cavalin, paragraph [0025] “A chatbot can include an intent classifier. 
An intent is an action the user expects the chatbot to do. An intent classifier can be a text classifier with N classes.” [0032] “In an embodiment, the similarity can be measured in terms of semantic similarity between the topic and meta-knowledge. One or more natural language processing techniques can be used to find or determine semantic similarity between terms (e.g., terms in the topic and the terms in meta-knowledge). Other techniques such as ontology of concepts can be utilized to determine similarity.” [0043] “At 310, using a learned or trained model, the sample questions in the filtered results are converted to utterances corresponding to the received topic. For instance, a new intent (corresponding to the received topic) and example utterances (e.g., questions) corresponding to the new intent are created based on the existing intents and utterances.” – Cavalin teaches generating intent-level outputs using a trained neural network model and further discloses an intent classifier having N classes corresponding to intent-level concepts. Cavalin also teaches determining semantic similarity and utilizing ontology of concepts in identifying and generating intents. Under BRI, producing these intent-class or concept-level outputs through the neural network corresponds to outputting the concepts or entities with an output layer of the machine learning-based feedback model, as recited.). Regarding claim 10, Narendula in view of Cavalin teaches all the elements of claim 8, therefore is rejected for the same reasons as those presented for claim 8. Narendula in view of Cavalin further teaches: combine the text data, the taxonomy collection, and the association collection to determine the crux of the text data (Narendula, paragraph [0053] “As used herein, an “intent” refers to a desire or goal of a user which may relate to an underlying purpose of a communication, such as an utterance. 
As used herein, an “entity” refers to an object, subject, or some other parameterization of an intent. It is noted that, for present embodiments, certain entities are treated as parameters of a corresponding intent within an intent-entity model.” [0075] “The database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106. The intent/entity model 108 stores associations or relationships between particular intents and particular sample utterances” [0084] “The shared NLU annotator 127 performs semantic parsing, grammar engineering, and so forth, of the utterance 122 based on the intent/entity model 108 and returns annotated utterance trees of the utterance 122 to the NLU predictor 128…to identify matching intents from the intent/entity model 108, such that the RA/BE 102 can perform one or more actions based on the identified intents” – Narendula teaches receiving text data in the form of user utterances and generating annotated utterance trees that represent the semantic structure of the utterance, which corresponds to a taxonomy collection. Narendula further discloses an intent-entity model that stores associations between intents, entities, and sample utterances, with entities treated as parameters of the corresponding intent, which corresponds to an association collection. The NLU annotator processes the utterance using both taxonomy-structured annotated trees and the stored intent-entity associations to identify the matching intent. 
Under BRI, using utterance text, the taxonomy collection, and the association collection to identify the correct intent corresponds to combining the text data, the taxonomy collection, and the association collection to determine the crux of the text data, as recited.). Regarding claim 11, Narendula in view of Cavalin teaches all the elements of claim 8, therefore is rejected for the same reasons as those presented for claim 8. Narendula in view of Cavalin further teaches: provide the crux of the text data for display to a user device (Narendula, paragraph [0085] “the NLU framework 104 processes a received user utterance 122 to extract intents/entities 140 based on the intent/entity model 108…virtual agent utterances 124 in response to the received user utterance 122.” [0086] “Additionally, it should be noted that, while the user utterance 122 and the agent utterance 124 are discussed herein as being conveyed using a written conversational medium or channel (e.g., chat, email, ticketing system, text messages, forum posts), in other embodiments, voice-to-text and/or text-to-voice modules or plugins could be included to translate spoken user utterance 122 into text and/or translate text-based agent utterance 124 into speech to enable a voice interactive system, in accordance with the present disclosure” – teaches processing received text data to extract intent/entities that represent the core meaning of the utterance and generating corresponding virtual agent utterances that are conveyed to the user through chat or other written interfaces. 
Under BRI, the extracted intent/entities constitute the crux of the text data, and conveying the corresponding output to the user corresponds to providing the crux of the text data for display to a user device as recited.); or perform a search for a topic based on the crux of the text data (Narendula, paragraph [0053] “ As used herein, an “intent” refers to a desire or goal of a user which may relate to an underlying purpose of a communication, such as an utterance. As used herein, an “entity” refers to an object, subject, or some other parameterization of an intent.” [0112] “the meaning search system 152 compares (block 284) the subtree of the meaning representation 162 to the meaning representations 158 of the understanding model 157, based on the contents of the compilation model template 244, to generate corresponding intent-subtree similarity scores 285 using the tree-model comparison algorithm 272.” – teaches that the extracted intent represents the underlying purpose of the text data and further teaches comparing the meaning representation of the utterance to stored intent representations to generate similarity scores. Under BRI, comparing the extracted meaning to stored topic/intent representations corresponds to performing a search for a topic based on the crux of the text data as recited.). Regarding claim 12, Narendula in view of Cavalin teaches all the elements of claim 8, therefore is rejected for the same reasons as those presented for claim 8. 
Narendula in view of Cavalin further teaches: determine a customer journey, issue, or need based on the crux of the text data; or identify a category for the text data based on the crux of the text data (Cavalin, paragraph [0027] “[A] system and/or method can generate training examples to create new intents for chatbots, helping chatbot developers to create new intents and update the content of a chatbot, using controllable text generation, relying on knowledge from previously-created intents, to suggest samples to be used to train an intent classifier for the new intents. For example, a user need only input high-level meta-knowledge into the system, and the system may search for and suggest possible new intents based on the similarity of the meta-knowledge with existing content (e.g., questions that have already been curated by subject matter experts for question generation for other topics), also verify the proposed examples, and suggest documents that can be used to create chatbot responses.” – teaches receiving high-level meta-knowledge representing the subject matter or purpose of a communication and searching for and suggesting new intents based on similarity to existing content. Because an intent represents the user’s desired action or objective, identifying and suggesting the appropriate intent necessarily determines the user’s underlying need or issue expressed in the text. Under BRI, determining the appropriate intent based on the extracted meaning corresponds to determining a customer journey, issue, or need based on the crux of the text data, as recited.). Regarding claim 13, Narendula in view of Cavalin teaches all the elements of claim 8, therefore is rejected for the same reasons as those presented for claim 8. 
Narendula in view of Cavalin further teaches: enable a content creator to create a document based on the crux of the text data; or retrain the machine learning model based on the crux of the text data (Cavalin, paragraph [0027] “ system and/or method can generate training examples to create new intents for chatbots, helping chatbot developers to create new intents and update the content of a chatbot, using controllable text generation, relying on knowledge from previously-created intents, to suggest samples to be used to train an intent classifier for the new intents. For example, a user need only input high-level meta-knowledge into the system, and the system may search for and suggest possible new intents based on the similarity of the meta-knowledge with existing content (e.g., questions that have already been curated by subject matter experts for question generation for other topics), also verify the proposed examples, and suggest documents that can be used to create chatbot responses.” [0037] “ At 120, the pairs of training examples (e.g., questions and answer), representing the new intents, can be validated with external documents, and can be shown to the SME in the curation graphical user interface (GUI) 122. Those pairs include a set of training examples for the new intents, and an answer either generated with the EGM or from a search on external documents.” – teaches identifying intents based on meta-knowledge and further teaches generating and suggesting documents and example content to a user through a graphical interface for use in chatbot responses. Under BRI, enabling a user to generate or curate documents and response content based on the identified intent corresponds to enabling a content creator to create a document based on the crux of the text data as recited.). 
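The claim 9 mapping above describes a layered feedback model: taxonomy data enters an input layer, domain classes enter a domain layer, an embeddings layer vectorizes both, dense layers process the embeddings, and an output layer emits concepts or entities. As a rough illustration only, the flow might be sketched as follows; this is not code from Narendula, Cavalin, or the application, and every name in it (`FeedbackModel`, `embed`, `dense`), the hash-based toy embeddings, and the fixed weights are invented for illustration.

```python
import hashlib

def embed(token, dim=8):
    """Toy embedding: derive a deterministic vector from a token's hash."""
    digest = hashlib.sha256(token.encode()).digest()
    return [digest[i] / 255.0 for i in range(dim)]

def dense(vector, weights, bias):
    """One dense layer: weighted sum per output unit with ReLU activation."""
    return [max(0.0, sum(w * x for w, x in zip(row, vector)) + b)
            for row, b in zip(weights, bias)]

class FeedbackModel:
    """Hypothetical sketch of the claimed layering:
    input layer -> domain layer -> embeddings layer -> dense layer(s) -> output layer."""

    def __init__(self, domain_classes, concepts, dim=8):
        self.domain_classes = domain_classes  # classes received by the "domain layer"
        self.concepts = concepts              # labels emitted by the "output layer"
        # Fixed illustrative weights: one output unit per concept label.
        self.weights = [[(i + j + 1) % 3 * 0.1 for j in range(dim)]
                        for i in range(len(concepts))]
        self.bias = [0.0] * len(concepts)

    def forward(self, taxonomy_terms, domain):
        # Input layer receives taxonomy terms; domain layer adds the domain's class.
        tokens = list(taxonomy_terms) + [self.domain_classes[domain]]
        # Embeddings layer: embed each token and average the vectors.
        vecs = [embed(t) for t in tokens]
        avg = [sum(col) / len(vecs) for col in zip(*vecs)]
        # Dense layer scores each concept; output layer returns the top label.
        scores = dense(avg, self.weights, self.bias)
        return self.concepts[scores.index(max(scores))]

model = FeedbackModel({"telecom": "telecom_class"},
                      ["billing", "coverage", "device"])
concept = model.forward(["bill", "payment", "invoice"], "telecom")
print(concept in {"billing", "coverage", "device"})  # True
```

A real implementation of such a model would use trained embedding and dense layers rather than hashed vectors and fixed weights; the sketch only shows how the five claimed layers compose.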
Regarding claim 15, Narendula teaches the following claim limitations: A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: (Narendula, paragraph [0071] “The memory 86 may include any tangible, non-transitory, and computer-readable storage media. Although shown as a single block in FIG. 3, the memory 86 can be implemented using multiple physical units of the same or different types in one or more physical locations. The input devices 88 correspond to structures to input data and/or commands to the one or more processors 82.”) receive taxonomy data associated with different domains (Narendula, paragraph [0075] “For the embodiment illustrated in FIG. 4A, the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106. The intent/entity model 108 stores associations or relationships between particular intents and particular sample utterances…It is also presently recognized that, since the meaning associated with various intents and entities is continuously evolving within different contexts (e.g., different language evolutions per domain, per cultural setting, per client, and so forth)”); preprocess the taxonomy data with one or more preprocessing techniques to generate preprocessed data (Narendula, [Abstract] “In particular, taxonomy lookup sources can be compiled from suitable taxonomy source data that represents relationships between various entities within a domain of a client. 
These taxonomy lookup sources can extract taxonomy segmentations from utterances…The taxonomy segmentations can then be leveraged by the NLU system to perform vocabulary injection to expand the number of meaning representations in the utterance meaning model and/or the understanding model” [0079] “ The vocabulary manager 118, which may be part of the vocabulary subsystem discussed below, addresses out-of-vocabulary words and symbols that were not encountered by the NLU framework 104 during vocabulary training. For example, in certain embodiments, the vocabulary manager 118 can identify and replace synonyms and domain-specific meanings of words and acronyms within utterances analyzed by the agent automation framework 100 (e.g., based on the collection of rules 114), which can improve the performance of the NLU framework 104 to properly identify intents and entities within context-specific utterances.” – teaches preprocessing of domain/taxonomy-related data by applying techniques such as synonym replacement, domain-specific meaning replacement, out-of-vocabulary handling, and extracting taxonomy segmentations (and then performing vocabulary injection). These are “one or more preprocessing techniques” applied to the taxonomy-related data to produce a processed form (i.e., preprocessed data) suitable for downstream NLU, intent processing.); combine the intents, the features, the taxonomy collection, and the concepts or the entities to generate an association collection (Narendula, paragraph [0053] “ As used herein, an “intent” refers to a desire or goal of a user which may relate to an underlying purpose of a communication, such as an utterance. As used herein, an “entity” refers to an object, subject, or some other parameterization of an intent. 
It is noted that, for present embodiments, certain entities are treated as parameters of a corresponding intent within an intent-entity model.” [0075] “the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106. The intent/entity model 108 stores associations or relationships between particular intents and particular sample utterances.” – teaches an intent-entity model that associates particular intents with particular entities and sample utterances, wherein entities are encoded as parameters of the corresponding intent within a structured model. Narendula further teaches storing this intent-entity model, along with utterance corpora and rules, within the taxonomy-structured database of conversational artifacts. This model inherently combines intents, features (sample utterances), taxonomy structures, and entities into a unified set of associations. 
Accordingly, Narendula discloses combining intents, features, taxonomy collections, and entities to generate an association collection, as recited.); process the preprocessed data, with machine learning accelerator models, to generate accelerated data (Narendula, paragraph [0084] “The shared NLU annotator 127 performs semantic parsing, grammar engineering, and so forth, of the utterance 122 based on the intent/entity model 108 and returns annotated utterance trees of the utterance 122 to the NLU predictor 128” [0085] “For the illustrated embodiment, the NLU framework 104 processes a received user utterance 122 to extract intents/entities 140 based on the intent/entity model 108” [0124] “different rule-based parsers 186 and/or ML-based parsers 188 of the structure subsystem 172 of the meaning extraction subsystem 150 parse (block 325) each of the utterances 324 to generate a multiple annotated utterance tree structures 326 for each of the utterances 324.” – teaches applying machine-learning-based parsers, including semantic parsing and grammar engineering components, to process conversational utterances and generate annotated utterance tree structures and extract intent/entity representations. These machine-learning components operate on processed utterances to produce enriched and structured semantic outputs. The annotated utterance trees and extracted intent/entity representations constitute accelerated data under the broadest reasonable interpretation, as they provide enhanced semantic structure for downstream processing. 
Accordingly, Narendula discloses processing preprocessed data with machine learning accelerator models to generate accelerated data, as recited.). However, Narendula alone does not teach, but Narendula in view of Cavalin teaches, the following limitations: process the taxonomy data, with a machine learning interpolative-based feedback model, to generate intents, features of each of the intents, and a taxonomy collection (Narendula, paragraph [0053] “An understanding model may include a vocabulary model that associates certain tokens (e.g., words or phrases) with particular word vectors, an intent-entity model, an intent model, an entity model, a taxonomy model, other models, or a combination thereof.” [0075] “For the embodiment illustrated in FIG. 4A, the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106.” Cavalin, paragraph [0004] “The method can also include inputting the received topic and the extracted utterances to a trained machine learning model, the trained machine learning model generating example utterances for the new intent.” [0006] “The method can further include training the chatbot using the new intent including the example utterances and the answer.” [0038] “The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another. 
The learning module 202 learns to convert example utterances from one intent to another given the meta-knowledge associated with the intents, for example, using gradient descent algorithm.” – Narendula teaches an understanding model that includes a taxonomy model and an intent-entity model, which organizes utterances into structured intent and entity relationships stored in a database alongside a corpus of utterances. These models associate particular intents with entities and sample utterances, thereby forming a structured taxonomy collection of conversational artifacts. Cavalin teaches processing conversational data using trained machine-learning models, including neural networks trained via gradient descent, to generate new intents and associated example utterances. Cavalin further teaches using the generated utterances as training data to refine the chatbot model, thereby providing a feedback-based machine-learning cycle. Accordingly, the combined teachings disclose processing taxonomy-structured conversational data with a machine-learning, similarity-driven feedback model to generate intents, features of each intent (including associated entities and vector representations), and a structured collection of intent-utterance associations corresponding to a taxonomy collection.); process the taxonomy data, with a machine learning-based feedback model, to generate concepts or entities associated with the intents (Narendula, paragraph [0053] “An understanding model may include a vocabulary model that associates certain tokens (e.g., words or phrases) with particular word vectors, an intent-entity model, an intent model, an entity model, a taxonomy model, other models, or a combination thereof.” [0075] “For the embodiment illustrated in FIG. 4A, the database 106 may be a database server instance (e.g., database server instance 44A or 44B, as discussed with respect to FIG. 2), or a collection of database server instances. 
The illustrated database 106 stores an intent/entity model 108, a conversation model 110, a corpus of utterances 112, and a collection of rules 114 in one or more tables (e.g., relational database tables) of the database 106.” [0004] “A computer-implemented method…can include receiving a topic for building a new intent…searching a database…extracting utterances…inputting the received topic and the extracted utterances to a trained machine learning model, the trained machine learning model generating example utterances for the new intent.” [0006] “The method can further include training the chatbot using the new intent including the example utterances and the answer.” [0038] “The learning model 202 can include machine learning models such as Seq2seq Neural Networks, Generative Adversarial Networks (GANs), rule-based algorithms, and/or another. The learning module 202 learns to convert example utterances from one intent to another given the meta-knowledge associated with the intents, for example, using gradient descent algorithm.” – Narendula teaches taxonomy-structured conversational data that includes an intent-entity model and an entity model, where entities are explicitly defined as parameters of corresponding intents and stored within a structured taxonomy of conversational artifacts. Cavalin teaches applying a trained machine-learning model, including neural networks trained via gradient descent, to process conversational data, generate new intent-related artifacts, and use those generated artifacts as additional training data for further refinement of the model, thereby forming a machine-learning feedback cycle. 
Accordingly, the combined teachings disclose processing taxonomy-structured data with a machine learning-based feedback model to generate entities associated with intents, as recited.); receive text data associated with a chatbot, a live chat, or an interactive voice response system (Cavalin, paragraph [0050] “At 802, a topic can be received for building a new intent. The new intent can be used as training data for training a chatbot. For instance, the chatbot is trained to carry on a dialog with a user and/or answer questions a user may have about a subject matter or topic.” [0051] “the database can store intent and utterances associated with that intent. Utterances, for example, can include questions, for instance, posed by a user to a chatbot, for the chatbot to answer.” [0052] “At 806, utterances associated with the candidate intent are extracted or retrieved from the database.” [0056] “In an aspect, the system may implement a controllable text generation, which can convert texts from some topics to others.” – teaches receiving text data associated with chatbot operation, including topics provided for building new intents and user utterances posed to a chatbot. Cavalin further discloses retrieving utterances from a database and generating new example utterances through controllable text generation. 
These topics, user questions, retrieved utterances, and generated examples all constitute text data associated with a chatbot-based conversational system.); combine the text data, the taxonomy collection, and the association collection to determine a crux of the text data; and perform one or more actions based on the crux of the text data (Narendula, paragraph [0084] “The shared NLU annotator 127 performs semantic parsing, grammar engineering, and so forth, of the utterance 122 based on the intent/entity model 108 and returns annotated utterance trees of the utterance 122 to the NLU predictor 128…to identify matching intents from the intent/entity model 108, such that the RA/BE 102 can perform one or more actions based on the identified intents.” [0053] “As used herein, an “intent” refers to a desire or goal of a user which may relate to an underlying purpose of a communication, such as an utterance. As used herein, an “entity” refers to an object, subject, or some other parameterization of an intent. 
It is noted that, for present embodiments, certain entities are treated as parameters of a corresponding intent within an intent-entity model.” Cavalin, paragraph [0040] “At 304, the method can include finding meta-knowledge associated with the received one or more topics, for example, from a database of intents and associated utterances.” [0041] “One or more techniques such as word embedding, taxonomy, or another technique can be used to find intents with similar meta-knowledge.” [0042] “At 308, a predefined threshold is used to filter out irrelevant results…The filtered results include one or more intents and sample questions (utterances) associated with those intents.” [0043] “ At 310, using a learned or trained model, the sample questions in the filtered results are converted to utterances corresponding to the received topic…The trained model can be a Seq2seq model, a rule-based model, a generative adversarial network (GAN), a graph neural network or another neural network, a word embedding model, and/or another.” [0025] “An intent is an action the user expects the chatbot to do.” – Narendula teaches receiving text data in the form of user utterances and generating annotated utterance trees that represent the semantic structure of the utterance, corresponding to a taxonomy collection. Narendula further discloses an intent-entity model that stores associations between intents, entities, and sample utterances, corresponding to an association collection. The NLU annotator processes the utterance using both the taxonomy-structured annotated trees and the stored intent-entity associations to identify the matching intent, which reflects the underlying purpose or crux of the text data. Cavalin likewise teaches using the taxonomy-based similarity techniques and training machine learning models to determine the appropriate intent based on meta-knowledge associated with the text. 
Narendula further discloses that, once the matching intent is identified, the reasoning agent or behavior engine performs one or more actions based on that intent. Under BRI, combining text data with taxonomy collection and association collection to determine a matching intent, and performing actions based on that intent, corresponds to combining the text data, the taxonomy collection, and the association collection to determine a crux of the text data and performing one or more actions based on the crux, as recited.). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Narendula and Cavalin before them, to incorporate the use of taxonomy data associated with different domains, as taught by Narendula, into the intent-generation and chatbot-training framework of Cavalin. One would have been motivated to make such a combination in order to improve the domain specificity and semantic organization of the training data used to generate and refine intent models, thereby enhancing the accuracy of conversational understanding across different contexts. This would allow more precise and reliable intent modeling by supplying Cavalin’s machine-learning system with structured, domain-segmented taxonomy data. Regarding claim 18, Narendula in view of Cavalin teaches all the elements of claim 15, therefore is rejected for the same reasons as those presented for claim 15. The claim recites similar limitations corresponding to claim 7 and is rejected for similar reasons as claim 7 using similar teachings and rationale. Regarding claim 19, Narendula in view of Cavalin teaches all the elements of claim 15, therefore is rejected for the same reasons as those presented for claim 15. The claim recites similar limitations corresponding to claim 9 and is rejected for similar reasons as claim 9 using similar teachings and rationale. 
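The intent-determination flow the rejection maps to the "crux" limitation (combining text data with a taxonomy collection and an intent/utterance association collection to select a matching intent) can be sketched minimally as follows. All names and data below are hypothetical stand-ins, not code from Narendula or Cavalin:

```python
# Hypothetical sketch of taxonomy-assisted intent matching.
# TAXONOMY plays the role of a "taxonomy collection" (term -> broader class);
# ASSOCIATIONS plays the role of an "association collection"
# (intent -> sample-utterance vocabulary).

TAXONOMY = {"desk": "office furniture", "chair": "office furniture"}
ASSOCIATIONS = {
    "order_furniture": {"want", "need", "desk", "chair", "office", "furniture"},
    "reset_password": {"reset", "forgot", "password", "login"},
}

def determine_crux(text: str) -> str:
    """Score each intent by token overlap, expanding tokens via the taxonomy."""
    tokens = set(text.lower().split())
    # Expand tokens with their broader taxonomy classes (cf. "desk" ->
    # "office furniture" in the quoted paragraph [0229]).
    for tok in list(tokens):
        if tok in TAXONOMY:
            tokens.update(TAXONOMY[tok].split())
    # Pick the intent whose associated sample vocabulary overlaps the most.
    return max(ASSOCIATIONS, key=lambda intent: len(tokens & ASSOCIATIONS[intent]))

print(determine_crux("I want a desk"))  # -> order_furniture
```

A real system would use annotated utterance trees and similarity scoring rather than bag-of-words overlap; the sketch only illustrates how taxonomy data and stored associations jointly select an intent.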
Regarding claim 20, Narendula in view of Cavalin teaches all the elements of claim 8, therefore is rejected for the same reasons as those presented for claim 8. Narendula in view of Cavalin further teaches: provide the crux of the text data for display to a user device (Narendula, paragraph [0085] “the NLU framework 104 processes a received user utterance 122 to extract intents/entities 140 based on the intent/entity model 108…virtual agent utterances 124 in response to the received user utterance 122.” [0086] “Additionally, it should be noted that, while the user utterance 122 and the agent utterance 124 are discussed herein as being conveyed using a written conversational medium or channel (e.g., chat, email, ticketing system, text messages, forum posts), in other embodiments, voice-to-text and/or text-to-voice modules or plugins could be included to translate spoken user utterance 122 into text and/or translate text-based agent utterance 124 into speech to enable a voice interactive system, in accordance with the present disclosure” – teaches processing received text data to extract intents/entities that represent the core meaning of the utterance and generating corresponding virtual agent utterances that are conveyed to the user through chat or other written interfaces. Under BRI, the extracted intents/entities constitute the crux of the text data, and conveying the corresponding output to the user corresponds to providing the crux of the text data for display to a user device as recited.); or perform a search for a topic based on the crux of the text data (Narendula, paragraph [0053] “As used herein, an “intent” refers to a desire or goal of a user which may relate to an underlying purpose of a communication, such as an utterance.
As used herein, an “entity” refers to an object, subject, or some other parameterization of an intent.” [0112] “the meaning search system 152 compares (block 284) the subtree of the meaning representation 162 to the meaning representations 158 of the understanding model 157, based on the contents of the compilation model template 244, to generate corresponding intent-subtree similarity scores 285 using the tree-model comparison algorithm 272.” – teaches that the extracted intent represents the underlying purpose of the text data and further teaches comparing the meaning representation of the utterance to stored intent representations to generate similarity scores. Under BRI, comparing the extracted meaning to stored topic/intent representations corresponds to performing a search for a topic based on the crux of the text data as recited.). determine a customer journey, issue, or need based on the crux of the text data (Cavalin, paragraph [0027] “system and/or method can generate training examples to create new intents for chatbots, helping chatbot developers to create new intents and update the content of a chatbot, using controllable text generation, relying on knowledge from previously-created intents, to suggest samples to be used to train an intent classifier for the new intents. For example, a user need only input high-level meta-knowledge into the system, and the system may search for and suggest possible new intents based on the similarity of the meta-knowledge with existing content (e.g., questions that have already been curated by subject matter experts for question generation for other topics), also verify the proposed examples, and suggest documents that can be used to create chatbot responses.” – teaches receiving high-level meta-knowledge representing the subject matter or purpose of a communication and searching for and suggesting new intents based on the similarity of that meta-knowledge with existing content.
Because an intent represents the user’s desired action or objective, identifying and suggesting the appropriate intent necessarily determines the user’s underlying need or issue expressed in the text. Under BRI, determining the appropriate intent based on the extracted meaning corresponds to determining a customer journey, issue, or need based on the crux of the text data as recited.). identify a category for the text data based on the crux of the text data (Narendula [0084] “The shared NLU annotator 127 performs semantic parsing… and returns annotated utterance trees of the utterance 122 to the NLU predictor 128 of client instance 42. The NLU predictor 128 then uses these annotated structures of the utterance 122, discussed below in greater detail, to identify matching intents from the intent/entity model 108” [0075] “The intent/entity model 108 stores associations or relationships between particular intents and particular sample utterances” [0112] “the meaning search system 152 compares (block 284) the subtree of the meaning representation 162 to the meaning representations 158 of the understanding model 157, based on the contents of the compilation model template 244, to generate corresponding intent-subtree similarity scores 285” [0229] “an example set of taxonomy lookup sources may be used to inference an example utterance, “I want a desk”, to extract taxonomy segmentations indicating that “desk” refers most specifically to a “desk” class or category, which may be part of a broader “office furniture” class or category” – Narendula teaches performing semantic parsing of a user utterance and identifying matching intents from the intent/entity model based on meaning representation comparison and similarity scoring. Narendula further teaches extracting taxonomy segmentations indicating that a term in the utterance refers to a specific “class or category,” including hierarchical broader categories.
Because the system identifies a matching intent and extracts a corresponding class or category based on semantic interpretation of the utterance, it identifies a category for the text data based on its core meaning. Under BRI, identifying the matching intent or taxonomy class corresponding to the underlying meaning of the text constitutes identifying a category for the text data as recited.); enable a content creator to create a document based on the crux of the text data; or retrain the machine learning model based on the crux of the text data (Cavalin, paragraph [0027] “system and/or method can generate training examples to create new intents for chatbots, helping chatbot developers to create new intents and update the content of a chatbot, using controllable text generation, relying on knowledge from previously-created intents, to suggest samples to be used to train an intent classifier for the new intents. For example, a user need only input high-level meta-knowledge into the system, and the system may search for and suggest possible new intents based on the similarity of the meta-knowledge with existing content (e.g., questions that have already been curated by subject matter experts for question generation for other topics), also verify the proposed examples, and suggest documents that can be used to create chatbot responses.” [0037] “At 120, the pairs of training examples (e.g., questions and answer), representing the new intents, can be validated with external documents, and can be shown to the SME in the curation graphical user interface (GUI) 122.
Those pairs include a set of training examples for the new intents, and an answer either generated with the EGM or from a search on external documents.” – teaches identifying intents based on meta-knowledge and further teaches generating and suggesting documents and example content to a user through a graphical interface for use in chatbot responses. Under BRI, enabling a user to generate or curate documents and response content based on the identified intent corresponds to enabling a content creator to create a document based on the crux of the text data as recited.). Claims 2 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Narendula (Pub. No.: US 20220229986 A1 (Filed: 2022)) in view of Cavalin et al. (Pub. No.: US 20230092274 A1 (Filed: 2021)), further in view of Varghese et al. (NPL: “Lexical And Semantic Analysis of Sacred Texts Using Machine Learning and Natural Language Processing” (Published: 2019)). Regarding claim 2, Narendula in view of Cavalin teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Narendula in view of Cavalin further teaches: performing a stop-word removal technique on the taxonomy data to generate the preprocessed data, performing a bad character removal technique on the taxonomy data to generate the preprocessed data (Narendula, paragraph [0156] “defines a preprocessing subsystem 1070 that is designed to prepare source data…may also be designed to prepare an incoming user utterance (or a sub-phrase thereof) to be inferenced…Example plugins for the illustrated preprocessing subsystem include tokenizers 1072, data cleansers 1074, or any other suitable preprocessors.
A non-limiting list of example preprocessing may include, but is not limited to: removal of punctuation or other characters, removal of stop words”); performing an abbreviation regular expression technique on the taxonomy data to generate the preprocessed data (Narendula, paragraph [0127] “However, it is appreciated that many utterances exchanged in different conversational channels (e.g., chat rooms, forums, emails) may demonstrate different diction, such as slang terms, abbreviated terms, acronyms, and so forth. With this in mind, the continual learning loop illustrated in FIG. 15 enables the word vector distribution model 342 to be modified to include new word vectors, and to change values of existing word vectors, based on source data gleaned from the growing collections of user and agent utterances 122 and 124, to become more adept at generating annotated utterance trees 166 that include these new or changing terms.”); performing a placeholder replace technique on the taxonomy data to generate the preprocessed data (Narendula, paragraph [0156] “example preprocessing may include, but is not limited to…reformatting or reorganizing source data… data cleansers 1074 of the preprocessing subsystem 1070 may take a full name column from an employee table in a particular format (e.g., “Last, First”) and generate a cleansed and tokenized data set including all of the first names of the employees in a first column and all of the last names of the employees in a second column (e.g., “First”, “Last”).” – discloses a preprocessing subsystem that includes data cleansers configured to reformat or reorganize structured source data. Narendula further teaches transforming formatted textual fields (e.g., “Last, First”) into normalized structured components (e.g., separate “First” and “Last” fields) during preprocessing.
Under BRI, replacing structured formatted text patterns with normalized representations corresponds to performing a placeholder replace technique on the taxonomy data to generate preprocessed data. Accordingly, Narendula discloses performing a placeholder replace technique as required by the limitation.); performing a custom noun entity technique on the taxonomy data to generate the preprocessed data (Narendula, paragraph [0243] “the lookup source system 1016 may limit or restrict taxonomy lookup source inference to tokens of the parsed utterances 1556 that were recognized and tagged as being either nouns or noun-phrases by the NLU system 1012. It is presently recognized that this enables the NLU system 1012 to appropriately disambiguate certain intents (e.g., “access” as a verb) and entities (e.g., “ACCESS” as a noun representing a colloquial name of a software entity).” – teaches limiting taxonomy lookup inference to tokens recognized and tagged as nouns or noun-phrases, and further discloses disambiguating context-specific noun entities, such as distinguishing “access” as a verb from “ACCESS” as a noun representing a software entity. Under BRI, selectively identifying noun and noun-phrase tokens and disambiguating noun entities corresponds to performing a custom noun entity technique on the taxonomy data to generate preprocessed data. Accordingly, Narendula discloses performing a custom noun entity technique as required by the limitation.); However, Narendula in view of Cavalin does not teach, but Narendula in view of Cavalin further in view of Varghese teaches, the following limitation: performing a lemmatization technique on the taxonomy data to generate the preprocessed data (Varghese, [section 3, table 1] “Lemmatization captures canonical forms based on a word’s lemma. TABLE 1 shows examples of stemming and Lemmatization.
Text normalization is also known as text cleansing or wrangling which creates a standardized textual data from raw text using Natural Language Processing and Analytics Systems. Text normalization includes…Lemmatized text” – teaches performing lemmatization as part of text preprocessing and normalization, including reducing words to their canonical lemma forms. The reference explicitly identifies lemmatization as a preprocessing technique and describes converting inflected forms to their base forms. Under BRI, performing lemmatization on textual data corresponds to performing a lemmatization technique on taxonomy data to generate the preprocessed data as required by the claim.). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Narendula, Cavalin, and Varghese before them, to incorporate the use of lemmatization as a preprocessing technique, as taught by Varghese, into the taxonomy-data preprocessing pipeline of Narendula prior to generating training data within the machine learning framework of Cavalin. One would have been motivated to make such a combination in order to normalize different inflected forms of words into their canonical base forms, thereby improving the consistency of the textual data used to generate and train intent models. This would allow more accurate and reliable machine learning training by reducing linguistic variation and noise in the training data supplied to the model. Regarding claim 16, Narendula in view of Cavalin teaches all the elements of claim 15, therefore is rejected for the same reasons as those presented for claim 15. The claim recites similar limitations corresponding to claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale. Claims 3, 14, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Narendula (Pub.
No.: US 20220229986 A1 (Filed: 2022)) in view of Cavalin et al. (Pub. No.: US 20230092274 A1 (Filed: 2021)), further in view of Liu et al. (NPL: “Coreference-Aware Dialogue Summarization” (Published: 2021)). Regarding claim 3, Narendula in view of Cavalin teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1. Narendula in view of Cavalin further teaches: processing the preprocessed data, with a semantic and dependency parsing model, to generate a second portion of the accelerated data (Narendula, paragraph [0084] “the NLU predictor 128 passes the utterance 122 and the intent/entity model 108 to the shared NLU annotator 127 for parsing and annotation of the utterance 122. The shared NLU annotator 127 performs semantic parsing, grammar engineering, and so forth, of the utterance 122 based on the intent/entity model 108 and returns annotated utterance trees of the utterance 122 to the NLU predictor 128” – Narendula teaches processing textual data with an NLU annotator that performs semantic parsing and grammar-based analysis to generate annotated utterance trees representing structured syntactic and semantic relationships of the utterance. These annotated utterance trees constitute structured parsed output derived from the input text, thereby meeting the limitation of processing the preprocessed data with a semantic and dependency parsing model to generate a second portion of the accelerated data under a broadest reasonable interpretation.); However, Narendula in view of Cavalin does not teach, but Narendula in view of Cavalin further in view of Liu teaches, the following limitations: processing the preprocessed data, with a coreference resolution model, to generate a first portion of the accelerated data (Liu, [section 3] “automatic coreference resolution is needed to process the samples.
Neural approaches (Joshi et al., 2020) have shown impressive performance on document coreference resolution… we observed some common issues… Based on the observation, to improve the overall quality of dialogue coreference resolution, we conducted data post-processing on the automatic output: (1) First, we applied a model ensemble strategy to obtain more accurate cluster predictions; (2) Then, we re-assigned coreference cluster labels to the words with speaker roles that were not included in any chains; (3) Moreover, we compared the clusters and merged those that presented the same coreference chain.” – Liu teaches processing dialogue text with a neural coreference resolution model to generate coreference clusters and merged coreference chains as structured output derived from the text. These coreference clusters and merged chains constitute processed output data generated from the input text, thereby meeting the limitation of processing the preprocessed data with a coreference resolution model to generate a first portion of the accelerated data.); processing the preprocessed data, with a summarization model, to generate a third portion of the accelerated data (Liu, [section 4] “In this section, we adopt a neural model for abstractive dialogue summarization, and investigate various methods to enhance it with the coreference information obtained in Section 3. The base neural architecture is a sequence-to-sequence model Transformer… Given a conversation containing n tokens T = {t1, t2, …, tn}, a self-attention-based encoder is used to produce the contextualized hidden representations H = {h1, h2, …, hn}, then an autoregressive decoder generates the target sequence O = {w1, w2, …, wk} sequentially.” – Liu teaches applying a neural sequence-to-sequence Transformer model for abstractive dialogue summarization, wherein the model processes dialogue text and generates a summarized target sequence as output.
The generated summarized sequence constitutes processed output data derived from the input text, thereby meeting the limitation of processing the preprocessed data with a summarization model to generate a third portion of the accelerated data under the broadest reasonable interpretation.). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Narendula, Cavalin, and Liu before them, to incorporate the use of semantic parsing and annotated utterance trees, as taught by Narendula, together with the use of coreference resolution and dialogue summarization models, as taught by Liu, into the intent-generation and chatbot-training framework of Cavalin. One would have been motivated to make such a combination in order to improve the structural and contextual organization of the training data used to generate and refine intent models, thereby enhancing the coherence and quality of conversational understanding across different contexts. This would allow more precise and reliable intent modeling by supplying Cavalin’s machine learning system with parsed, coreference-resolved, and summarized dialogue data. Regarding claim 14, Narendula in view of Cavalin teaches all the elements of claim 8, therefore is rejected for the same reasons as those presented for claim 8. Narendula in view of Cavalin does not teach, but Narendula in view of Cavalin further in view of Liu teaches, the following limitation: wherein the crux of the text data is an abstractive summarization of the text data (Liu, [Introduction] “Therefore, in this paper, we propose to improve abstractive dialogue summarization by explicitly incorporating coreference information.
Since entities are linked to each other in coreference chains, we postulate adding a graph neural layer could readily characterize the underlying structure, thus enhancing contextualized representation.” – As previously mapped, Narendula determines the crux of the text data by identifying the matching intent through semantic parsing and meaning comparison, and Cavalin teaches that an intent represents the underlying purpose or goal of the communication. Thus, the crux corresponds to the core semantic meaning of the text data. Liu teaches performing abstractive dialogue summarization to generate a contextualized representation that captures the underlying structure and meaning of conversational text. Because an abstractive summary distills the essential meaning of the text into a concise representation, representing the previously identified crux in summarized form corresponds to the claim limitation.). It would have been obvious to apply Liu’s abstractive summarization techniques to the intent-determination systems of Narendula and Cavalin in order to present the identified core meaning of the utterance in a concise and structured manner, thereby improving the clarity and usability of chatbot outputs. Regarding claim 17, Narendula in view of Cavalin teaches all the elements of claim 15, therefore is rejected for the same reasons as those presented for claim 15. The claim recites similar limitations corresponding to claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daravanh Phakousonh, whose telephone number is (571) 272-6324. The examiner can normally be reached Mon - Thurs 7 AM - 5 PM, every other Friday 7 AM - 4 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B. Zhen, can be reached at 571-272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Daravanh Phakousonh/Examiner, Art Unit 2121 /Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121
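The preprocessing techniques at issue in claims 2 and 16 (stop-word removal, bad-character removal, abbreviation replacement, lemmatization) can be sketched minimally as follows. The rules, word lists, and the toy suffix-stripping "lemmatizer" are invented stand-ins for illustration, not the applicant's or the cited references' implementations:

```python
# Hypothetical sketch of the claim 2 preprocessing pipeline.
import re

STOP_WORDS = {"a", "an", "the", "is", "to", "i"}
ABBREVIATIONS = {"pls": "please", "acct": "account"}

def preprocess(text: str) -> list[str]:
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())          # bad-character removal
    tokens = [ABBREVIATIONS.get(t, t) for t in text.split()]  # abbreviation replacement
    tokens = [t for t in tokens if t not in STOP_WORDS]       # stop-word removal
    # Toy lemmatization: strip a plural "s"; a real lemmatizer uses a lexicon
    # (e.g., WordNet) to map inflected forms to canonical lemmas.
    return [t[:-1] if t.endswith("s") and len(t) > 3 else t for t in tokens]

print(preprocess("Pls reset the accts!"))  # -> ['please', 'reset', 'acct']
```

Each stage corresponds to one of the alternative techniques recited in claim 2; the rejection maps all but lemmatization to Narendula's preprocessing subsystem and lemmatization to Varghese.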

Prosecution Timeline

May 12, 2023
Application Filed
Feb 20, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572821
ACCURACY PRIOR AND DIVERSITY PRIOR BASED FUTURE PREDICTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

1-2
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+100.0%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
