Prosecution Insights
Last updated: April 19, 2026
Application No. 18/764,424

Auto-Learning Chatbot Scenarios

Non-Final OA: §101, §103, §112

Filed: Jul 05, 2024
Examiner: AZIZ, SHEZA ABDUL
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Text S.A.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal +0% lift, with vs. without interview, across resolved cases with interview)
Avg Prosecution (typical timeline): 2y 9m
Total Applications: 6 across all art units (6 currently pending)

Statute-Specific Performance

§101: 20.0% (-20.0% vs TC avg)
§103: 65.0% (+25.0% vs TC avg)
§102: 5.0% (-35.0% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “108” in Figure 1 has been used to designate both the “User Interface” block and the “Scenario View” block.

The drawings are objected to under 37 CFR 1.83(a) because Figure 4 fails to show the “Suggested response” block 207, as described in the specification at paragraph [0042]. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification

The disclosure is objected to because of the following informalities: the specification does not always match what is illustrated in the drawings. In ¶ [0027], the specification does not describe “Data collection” block 104 in FIG. 1, but it does state, “The data source (103) collects data and sends it to storage (105).” Based on FIG. 1, that sentence should read, “The data collection (104) collects data and sends it to storage (105).” Also in ¶ [0027], the user interface block (107) is illustrated as user interface block (108), the statistics block (108) is illustrated as statistics block (107), and the scenario view block and statistics block are both referenced as element 108. In ¶ [0030], the data processing block (202) is illustrated in FIG. 2 as block 201, the sentiment analysis block (205) is illustrated as block 204, the question detection block (206) is illustrated as block 205, and the response matching block (207) is illustrated as block 206. Also in ¶ [0030], the response matching block and the suggested response block are both referenced as element 207. ¶¶ [0032]-[0033] indicate that the description applies to FIG. 2, but they describe elements included in FIG. 3. ¶¶ [0034] and [0041] describe data processing block (202), which is illustrated in FIGS. 3 and 4 as block 201, and the sentiment analysis block (205), which is illustrated in FIGS. 3 and 4 as block 204. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C.
112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4, 8, 9, 13, 14, 16, and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 4 and 13 recite the limitations “the emotional tone” and “the user behavior” in line 2. There is insufficient antecedent basis for these limitations in the claims; claim 1 does not introduce either of these terms.

Claims 8 and 17 recite “the machine learning techniques,” which is not introduced in claims 1 and 10. There is insufficient antecedent basis for this limitation in the claims.

Claims 9 and 14 recite “the dataset” and “the likelihood,” neither of which has a proper antecedent basis.

Claim 16 recites “the Natural Language Processing methods,” which is not introduced in claim 10. There is insufficient antecedent basis for this limitation in the claim.

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed.
A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 5 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements. In this case, all the limitations of claim 5 are already recited in claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a mental process without significantly more.
Claims 1 and 10 recite a method and a system for enabling the analysis of collected content to search for patterns and generate automated responses, the system comprising: a database configured to store collected content, wherein the collected content comprises conversations between an agent and an end user (this data can be gathered by a human, such as a paper database of conversation logs); a processing module configured to: classify the content of the conversations to determine whether the sentiment of a message is a question; analyze the classified content based on a plurality of assumptions, including selection of the most appropriate answers where the end user rated the response as helpful, selection of the answers after which the user closed the communication, and selection of the answers after which the response from the user is immediate (this can be analyzed by a human, and a human can determine whether the response was helpful based on the listed scenarios); transform the classified content into vector form using a sentence-transformer model; cluster the vectorized content based on a similarity function between sentences; generate responses for the clustered content by identifying the most relevant and accurate response within each cluster (these processes can be handled by a human, using pencil and paper, to transform content into vectors, cluster the vectors, and then select the most common response); an output module configured to provide the generated responses to the user; and a communication interface configured to retrieve data from an instant communication channel via an API (this can be generated by a human, i.e., a response from a person). As described above, these limitations can be carried out as a series of mental steps by a human.
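For context, the vectorize-cluster-select pipeline recited in claims 1 and 10 can be sketched as follows. The toy character-sum embedding and the 0.6 similarity threshold are illustrative assumptions standing in for the application's actual sentence-transformer model and its (unspecified) similarity function.

```python
import math

def embed(sentence):
    # Toy stand-in for a sentence-transformer: fold a bag of words
    # into a fixed-size vector by character-code sum. Illustrative only.
    vec = [0.0] * 16
    for word in sentence.lower().split():
        vec[sum(ord(c) for c in word) % 16] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster(sentences, threshold=0.6):
    # Greedy single-pass clustering on cosine similarity between
    # sentence vectors (an assumed, simplified similarity function).
    clusters = []  # list of (representative_vector, [member sentences])
    for s in sentences:
        v = embed(s)
        for rep, members in clusters:
            if cosine(v, rep) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((v, [s]))
    return [members for _, members in clusters]

def best_response(cluster_members, helpful_ratings):
    # Pick the response end users rated most helpful within a cluster.
    return max(cluster_members, key=lambda s: helpful_ratings.get(s, 0))
```

A production version would replace `embed` with a real sentence-transformer and likely a proper clustering algorithm (e.g., agglomerative clustering), along the lines the cited Hemington reference describes.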
In addition to the mental steps, claim 1 also describes a processing module, a sentence-transformer model, an output module, and a communication interface that communicates with a communication channel via an API. These are all general-purpose hardware and software being used as a tool to implement the abstract idea, so they do not describe a practical application or significantly more than a mental process.

Claims 2 and 11 recite a system and method wherein the processing module further comprises identifying patterns in user input using machine learning techniques to make decisions and learn from past conversations (a person can identify patterns in user input and then make decisions based on these patterns). These additional limitations do not prevent the process from being carried out as a mental process. This judicial exception is not integrated into a practical application because the only additional element recited is “machine learning techniques,” and this additional element is nothing more than general-purpose software. The claims do not include any additional elements that amount to significantly more than a mental process, for the same reason as discussed regarding the practical application.

Claims 3 and 12 recite wherein the processing module further comprises detecting behavioral patterns of the user using Natural Language Processing methods (a person can detect behaviors of the user). These additional limitations do not prevent the process from being carried out as a mental process. This judicial exception is not integrated into a practical application because the only additional element recited is the Natural Language Processing methods, and this additional element is nothing more than an instruction to apply the mental process using a general-purpose software model. The claims do not include any additional elements that amount to significantly more than a mental process, for the same reason as discussed regarding the practical application.
Claims 4 and 13 recite wherein the processing module further comprises performing sentiment assessment to analyze the emotional tone of the user's behavior (analyzing the emotional tone of the user can be done by a person). These additional limitations do not prevent the process from being carried out as a mental process. The claims have no additional elements and therefore do not describe a practical application or significantly more than a mental process.

Claim 5 recites that the assumptions include at least one of: selection of the most appropriate answers where the end user rated the response as helpful; selection of the answers after which the user closed the communication; and selection of the answers after which the response from the user is immediate (this can be analyzed by a human, and a human can determine whether the response was helpful based on the listed criteria). This claim has no additional elements and therefore does not describe a practical application or significantly more than a mental process.

Claim 6 recites comprising storage means for storing system elements on both cloud storage and on-premises physical storage (a person can store system elements, either by remembering them or by writing them down with pencil and paper). This judicial exception is not integrated into a practical application and does not describe significantly more because the only additional elements recited are cloud storage and on-premises physical storage, and these additional elements are nothing more than general-purpose hardware.

Claims 7 and 15 recite comprising a user interface configured to allow a user to view and evaluate scenario elements and preview scenario statistics (a person can evaluate the responses given and determine, from previous analysis, which response would be more feasible). This additional limitation does not prevent the process from being carried out as a mental process.
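The sentiment assessment recited in claims 4 and 13 (scoring the emotional tone of a user's messages) can be illustrated with a minimal lexicon-based sketch; the word lists and scoring rule below are illustrative assumptions, not the application's disclosed method.

```python
# Hypothetical toy lexicons; a real system would use a trained model.
POSITIVE = {"great", "helpful", "thanks", "perfect", "good"}
NEGATIVE = {"bad", "useless", "angry", "terrible", "wrong"}

def emotional_tone(message):
    """Classify a message's emotional tone from lexicon word counts.

    Returns 'positive', 'negative', or 'neutral'.
    """
    words = [w.strip(".,!?").lower() for w in message.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```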
This judicial exception is not integrated into a practical application and does not describe significantly more because the only additional element recited is the user interface, and this additional element is nothing more than general-purpose hardware being used as a tool.

Claims 8 and 17 recite wherein the machine learning techniques include the use of a Naive Bayesian Classifier and clustering algorithms based on cosine similarity (these can be done by a human manually calculating according to the Naive Bayesian Classifier and cosine-similarity clustering algorithms). This additional limitation does not prevent the process from being carried out as a mental process. This judicial exception is not integrated into a practical application and does not describe significantly more because the only additional element recited is a machine learning technique (described at a high level), and this additional element is nothing more than a tool to apply the mental process using general-purpose software.

Claims 9 and 14 recite wherein, upon assuming an answer is helpful, the system updates the dataset to enhance the likelihood of selecting similar responses in future interactions (a person can determine whether an answer is helpful and then, based on that, provide a similar response in future interactions). This additional limitation does not prevent the process from being carried out as a mental process. The claims have no additional elements and therefore do not describe a practical application or significantly more than a mental process.

Claim 16 recites wherein the Natural Language Processing methods include language detection, sentence segmentation, and part-of-speech tagging (a person can do language detection, sentence segmentation, and part-of-speech tagging). This additional limitation does not prevent the process from being carried out as a mental process.
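For reference, the Naive Bayesian Classifier named in claims 8 and 17 reduces to counting word frequencies per class and comparing smoothed log-probabilities; a minimal multinomial sketch follows (Laplace smoothing is an assumed, conventional detail, and the training data is purely illustrative).

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial Naive Bayes text classifier."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, text):
        words = text.lower().split()
        best_label, best_score = None, -math.inf
        for label, n_docs in self.label_counts.items():
            # log P(label) + sum of log P(word | label), Laplace-smoothed
            total = sum(self.word_counts[label].values())
            score = math.log(n_docs / sum(self.label_counts.values()))
            for w in words:
                p = (self.word_counts[label][w] + 1) / (total + len(self.vocab))
                score += math.log(p)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

Applied to chat messages, such a classifier could label an incoming message as, e.g., a question versus a greeting before response matching.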
This judicial exception is not integrated into a practical application and does not describe significantly more because the only additional element recited is a Natural Language Processing technique (described at a high level), and this additional element is nothing more than a tool to apply the mental process using general-purpose software.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 3, and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Shanmugam (US20190311036), in further view of Taubman (US9679568), and in further view of Hemington (US20240320251).

Regarding claim 1, Pasupalak discloses a system for enabling the analysis of collected content to search for patterns and generate automated responses, the system comprising: a database configured to store collected content, wherein the collected content comprises conversations between an agent and an end user - [0140 “As mentioned above, Delegate Service 108 may receive user query 302 and may communicate user query 302, relevant metadata and/or a modified user query 302 to other modules/managers/services of the present invention. In one embodiment, Delegate Service 108 directs user query 302 to NLP Engine 114 to extract a representation of the intent of user, an associated command, and one or more parameters.
NLP Engine 114 may return the derived information representing the user intent back to the Delegate Service 108 for further processing and/or store the information in the Topic Board 1830”]. [0141 “Topic Board 1830 may be a database, a data structure, instantiated objects, a log file, and the like. Topic Board 1830 may be used by the Delegate Service 108 to store rich information about a user conversation, user session, and/or user history”]. a processing module configured to: classify the content of the conversations to determine whether the sentiment of a message is a question- [0071 “In one embodiment, NLP Engine 114 receives a user query 302 as described below and derives the intention of the user. NLP Engine 114 may identify a domain, a subgroup (also referred to as a subdomain), one or more tasks (also referred to as actions and/or commands) according to the derived intention of the user, and one or more entities (also referred to as parameters) that may be useful to accomplish the one or more tasks. As an example, interaction, a user expresses the query 302 "Find me a flight from Toronto to New York leaving in a week". The above query 302 may be classified by NLP Engine 114 as relating to the domain TRAVEL, the subgroup of flights. NLP Engine 114 may further relate the user query 302 to tasks to be performed such as "find flights" and may be "book flights", and may further identify the entities "Toronto", "New York", as well as the departure date. The process of identifying the domain, subgroup, one or more task, and entities associated with a user query 302 is generally referred to herein as deriving the user intent. NLP Engine 114 may create a representation of the derived user intent by creating a software object such as a template 719 and/or by saving the intent to temporary and/or permanent memory. 
As described further in this specification, the Conversational Agent 150 may attempt to elicit additional entity information from the user, such as in this example interaction, a particular airline, the return date, the class of the ticket, number of tickets, number of stops allowed, time of the departure and return flights, and the like”]; [0072 “Dialogue driver 306 (i.e. Delegate Service 108), which may be a component of Dialogue Manager 116, receives user query 302 for processing and provides user query 302 to question type classifier 314”]. a processing module configured to analyze the classified content based on a plurality of assumptions - [0108 “In one of the analyses (Naïve Bayes classifier 608), the user query 302 is provided to a Bayes-theorem based classifier with strong independence assumptions to perform document classification. The naïve Bayes classifier determines a probability that a particular user query (set of features) belongs (i.e. is associated with) a particular class (i.e. command). The naïve Bayes classifier may be trained using a training set of known queries and associated commands”]. an output module configured to provide the generated responses to the user - [0139 “Dialogue Manager 116 maintains conversation/system state and generates responses (output 304) based on the state of the conversation, the current domain being discussed by the user, entities that may need to be filled (by eliciting clarification questions), response from services 118,120, and the like”]; [0082 “Dialogue Manager 116 and Display Manager 142 provide output 304 for smartphone 102 also as described below. Smartphone 102 may have a queue manager 107 that receives output 304 from cloud-based service infrastructure 104”].
and a communication interface configured to retrieve data from an instant communication channel via an API - [0169 “In the process, NLP Engine 114 may make assumptions and/or refer to user preferences regarding other entities such as FROM city, LUXURY, CARRIER, and the like. Services Manager 130 places template 319 onto the Topic Board 1830 at entry 4310 before interfacing with external service 118. Services Manager 130 then interfaces (via an API) with external service 118, which does not immediately return a result”].

However, Pasupalak does not teach analyzing the classified content based on a plurality of assumptions, including selection of the most appropriate answers where the end user rated the response as helpful, selection of the answers after which the user closed the communication, and selection of the answers after which the response from the user is immediate. Shanmugam, however, teaches selection of the most appropriate answer where the end user rated the response as helpful - [0053 “Upon operations at 704 communicating to the end user device the chatbot response to the matching intent, operations at 705 can initiate an end user feedback loop, to obtain user input regarding the user's experience. Operations at 705 can include, for example, sending to the end user device an inquiry regarding the user's opinion on the response sent at 704. Example forms for the inquiry can include, but are not limited to, a binary type, e.g., “Thumbs Up/Thumbs Down” or similarly worded pair of checkboxes. User response to that inquiry can be received at 706”]; [0054 “Referring to FIG. 7, one alternative or supplement to the above-described “Thumbs Up/Thumbs Down” user experience query form can include a “That Did Not Help Me” query”].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shanmugam with those of Pasupalak because incorporating the feedback-based evaluation techniques of Shanmugam into the conversational agent system of Pasupalak would enable the conversational agent system to assess response quality and improve future interactions based on feedback.

Pasupalak in view of Shanmugam does not teach selection of the answers after which the user closes the communication and selection of the answers after which the response from the user is immediate. However, Taubman does teach selection of the answers after which the user closes the communication and selection of the answers after which the response from the user is immediate - [Column 3, lines 50-60 “FIGS. 1A-2E illustrate an overview of example implementations described herein. For example, as shown in FIG. 1A, a user 105 may ask a question, “Who invented the telephone?” to a user device 110. As shown in FIG. 1B, the user device 110 may answer “Alexander Graham Bell.” As shown in FIG. 1C, the user 105 may provide feedback, such as speaking the word “Thanks.” Based on this feedback being characterized as positive feedback, and as shown in FIG. 1D, the user device 110 may store information indicating that the answer “Alexander Graham Bell” is a good answer to the question “Who invented the telephone?””]. When the user says “thanks” and the answer is acceptable or a good answer, no follow-up answer is needed and the conversation ends. Under the broadest reasonable interpretation, the word “thanks” is understood as indicating an immediate response and closing the communication, since the user does not ask a follow-up question or continue the conversation.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Pasupalak in view of Shanmugam to incorporate the teachings of Taubman because incorporating the data updating mechanisms of Taubman into the conversational agent of Pasupalak would help assess response quality and update stored conversational data so that future interactions provide improved responses. This would improve response accuracy, provide relevancy, and improve overall conversational quality in future interactions.

Additionally, Pasupalak as modified above does not teach a processing module configured to transform the classified content into vector form using a sentence-transformer model; cluster the vectorized content based on a similarity function between sentences; and generate responses for the clustered content by identifying the most relevant and accurate response within each cluster. However, Hemington does teach this - [0033 “The present application discloses techniques for query resolution that address the abovementioned technical limitations. More particularly, a system for automatically generating query responses using an LLM is described. When an inbound message is received by the system, a query (e.g., issue, question, etc.) is extracted from the message. Clustering is performed on the queries that are received by the system to create clusters of similar queries. An LLM is employed to refine the clusters. Specifically, an LLM may be instructed to verify whether the queries of a same cluster represent the “same” query and to identify any that are deemed to be dissimilar to other queries in the cluster. In this way, an LLM may facilitate distinguishing between queries in a same cluster whose embeddings are close together in a feature space but which may be semantically distinct.
The system may generate responses to an incoming query by matching the query to a particular one of the clusters and obtaining response messages based on data associated with the matching cluster”]; [0050 “The transformer 50 may be trained on a text corpus that is labelled (e.g., annotated to indicate verbs, nouns, etc.) or unlabelled. LLMs may be trained on a large unlabelled corpus. Some LLMs may be trained on a large multi-language, multi-domain corpus, to enable the model to be versatile at a variety of language-based tasks such as generative tasks (e.g., generating human-like natural language responses to natural language input”]; [0051 “An example of how the transformer 50 may process textual input data is now described. Input to a language model (whether transformer-based or otherwise) typically is in the form of natural language as may be parsed into tokens”]; [0052 “In FIG. 3, a short sequence of tokens 56 corresponding to the text sequence “Come here, look!” is illustrated as input to the transformer 50. Tokenization of the text sequence into the tokens 56 may be performed by some pre-processing tokenization module such as, for example, a byte pair encoding tokenizer (the “pre” referring to the tokenization occurring prior to the processing of the tokenized input by the LLM), which is not shown in FIG. 9 for simplicity. In general, the token sequence that is inputted to the transformer 50 may be of any length up to a maximum length defined based on the dimensions of the transformer 50 (e.g., such a limit may be 2048 tokens in some LLMs). Each token 56 in the token sequence is converted into an embedding vector 60 (also referred to simply as an embedding). An embedding 60 is a learned numerical representation (such as, for example, a vector) of a token that captures some semantic meaning of the text segment represented by the token 56”]; [0053 “The generated embeddings 60 are input into the encoder 52. 
The encoder 52 serves to encode the embeddings 60 into feature vectors 62 that represent the latent features of the embeddings.”]; [0055 “Although a general transformer architecture for a language model and its theory of operation have been described above, this is not intended to be limiting. Existing language models include language models that are based only on the encoder of the transformer or only on the decoder of the transformer. An encoder-only language model encodes the input text sequence into feature vectors that can then be further processed by a task-specific layer (e.g., a classification layer). BERT is an example of a language model that may be considered to be an encoder-only language model. A decoder-only language model accepts embeddings as input and may use auto-regression to generate an output text sequence. Transformer-XL and GPT-type models may be language models that are considered to be decoder-only language models”]; [0072 “The clustering module 118 may perform clustering using the vector embeddings that are generated by the embedding module 116. In particular, the clustering module 118 may identify clusters in the embedding space. Clustering operations may be performed by implementing a suitable cluster model (e.g., connectivity model, centroid model, etc.) and clustering algorithm (e.g., DBSCAN, agglomerative clustering, spectral clustering, etc.). The clustering module 118 is configured to output information regarding clustering operations such as, for example, cluster labels, clustering algorithms, distance metric(s), linkage criterion, and cluster membership”]; [0085 “When an incoming query message is received (operation 210), the computing system identifies a query within the incoming message and matches said query to a particular cluster from the first or second clusters, in operation 212. The matched cluster contains previous queries that are semantically similar to the incoming query. 
The computing system then obtains one or more response messages for the incoming query. More particularly, the computing system may obtain generated responses based on providing, to the LLM, data associated with the matched cluster (operation 214). The cluster data for the matched cluster may include, for example, one or more responses that were previously provided by the computing system in reply to a query associated with the matched cluster. A response may comprise at least one solution to a question/issue. The responses (e.g., solutions) to previous queries may be stored, for example, in the query database in association with the corresponding queries. Additionally, or alternatively, the computing system may provide, to the LLM, one or more solution steps associated with the matched cluster. The computing system may optionally provide, to the LLM, input of text associated with one or more resource documents that are to the query of the matched cluster”].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Hemington into the system of Pasupalak because using the technique of clustering vectorized conversational data allows semantically similar queries to be grouped together and enables generation of more relevant responses. Incorporating clustering would enhance the conversational agent by improving its ability to generalize varied phrasings of similar user requests. This modification applies known techniques to achieve more efficient processing and improved conversational response handling.

Regarding Claim 2, Pasupalak discloses a system according to claim 1, wherein the processing module further comprises identifying patterns in user input using machine learning techniques to make decisions and learn from past conversations - [0074 “In an embodiment, the Conversational Agent 150 may map a specific command to one or more words contained in a user query.
In the above example, the Conversational Agent 150 may map the word "tell" or the phrase "tell Bob" with one or more commands such as an internal phone service. The Conversational Agent 150 may learn over time the behavior patterns and/or preferences of the user in relation to many commands. The Conversational Agent 150 may also learn the preferences and/or behavior patterns of a user in relation to performing a command for a specific parameter or class of parameters”]; [0084 “At step 404 in one embodiment, user query 302 may be subjected to binary classification such as via a support vector machine (SVM) for analysis. Other known types of binary classification may also be used alone or in combination such as decision trees, Bayesian networks, support vector machines, and neural networks. A support vector machine (SVM) is a concept in statistics and computer science for a set of related supervised learning methods that analyze data and recognize patterns, used for classification and regression analysis”]; [0248 “The Conversational Agent 150 may include a Learning Manager 128 for updating, training, and/or reinstating any of the modules used by the Conversational Agent 150. Modules that may be modified by the Learning Manager 128 include support vector machines, conditional random fields, naive Bayesian classifiers, random forest classifiers, neural networks, previous query score classifiers and the like”]; [0249 “Learning Manager 128 may update some or all of the intelligent modules of the invention periodically according to a set schedule and/or when initiated by an administrator. The Conversational Agent 150 may gather feedback from users based on their interaction with the Conversational Agent 150 for training purposes. Examples of how the Conversational Agent 150 uses feedback from user interaction are shown in FIGS. 11-17. For example, the Conversational Agent 150 may determine whether each outputted response was useful to the user. 
In one embodiment, the Learning Manager 128 of Conversational Agent 150 classifies each response as either "correct", "incorrect" and/or "neutral". Learning manager 128 may also assign a weight to each of the above categories such that a response is determined to be a certain percentage "correct" or "incorrect". In an example interaction, the user may express a query 302 of "Find me some French cuisine in St. Louis". ASR service 112 processes the voice query and provides a text representation of the query 302 to NLP Engine 114. NLP Engine 114 provides a template object 319 to Services Manager 130, the template object including the DOMAIN (in this example, RESTAURANTS) and several entities (St. Louis and "French"). Services Manager 130 determines an appropriate service 118 to perform the derived intention of the user calls that service (external service 118 in this example). External service 118 provides a response to Services Manager 130 which is presented to the user by the Ux Manager 103”]. Regarding Claim 3, Pasupalak discloses a system according to claim 1, wherein the processing module further comprises detecting behavioral patterns of the user using Natural Language Processing methods – [0074 “In an embodiment, the Conversational Agent 150 may map a specific command to one or more words contained in a user query. In the above example, the Conversational Agent 150 may map the word "tell" or the phrase "tell Bob" with one or more commands such as an internal phone service. The Conversational Agent 150 may learn over time the behavior patterns and/or preferences of the user in relation to many commands. The Conversational Agent 150 may also learn the preferences and/or behavior patterns of a user in relation to performing a command for a specific parameter or class of parameters”]. 
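By way of illustration only, the response-feedback weighting that Pasupalak describes at [0248]-[0249] (classifying each outputted response as "correct", "incorrect", or "neutral" and weighting those categories so a response is a certain percentage "correct") might be sketched as follows. The weights, class name, and response identifiers below are hypothetical and appear nowhere in the cited reference:

```python
# Illustrative sketch only (not part of any cited disclosure):
# tally user feedback per response and reduce it to a weighted
# "percent correct" score, in the spirit of Pasupalak [0249].
from collections import defaultdict

# Hypothetical weights for the three feedback labels.
FEEDBACK_WEIGHTS = {"correct": 1.0, "neutral": 0.5, "incorrect": 0.0}

class FeedbackTally:
    def __init__(self):
        self._votes = defaultdict(list)

    def record(self, response_id, label):
        # Reject labels outside the three categories named above.
        if label not in FEEDBACK_WEIGHTS:
            raise ValueError(f"unknown feedback label: {label}")
        self._votes[response_id].append(FEEDBACK_WEIGHTS[label])

    def percent_correct(self, response_id):
        # Average the recorded weights into a 0-100 score.
        votes = self._votes[response_id]
        if not votes:
            return None
        return 100.0 * sum(votes) / len(votes)

tally = FeedbackTally()
tally.record("r1", "correct")
tally.record("r1", "incorrect")
tally.record("r1", "correct")
print(tally.percent_correct("r1"))  # roughly 66.7
```

A learning manager of the kind described could then prefer responses whose score exceeds some threshold when selecting among candidate answers.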
Regarding claim 5, Pasupalak in view of Shanmugam and in further view of Taubman teaches the system according to claim 1, wherein the assumptions include at least one of: selection of the most appropriate answers where the end-user rated the response as helpful; selection of the answers after which the user closed the communication; and selection of the answers after which the response from the user is immediate. Claim 5 is rejected for the same reasons as claim 1. Claims 10, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Hemington (US20240320251). Regarding claim 10, Pasupalak discloses a method for enabling of collected content to search for patterns and generate automated responses, the method comprising: collecting a set of input data comprising conversations between an agent and an end user – [0140 “As mentioned above, Delegate Service 108 may receive user query 302 and may communicate user query 302, relevant metadata and/or a modified user query 302 to other modules/managers/services of the present invention. In one embodiment, Delegate Service 108 directs user query 302 to NLP Engine 114 to extract a representation of the intent of user, an associated command, and one or more parameters. NLP Engine 114 may return the derived information representing the user intent back to the Delegate Service 108 for further processing and/or store the information in the Topic Board 1830”]. classifying the content of the conversations to determine whether the sentiment of a message is a question – [0071 “In one embodiment, NLP Engine 114 receives a user query 302 as described below and derives the intention of the user. 
NLP Engine 114 may identify a domain, a subgroup (also referred to as a subdomain), one or more tasks (also referred to as actions and/or commands) according to the derived intention of the user, and one or more entities (also referred to as parameters) that may be useful to accomplish the one or more tasks. As an example, interaction, a user expresses the query 302 "Find me a flight from Toronto to New York leaving in a week". The above query 302 may be classified by NLP Engine 114 as relating to the domain TRAVEL, the subgroup of flights. NLP Engine 114 may further relate the user query 302 to tasks to be performed such as "find flights" and may be "book flights", and may further identify the entities "Toronto", "New York", as well as the departure date. The process of identifying the domain, subgroup, one or more task, and entities associated with a user query 302 is generally referred to herein as deriving the user intent. NLP Engine 114 may create a representation of the derived user intent by creating a software object such as a template 719 and/or by saving the intent to temporary and/or permanent memory. As described further in this specification, the Conversational Agent 150 may attempt to elicit additional entity information from the user, such as in this example interaction, a particular airline, the return date, the class of the ticket, number of tickets, number of stops allowed, time of the departure and return flights, and the like”]; [0072 “Dialogue driver 306 (i.e. Delegate Service 108), which may be a component of Dialogue Manager 116, receives user query 302 for processing and provides user query 302 to question type classifier 314”]. analyzing the classified content based on a plurality of assumptions – [0108 “In one of the analyses (Naïve Bayes classifier 608), the user query 302 is provided to a Bayes-theorem based classifier with strong independence assumptions to perform document classification. 
The naïve Bayes classifier determines a probability that a particular user query (set of features) belongs (i.e. is associated with) a particular class (i.e. command). The classifier naïve Bayes classifier may be trained using a training set of known queries and associated command”]. providing the generated responses to the user – [0139 “Dialogue Manager 116 maintains conversation/system state and generates responses (output 304) based on the state of the conversation, the current domain being discussed by the user, entities that may need to be filled (by eliciting clarification questions), response from services 118,120, and the like”]; [0082 “Dialogue Manager 116 and Display Manager 142 provide output 304 for smartphone 102 also as described below. Smartphone 102 may have a queue manager 107 that receives output 304 from cloud-based service infrastructure 104”]. However, Pasupalak does not teach transforming the classified content into vector form using a sentence-transformer model; clustering the vectorized content based on a similarity function between sentences; and generating responses for the clustered content by identifying the most relevant and accurate response within each cluster. Hemington, however, teaches these limitations – [0033 “The present application discloses techniques for query resolution that address the abovementioned technical limitations. More particularly, a system for automatically generating query responses using an LLM is described. When an inbound message is received by the system, a query (e.g., issue, question, etc.) is extracted from the message. Clustering is performed on the queries that are received by the system to create clusters of similar queries. An LLM is employed to refine the clusters. Specifically, an LLM may be instructed to verify whether the queries of a same cluster represent the “same” query and to identify any that are deemed to be dissimilar to other queries in the cluster. 
In this way, an LLM may facilitate distinguishing between queries in a same cluster whose embeddings are close together in a feature space but which may be semantically distinct. The system may generate responses to an incoming query by matching the query to a particular one of the clusters and obtaining response messages based on data associated with the matching cluster”]; [0050 “The transformer 50 may be trained on a text corpus that is labelled (e.g., annotated to indicate verbs, nouns, etc.) or unlabelled. LLMs may be trained on a large unlabelled corpus. Some LLMs may be trained on a large multi-language, multi-domain corpus, to enable the model to be versatile at a variety of language-based tasks such as generative tasks (e.g., generating human-like natural language responses to natural language input”]; [0051 “An example of how the transformer 50 may process textual input data is now described. Input to a language model (whether transformer-based or otherwise) typically is in the form of natural language as may be parsed into tokens”]; [0052 “In FIG. 3, a short sequence of tokens 56 corresponding to the text sequence “Come here, look!” is illustrated as input to the transformer 50. Tokenization of the text sequence into the tokens 56 may be performed by some pre-processing tokenization module such as, for example, a byte pair encoding tokenizer (the “pre” referring to the tokenization occurring prior to the processing of the tokenized input by the LLM), which is not shown in FIG. 9 for simplicity. In general, the token sequence that is inputted to the transformer 50 may be of any length up to a maximum length defined based on the dimensions of the transformer 50 (e.g., such a limit may be 2048 tokens in some LLMs). Each token 56 in the token sequence is converted into an embedding vector 60 (also referred to simply as an embedding). 
An embedding 60 is a learned numerical representation (such as, for example, a vector) of a token that captures some semantic meaning of the text segment represented by the token 56”]; [0053 “The generated embeddings 60 are input into the encoder 52. The encoder 52 serves to encode the embeddings 60 into feature vectors 62 that represent the latent features of the embeddings.”]; [0055 “Although a general transformer architecture for a language model and its theory of operation have been described above, this is not intended to be limiting. Existing language models include language models that are based only on the encoder of the transformer or only on the decoder of the transformer. An encoder-only language model encodes the input text sequence into feature vectors that can then be further processed by a task-specific layer (e.g., a classification layer). BERT is an example of a language model that may be considered to be an encoder-only language model. A decoder-only language model accepts embeddings as input and may use auto-regression to generate an output text sequence. Transformer-XL and GPT-type models may be language models that are considered to be decoder-only language models”]; [0072 “The clustering module 118 may perform clustering using the vector embeddings that are generated by the embedding module 116. In particular, the clustering module 118 may identify clusters in the embedding space. Clustering operations may be performed by implementing a suitable cluster model (e.g., connectivity model, centroid model, etc.) and clustering algorithm (e.g., DBSCAN, agglomerative clustering, spectral clustering, etc.). 
The clustering module 118 is configured to output information regarding clustering operations such as, for example, cluster labels, clustering algorithms, distance metric(s), linkage criterion, and cluster membership”]; [0085 “When an incoming query message is received (operation 210), the computing system identifies a query within the incoming message and matches said query to a particular cluster from the first or second clusters, in operation 212. The matched cluster contains previous queries that are semantically similar to the incoming query. The computing system then obtains one or more response messages for the incoming query. More particularly, the computing system may obtain generated responses based on providing, to the LLM, data associated with the matched cluster (operation 214). The cluster data for the matched cluster may include, for example, one or more responses that were previously provided by the computing system in reply to a query associated with the matched cluster. A response may comprise at least one solution to a question/issue. The responses (e.g., solutions) to previous queries may be stored, for example, in the query database in association with the corresponding queries. Additionally, or alternatively, the computing system may provide, to the LLM, one or more solution steps associated with the matched cluster. The computing system may optionally provide, to the LLM, input of text associated with one or more resource documents that are to the query of the matched cluster”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hemington with the teachings of Pasupalak because using the technique of clustering vectorized conversational data allows semantically similar queries to be grouped together and enables generation of more relevant responses. 
Incorporating clustering would enhance the conversational agent by improving its ability to generalize varied phrasings of similar user requests. This modification applies known techniques to achieve more efficient processing and improved conversational response handling. Regarding Claim 11, Pasupalak discloses the method according to claim 10, further comprising identifying patterns in user input using machine learning to make decisions and learn from past conversations – [0074 “In an embodiment, the Conversational Agent 150 may map a specific command to one or more words contained in a user query. In the above example, the Conversational Agent 150 may map the word "tell" or the phrase "tell Bob" with one or more commands such as an internal phone service. The Conversational Agent 150 may learn over time the behavior patterns and/or preferences of the user in relation to many commands. The Conversational Agent 150 may also learn the preferences and/or behavior patterns of a user in relation to performing a command for a specific parameter or class of parameters”]; [0084 “At step 404 in one embodiment, user query 302 may be subjected to binary classification such as via a support vector machine (SVM) for analysis. Other known types of binary classification may also be used alone or in combination such as decision trees, Bayesian networks, support vector machines, and neural networks. A support vector machine (SVM) is a concept in statistics and computer science for a set of related supervised learning methods that analyze data and recognize patterns, used for classification and regression analysis”]; [0248 “The Conversational Agent 150 may include a Learning Manager 128 for updating, training, and/or reinstating any of the modules used by the Conversational Agent 150. 
Modules that may be modified by the Learning Manager 128 include support vector machines, conditional random fields, naive Bayesian classifiers, random forest classifiers, neural networks, previous query score classifiers and the like”]; [0249 “Learning Manager 128 may update some or all of the intelligent modules of the invention periodically according to a set schedule and/or when initiated by an administrator. The Conversational Agent 150 may gather feedback from users based on their interaction with the Conversational Agent 150 for training purposes. Examples of how the Conversational Agent 150 uses feedback from user interaction are shown in FIGS. 11-17. For example, the Conversational Agent 150 may determine whether each outputted response was useful to the user. In one embodiment, the Learning Manager 128 of Conversational Agent 150 classifies each response as either "correct", "incorrect" and/or "neutral". Learning manager 128 may also assign a weight to each of the above categories such that a response is determined to be a certain percentage "correct" or "incorrect". In an example interaction, the user may express a query 302 of "Find me some French cuisine in St. Louis". ASR service 112 processes the voice query and provides a text representation of the query 302 to NLP Engine 114. NLP Engine 114 provides a template object 319 to Services Manager 130, the template object including the DOMAIN (in this example, RESTAURANTS) and several entities (St. Louis and "French"). Services Manager 130 determines an appropriate service 118 to perform the derived intention of the user calls that service (external service 118 in this example). External service 118 provides a response to Services Manager 130 which is presented to the user by the Ux Manager 103”]. 
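By way of illustration only, the vectorize-then-cluster steps addressed above (transforming sentences into vector form and grouping them by a similarity function) might be sketched as follows. A production system would use a sentence-transformer model and a clustering algorithm such as DBSCAN or agglomerative clustering, as Hemington describes; the toy two-dimensional vectors, the 0.9 threshold, and the greedy first-match grouping here are hypothetical simplifications:

```python
# Illustrative sketch only (not any cited reference's implementation):
# greedy clustering of sentence vectors by cosine similarity.
import math

def cosine(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster(vectors, threshold=0.9):
    """Assign each vector to the first cluster whose representative
    (its first member) is at least `threshold` similar; otherwise
    start a new cluster. Returns lists of vector indices."""
    clusters = []
    for i, v in enumerate(vectors):
        for members in clusters:
            if cosine(vectors[members[0]], v) >= threshold:
                members.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy 2-D "embeddings": two near-duplicate queries and one outlier.
vecs = [(1.0, 0.0), (0.98, 0.05), (0.0, 1.0)]
print(cluster(vecs))  # [[0, 1], [2]]
```

Under this sketch, an incoming query would be embedded the same way and matched to the cluster whose representative is most similar, after which responses associated with that cluster could be retrieved.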
Regarding Claim 12, Pasupalak discloses the method according to claim 10, further comprising detecting behavioral patterns of the user using Natural Language Processing methods – [0074 “In an embodiment, the Conversational Agent 150 may map a specific command to one or more words contained in a user query. In the above example, the Conversational Agent 150 may map the word "tell" or the phrase "tell Bob" with one or more commands such as an internal phone service. The Conversational Agent 150 may learn over time the behavior patterns and/or preferences of the user in relation to many commands. The Conversational Agent 150 may also learn the preferences and/or behavior patterns of a user in relation to performing a command for a specific parameter or class of parameters”]. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Shanmugam (US20190311036) and in further view of Taubman (US 9679568) and in further view of Hemington (US20240320251) and in further view of Rodgers (US20250232124). Regarding claim 4, Pasupalak teaches the system according to claim 1, wherein the processing module further comprises performing sentiment assessment. However, Pasupalak does not explicitly teach performing sentiment analysis to analyze the emotional tone of the user’s behavior. But Rodgers teaches performing sentiment analysis to analyze the emotional tone – [0044 “In one or more embodiments, the response generator 120 considers the intent of the user that represents the purpose or goal behind the user's input. The response generator 120 may also consider the sentiment of the user provided by the sentiment analysis engine 108 that represents the feelings or emotion of the user predicted from the user's input. Use of intent recognition techniques to identify the intention of a user allows the response generator 120 to generate a response that aligns with the service request. 
Use of sentiment analysis techniques to identify the sentiment of the user allows the response generator 120 to generate a response that accounts for the emotional state of the user”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Rodgers with the teachings of Pasupalak as modified above because modifying the conversational agent to incorporate sentiment-aware response generation would produce responses that align not only with the user’s request but also with the user’s emotional state, thereby improving personalization and user experience. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Hemington (US20240320251) and in further view of Rodgers (US20250232124). Regarding Claim 13, Pasupalak as modified above teaches the method according to claim 10, further comprising performing sentiment assessment to analyze the emotional tone of the user's behavior. Claim 13 is rejected for the same reasons as claim 4. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Shanmugam (US20190311036) and in further view of Taubman (US 9679568) and in further view of Hemington (US20240320251) and in further view of Jibaja (US10452444). Regarding claim 6, Pasupalak discloses a system according to claim 1, further comprising storage means for storing system elements on cloud storage – [“cloud-based service infrastructure 104 providing a voice-based interface to one or more services. FIG. 2 is a block diagram that shows software architecture of the cloud-based service infrastructure 104 in accordance with one embodiment. In the present example embodiment, cloud-based service infrastructure 104 is configured to permit a user of smartphone 102 to provide speech inputs defining commands to obtain a desired user experience that may include the provision of one or more services.”]. 
However, Pasupalak as modified above does not disclose storing data on both cloud storage and an on-premises physical storage system. But Jibaja teaches storage means for storing elements on both cloud storage and on-premises physical storage – [Column 26, lines 24-45 “Although not explicitly depicted in FIG. 3A, readers will appreciate that additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system 306 and users of the storage system 306. For example, the storage system 306 may be coupled to (or even include) a cloud storage gateway. Such a cloud storage gateway may be embodied, for example, as hardware-based or software-based appliance that is located on premise with the storage system 306. Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage array 306 and remote, cloud-based storage that is utilized by the storage array 306. Through the use of a cloud storage gateway, organizations may move primary iSCSI or NAS to the cloud services provider 302, thereby enabling the organization to save space on their on-premises storage systems. Such a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate command into REST-space protocols that facilitate communications with the cloud services provider 302”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jibaja with the teachings of Pasupalak as modified above in order to allow conversational data, session information, and related system data to be stored and managed across both on-premises storage and cloud storage systems, which is a known approach for improving data accessibility, scalability, and system flexibility. 
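By way of illustration only, the combined cloud and on-premises storage arrangement discussed for claim 6 might be sketched as a write-through mirror over two backends. Both backends and all names below are hypothetical stand-ins, not anything disclosed by Pasupalak or Jibaja; a real deployment would pair a cloud SDK with a local filesystem or a gateway appliance of the kind Jibaja describes:

```python
# Illustrative sketch only (names and backends are hypothetical):
# mirror conversational records to a cloud store and an on-premises
# store so each holds a full copy.
class MemoryBackend:
    """In-memory stand-in for either a cloud or an on-premises store."""
    def __init__(self, name):
        self.name = name
        self.data = {}

    def put(self, key, value):
        self.data[key] = value

class MirroredStore:
    def __init__(self, *backends):
        self.backends = backends

    def put(self, key, value):
        # Write-through: every backend receives every record.
        for backend in self.backends:
            backend.put(key, value)

cloud = MemoryBackend("cloud")
on_prem = MemoryBackend("on-prem")
store = MirroredStore(cloud, on_prem)
store.put("session-1", {"query": "find flights"})
print(cloud.data == on_prem.data)  # prints True
```

The same interface could instead route reads to the nearer backend, which is one common motivation for the dual-storage arrangement.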
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Shanmugam (US20190311036) and in further view of Taubman (US 9679568) and in further view of Hemington (US20240320251) and further in view of Conrad (US 20170364827). Regarding claim 7, Pasupalak discloses the system according to claim 1, further comprising a user interface configured to allow a user to view a formatted output – [“In an embodiment, App 101 processes messages received in the input queue 107, and together with a user interface manager 103 (also referred to herein as Ux Manager 103), provides a user interface 105 for displaying a formatted output to the user. Ux Manager 103 may provide the user interface 105 for receiving input from the user (for example, voice, touchscreen, and the like) for receiving input queries and presenting output in an interactive conversational manner. In an embodiment, Ux Manager 103 formats the user interface 105 (including output received from cloud-based service infrastructure) depending on the display capabilities of the smartphone 102”]. However, Pasupalak does not disclose a user interface configured to allow a user to view and evaluate scenario elements and preview scenario statistics. But Conrad teaches a user interface configured to allow a user to view and evaluate scenario elements and preview scenario statistics – [0092 “Continuing onto FIG. 5, an exemplary graphical user interface (GUI) available through the user interface 174 of access device 170 is disclosed. In one implementation, the user interface 174 includes an application interface 500 to present the results of the search. In one implementation, application interface 500 may include one or more sections, i.e., 502a-502c for displaying the proposed strategies based on different scenarios, as generated by prediction module 138 as described in step 312 of FIG. 3. A listing of related cases 504 is presented along with each of the proposed strategy. 
An award distribution illustration 506 and additional relevant statistics 508, such as but not limited to, the mean duration of a trial, shortest and longest trial length, may also be presented. In yet a further implementation, application interface 500 may include section 510 for case feature adjustment, which may be similar to the selections 408-420 available in FIG. 4 and are utilized to further revise the searching parameters”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Conrad with the teachings of Pasupalak as modified above because it allows users interacting with the conversational agent to analyze potential outcomes or system actions and preview associated scenario statistics before committing to an action. This improves user decision-making and system transparency. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Hemington (US20240320251) and further in view of Conrad (US 20170364827). Regarding Claim 15, it recites the method according to claim 10, further comprising allowing a user to view and evaluate scenario elements and preview scenario statistics. Claim 15 is rejected for the same reasons as claim 7. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Shanmugam (US20190311036) and in further view of Taubman (US 9679568) and in further view of Hemington (US20240320251) and further in view of Suh Young Kyoon (KR 102373146). Regarding Claim 8, Pasupalak teaches the system according to claim 1, wherein the machine learning techniques include the use of a Naive Bayesian Classifier – [“naive Bayesian classifiers, random forest classifiers, neural networks, previous query score classifiers and the like”]. However, Pasupalak does not teach machine learning techniques that include clustering algorithms based on cosine similarity. 
But Suh Young Kyoon teaches clustering based on cosine similarity – [0012 “The present invention is intended to solve the problem of the above technical task, and its purpose is to provide a duplicate document removal device and method that clusters preprocessed documents and calculates similarity between documents using cosine similarity.”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Suh Young Kyoon with the teachings of Pasupalak as modified above because it would help organize semantically similar conversational content and reduce duplication prior to or in conjunction with classification, thereby improving accuracy and system efficiency. Clustering using cosine similarity is a well-known technique for organizing text and improving classification performance. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Hemington (US20240320251) and further in view of Suh Young Kyoon (KR 102373146). Regarding Claim 17, it recites the method according to claim 10, wherein the machine learning techniques include the use of a Naive Bayesian Classifier and clustering algorithms based on cosine similarity. Claim 17 is rejected for the same reasons as claim 8. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Shanmugam (US20190311036) and in further view of Taubman (US 9679568) and in further view of Hemington (US20240320251) and in further view of Rosu (US11907863). Regarding claim 9, Pasupalak does not teach the system according to claim 5, wherein upon assuming an answer is helpful, the system updates the dataset to enhance the likelihood of selecting similar responses in future interactions. 
However, Rosu teaches that, upon assuming an answer is helpful, the system updates the dataset to enhance the likelihood of selecting similar responses in future interactions – [Column 10, line 38 through Column 11, line 59 “In an embodiment, the continued interaction provides implicit negative feedback. If the user wants to terminate the interaction, the process continues with collection of feedback (310), which in an embodiment includes a collection of questions and corresponding responses (310) for which final feedback is solicited from the user (312). The final feedback may be in positive or negative feedback, and corresponds to the final answer provided by the chatbot platform. Examples of feedback include ‘the answer is good’ or ‘the answer is helpful’ as forms of positive feedback, and ‘the answer is bad’ or ‘the answer is not helpful’ are examples of negative feedback. In an exemplary embodiment, in the case of negative feedback, multi-choice questions may be utilized to qualify the scope of error, such as ‘content is incorrect’ or ‘content not found’. The feedback, e.g. final feedback, is stored in a repository at (314), and leveraged by system starting at step (316) to enrich the domain knowledge. The enrichment may be responsive to positive feedback or negative feedback. Learning, also referred to herein as learning interactions, is shown herein bifurcated into feedback interaction (316) and knowledge enrichment interaction (318). Feedback interaction leverages the chatbot platform to facilitate and enable system interaction to collect domain specific relations from correct responses, such as ground truth, or positive feedback. These relations are used to augment the domain knowledge. Feedback interaction (316) is followed by generation of an explanation prompt (320), details which include a knowledge gap assessment, as shown and described in FIG. 5”]. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Rosu with the teachings of Pasupalak as modified above because it would enable the system to assess the quality of responses and refine future interactions based on user feedback. This would predictably improve system performance by allowing the conversational agent to identify helpful responses and then refine behavior based on user feedback, thereby improving accuracy and overall user satisfaction during future interactions. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Hemington (US20240320251) and in further view of Rosu (US11907863). Regarding claim 14, it recites the method according to claim 10, further comprising updating the dataset upon assuming an answer is helpful to enhance the likelihood of selecting similar responses in future interactions. Claim 14 is rejected for the same reasons as claim 9. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Pasupalak (US20150066479) in view of Hemington (US20240320251) and further in view of Lu Yan (CN 109255113A). Regarding claim 16, Pasupalak teaches the method according to claim 10, wherein the Natural Language Processing methods include language detection – [“User interfaces for electronic and other devices are evolving to include speech-based inputs in a natural language such as English. A user may voice a command to control the operation of a device such as a smartphone, tablet computer, personal computer, appliance, television, robot and the like. Natural language processing, a type of machine learning using statistics, may be used to interpret and act upon speech inputs. Speech recognition may convert the input to text. The text may be analyzed for meaning to determine the command to be performed.”]. 
However, Pasupalak does not teach Natural Language Processing methods that include sentence segmentation and part-of-speech tagging. Lu Yan teaches both segmentation and part-of-speech tagging – [see attached document, Contents of Invention, step (1), page 3: “Firstly, performing word segmentation processing and part-of-speech tagging on a question input by a user”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pasupalak as modified above with the teachings of Lu Yan because doing so would improve classification accuracy: sentence segmentation improves handling of multi-sentence queries, and part-of-speech tagging improves feature extraction. This modification represents a predictable enhancement of natural language processing within conversational systems.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHEZA ABDUL AZIZ, whose telephone number is (571) 272-9610. The examiner can normally be reached Monday-Friday, 7:30am-5pm, with alternate Fridays off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DANIEL C WASHBURN/Supervisory Patent Examiner, Art Unit 2657

Prosecution Timeline

Jul 05, 2024
Application Filed
Mar 10, 2026
Non-Final Rejection — §101, §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
