DETAILED ACTION
This communication is in response to the Amendments and Arguments filed on 12/03/2025.
Claims 1-20 are pending and have been examined. This action has been made FINAL.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments and Amendments
Amendments to the claims by the Applicant have been considered and addressed below.
With respect to the rejections under 35 U.S.C. §§ 112, 101, 102, and 103, the Applicant provides several arguments, to which the Examiner responds below.
35 USC § 112(b) rejection(s)
Arguments on page 9 of the Remarks filed on 12/03/2025.
Examiner’s Response to Arguments:
Applicant’s arguments with respect to the 35 USC § 112 rejection(s) have been fully considered and are persuasive. The 35 USC § 112 rejection(s) of claim(s) 1-20 have been withdrawn.
35 USC § 101 rejection(s)
Arguments on pages 9-14 of the Remarks filed on 12/03/2025.
Examiner’s Response to Arguments:
Applicant’s arguments with respect to the 35 USC § 101 rejection(s) have been fully considered and are persuasive. The 35 USC § 101 rejection(s) of claim(s) 1-20 have been withdrawn.
35 USC § 102/103 rejection(s)
Arguments on pages 14-16 of the Remarks filed on 12/03/2025.
Examiner’s Response to Arguments:
Applicant’s arguments with respect to claims 1, 9, and 17 under 35 U.S.C. §§ 102 and 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Beaver (US 20200387673 A1) and further in view of Rodrigo Cavalin et al. (US 20230084688 A1).
For more details, please refer to the updated 35 U.S.C. § 103 rejections of claims 1-20, below.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The Examiner notes that there is no clear support in the Specification for the last two limitations of the independent claims, as amended:
receiving, via the user interface, confirmation to generate the new intent; and
training the LM for solution recommendations based on the confirmation to generate the new intent.
Hence, dependent claims 2-8, 10-16, and 18-20 are also rejected due to their dependency on a rejected independent claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Beaver (US 20200387673 A1) in view of Rodrigo Cavalin et al. (US 20230084688 A1).
As to independent claim 1, Beaver teaches:
1. A method (see ¶ [0005]: “In an implementation, a method comprises receiving conversation data; …”) comprising:
identifying a plurality of conversation logs between a digital assistant and a respective user device of a plurality of user devices (see Fig. 1 (105: user computing device, 155: assistant computing device, 110: computing device) and ¶ [0022]: “…Although only one user computing device 105, one assistant computing device 155, and one computing device 110 are shown in FIG. 1, there is no limit to the number of computing devices 105, 155, 110 that may be supported.” ¶ [0025]: “The data sources 160 may comprise one or more of conversation logs 162, live chat logs 164, call center audio transcripts 165, website search queries 166, and customer service channels 168, for example.” and ¶ [0048]: “At 410, data is stored in data sources (e.g., the data sources 160). For example, human-IVA conversations (e.g., conversation data between a user 102 and an IVA 157) are stored in the conversation logs 162 by the assistant computing device 155 and/or the computing device 110.”),
wherein each of the plurality of conversation logs include text that was exchanged between the digital assistant and the respective user device (see ¶ [0022, 0025, and 0048] citations as in limitation above and further ¶ [0024]: “In some implementations, the computing device 110 is in communication with, and is configured to receive data from, one or more data sources 160. Some or all of the data in the data sources 160 may be data collected from user interaction with the IVA 157 and/or any gathered and/or processed data from the IVA 157 such as from artificial intelligence (AI), machine learning (ML), advanced speech technologies (such as NLU, NLP, natural language generation (NLG)), and simulated live and unstructured cognitive conversations for involving voice, text, and/or digital interactions, for example. The IVA 157 may cover a variety of media channels in addition to voice, including, but not limited to, social media, email, SMS/MMS, IM, etc.”);
identifying one or more intents of the digital assistant (see ¶ [0007]: “In an implementation, a method comprises processing conversation data with one or more natural language processing techniques to identify an intent for the conversation data, a measure of confidence that the intent is correctly identified for the user input,…”, ¶ [0019]: “The user 102 may communicate with the assistant computing device 155 via the user computing device 105 and an intelligent virtual assistant (IVA) 157 of the assistant computing device 155. The user computing device 105 (and thus the user 102) may interact with the IVA 157 using natural language processing (NLP) associated with, or implemented by, the IVA 157…”),
wherein the digital assistant is configured to respond to requests from the plurality of user devices corresponding to any of the one or more intents (see ¶ [0019] citation as in limitation above and further ¶ [0020-0021]: “[0020] …The computing device 110 may be in communication with the assistant computing device 155 and/or the user computing device 105 to monitor the speech in a voice call (i.e., the conversation) or other communication between the user computing device 105 and the assistant computing device 155 (e.g., the IVA 157). The computing device 110 may be implemented in, or embodied in, a desktop analytics product or in a speech analytics product, in some implementations. [0021] The computing device 110 may include a natural language understanding (NLU) component 112, an automated review module 114, a risk score module 116, and an interface module 118. In some implementations, the computing device 110 may be comprised within the assistant computing device 155. ”, ¶ [0029-0030]: “[0029]…For machine-learned models, identification of incorrect understanding can highlight confusion within the model and prioritize areas of further training. The main focus of misunderstanding detection is on intent classification. It is in the NLU component 112 that the breakdown of communication will begin, assuming adequate Automatic Speech Recognition (ASR), if speech is used as an interface. The detection of ASR error and recovery is well known. [0030] IVAs for customer service are deployed in a specific language domain such as transportation, insurance, product support, or finance, for example. 
In known IVA refinement processes, reviewers are given a sample of recent conversations collected from a live IVA for quality assurance.”, and ¶ [0034]: “The automated review module 114 analyzes data from the data sources 160, such as the conversation logs 162, to identify conversations, where the IVA 157 is misunderstanding the user 102 (e.g., intent classification errors in the conversation) …”);
identifying a subset of conversation logs from the plurality of conversation logs where a fallback state was detected using a fallback dictionary (see ¶ [0019-0021, 0029-0030 and 0034] citations as in limitation(s) above. More specifically: ¶ [0034]: “The automated review module 114 analyzes data from the data sources 160, such as the conversation logs 162, to identify conversations, where the IVA 157 is misunderstanding the user 102 (e.g., intent classification errors in the conversation) …” and ¶ [0073-0075]: “[0073] Indicators of intent error that the automated review module 114 and/or the risk score module 116 test for in some implementations include conversation level features and turn level features. [0074] … Conversation level features include, for example, I Don't Know (IDK) in conversation, same intent(s) hit, tie in conversation, user rating scores, conversation should escalate, and sentiment change over time. [0075] An IDK occurs when the language model does not find an intent that satisfies the user query with a high enough confidence. The IVA may respond with something like “I'm sorry, I didn't understand you.” If a conversation contains one or more IDK responses, this may indicate that the user is talking about some subject the IVA has no knowledge of.”),
wherein the fallback state comprises a response by the digital assistant indicating an inability to fulfill a request from the respective user device (see ¶ [0019-0021, 0029-0030, 0034, and 0073-0075] citations as in limitation(s) above and further ¶ [0074-0075]: “[0074] … Conversation level features include, for example, I Don't Know (IDK) in conversation, same intent(s) hit, tie in conversation, user rating scores, conversation should escalate, and sentiment change over time. [0075] An IDK occurs when the language model does not find an intent that satisfies the user query with a high enough confidence. The IVA may respond with something like “I'm sorry, I didn't understand you.” If a conversation contains one or more IDK responses, this may indicate that the user is talking about some subject the IVA has no knowledge of.”), and
wherein the fallback dictionary identifies the subset of conversation logs (see ¶ [0019-0021, 0029-0030, 0034, and 0073-0075] citations as in limitation(s) above. More specifically: ¶ [0034]: “The automated review module 114 analyzes data from the data sources 160, such as the conversation logs 162, to identify conversations, where the IVA 157 is misunderstanding the user 102 (e.g., intent classification errors in the conversation) …”
and ¶ [0074-0075]: “[0074] … Conversation level features include, for example, I Don't Know (IDK) in conversation, same intent(s) hit, tie in conversation, user rating scores, conversation should escalate, and sentiment change over time. [0075] An IDK occurs when the language model does not find an intent that satisfies the user query with a high enough confidence. The IVA may respond with something like “I'm sorry, I didn't understand you.” If a conversation contains one or more IDK responses, this may indicate that the user is talking about some subject the IVA has no knowledge of.” );
categorizing, using a language model (LM), the subset of conversation logs associated with the fallback state in accordance with the one or more intents using conversation identifiers (see ¶ [0019-0021, 0029-0030, 0034, and 0073-0075] citations as in limitation(s) above and further ¶ [0028-0029 and 0033]: “[0028] The NLU component 112 maps user inputs, or conversational turns, to a derived semantic representation commonly known as the intent, an interpretation of a statement or question that allows one to formulate the ‘best’ response. The collection of syntax, semantics, and grammar rules that defines how input language maps to an intent within the NLU component 112 is referred to as a language model 140. The language model 140 may be trained through machine learning methods or manually constructed by human experts. [0029] …The main focus of misunderstanding detection is on intent classification. It is in the NLU component 112 that the breakdown of communication will begin, assuming adequate Automatic Speech Recognition (ASR), if speech is used as an interface. The detection of ASR error and recovery is well known. [0033] The methods and systems described herein are configured to predict and/or identify intent classification errors in conversational turns using the language model 140. This reduces the human burden and costs in maintaining conversational agents.”
and ¶ [0061 and 0067]: “[0061] In the left-hand column 610, the intent 612 the reviewer is currently voting on is displayed along with additional information to give insight. The label of the intent 612 is displayed at the top, followed by a text description 614 of its purpose, which is maintained by the domain experts 152. [0067] To help domain experts quickly analyze the voting results and voter consensus, the analysis interface 700 provides the tabular view shown in FIG. 7. Filters may be implemented to provide the ability to explore the results from many angles such as per intent, per voter, date range, recommended action, etc. In the left hand column 710, the original user turn text is displayed. In the next column 720 is the intent that the reviewers evaluated the text against. The “Input Type” column 730 shows whether the intent evaluated was from the current NLU or a different source, such as regression tests used in developing the language model or live chat logs. Misunderstanding analysis may be performed on any textual data labeled with intent or topic. The “Voting Results” column 740 provides a visual indicator of the voting outcome and inter-reviewer agreement. The final column 750 on the right hand side is the recommended action, e.g., from Table 1. Filtering this table by an action type will quickly surface all turns where a particular action should be performed.”),
wherein each of the conversation identifiers correspond to a conversation log of the subset of conversation logs (see ¶ [0019-0021, 0028-0030, 0033-0034, 0061, 0067, and 0073-0075] citations as in limitation(s) above. More specifically ¶ [0061]: “In the left-hand column 610, the intent 612 the reviewer is currently voting on is displayed along with additional information to give insight. The label of the intent 612 is displayed at the top, followed by a text description 614 of its purpose, which is maintained by the domain experts 152.” and further
Fig. 6 (diagram of an example voting interface for use by reviewers according to some implementations)
and ¶ [0062]: “Next, a set of sample questions 618 that have been previously human-validated to belong to this intent 612 are displayed. This is to give the reviewer some intuition on the language intended for the current intent 612. Following that is a list of related intents 620 to help the reviewer decide if a more suitable intent exists in the language model 140. Both lists are searchable to speed analysis. Controls to navigate through the intents to be reviewed are provided. At the bottom, metrics 622 on how many turns have been completed by the current reviewer and all reviewers combined on the displayed intent are shown.”);
generating, using the LM, a solution to a respective fallback state for a first conversation log of the subset of conversation logs (see Fig. 6 and ¶ [0019-0021, 0028-0030, 0033-0034, 0061-0062, 0067 and 0073-0075] citations as in limitation(s) above. More specifically: ¶ [0075] An IDK occurs when the language model does not find an intent that satisfies the user query with a high enough confidence. The IVA may respond with something like “I'm sorry, I didn't understand you.”…”
and further ¶ [0026]: “…The systems and methods provide users (e.g., IVA designers, administrators, etc.) with suggested actions to fix errors in IVA understanding, prioritizes areas of language model repair, and automates the review of conversations where desired…”),
wherein the first conversation log was categorized into an unknown intent category of the one or more intents (see Fig. 6 and ¶ [0019-0021, 0028-0030, 0033-0034, 0061-0062, 0067 and 0073-0075] citations as in limitation(s) above. More specifically: ¶ [0074-0075]: “[0074] … Conversation level features include, for example, I Don't Know (IDK) in conversation, same intent(s) hit, tie in conversation, user rating scores, conversation should escalate, and sentiment change over time. [0075] An IDK occurs when the language model does not find an intent that satisfies the user query with a high enough confidence. The IVA may respond with something like “I'm sorry, I didn't understand you.” If a conversation contains one or more IDK responses, this may indicate that the user is talking about some subject the IVA has no knowledge of.”
and further ¶ [0006]: “…determining whether each of the conversational turns has a misunderstood intent; …”, ¶ [0028]: “The NLU component 112 maps user inputs, or conversational turns, to a derived semantic representation commonly known as the intent, an interpretation of a statement or question that allows one to formulate the ‘best’ response.”, ¶ [0034]: “The automated review module 114 analyzes data from the data sources 160, such as the conversation logs 162, to identify conversations, where the IVA 157 is misunderstanding the user 102 (e.g., intent classification errors in the conversation).” and ¶ [0106]: “In an implementation, a method is provided. The method includes receiving conversation data; determining that the conversation data has a misunderstood intent; determining an action to adjust a language model responsive to determining that the conversation data has the misunderstood intent; and providing the suggested action to an output device or to the language model.”),
providing the solution for improving the digital assistant to a user interface (see ¶ [0006, 0019-0021, 0028-0030, 0033-0034, 0061-0062, 0067, and 0073-0075] citations as in limitation(s) above. More specifically: Fig. 6 and further ¶ [0026]: “…The systems and methods provide users (e.g., IVA designers, administrators, etc.) with suggested actions to fix errors in IVA understanding, prioritizes areas of language model repair, and automates the review of conversations where desired…” and ¶ [0106]: “In an implementation, a method is provided. The method includes receiving conversation data; determining that the conversation data has a misunderstood intent; determining an action to adjust a language model responsive to determining that the conversation data has the misunderstood intent; and providing the suggested action to an output device or to the language model.”),
receiving, via the user interface, confirmation to generate the new intent (see ¶ [0006, 0019-0021, 0028-0030, 0033-0034, 0061-0062, 0067, and 0073-0075] citations as in limitation(s) above. More specifically: Fig. 6 (“Does this intent answer this user question?”, 612: intent and 644: voting buttons)); and
training the LM for solution recommendations based on the confirmation to generate the new intent (see ¶ [0006, 0019-0021, 0028-0030, 0033-0034, 0061-0062, 0067, and 0073-0075] citations as in limitation(s) above. More specifically: Fig. 6 (“Does this intent answer this user question?”, 612: intent and 644: voting buttons) and further ¶ [0065]: “After the risk analysis and voting processes are complete, voting data and additional recommendations are provided to the domain experts 152 to facilitate development and/or adjustment of the language model 140…”
and Table 1: “Voting outcomes and recommended actions” (e.g., first two rows: intent correct: “These are added as training and regression samples”; intent incorrect: “Fix or retrain the language model to prevent the turn from reaching the associated intent”)).
[Image: media_image1.png, greyscale PNG, 549 × 518]
[Image: media_image2.png, greyscale PNG, 331 × 438]
However, Beaver does not explicitly teach the following limitation, which Rodrigo Cavalin et al. does teach:
wherein the solution comprises a recommendation to generate a new intent that exceeds a defined threshold of occurrences in the unknown intent category (see ¶ [0041-43 and 0059]: “[0041] At 406, topics can be classified by relevance. For example, relevant or similar topics may be classified into a same classification. In an aspect, a knowledge graph or database 410 can be used for topic relevance and similarity checking. The knowledge graph 410, for example, can store structured knowledge such as an ontology of related concepts. The knowledge graph 410, for example, can also include information used in determining semantic similarity such as one or more threshold values for use in determining similarity. [0042] At 408, a potential intents database 412 may be accessed to retrieve potential intents, for example, generated by clustering the chatbot logs, for example, as described with reference to FIG. 3. Semantic similarity between the intents and the topics determined at 406 are computed. For instance, intents in chatbot logs and trending topics can be determined to be semantically similar based on meeting a threshold of similarity, for example, 90% similar, 80% similar. The threshold of similarity can be predefined or configured and can be adjustable. [0043] At 414, the intents and the topics meeting a similarity threshold are correlated. The similarity threshold can be predefined or preconfigured. The chatbot logs from which the correlated intents are extracted are also identified. The correlated intents and associated chatbot logs can be stored as suggested new intents and training examples, for example, on a storage or memory device 416. [0059] In an embodiment, the system and/or method may consider a configurable sliding window (“Past Window” shown in FIG. 8) that inspects the past when detecting new intents. Each time a new trending topic is detected, the system and/or method may observe the growth of that topic within the window. 
If the volume lowers abruptly in the window (e.g., drops by more than a threshold value which can be configured), the system and/or method can consider this topic is ephemeral and, therefore, drop it from the list of potential new intents. Otherwise, this topic can potentially be considered. The system and/or method may also use the sliding window to analyze the persistence of new topics in the call logs. The system and/or method can use a knowledge graph to identify the topic context, and a historical database to predict whether or not this topic tends to persist over time. If the topic is predicted to tend to persist and the topic interest also increases in the chatbot log, the system and/or method may signal the creation of the new intent.
[0061] In one or more embodiments, a threshold (topic interest evaluation trigger) that determines whether a topic will become an intent or not can be used as a cut-off value. In an embodiment, this value can be set manually by the SME. In another embodiment, this value can be automatically optimized based on the acceptance rate of proposed intents validated by the SME. For instance, the shape of the frequency curve of the topics related to new intents, and stored in the knowledge graph, can be analyzed. The system and/or method can then evaluate the integral of the topic interest level along the time and calculate the total amount of relevance of this topic. Then, the system and/or method can find the time in which the topic reaches a given amount of relevance for this topic (for instance 10% of the area under the curve) to estimate the topic interest default level. That time can then be adaptively adjusted in accordance with the proposed intents that are accepted and/or rejected by the SME.”)
Beaver and Rodrigo Cavalin et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor of chatbot/user dialogue systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beaver to incorporate the teachings of Rodrigo Cavalin et al., wherein the solution comprises a recommendation to generate a new intent that exceeds a defined threshold of occurrences in the unknown intent category, which provides the benefit of improving automated conversational systems (see ¶ [0003] of Rodrigo Cavalin et al.).
As to independent claim 9, Beaver further teaches:
9. A system (see ¶ [0006]: “In an implementation, a system comprises one or more processors; storage storing conversation data representing at least one conversation, and one or more indicators of intent error; memory communicatively coupled to the one or more processors and storing executable instructions that, when executed by the one or more processors, cause the one or more processors to perform acts…”) comprising:
a memory (see ¶ [0006] citation as in preamble above.); and
at least one processor coupled to the memory and configured to perform operations (see ¶ [0006] citation as in preamble above.) comprising:
[the limitations as in claim 1 as taught by Beaver in view of Rodrigo Cavalin et al., above]
As to independent claim 17, Beaver further teaches:
17. A non-transitory computer-readable medium having instructions stored thereon (see ¶ [0102]: “Computing device 800 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by the device 800 and includes both volatile and non-volatile media, removable and non-removable media.”) that, when executed by at least one computing device, cause the at least one computing device to perform operations (see ¶ [0006] citation as in claim 9, above, and ¶ [0102] citation as in preamble above.) comprising:
[the limitations as in claim 1 as taught by Beaver in view of Rodrigo Cavalin et al., above]
Regarding claims 2, 10, and 18, Beaver in view of Rodrigo Cavalin et al. teaches the limitations as in claims 1, 9, and 17, above.
Beaver further teaches:
2, 10, and 18. The method/system/non-transitory computer-readable medium of claims 1, 9, and 17, wherein the conversation logs include a textual transcript of audible exchanges of a conversation (see ¶ [0024]: “In some implementations, the computing device 110 is in communication with, and is configured to receive data from, one or more data sources 160. Some or all of the data in the data sources 160 may be data collected from user interaction with the IVA 157 and/or any gathered and/or processed data from the IVA 157 such as from artificial intelligence (AI), machine learning (ML), advanced speech technologies (such as NLU, NLP, natural language generation (NLG)), and simulated live and unstructured cognitive conversations for involving voice, text, and/or digital interactions, for example. The IVA 157 may cover a variety of media channels in addition to voice, including, but not limited to, social media, email, SMS/MMS, IM, etc.”).
Regarding claims 3, 11, and 19, Beaver in view of Rodrigo Cavalin et al. teaches the limitations as in claims 1, 9, and 17, above.
Beaver further teaches:
3, 11, and 19. The method/system/non-transitory computer-readable medium of claims 1, 9, and 17, wherein the fallback state comprises a predetermined output by the digital assistant (see ¶ [0019-0021, 0029-0030, 0034, and 0073-0075] citations as in claims 1, 9, and 17 above. More specifically: ¶ [0074-0075]: “[0074] … Conversation level features include, for example, I Don't Know (IDK) in conversation, same intent(s) hit, tie in conversation, user rating scores, conversation should escalate, and sentiment change over time. [0075] An IDK occurs when the language model does not find an intent that satisfies the user query with a high enough confidence. The IVA may respond with something like “I'm sorry, I didn't understand you.” If a conversation contains one or more IDK responses, this may indicate that the user is talking about some subject the IVA has no knowledge of.”).
Regarding claims 4, 12, and 20, Beaver in view of Rodrigo Cavalin et al. teaches the limitations as in claims 3, 11, and 19, above.
Beaver further teaches:
4, 12, and 20. The method/system/non-transitory computer-readable medium of claims 3, 11, and 19, wherein the identifying the subset comprises detecting the predetermined output within each conversation log in the subset of conversation logs (see ¶ [0019-0021, 0029-0030, 0034, and 0073-0075] citations as in claims 1, 9, and 17 above. More specifically: ¶ [0073-0075]: “[0073] Indicators of intent error that the automated review module 114 and/or the risk score module 116 test for in some implementations include conversation level features and turn level features. [0074] … Conversation level features include, for example, I Don't Know (IDK) in conversation, same intent(s) hit, tie in conversation, user rating scores, conversation should escalate, and sentiment change over time. [0075] An IDK occurs when the language model does not find an intent that satisfies the user query with a high enough confidence. The IVA may respond with something like “I'm sorry, I didn't understand you.” If a conversation contains one or more IDK responses, this may indicate that the user is talking about some subject the IVA has no knowledge of.”).
Regarding claims 5 and 13, Beaver in view of Rodrigo Cavalin et al. teaches the limitations as in claims 1 and 9, above.
Beaver further teaches:
5 and 13. The method/system of claims 1 and 9, wherein at least the categorizing and the determining are performed by the LM incorporating one of artificial intelligence or machine learning technologies (see ¶ [0019-0021, 0026, 0029-0030, 0033-0034, and 0073-0074] citations as in claims 1 and 9 above. More specifically: ¶ [0033]: “The methods and systems described herein are configured to predict and/or identify intent classification errors in conversational turns using the language model 140.” and further ¶ [0024 and 0028]: “[0024] In some implementations, the computing device 110 is in communication with, and is configured to receive data from, one or more data sources 160. Some or all of the data in the data sources 160 may be data collected from user interaction with the IVA 157 and/or any gathered and/or processed data from the IVA 157 such as from artificial intelligence (AI), machine learning (ML), advanced speech technologies (such as NLU, NLP, natural language generation (NLG)), and simulated live and unstructured cognitive conversations for involving voice, text, and/or digital interactions, for example. [0028] The NLU component 112 maps user inputs, or conversational turns, to a derived semantic representation commonly known as the intent, an interpretation of a statement or question that allows one to formulate the ‘best’ response. The collection of syntax, semantics, and grammar rules that defines how input language maps to an intent within the NLU component 112 is referred to as a language model 140. The language model 140 may be trained through machine learning methods or manually constructed by human experts.”).
Regarding claims 6 and 14, Beaver in view of Rodrigo Cavalin et al. teaches the limitations as in claims 1 and 9, above.
Rodrigo Cavalin et al. further teaches:
6 and 14. The method/system of claims 1 and 9, further comprising:
receiving a rejection to a second solution (see ¶ [0044]: “[0044] …For example, at 418, a GUI can present the new intent, and allow a developer to analyze, revise, accept and/or reject the new intent that is suggested. If the developer accepts the new intent at 420, the system (e.g., a computer processor running the logic of the method shown in FIG. 4) at 422 may update the chatbot 424, to be able to handle the new intent. For instance, the chatbot 424 can be trained further based on the new intent and the associated example dialogs in the chatbot log, to be able to handle questions or messages associated with the new intent.”),
wherein upon a subsequent processing of a conversation log, the solution is not provided in response to the respective fallback state (see ¶ [0044] and Fig. 4 (418: Analyze, revise, accept/reject suggestion; 420: accept? (yes or no), wherein yes implies updating the chatbot intents/virtual assistant and no implies no changes)).
Beaver and Rodrigo Cavalin et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor of chat-bot/user dialogue systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beaver to incorporate the teachings of Rodrigo Cavalin et al. of receiving a rejection to a second solution, and wherein upon a subsequent processing of a conversation log, the solution is not provided in response to the respective fallback state, which provides the benefit of improving automated conversational systems ([0003] of Rodrigo Cavalin et al.).
Regarding claims 7 and 15, Beaver in view of Rodrigo Cavalin et al. teaches the limitations as in claims 1 and 9, above.
Rodrigo Cavalin et al. further teaches:
7 and 15. The method/system of claims 1 and 9, wherein the determining the solution (see ¶ [0023]: “In an embodiment, a system in an aspect can help chatbot developers or subject matter experts, update chatbot's content, by detecting chatbot log posts related to external events. The system can provide or point out to the developer or SMEs which questions or messages, extracted from the chatbot logs, are related to a given event or external event. A developer may be triggered or prompted to inspect such messages, e.g., to determine whether to include an automatically created new intent and/or which messages to use as training examples. The system can also aid in compilations of texts that can be useful to create answering content. The system may accelerate the process of curating and updating chatbot content, by proposing new intents based on chatbot logs and external events.”) comprises:
identifying the new intent to incorporate into the digital assistant (see ¶ [0023] citation as in limitation above. More specifically: “…a system in an aspect can help chatbot developers or subject matter experts, update chatbot's content…”); and
wherein the providing comprises providing a notification that the new intent has been identified (see ¶ [0023] as in limitations above and further ¶ [0028]: “Via the GUI 114, an SME 112 or the like, may be able to visualize the possible new topics and associated chatbot log samples that can be used to create a new intent. The GUI 114 can allow a user, e.g., an SME 112, to revise the results and add a new intent. For example, as shown at 116, the SME 122 via the GUI 114 can decide to add the new intent detected by the module 102 and presented to the user via the GUI 114. If the new intent is to be added, the system can update the chatbot (also referred to as a virtual assistant) 108, for example, in real time.”).
Beaver and Rodrigo Cavalin et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor of chat-bot/user dialogue systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beaver to incorporate the teachings of Rodrigo Cavalin et al. of wherein the determining the solution comprises: identifying the new intent to incorporate into the digital assistant; and wherein the providing comprises providing a notification that the new intent has been identified, which provides the benefit of improving automated conversational systems ([0003] of Rodrigo Cavalin et al.).
Regarding claims 8 and 16, Beaver in view of Rodrigo Cavalin et al. teaches the limitations as in claims 7 and 15, above.
Rodrigo Cavalin et al. further teaches:
8 and 16. The method/system of claims 7 and 15, wherein the identifying the new intent (see ¶ [0023, 0028, and 0044] citations as in claims 6-7 and 14-15 above.) comprises:
identifying a new subset of the subset of conversation logs comprising a plurality of conversation logs that could not be categorized in accordance with the one or more intents (see ¶ [0023, 0028, and 0044] citations as in claims 6-7 and 14-15 above and further ¶ [0042-0043 and 0061]: “[0042] At 408, a potential intents database 412 may be accessed to retrieve potential intents, for example, generated by clustering the chatbot logs, for example, as described with reference to FIG. 3. Semantic similarity between the intents and the topics determined at 406 are computed. For instance, intents in chatbot logs and trending topics can be determined to be semantically similar based on meeting a threshold of similarity, for example, 90% similar, 80% similar. The threshold of similarity can be predefined or configured and can be adjustable. [0043] At 414, the intents and the topics meeting a similarity threshold are correlated. The similarity threshold can be predefined or preconfigured. The chatbot logs from which the correlated intents are extracted are also identified. The correlated intents and associated chatbot logs can be stored as suggested new intents and training examples, for example, on a storage or memory device 416.”);
and from the new subset, identifying a second new intent based on a threshold number of conversation logs including one or more key words associated with the second new intent (see ¶ [0023, 0028, and 0044] citations as in claims 6-7 and 14-15 above and ¶ [0042-0043] citations as in limitation above and further ¶ [0061]: “In one or more embodiments, a threshold (topic interest evaluation trigger) that determines whether a topic will become an intent or not can be used as a cut-off value. In an embodiment, this value can be set manually by the SME. In another embodiment, this value can be automatically optimized based on the acceptance rate of proposed intents validated by the SME. For instance, the shape of the frequency curve of the topics related to new intents, and stored in the knowledge graph, can be analyzed. The system and/or method can then evaluate the integral of the topic interest level along the time and calculate the total amount of relevance of this topic. Then, the system and/or method can find the time in which the topic reaches a given amount of relevance for this topic (for instance 10% of the area under the curve) to estimate the topic interest default level. That time can then be adaptively adjusted in accordance with the proposed intents that are accepted and/or rejected by the SME.”).
Beaver and Rodrigo Cavalin et al. are considered to be analogous to the claimed invention because they are in the same field of endeavor of chat-bot/user dialogue systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beaver to incorporate the teachings of Rodrigo Cavalin et al. of wherein the identifying the new intent comprises: identifying a new subset of the subset of conversation logs comprising a plurality of conversation logs that could not be categorized in accordance with the one or more intents; and from the new subset, identifying a second new intent based on a threshold number of conversation logs including one or more key words associated with the second new intent, which provides the benefit of improving automated conversational systems ([0003] of Rodrigo Cavalin et al.).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Keisha Y Castillo-Torres whose telephone number is (571)272-3975. The examiner can normally be reached Monday - Friday, 9:00 am - 4:00 pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir, can be reached at (571)272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Keisha Y. Castillo-Torres
Examiner
Art Unit 2659
/Keisha Y. Castillo-Torres/Examiner, Art Unit 2659
/PIERRE LOUIS DESIR/Supervisory Patent Examiner, Art Unit 2659