DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12/01/2025 have been fully considered but they are not persuasive. Regarding the rejection of claims 1-7 and 12-20 under 35 U.S.C. § 103, Applicant argues:
“Regarding the rejection of Claim 1 under 35 U.S.C. § 103, Applicant respectfully traverses the Office Action's assertion that Gao cures Office Action-admitted deficiencies in Wu, namely that paragraphs [0066] and [0083]-[0084] of Gao teach ‘selecting a placeholder as the candidate, the placeholder comprising information corresponding to a data item from the user; and retrieving, based on the information in the placeholder, user data; and replacing the placeholder with the user data,’ as recited in previously presented Claim 1. See Office Action at page 6. As discussed below, paragraphs [0066] and [0083]-[0084] of Gao merely discuss template slot-filling with knowledge-graph entities and optimization of which entities to fill into the slots, but do not teach or suggest, inter alia, selecting a placeholder comprising information identifying a user's data item (e.g., account information and/or account activity), retrieving the data item based on the information in the placeholder, and replacing the placeholder with the user data, as recited in amended Claim 1.”
Regarding Applicant’s arguments, the examiner respectfully disagrees. The examiner contends that the claimed “placeholder” appears to be no more than a data container that points to user information. Under the broadest reasonable interpretation, a “placeholder” may simply correspond to a slot that can be filled with knowledge-graph data, because a slot, in this context, is likewise a data container in a template that is to be associated with certain information. Although the slot itself is not “selected… as the candidate,” the data from the knowledge graph that is to be associated with the slot is the information that is “selected… as the candidate.” As shown in paragraph [0066] of Gao:
The popularity of different subject matter options (candidates) can be ranked according to user interest or on a per user basis. The popularity can take both user interest and overall popularity into account.
Moreover, the recited “at least one of account information or account activity related to the user” appears to be a broad statement about what kind of data the “data item” comprises. The examiner contends that Gao’s contextual information about the user, which is used to rank the options for slot selection, covers this language because that contextual information is used to select data that is more relevant to the user. Thus, the examiner contends that Gao does teach the amended limitations as presented in the arguments of the Remarks.
Furthermore, regarding the arguments directed to the recited language “the mediatory content compris[es] at least one of a suggestion to transfer the user to a live agent or a gift offer for the user,” the examiner contends that the arguments are moot in view of the new grounds of rejection based on newly added reference Erhart, whose disclosure covers this newly added clarifying language. Although the language appears to be imported from previously recited (and now canceled) claim 12, the language now specifies that the “offers” are “gift offers,” which is language not taught by any of Wu, Gao or Venkataraman, thus necessitating the new grounds of rejection. Therefore, the examiner submits new grounds of rejection of claims 1-7 and 13-20 under 35 U.S.C. § 103 over Wu, Gao, Venkataraman and Erhart.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US PG Pub 20200159997) in view of Gao (US PG Pub 20180349755) and further in view of Venkataraman (US PG Pub 20180032513) and further in view of Erhart (US PG Pub 20210029246).
As per claims 1, 13 and 17, Wu discloses:
A method, non-transitory computer-readable storage medium and computing device comprising:
a processor, configured to: obtain, by the computing device, user input from a user via a conversational user interface of an application (Wu; Fig. 11, item 1104; p. 0114 - The framework 1100 may comprise a current message module 1104. At the current message module 1104, a current message q.sub.5 that is currently input by the user may be processed); obtain, by the computing device, a user emotion and user intent using the user input (Wu; Fig. 11, items 1116, 1108 & 1140; p. 0030 - In some embodiments, dynamic memory network (DMN) technique may be adopted for generating the responses. Fact memory vectors may be reasoned out by a DMN from fact vectors of a current session and candidate fact responses, wherein the candidate fact responses may refer to candidate responses determined at a fact basis. Moreover, emotion memory vectors may be reasoned out by a DMN from emotion vectors of the current session and candidate emotion responses, wherein the candidate emotion responses may refer to candidate responses determined at an emotion basis. The fact and emotion memory vectors may contain information about an attention point. An intention vector may be generated based on the fact and emotion memory vectors, which may contain information about both the attention point and an intention. A response to a current message may be generated based at least on the intention vector and the fact and emotion memory vectors); obtain by the computing device, candidate probabilities for a fragment of a response to the user input using the obtained user emotion, the obtained user intent and the user input, a candidate probability associated with the fragment indicating a suitability of the candidate for the fragment (Wu; Fig. 11, items 1112, 1118; p. 0115-0118 – obtaining candidate fact responses based on a ranked list of candidates using candidate probabilities; also see p. 
0141-0147 - The framework 1100 may comprise an emotion response ranking model 1114 for determining candidate emotion responses 1118. Herein, the candidate emotion responses 1118 may be candidate responses having emotional trends. Inputs to the emotion response ranking model 1114 may comprise at least one of: emotion vectors of the current session generated by the input model 1102, an emotion vector of the current message generated by the current message module 1104, an emotion-topic knowledge graph 1116, and QA pairs in the pure chat index set 1110. The emotion-topic knowledge graph 1116 may be used for providing information for guiding optimal emotions in a final response to the current message. The emotion response ranking model 1114 may compute scores of responses existing in the pure chat index 1110 based on the inputs, and determine the top-ranked one or more responses as the candidate emotion responses 1118. In some implementations, a GBDT model may be adopted by the emotion response ranking model 1114 for scoring two sequences. For example, the GBDT may take a current message q and a candidate emotion response Q as inputs, and output similarity scores of the candidate emotion response Q compared to the current message q; also see p. 0149 - The framework 1100 may comprise an intention prediction module 1140.… The intention may be represented as a vector. Assuming that there are N intentions in total, the vector will be an N-dimension vector, with each dimension being a probability score of a corresponding intention…); select, by the computing device, a candidate from a number of candidates for the fragment using the candidate probabilities obtained for the fragment (Wu; Fig. 11, item 1160; p. 0153-0156 - The response generation module 1160 may decide a response word-by-word, wherein the response will be provided to the user as a reply to the current message from the user. 
When deciding each word in the response, the response generation module 1160 may desire to refer to the fact memory vectors, the emotion memory vectors or the intention vector. The attention mechanism module 1150 may be used for determining selection of the fact memory vectors, the emotion memory vectors and the intention vector for use by the response generation module 1160); and communicate, by the computing device, the paired response to the user via the conversational user interface of the application (Wu; p. 0053 - The responses in the response queue or response cache 234 may be further transferred to the UI 210 such that the responses can be displayed to the user in the chat window; Fig. 3 & p. 0056 - The presentation area 310 displays messages and responses (pairs) in a chat flow).
Wu, however, fails to disclose selecting a placeholder as the candidate, the placeholder comprising information identifying a data item of the user, the data item comprising at least one of account information or account activity related to the user; retrieving, based on the information in the placeholder, the data item; and replacing the placeholder with the data item. Gao does teach selecting a placeholder as the candidate, the placeholder comprising information identifying a data item of the user, the data item comprising at least one of account information or account activity related to the user (Gao; p. 0066 - The dialogue generator 290 uses the selected characteristic to generate a dialogue response for the user. A response dialogue can be formulated by combining a template response with entities linked to selected characteristics within the knowledge graph. A template response question for subject matter could read, “Are you interested in <Entity 1> books about <slot 1>, <slot 2>, <slot 3>, or <slot 4>?” Each slot (placeholder) would be filled with a subject from the knowledge graph related to books about Entity 1. The knowledge graph may include more subject matter entities that can be practically asked about in a question to the user. In this circumstance, the technology can optimize the entities slotted into the pre-formulated response query based on popularity. Popularity for an entity can be determined a number of different ways including entity occurrence within queries received by a search engine. In this way, the slots are filled with the most popular entities. The popularity of entities can be determined using contextual data about the user. The contextual data can be used to determine user interest. The popularity of different subject matter options can be ranked according to user interest or on a per user basis. The popularity can take both user interest and overall popularity into account; see also p. 
0083-0084); retrieving, based on the information in the placeholder, the data item (Gao; p. 0066 - The popularity of different subject matter options can be ranked according to user interest or on a per user basis. The popularity can take both user interest and overall popularity into account; see also p. 0083-0084); and replacing the placeholder with the data item (Gao; p. 0066 – replacing the slots with the ranked subject matter options (candidates); see also p. 0083-0084). Therefore, it would have been obvious to one of ordinary skill in the art to modify the method of Wu to include selecting a placeholder as the candidate, the placeholder comprising information identifying a data item of the user, the data item comprising at least one of account information or account activity related to the user; retrieving, based on the information in the placeholder, the data item; and replacing the placeholder with the data item, as taught by Gao, in order to allow an interactive program to leverage a knowledge graph to maximize the likelihood of successfully understanding the user's query and at the same time minimize the number of turns taken to understand the user. A turn is the exchange of a question and response with the user. A goal of the technology described herein is to formulate response queries that have a probability of completing the user's requested task accurately while issuing the fewest number of response queries to the user before determining the intended task. In order to accomplish this, the technology combines a reinforced learning mechanism with a knowledge-graph simulation score to determine the optimal response query to pose to the user. Response queries are used when a large number of entities within the knowledge graph are consistent with the initial query (Gao; p. 0004).
Furthermore, Wu in view of Gao fail to disclose determining, by the computing device, mediatory content based on the selected candidate; generating, by the computing device, the response based on the selected candidate and the mediatory content, such that the response includes at least the data item and the mediatory content. Venkataraman does teach determining, by the computing device, mediatory content based on the selected candidate; generating, by the computing device, the response based on the selected candidate and the mediatory content, such that the response includes at least the data item and the mediatory content (Venkataraman; Fig. 3 & 4; p. 0042-0046 – supplemental functions to be performed are mapped to multiple query templates based on the user profile data… The control circuitry determines that the generic profile includes a query template matching the user's query and a corresponding supplemental function. The control circuitry executes the supplemental function to generate subsequent information 306, “Your provider now also has episodes available from TV MAX.” This may be because in past usage of the interactive media guidance application multiple users have typically searched for availability of latest TV episodes from TV MAX in addition to TV ONLINE). 
Therefore, it would have been obvious to one of ordinary skill in the art to modify the method of Wu and Gao to include determining, by the computing device, mediatory content based on the selected candidate; generating, by the computing device, the response based on the selected candidate and the mediatory content, such that the response includes at least the data item and the mediatory content, as taught by Venkataraman, because by monitoring the user's usage patterns and learning from the user's actions in conjunction with the user's natural language queries, the interactive media guidance application may provide a smarter and more efficient user experience and minimize the need for pre-programmed responses (Venkataraman; p. 0022). Furthermore, Wu, Gao and Venkataraman fail to disclose the mediatory content comprising at least one of a suggestion to transfer the user to a live agent or a gift offer for the user. Erhart does teach the mediatory content comprising at least one of a suggestion to transfer the user to a live agent or a gift offer for the user (Erhart; p. 0178 - As an example, a customer having a relatively higher status level (e.g., Platinum customer or Gold customer) may be entitled to certain additional service benefits from the contact center 108 than a customer not having the same status level. This may be particularly true if the customer having the higher status level paid/pays for the benefit of that status level. In some embodiments, a customer 116 having a relatively higher status level may be entitled to more interactions with a human agent 172 whereas a customer 116 having a relatively lower status level may be required to interact with a chatbot engine 148 for a longer period of time before being transferred to a human agent 172… Transferring to a human agent based on user’s status level (data item of the user)). 
Therefore, it would have been obvious to one of ordinary skill in the art to modify the method of Wu, Gao and Venkataraman to include the mediatory content comprising at least one of a suggestion to transfer the user to a live agent or a gift offer for the user, as taught by Erhart, in order to provide for tracking the periods of digital engagement to facilitate the best times for proactive notifications or offers to resume conversation (Erhart; p. 0013).
As per claims 2, 14 and 18, Wu in view of Gao, Venkataraman and Erhart discloses: The method, non-transitory computer-readable storage medium and computing device of claims 1, 13 and 17, further comprising: for each of multiple fragments of the response, the computing device, iteratively obtaining candidate probabilities and selecting a candidate from the number of candidates using the candidate probabilities (Wu; Fig. 11, items 1112, 1118; p. 0115-0118 – obtaining candidate fact responses based on a ranked list of candidates using candidate probabilities; also see p. 0141-0147 - The framework 1100 may comprise an emotion response ranking model 1114 for determining candidate emotion responses 1118. Herein, the candidate emotion responses 1118 may be candidate responses having emotional trends. Inputs to the emotion response ranking model 1114 may comprise at least one of: emotion vectors of the current session generated by the input model 1102, an emotion vector of the current message generated by the current message module 1104, an emotion-topic knowledge graph 1116, and QA pairs in the pure chat index set 1110. The emotion-topic knowledge graph 1116 may be used for providing information for guiding optimal emotions in a final response to the current message. The emotion response ranking model 1114 may compute scores of responses existing in the pure chat index 1110 based on the inputs, and determine the top-ranked one or more responses as the candidate emotion responses 1118. In some implementations, a GBDT model may be adopted by the emotion response ranking model 1114 for scoring two sequences. For example, the GBDT may take a current message q and a candidate emotion response Q as inputs, and output similarity scores of the candidate emotion response Q compared to the current message q; also see p. 0149 - The framework 1100 may comprise an intention prediction module 1140.… The intention may be represented as a vector. 
Assuming that there are N intentions in total, the vector will be an N-dimension vector, with each dimension being a probability score of a corresponding intention…); and generating the response further comprising assembling the response using the candidate selected for each fragment (Wu; Fig. 11, item 1160; p. 0153-0156 - The response generation module 1160 may decide a response word-by-word, wherein the response will be provided to the user as a reply to the current message from the user. When deciding each word in the response, the response generation module 1160 may desire to refer to the fact memory vectors, the emotion memory vectors or the intention vector. The attention mechanism module 1150 may be used for determining selection of the fact memory vectors, the emotion memory vectors and the intention vector for use by the response generation module 1160).
As per claims 3, 15 and 19, Wu in view of Gao, Venkataraman and Erhart discloses:
The method, non-transitory computer-readable storage medium and computing device of claims 1, 13 and 17, obtaining a user emotion further comprising: providing, by the computing device, the user input to a trained emotion classifier and receiving the user emotion and an emote score as output from the trained emotion classifier, the emote score representing an intensity of the user emotion; providing, by the computing device, the user input to a trained intent classifier and receiving the user intent and an intent probability as output from the trained emotion classifier, the intent probability representing a likelihood of the user intent; and obtaining, by the computing device, the candidate probabilities for the fragment of the response to the user input using the user emotion, emote score, user intent, intent probability and user input (Wu; Fig. 11, items 1112, 1118; p. 0115-0118 – obtaining candidate fact responses based on a ranked list of candidates using candidate probabilities; also see p. 0141-0147 - The framework 1100 may comprise an emotion response ranking model 1114 for determining candidate emotion responses 1118. Herein, the candidate emotion responses 1118 may be candidate responses having emotional trends. Inputs to the emotion response ranking model 1114 may comprise at least one of: emotion vectors of the current session generated by the input model 1102, an emotion vector of the current message generated by the current message module 1104, an emotion-topic knowledge graph 1116, and QA pairs in the pure chat index set 1110. The emotion-topic knowledge graph 1116 may be used for providing information for guiding optimal emotions in a final response to the current message. The emotion response ranking model 1114 may compute scores of responses existing in the pure chat index 1110 based on the inputs, and determine the top-ranked one or more responses as the candidate emotion responses 1118. 
In some implementations, a GBDT model may be adopted by the emotion response ranking model 1114 for scoring two sequences. For example, the GBDT may take a current message q and a candidate emotion response Q as inputs, and output similarity scores of the candidate emotion response Q compared to the current message q; also see p. 0149 - The framework 1100 may comprise an intention prediction module 1140.… The intention may be represented as a vector. Assuming that there are N intentions in total, the vector will be an N-dimension vector, with each dimension being a probability score of a corresponding intention…).
As per claims 4, 16 and 20, Wu in view of Gao, Venkataraman and Erhart discloses:
The method, non-transitory computer-readable storage medium and computing device of claims 3, 15 and 19, obtaining the candidate probabilities for the fragment of the response further comprising: providing, by the computing device, the user emotion, emote score, user intent, intent probability and user input to a trained attention-based neural network model and receiving the candidate probabilities for the fragment of the response as output from the trained attention-based neural network model (Wu; Fig. 11, item 1160; p. 0153-0156 - The response generation module 1160 may decide a response word-by-word, wherein the response will be provided to the user as a reply to the current message from the user. When deciding each word in the response, the response generation module 1160 may desire to refer to the fact memory vectors, the emotion memory vectors or the intention vector. The attention mechanism module 1150 may be used for determining selection of the fact memory vectors, the emotion memory vectors and the intention vector for use by the response generation module 1160).
As per claim 5, Wu in view of Gao, Venkataraman and Erhart discloses:
The method of claim 4, wherein the trained emotion classifier, intent classifier and attention-based neural network model are components of a conversational response generator executed by the computing device (Wu; Fig. 11, item 1100; p. 0110 - FIG. 11 illustrates an exemplary framework 1100 for generating responses through DMN according to an embodiment. The framework 1100 may reason out fact memory vectors and emotion memory vectors through DMN, obtain an intention vector based on the fact and emotion memory vectors, and further generate a response to a current message based at least on the attention vector and the fact and emotion memory vectors).
As per claim 6, Wu in view of Gao, Venkataraman and Erhart discloses:
The method of claim 5, further comprising tuning the conversational response generator using a number of user input and response pairings, each user input and response pairing comprising user input received by the conversational response generator and a response generated by the conversation response generator as a reply, each user input and response pairing further comprising the emotion, emote score, intent and intent probability generated using the user input of the pairing (Wu; p. 0156 - The last generated word may be concatenated to the current vector as input at each time step. The generated output by the response generation module 1160 may be trained with a cross-entropy error classification of a correct sequence attached with a “</s>” tag at the end of the sequence; also see p. 0149).
As per claim 7, Wu in view of Gao, Venkataraman and Erhart discloses:
The method of claim 3, further comprising generating the trained emotion classifier using training examples from one or more data sources selected from the following: user interaction data, audio data, video data, user value data and conversation data (Wu; Fig. 4; p. 0069-0070 - The last generated word may be concatenated to the current vector as input at each time step. The generated output by the response generation module 1160 may be trained with a cross-entropy error classification of a correct sequence attached with a “</s>” tag at the end of the sequence).
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure and includes:
Wen (US PG Pub 20220351634), where the invention relates to intent classification of questions provided to a question answering (QA) system. A proposed method identifies negative emotion of the user and, responsive to identifying negative emotion of the user, identifies an incorrect answer provided to the user. The incorrect answer and its associated question are analyzed to determine whether incorrect classification of the associated question's intent is responsible for the incorrect answer. Either an intent classification algorithm of the QA system or a QA algorithm selection process of the QA system is then modified accordingly (Wen; Abstract).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Rodrigo A Chavez whose telephone number is (571)270-0139. The examiner can normally be reached Monday - Friday 9-6 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil, can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RODRIGO A CHAVEZ/Examiner, Art Unit 2658
/RICHEMOND DORVIL/Supervisory Patent Examiner, Art Unit 2658