Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-20 are pending. Claims 1, 8, and 15 are independent.
This Application was published as US 20240203445.
Apparent priority is 14 December 2022.
The instant Application is directed to a method of detecting high emotions in conversations and providing an intervention.
Examiner’s Note
[0015] of the specification defines “computer readable storage medium” as non-transitory.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Step 1: The independent Claims are directed to statutory categories:
Claim 1 is a method claim and directed to the process category of patentable subject matter.
Claim 8 is a computer program product claim and is directed to the machine or manufacture category of patentable subject matter.
Claim 15 is a device claim and directed to the machine or manufacture category of patentable subject matter.
Step 2A, Prong One: Does the Claim recite a judicially recognized exception, i.e., an abstract idea? Specifically, does the Claim fall within one of the following groupings of abstract ideas: (1) Mathematical Concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations); (2) Mental Processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion); or (3) Certain Methods of Organizing Human Activity (fundamental economic principles or practices, including hedging, insurance, and mitigating risk; commercial or legal interactions, including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations; or managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions)?
The rejected Claims recite Mental Processes.
Step 2A, Prong Two: Do additional elements integrate the judicial exception into a practical application? This step entails identifying whether there are any additional elements recited in the claim beyond the judicial exception(s), and evaluating those additional elements to determine whether they integrate the exception into a practical application of the exception. “Integration into a practical application” requires an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. This analysis uses the considerations laid out by the Supreme Court and the Federal Circuit to evaluate whether the judicial exception is integrated into a practical application.
The rejected Claims do not include additional limitations that point to integration of the abstract idea into a practical application and are therefore directed to a Mental Process.
Claim 1 is a generic automation of a mental process because a human agent can monitor a conversation, score each utterance, and intervene when necessary. Prong Two of Step 2A in the 101 analysis asks whether the abstract idea is integrated into a practical application. The answer is no in this instance because there is no technological solution in the Claim that “integrates” the abstract idea. The Claim merely suggests that the abstract idea be applied; it does not describe an application.
1. A method for regulating emotions in conversations, the method comprising:
monitoring a conversation between participants; (Counselor listens to conversation)
dividing the conversation into a plurality of utterances; (Counselor divides the conversation by speaker turns)
calculating an emotion score for each utterance; (Counselor scores each turn from intensity 1-3)
determining whether an emotion score of an utterance exceeds a threshold; and (Counselor determines if intensity is greater than 2)
intervening in the conversation in the event the emotion score exceeds the threshold. (Counselor intervenes when the intensity of a turn is 3)
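For illustration only, the recited steps can be sketched in a few lines of ordinary code (a hypothetical, non-limiting example; the exclamation-based scoring rule and the 1-3 intensity scale mirror the counselor analogy above and are not limitations of the Claim):

```python
# Hypothetical sketch of the recited steps. The scoring rule and the
# threshold are assumptions patterned on the counselor analogy above.

def emotion_score(utterance: str) -> int:
    """Score an utterance's emotional intensity from 1 to 3 (stand-in rule)."""
    return min(3, 1 + utterance.count("!"))

def regulate(conversation: list[str], threshold: int = 2) -> list[str]:
    """Monitor a conversation, divide it into utterances, score each one,
    and record an intervention whenever a score exceeds the threshold."""
    interventions = []
    for utterance in conversation:        # dividing into utterances
        score = emotion_score(utterance)  # calculating an emotion score
        if score > threshold:             # determining threshold exceedance
            interventions.append(f"intervene after: {utterance}")
    return interventions

print(regulate(["Hello.", "This is outrageous!!", "Calm down."]))
# ['intervene after: This is outrageous!!']
```

The brevity of the sketch underscores the point of the rejection: each recited step corresponds to an observation, evaluation, or judgment a human counselor could perform mentally or with pen and paper.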
Step 2B: Search for an Inventive Concept: The additional elements do not amount to significantly more: There are no additional elements that cause the Claim to amount to significantly more than the underlying abstract idea.
The Dependent Claims do not add limitations that could help the Claim as a whole to amount to significantly more than the Abstract idea identified for the Independent Claim:
2. The method of claim 1, wherein monitoring comprises monitoring in real time. (Counselor listens live)
3. The method of claim 1, wherein intervening comprises notifying at least one of the participants in the event the emotion score exceeds the threshold. (Counselor tells one participant that he is getting angry)
4. The method of claim 1, wherein intervening comprises notifying at least one of the participants of a recommended course of action. (Counselor suggests the participant take deep breaths)
5. The method of claim 4, wherein the recommended course of action is based on at least one of a type of argument in the utterance, a user profile of a participant that produced the utterance, a topic of the utterance, intonation of a participant that produced the utterance, and a physiological response of a participant that produced the utterance. (Counselor suggests this from notes from previous sessions with the individual)
6. The method of claim 1, further comprising modeling the utterances as nodes in a conversation graph. (Counselor draws the conversation as a graph with nodes to show that it is off-topic)
7. The method of claim 1, further comprising evaluating responses of the participants to the intervention. (Counselor asks if deep breathing helped either participant)
The additional limitations introduced by the Dependent Claims are not sufficient as additional elements that integrate the judicial exception into a practical application or as additional elements that cause the Claim as a whole to amount to significantly more than the underlying abstract idea.
With respect to independent Claim 8 and independent Claim 15, which have limitations similar to the limitations of Claim 1, the limitations of “computer-readable storage medium,” “at least one processor,” and “at least one memory” are expressed parenthetically, lack nexus to the Claim language, and as such are a separable and divisible mention of a machine. Accordingly, these Claims do not include additional limitations that cause the Claim as a whole to amount to significantly more than the underlying abstract idea.
The Dependent Claims 9-14 and 16-20 are similar to claims 2-7 and do not add limitations that could integrate the judicial exception into a practical application or help the Claim as a whole to amount to significantly more than the Abstract idea identified for the Independent Claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-5, 7-12, and 14-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Can (US 20240073321 A1).
Regarding claim 1, Can discloses: 1. A method for regulating emotions in conversations, the method comprising: monitoring a conversation between participants; ("[0053] Interface module 204 may serve as an entry point or user interface through which one or more utterances, such as spoken words/sentences (speech), may be entered for subsequent recognition using an automatic speech recognition model 322..."; see also Fig. 1)
dividing the conversation into a plurality of utterances; ("[0086]... The system performs aspect-based sentiment classification on a turn level (e.g., every time a new utterance is available). Performing this in real-time enables the system to track how sentiment changes over the course of a call..." ; see also "[0109]... The information may include utterances with low sentiment scores that come right before utterances with high sentiment scores. Those utterances are positive examples." – the conversation is divided into utterances by each turn.)
calculating an emotion score for each utterance; ("[0109]... In a non-limiting example, the Sentiment Predictive Model 324 scores every utterance of a transcript (sentiment score)..." )
determining whether an emotion score of an utterance exceeds a threshold; and intervening in the conversation in the event the emotion score exceeds the threshold. ("[0154] In 414, call center system 100 may determine if assistance is needed. For example, if a sentiment score exceeds a predetermined threshold, an automated agent assistance module 122 may be triggered to assist the agent in handling the complaint." )
Regarding claim 2, Can discloses: 2. The method of claim 1, wherein monitoring comprises monitoring in real time. ("[0048] The system measures customer satisfaction using, but not limited to, two main metrics: a satisfaction score (e.g., NPS) and a sentiment score. Other satisfaction measures are considered within the scope of the technology described herein. Both are predicted in real-time at an utterance level..." )
Regarding claim 3, Can discloses: 3. The method of claim 1, wherein intervening comprises notifying at least one of the participants in the event the emotion score exceeds the threshold. ("[0156] In 416, call center system 100 may also provide one or more alerts to a screen of the call agent. For example, alerts may be designated in various categories, such as, important, critical, helpful support or manager needed. The alerts may be based on the sentiment score crossing various thresholds, triggering varying levels of assistance and associated alerts..." )
Regarding claim 4, Can discloses: 4. The method of claim 1, wherein intervening comprises notifying at least one of the participants of a recommended course of action. ("[0156] ...A call agent may subsequently select the alert (e.g., with cursor) and receive suggested phrasing to assist the customer. ..." )
Regarding claim 5, Can discloses: 5. The method of claim 4, wherein the recommended course of action is based on at least one of a type of argument in the utterance, a user profile of a participant that produced the utterance, a topic of the utterance, intonation of a participant that produced the utterance, and a physiological response of a participant that produced the utterance. ("[0156] ...In another non-limiting example, the call center system 100 may suggest trigger words associated with the specific caller's anger or how long before the specific caller typically rises to anger or complaints. In one non-limiting example, the alert may include a caller's average length of call or average time to resolution. While providing a few non-limiting examples above, any user preference related information may be used to warn, inform or otherwise assist the call agent to improve an outcome of the call and improve user experience." ; see also "[0043] Automated System Assistance 122 provides notifications (e.g., alerts), phrasing and redirection to a manager to provide automated assistance during a call based on the customer profile 116..." )
Regarding claim 7, Can discloses: 7. The method of claim 1, further comprising evaluating responses of the participants to the intervention. ("[0043] ...In one embodiment, Automated System Assistance 122 uses a similar customer predictive model 332 to find customers similar to the current customer in order to drive insights into what actions resolved their complaints and what language helped increase customer satisfaction..." – finding the language that helped increase satisfaction would read on evaluating the responses; see also "[0048] The system measures customer satisfaction using, but not limited to, two main metrics: a satisfaction score (e.g., NPS) and a sentiment score. Other satisfaction measures are considered within the scope of the technology described herein. Both are predicted in real-time at an utterance level. Call agents may be provided with both while talking to customers, making it easy to absorb information, so that the call agents can focus on the call itself. To simplify information consumption, the metrics may be presented, for example, in the form of a color-coded gauge that changes throughout the call." – changing the color as the agent responds also reads on evaluating response of the participant ; see also "[0095] As can be seen in the interaction above, the sentiment against the app has changed over the course of the call. The sentiment predictive model's turn level analysis may also capture that the agent's instructions did not resolve the customer's issue." – not resolving is also an evaluation of the response.)
Claim 8 is a computer program product claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. Additionally, the “computer-readable storage medium” of the Claim is taught by Can (Main Memory 1408; Second Memory 1410, Fig. 14).
Claim 9 is a computer program product claim with limitations corresponding to the limitations of Claim 2 and is rejected under similar rationale.
Claim 10 is a computer program product claim with limitations corresponding to the limitations of Claim 3 and is rejected under similar rationale.
Claim 11 is a computer program product claim with limitations corresponding to the limitations of Claim 4 and is rejected under similar rationale.
Claim 12 is a computer program product claim with limitations corresponding to the limitations of Claim 5 and is rejected under similar rationale.
Claim 14 is a computer program product claim with limitations corresponding to the limitations of Claim 7 and is rejected under similar rationale.
Claim 15 is a system claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. Additionally, the “processor” and “memory” of the Claim are taught by Can (Processor 1404; Main Memory 1408; Second Memory 1410, Fig. 14).
Claim 16 is a system claim with limitations corresponding to the limitations of Claim 2 and is rejected under similar rationale.
Claim 17 is a system claim with limitations corresponding to the limitations of Claim 3 and is rejected under similar rationale.
Claim 18 is a system claim with limitations corresponding to the limitations of Claim 4 and is rejected under similar rationale.
Claim 19 is a system claim with limitations corresponding to the limitations of Claim 5 and is rejected under similar rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Can in view of Ghosal et al. ("DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation").
Regarding claim 6, Can does not disclose modeling the utterances as nodes in a conversation graph.
Ghosal discloses: 6. The method of claim 1, further comprising modeling the utterances as nodes in a conversation graph. (Fig. 2 shows utterances U as nodes in a conversation graph. See also: "First, we introduce the following notation: a conversation having N utterances is represented as a directed graph… Vertices: Each utterance in the conversation is represented as a vertex vi ∈ V in G. " pg. 4, paras 2-3)
Can and Ghosal are considered analogous art to the claimed invention because both disclose methods of recognizing emotion. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Can with the teaching of Ghosal to use a conversation graph to detect emotion. Doing so would have been beneficial because Ghosal’s method outperforms the state of the art (Ghosal pg. 7, para 6) and improves context understanding (Ghosal pg. 8, para 1).
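For illustration only, the vertex construction quoted from Ghosal above can be sketched as follows (a hypothetical example; the adjacency-list representation and the context-window parameter are assumptions for illustration and are not details of either reference):

```python
# Hypothetical sketch of modeling utterances as vertices in a directed
# conversation graph: each utterance u_i is a vertex, with directed edges
# to later utterances inside an assumed context window.

def conversation_graph(utterances: list[str], window: int = 1) -> dict[int, list[int]]:
    """Return adjacency lists: vertex i -> indices of the later
    utterances that fall within `window` turns of utterance i."""
    edges: dict[int, list[int]] = {i: [] for i in range(len(utterances))}
    for i in range(len(utterances)):
        for j in range(i + 1, min(i + 1 + window, len(utterances))):
            edges[i].append(j)  # directed edge u_i -> u_j
    return edges

g = conversation_graph(["Hi.", "What's wrong?", "Nothing!"], window=1)
print(g)  # {0: [1], 1: [2], 2: []}
```

The sketch shows only the graph construction step recited in claim 6; Ghosal's contribution further applies a graph convolutional network over such vertices to classify emotion.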
Claim 13 is a computer program product claim with limitations corresponding to the limitations of Claim 6 and is rejected under similar rationale.
Claim 20 is a system claim with limitations corresponding to the limitations of Claim 6 and is rejected under similar rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ripa et al. (US 20140140497 A1). Ripa discloses a method for monitoring an emotion score for call segments and alerting based on a threshold. (Figs. 2B and 3)
Murali et al. (US 20200092419 A1). Murali discloses a method for sensing emotion in calls and dynamically changing suggestions. Murali discloses checking whether the suggestion satisfied the customer and concluding or continuing the call accordingly. (Fig. 1)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JON C MEIS whose telephone number is (703)756-1566. The examiner can normally be reached Monday - Thursday, 8:30 am - 5:30 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JON CHRISTOPHER MEIS/Examiner, Art Unit 2654
/HAI PHAN/Supervisory Patent Examiner, Art Unit 2654