DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 7/31/2024 and 12/10/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Status of Claims
Claims 1-20 are pending in this application.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5, 7, 9-10, 12, 14-15, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shon et al. (U.S. Patent 11,521,639) in view of Thoniparambil et al. (U.S. Patent Application Publication 2023/0196035).
As per claims 1, 9 and 15, Shon et al. discloses:
A system (Column 8, lines 38-45), comprising:
a memory subsystem (Column 8, lines 38-45); and
processing circuitry configured to execute instructions stored in the memory subsystem (Column 8, lines 38-45) to:
obtain, at a first device of a contact center agent, speech content from a second device of a contact center user during a contact center engagement between the contact center agent and the contact center user (Abstract & Column 8, lines 12-27 – Perform real time monitoring of agent customer interactions implies monitoring speech content from customer calls);
determine, using an artificial intelligence model accessible to the first device, that the speech content meets a threshold (Abstract & Column 7, line 65 – Column 8, line 11 and Column 8, lines 28-36 – The sentiment in the speech content is classified (i.e. meets a threshold) using a neural network (i.e. AI model));
based on the speech content meeting the threshold, generate, using the artificial intelligence model, a transcription of the speech content (Column 8, lines 18-19 – Segment and filter agent-customer transcripts based on initial sentiment, final sentiment, or sentiment change.); and
Shon et al. fails to explicitly disclose, but Thoniparambil et al. in the same field of endeavor teaches:
output, in place of the speech content and during the contact center engagement, the transcription of the speech content at the first device (Paragraph [0053] – the associated portion of the text transcript that includes negative sentiment is displayed).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and computer readable medium of Shon et al. with the transcript output of Thoniparambil et al. because it is a case of combining prior art elements according to known methods to yield predictable results. The difference between the claimed invention and Shon et al. is the output of the transcript; one of ordinary skill in the art could have combined the elements by known methods and would have known that the results of the combination were predictable.
Claim 1 is directed to the method of using the system of claim 15, so is rejected for similar reasons.
Claim 9 is directed to a non-transitory computer readable medium containing instructions to cause a processor to act as the system of claim 15, so is rejected for similar reasons.
As per claim 2, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 1 above. Shon et al. in the combination further discloses:
determining that a negative emotional tone used by the contact center user within the speech content meets the threshold (Column 1, lines 27-32 and Column 8, lines 12-27).
As per claim 5, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 1 above. Thoniparambil et al. in the combination further discloses:
outputting, in connection with the transcription of the speech content, an indication of a negative emotional state of the contact center user (Paragraph [0053]).
As per claim 7, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 1 above. Thoniparambil et al. in the combination further discloses:
The artificial intelligence model is trained for sentiment analysis using contact center engagement data associated with at least one past contact center engagement for each of multiple contact center agents, and the threshold is used with the contact center agent and other contact center agents (Paragraphs [0014], [0016], [0026] & [0060]).
As per claim 10, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 9 above. Shon et al. in the combination further discloses:
the threshold corresponds to one or more of a negative emotional tone, an amount of profanity, or a speech volume (Column 1, lines 27-32 and Column 8, lines 12-27).
As per claim 12, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 9 above. Thoniparambil et al. in the combination further discloses:
an indication of a negative emotional state of the contact center user is output in connection with the transcription of the speech content (Paragraph [0053]).
As per claim 14, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 9 above. Shon et al. in the combination further discloses:
the speech content is obtained over a synchronous communication modality (Column 8, lines 12-27 - Perform real time monitoring of agent-customer interactions for contact center management. Empowering agents (e.g., alerting agent at the start of a call that a customer may be frustrated, protecting agents from abusive customers, and motivating agents to achieve higher sentiment scores). Both of these require synchronous speech communication).
As per claim 18, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 15 above. Thoniparambil et al. in the combination further discloses:
based on the speech content meeting the threshold, indicate a negative emotional state of the contact center user at the first device (Paragraph [0053]).
As per claim 20, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 15 above. Thoniparambil et al. in the combination further discloses:
the contact center engagement is facilitated over a telephony modality or a video conferencing modality (Paragraph [0026]).
Claims 3-4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Shon et al. (U.S. Patent 11,521,639) and Thoniparambil et al. (U.S. Patent Application Publication 2023/0196035) in view of Arora et al. (U.S. Patent 12,062,368).
As per claim 3, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 1 above. The combination fails to disclose but Arora et al. in the same field of endeavor teaches:
determining that an amount of profanity used by the contact center user within the speech content meets the threshold (Column 48, lines 17-30 & Column 49, lines 6-12 – Calls have a swear word flag and swear word notification).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and computer readable medium of Shon et al. and Thoniparambil et al. with the profanity detection of Arora et al. because it is a case of combining prior art elements according to known methods to yield predictable results. The difference between the claimed invention and the combination of Shon et al. and Thoniparambil et al. is the profanity detection of Arora et al.; one of ordinary skill in the art could have combined the elements by known methods and would have known that the results of the combination were predictable.
As per claim 4, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 1 above. The combination fails to disclose but Arora et al. in the same field of endeavor teaches:
determining that a speech volume used by the contact center user within the speech content meets the threshold (Column 48, lines 17-30 & Column 48, line 53 - Column 49, line 4 – Calls are checked for yelling and loud volume).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and computer readable medium of Shon et al. and Thoniparambil et al. with the loud volume detection of Arora et al. because it is a case of combining prior art elements according to known methods to yield predictable results. The difference between the claimed invention and the combination of Shon et al. and Thoniparambil et al. is the loud volume detection of Arora et al.; one of ordinary skill in the art could have combined the elements by known methods and would have known that the results of the combination were predictable.
As per claim 17, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 15 above. The combination fails to disclose but Arora et al. in the same field of endeavor teaches:
determine that the speech content cumulatively meets the threshold over multiple periods of time during the contact center engagement (Column 48, lines 17-30 & Column 48, line 53 - Column 49, line 4 – Calls are checked for loud volume and flagged for occurring 3 times over the course of the call).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and computer readable medium of Shon et al. and Thoniparambil et al. with the cumulative loud volume detection of Arora et al. because it is a case of combining prior art elements according to known methods to yield predictable results. The difference between the claimed invention and the combination of Shon et al. and Thoniparambil et al. is the cumulative loud volume detection of Arora et al.; one of ordinary skill in the art could have combined the elements by known methods and would have known that the results of the combination were predictable.
Claims 6, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Shon et al. (U.S. Patent 11,521,639) and Thoniparambil et al. (U.S. Patent Application Publication 2023/0196035) in view of Western (U.S. Patent Application Publication 2017/0094215).
As per claim 6, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 1 above. The combination fails to disclose but Western in the same field of endeavor teaches:
based on the speech content meeting the threshold, muting an audio channel of the contact center user to prevent an output of the speech content or additional content at the first device during at least some remaining amount of the contact center engagement (Paragraph [0040] – Audio containing profanity is automatically muted).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and computer readable medium of Shon et al. and Thoniparambil et al. with the profanity muting of Western because it is a case of combining prior art elements according to known methods to yield predictable results. The difference between the claimed invention and the combination of Shon et al. and Thoniparambil et al. is the profanity muting of Western; one of ordinary skill in the art could have combined the elements by known methods and would have known that the results of the combination were predictable.
As per claim 13, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 9 above. The combination fails to disclose but Western in the same field of endeavor teaches:
audio from the second device is muted at the first device based on the speech content meeting the threshold (Paragraph [0040] – Audio containing profanity is automatically muted).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and computer readable medium of Shon et al. and Thoniparambil et al. with the profanity muting of Western because it is a case of combining prior art elements according to known methods to yield predictable results. The difference between the claimed invention and the combination of Shon et al. and Thoniparambil et al. is the profanity muting of Western; one of ordinary skill in the art could have combined the elements by known methods and would have known that the results of the combination were predictable.
As per claim 19, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 15 above. The combination fails to disclose but Western in the same field of endeavor teaches:
based on the speech content meeting the threshold, mute audio of the second device (Paragraph [0040] – Audio containing profanity is automatically muted).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and computer readable medium of Shon et al. and Thoniparambil et al. with the profanity muting of Western because it is a case of combining prior art elements according to known methods to yield predictable results. The difference between the claimed invention and the combination of Shon et al. and Thoniparambil et al. is the profanity muting of Western; one of ordinary skill in the art could have combined the elements by known methods and would have known that the results of the combination were predictable.
Claims 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Shon et al. (U.S. Patent 11,521,639) and Thoniparambil et al. (U.S. Patent Application Publication 2023/0196035) in view of Smith-Mickelson et al. (U.S. Patent 12,107,819).
As per claim 8, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 1 above. The combination fails to disclose but Smith-Mickelson et al. in the same field of endeavor teaches:
The artificial intelligence model is trained for sentiment analysis using contact center engagement data limited to the contact center agent and the threshold is specific to the contact center agent (Column 17, lines 32-64).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and computer readable medium of Shon et al. and Thoniparambil et al. with the agent-specific thresholds of Smith-Mickelson et al. because it is a case of combining prior art elements according to known methods to yield predictable results. The difference between the claimed invention and the combination of Shon et al. and Thoniparambil et al. is the agent-specific thresholds of Smith-Mickelson et al.; one of ordinary skill in the art could have combined the elements by known methods and would have known that the results of the combination were predictable.
As per claim 11, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 9 above. The combination fails to disclose but Smith-Mickelson et al. in the same field of endeavor teaches:
the threshold is specific to the contact center agent (Column 17, lines 32-64).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and computer readable medium of Shon et al. and Thoniparambil et al. with the agent-specific thresholds of Smith-Mickelson et al. because it is a case of combining prior art elements according to known methods to yield predictable results. The difference between the claimed invention and the combination of Shon et al. and Thoniparambil et al. is the agent-specific thresholds of Smith-Mickelson et al.; one of ordinary skill in the art could have combined the elements by known methods and would have known that the results of the combination were predictable.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Shon et al. (U.S. Patent 11,521,639) and Thoniparambil et al. (U.S. Patent Application Publication 2023/0196035) in view of Dumaine et al. (International Patent Application Publication WO 2017/210633, listed in IDS dated 12/10/2025).
As per claim 16, the combination of Shon et al. and Thoniparambil et al. teaches all of the limitations of claim 15 above. The combination fails to disclose but Dumaine et al. in the same field of endeavor teaches:
determine that the speech content meets the threshold for a threshold period of time during the contact center engagement (Paragraph [0076]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the method, system, and computer readable medium of Shon et al. and Thoniparambil et al. with the threshold duration of Dumaine et al. because it is a case of combining prior art elements according to known methods to yield predictable results. The difference between the claimed invention and the combination of Shon et al. and Thoniparambil et al. is the threshold duration of Dumaine et al.; one of ordinary skill in the art could have combined the elements by known methods and would have known that the results of the combination were predictable.
Examiner Notes
The Examiner cites particular columns and line numbers in the references as applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages as taught by the prior art or as explained by the Examiner.
Communications via Internet e-mail are at the discretion of the applicant and require written authorization. Should the Applicant wish to communicate via e-mail, including the following paragraph in their response will authorize the Examiner to do so:
“Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with me concerning any subject matter of this application by electronic mail. I understand that a copy of these communications will be made of record in the application file.”
Should e-mail communication be desired, the Examiner can be reached at Edwin.Leland@USPTO.gov.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWIN S LELAND III whose telephone number is (571)270-5678. The examiner can normally be reached 8:00 - 5:00 M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWIN S LELAND III/Primary Examiner, Art Unit 2654