DETAILED ACTION
Notice to Applicant
The following is a NON-FINAL Office action upon examination of application number 17/515,472, filed on 10/31/2021, in response to Applicant’s Request for Continued Examination (RCE) filed on January 04, 2026. Claims 1-11 and 13-14 are pending in the application and have been examined on the merits discussed below.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In the response filed January 04, 2026, Applicant did not amend any claim and did not cancel any claim. No new claims were presented for examination.
In the Office action mailed 10/28/2024, allowable subject matter was indicated. Specifically, the Office action stated that claim 13 was objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant's arguments filed January 04, 2026, have been fully considered.
Applicant submits “The current Application provides an evaluation of the agents and makes sure agents are not assigned coaching packages incorrectly. The truthfulness of a call disposition, e.g., DTS may ensure accurate assessment of agent skill as regards to the call disposition that is evaluated. A supervisor application is configured to initiate the supervisor agent communication based on the Disposition Truthfulness Score (DTS), which is related to the agent's performance. The current application does not just compute or store values; it executes a supervisory function that triggers real-time communication. This process reduces human intervention, enhances accuracy, and improves efficiency in supervisor-agent interactions. The current application recites a specific and concrete implementation of score-based communication between a supervisor and the agent which is not a general concept.” [Applicant’s Remarks, 01/04/2026, page 8]
The Examiner respectfully disagrees. The Applicant’s assertion that the claim recites a “specific and concrete implementation” appears to be an argument under Step 2A Prong Two, suggesting that the claim is eligible because it triggers real-time supervisor-agent communication, reduces human intervention, and improves efficiency. However, this argument is not persuasive. The claim is directed to an abstract idea (i.e., evaluating performance and triggering supervisory communication) and uses generic computer technology to implement it. Applicant’s statements concerning efficiency and accuracy describe benefits of the abstract idea, and do not provide a technological improvement, an inventive concept, or an improvement to the computer or underlying technology itself. Merely linking the abstract idea to a supervisor application or workflow automation is insufficient to confer patent eligibility. For the reasons above, this argument is found unpersuasive.
Applicant submits “Valid call dispositions are an effective means for a contact center evaluation. The current application improves the system's performance by reducing resource consumption as a result of running a more effective sales campaign. For example, when the DTS is below a preconfigured disposition truthfulness threshold, it means that the sales campaign may be ineffective, as the sales campaign is based on the accuracy and reliability of the disposition.” [Applicant’s Remarks, 01/04/2026, page 8]
In response to Applicant’s argument that “the current application improves the system's performance by reducing resource consumption as a result of running a more effective sales campaign,” the Examiner points out that there is no actual improvement to another technology or technical field, no improvement to the functioning of the computer itself, and no meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment evident in the claims. The steps recited in the claims could be programmed to be performed on a variety of different computer platforms. While the claim limitations are implemented by a computer, the computer is nothing more than a general-purpose computer, and the claims do not include improvements to another technology or technical field, nor do they include improvements to the functioning of the computer itself. The Examiner emphasizes that nowhere in Applicant’s Specification is there any discussion or suggestion that the problem or solution is a technical one, nor is there even a hint of any contemplated improvement to technology. It is not clear how the claimed limitations provide an actual improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment.
Furthermore, it is noted that the claim does not clearly recite how the computer or technology itself is improved, or how the claimed functions provide a technological improvement beyond the abstract idea of evaluating dispositions and triggering follow-up actions. References to the DTS and campaign effectiveness describe a result of using the abstract idea, not an improvement to the underlying computer or technology. For the reasons above, this argument is found unpersuasive.
Applicant submits “As previously argued, the Applicant respectfully asserts that the claim element of configuring the supervisor application to initiate a supervisor agent communication based on the disposition truthfulness score as the one or more follow-up actions renders the rejections moot. Therefore, Applicant respectfully requests that the rejection be withdrawn, and the claims 1 and 14 be placed in condition for allowance. As to Step 2A Prong 2, the Applicant asserts that the alleged abstract idea is used in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. Specifically, the calculated DTS which is sent to the supervisor application, e.g., as shown in Fig. 7, is used to initiate a supervisor agent communication.” [Applicant’s Remarks, 01/04/2026, pages 8-9]
The Examiner respectfully disagrees. Applicant submits that “the claim element of configuring the supervisor application to initiate a supervisor agent communication based on the disposition truthfulness score as the one or more follow-up actions renders the rejections moot. Therefore, Applicant respectfully requests that the rejection be withdrawn, and the claims 1 and 14 be placed in condition for allowance. As to Step 2A Prong 2, the Applicant asserts that the alleged abstract idea is used in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.” In response, it is noted that the additional elements recited in exemplary claim 1 are: an Artificial Intelligence (AI) model, a data aggregator module on a database, a disposition truthfulness calculator module, one or more applications, and a supervisor application, which merely serve to tie the abstract idea to a particular technological environment (an automated or computer-based operating environment) via generic computing hardware and software/instructions, which is not sufficient to amount to a practical application, as noted in MPEP 2106. See also, Alice Corp., 134 S. Ct. 2347, 110 USPQ2d 1976; Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). See also, Benson, 409 U.S. 63 (holding that merely implementing a mathematical principle on a general purpose computer is a patent ineligible abstract idea); Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044 (Fed. Cir. 2017) (using a computer as a tool to process an application for financing a purchase).
Furthermore, it is noted that Applicant’s claims are devoid of any discernible change, transformation, or improvement to a computer (software or hardware) or any existing technology. Applicant has not shown that any specific technological improvement is achieved within the scope of the claims. It bears emphasis that no Artificial Intelligence model, database, module, supervisor application, or other technological element is modified or improved upon in any discernible manner. Instead, the result produced by the claims is simply information indicating a disposition truthfulness score, which is not a technical result or improvement. Nevertheless, even assuming arguendo that an improvement was achieved, improving the generation of a disposition truthfulness score is, at most, an improvement to a business process using generic computing elements, such that any incidental improvement achieved by automating the claim steps would come from the capabilities of a general-purpose computer rather than the sequence of steps/activities recited in the method itself, which does not materially alter the patent eligibility of the claim. See Bancorp Servs., L.L.C. v. Sun Life Assurance Co. of Can. (U.S.), 687 F.3d 1266, 1278 (Fed. Cir. 2012) (“[T]he fact that the required calculations could be performed more efficiently via a computer does not materially alter the patent eligibility of the claimed subject matter.”) (cited in the Federal Circuit's FairWarning decision).
Lastly, in response to Applicant’s argument that “the claim element of configuring the supervisor application to initiate a supervisor agent communication based on the disposition truthfulness score as the one or more follow-up actions renders the rejections moot,” the Examiner respectfully disagrees. This element represents a generic post-processing step and does not amount to a practical application of the abstract idea. The claim merely automates a business rule (i.e., initiating a communication based on a score) using conventional technology. It does not integrate the abstract idea into a specific or meaningful technological improvement, nor does it recite any unconventional use of computer functionality. For the reasons above, this argument is found unpersuasive.
Applicant submits “Volkov in paragraph [0123] describes checking a worker answer in an interaction but does not teach or suggest a disposition provided for the interaction, for example, by the agent that conducted the interaction, as the current application does.” [Applicant’s Remarks, 01/04/2026, page 10]
As best understood by Examiner, Applicant argues that Volkov does not teach or suggest “a disposition provided for the interaction, by the agent that conducted the interaction.” In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., a disposition provided for the interaction by the agent that conducted the interaction) are not recited in the rejected claim(s). Although the claims are interpreted in light of the Specification, limitations from the Specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Regardless, Volkov teaches evaluations of worker answers and determination of interaction outcomes (paragraphs 0013-0014, 0123, 0126, 0224), which constitute interaction dispositions under the broadest reasonable interpretation.
Applicant submits “Volkov does not teach or suggest calculating two disposition confidence scores by the AI model and based on it calculating the DTS. One of the disposition confidence scores is related to the specific agent that conducted the interaction and one of the disposition confidence scores is related to all agents. The disposition confidence score may have a different value when evaluated relative to the specific agent than relative to a general agent, e.g., a higher or lower confidence score of the disposition truthfulness.” [Applicant’s Remarks, 01/04/2026, page 10]
In response to Applicant’s statement that “Volkov does not teach or suggest calculating two disposition confidence scores by the AI model and based on it calculating the DTS,” as best understood by the Examiner, Applicant appears to argue that Volkov does not teach or suggest “providing the received interaction transcript and related disposition to an Artificial Intelligence (AI) model to calculate: (i) a disposition confidence score related to the agent; and (ii) a general disposition confidence score related to all agents.” First, it is noted that this argument is a mere allegation of patentability by the Applicant with no supporting rationale or explanation. Merely stating that the claims do not teach a feature does not offer any insight as to why the specific sections of the prior art relied upon by the Examiner fail to disclose the claimed features. Applicant's arguments amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Nonetheless, it is maintained that in at least paragraphs 0013, 0014, 0123, 0126, 0144, and 0224, Volkov teaches the disputed limitation. In particular, Volkov’s method for worker assessment, which encompasses estimating worker accuracy, is reasonably understood as teaching the disputed limitation. For instance, in paragraph 0013, Volkov describes an adjudication module that manages results submitted by a worker for a task and utilizes one or more adjudication rules or acceptance criteria to assess the correctness of these results. This approach suggests calculating a "disposition confidence score" in relation to the worker (i.e., agent) by determining the degree of confidence in the correctness of their result. This process aligns with the concept of providing a confidence score for the disposition of the agent's performance.
Moreover, in paragraph 0014, Volkov’s worker fitness module assesses answers to flag potentially incorrect responses. This module is focused on evaluating the performance of individual workers, further suggesting that confidence in the disposition of an individual agent’s work can be measured. This also supports the idea of calculating a confidence score for an individual agent's performance. It is further noted that Volkov describes, in paragraph 0126, various methods for detecting and learning errors in worker responses; this involves both supervised and unsupervised learning approaches, with the possibility of using statistical comparisons against correct/incorrect characteristics at both the individual worker and crowd level. These processes, which involve analyzing individual worker performance over time, further support the idea of calculating individual disposition confidence scores based on historical data and behaviors. Lastly, Volkov specifically teaches an algorithm that tracks the error rate of a stream of answers from individual workers or all workers for a given task, in order to determine the overall quality of data. Volkov’s citations directly support the concept of calculating two disposition confidence scores. Thus, given the broadest reasonable interpretation consistent with the specification in construing the claimed invention, it is the Examiner’s position that the disclosure of Volkov teaches and at least suggests “providing the received interaction transcript and related disposition to an Artificial Intelligence (AI) model to calculate: (i) a disposition confidence score related to the agent; and (ii) a general disposition confidence score related to all agents.” Accordingly, this argument is found unpersuasive.
Furthermore, assuming that Applicant argues that Volkov does not teach providing scores to calculate a DTS, it is noted that the claim recites merely providing the scores to a calculator module to calculate a DTS, without specifying how the calculation is performed. Volkov similarly discloses generating individual and group scores and combining them in a model to produce a score for each worker or task (paragraphs 0126, 0129, 0224, 0244, 0282), which aligns with the claim limitation under the broadest reasonable interpretation. While the claim recites providing the scores to a calculator module, it does not describe how the DTS is actually calculated. Additionally, while the Applicant argues “based on it calculating the DTS,” the claim itself does not recite this language. For the reasons above, this argument is found unpersuasive.
Applicant submits “The Examiner has indicated in the Office Action that DeFilippo and Volkov combined does not explicitly teach configuring the supervisor application to initiate a supervisor agent communication based on the disposition truthfulness score as the one or more follow-up actions. As to Tapuhi, Tapuhi doesn't cure the deficiencies of DeFilippo and Volkov alone or combined. In paragraph [00118], Tapuhi merely calculates an overall evaluation score of an interaction. Instead, the current application is identifying truthfulness of a call disposition which is the outcome of the call. Also, Tapuhi doesn't initiate a supervisor agent communication based on the DTS, as the one or more follow-up actions, as the current application does.” [Applicant’s Remarks, 01/04/2026, page 10]
In response to Applicant’s argument that “Tapuhi merely calculates an overall evaluation score of an interaction. Instead, the current application is identifying truthfulness of a call disposition which is the outcome of the call,” as best understood by the Examiner, Applicant appears to argue that Tapuhi does not teach “providing the disposition confidence score related to the agent, the general disposition confidence score related to all agents and the data related to the agent to a disposition truthfulness calculator module to calculate a Disposition Truthfulness Score (DTS).” However, Tapuhi was not asserted as disclosing the disputed limitation. Accordingly, this argument is deemed moot.
In response to Applicant’s argument that “Tapuhi doesn't initiate a supervisor agent communication based on the DTS, as the one or more follow-up actions, as the current application does,” it is first noted that this argument is a mere allegation of patentability by the Applicant with no supporting rationale or explanation. Merely stating that the claims do not teach a feature does not offer any insight as to why the specific sections of the prior art relied upon by the Examiner fail to disclose the claimed features. Applicant's arguments amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Nonetheless, the Examiner maintains that Tapuhi teaches and at least suggests the disputed limitation. The citations of Tapuhi describe a system that supports configuring automated follow-up actions, such as supervisor-agent communication, based on performance evaluation metrics. Paragraphs 0102, 0105, and 0110 explain how the quality monitoring system regularly collects and analyzes agent performance data, identifying when an agent is underperforming either relative to peers or to their own historical metrics. When such issues are detected, actions like side-by-side coaching sessions may be triggered. Paragraph 0122 further details how specific underperforming quality metrics are compared against threshold values to determine if performance is sub-par, and how those results lead to tailored coaching sessions. Paragraph 0091 adds that a manager can configure trigger points for specific evaluation questions, such that when a metric (like a disposition truthfulness score) falls below a defined threshold, a particular action, such as initiating communication with the agent, is automatically taken.
Taken together, the citations of Tapuhi suggest configuring a supervisor application to automatically initiate a supervisor-agent communication based specifically on a low disposition truthfulness score, as one of the system’s rule-based follow-up actions. Thus, given the broadest reasonable interpretation consistent with the specification in construing the claimed invention, it is the Examiner’s position that the disclosure of Tapuhi teaches and at least suggests “configuring the supervisor application to initiate a supervisor agent communication based on the disposition truthfulness score as the one or more follow-up actions.” For the reasons above, this argument is found unpersuasive.
11. Applicant’s remaining arguments logically depend from the arguments addressed above and are therefore unpersuasive for the reasons set forth above.
Claim Rejections - 35 USC § 101
12. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
13. Claims 1-11 and 13-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application. The eligibility analysis in support of these findings is provided below, in accordance with MPEP 2106.
With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the method (claims 1-11 and 13) and system (claim 14) are directed to at least one potentially eligible category of subject matter (i.e., process and machine, respectively). Thus, Step 1 of the Subject Matter Eligibility test for claims 1-11 and 13-14 is satisfied.
With respect to Step 2A Prong One, it is next noted that the claims recite abstract ideas that fall into (1) the “Certain Methods of Organizing Human Activity” grouping, by setting forth steps for managing commercial interactions (e.g., marketing or sales activities or behaviors; business relations); and (2) the “Mathematical Concepts” grouping, such as mathematical relationships, formulas, and calculations, as set forth in the enumerated groupings of abstract ideas in MPEP 2106. With respect to independent claim 1, the limitations reciting the abstract idea are indicated in bold below: (a) receiving an interaction transcript and related disposition of an interaction between an agent and a customer; (b) providing the received interaction transcript and related disposition to an Artificial Intelligence (AI) model to calculate: (i) a disposition confidence score related to the agent; and (ii) a general disposition confidence score related to all agents; (c) operating a data aggregator module on a database to aggregate data related to the agent; (d) providing the disposition confidence score related to the agent, the general disposition confidence score related to all agents, and the data related to the agent to a disposition truthfulness calculator module to calculate a Disposition Truthfulness Score (DTS); (e) sending the DTS to one or more applications, to take one or more follow-up actions based on the DTS, when the DTS is below a preconfigured disposition truthfulness threshold, wherein one application of the one or more applications is a supervisor application; and (f) configuring the supervisor application to initiate a supervisor agent communication based on the disposition truthfulness score as the one or more follow-up actions.
These limitations recite steps which encompass activity for managing personal behavior or relationships or interactions (e.g., following rules or instructions), and also recites limitations falling within the Mathematical Concepts abstract idea grouping.
Because the above-noted limitations recite steps falling within the Certain Methods of Organizing Human Activity and Mathematical Concepts abstract idea groupings of MPEP 2106, they have been determined to recite at least one abstract idea when evaluated under Step 2A Prong One of the eligibility inquiry. Independent claim 14 recites similar limitations as the above-noted limitations recited in claim 1 and is therefore found to recite the same abstract idea.
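For context, the sequence recited in steps (a)–(f) of claim 1 reduces to a simple data-processing pipeline: receive data, compute scores, compare against a threshold, and trigger a follow-up action. The sketch below is illustrative only; all function names, placeholder score values, and the threshold value are hypothetical and are not drawn from Applicant’s disclosure, and the claim itself does not specify how any of these scores are computed:

```python
# Illustrative paraphrase of claim 1, steps (a)-(f). All names and values
# are hypothetical placeholders, not Applicant's implementation.

def ai_model(transcript, disposition):
    # (b) stand-in for the claimed AI model: returns an agent-specific
    # confidence score and a general (all-agents) confidence score
    agent_score = 0.6      # placeholder value
    general_score = 0.8    # placeholder value
    return agent_score, general_score

def aggregate_agent_data(agent_id):
    # (c) stand-in for the data aggregator module on a database
    return {"sentiment": 0.7, "occupancy": 0.9}

def calculate_dts(agent_score, general_score, agent_data):
    # (d) stand-in for the disposition truthfulness calculator module;
    # the claim does not specify the calculation, so a simple average
    # of the two confidence scores is used purely for illustration
    return (agent_score + general_score) / 2

def process_interaction(transcript, disposition, agent_id, threshold=0.75):
    # (a) receive the interaction transcript and related disposition
    agent_score, general_score = ai_model(transcript, disposition)   # (b)
    agent_data = aggregate_agent_data(agent_id)                      # (c)
    dts = calculate_dts(agent_score, general_score, agent_data)      # (d)
    if dts < threshold:
        # (e)/(f) send the DTS to the supervisor application, which
        # initiates a supervisor-agent communication as the follow-up
        return {"dts": dts, "action": "initiate_supervisor_agent_communication"}
    return {"dts": dts, "action": None}
```

As the sketch shows, each claimed module amounts to receiving data, computing values, comparing against a threshold, and emitting an instruction, consistent with the characterization of the claim as applying generic computation to an abstract evaluation scheme.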
With respect to Step 2A Prong Two, the judicial exception is not integrated into a practical application. With respect to the independent claims, the additional elements recited in the claims are: an Artificial Intelligence (AI) model, a data aggregator module on a database, a disposition truthfulness calculator module, one or more applications, and a supervisor application (claim 1); and one or more processors, a database, a memory, an Artificial Intelligence (AI) model, a data aggregator module, a disposition truthfulness calculator, one or more applications, and a supervisor application (claim 14). These additional elements have been evaluated, but fail to integrate the abstract idea into a practical application because they amount to using generic computing elements or computer-executable instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment. See MPEP 2106.05(f) and 2106.05(h). Even if the step for receiving is not deemed part of the abstract idea, this step is at most directed to insignificant extra-solution data gathering activity, which is not sufficient to amount to a practical application. See MPEP 2106.05(g).
In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply/use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amount to significantly more than the judicial exception.
With respect to Step 2B, it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. With respect to the independent claims, the additional elements recited in the claims are: an Artificial Intelligence (AI) model, a data aggregator module on a database, a disposition truthfulness calculator module, one or more applications, and a supervisor application (claim 1); and one or more processors, a database, a memory, an Artificial Intelligence (AI) model, a data aggregator module, a disposition truthfulness calculator, one or more applications, and a supervisor application (claim 14). These elements have been considered individually and in combination, but fail to add significantly more to the claims because they amount to using generic computing elements or instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment and does not amount to significantly more than the abstract idea itself. Notably, Applicant’s Specification suggests that virtually any type of computing device can be used to implement the claimed invention (Specification at paragraph [0063]). Accordingly, the generic computer involvement in performing the claim steps merely serves to generally link the use of the judicial exception to a particular technological environment, which does not add significantly more to the claim. See, e.g., Alice Corp., 134 S. Ct. 2347, 110 USPQ2d 1976.
Even if the step for receiving is not deemed part of the abstract idea, this step is at most directed to insignificant extra-solution data gathering activity, which has been recognized as well-understood, routine, and conventional, and thus insufficient to add significantly more to the abstract idea. See MPEP 2106.05(d) - Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
Even if the Artificial Intelligence model were evaluated as an element beyond software/code for a generic computer to execute, it is noted that the claimed use of artificial intelligence is recited at a high level of generality, and these elements amount to well-understood, routine, and conventional activity in the art, which fails to add significantly more to the claims. See, e.g., Magdon-Ismail et al., US 2009/0055270 (paragraph 39: “Both local and central engines may incorporate analysis techniques, such as artificial intelligence, machine learning and other techniques, which are well known in the art”).
In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide generic computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application or that, as an ordered combination, amount to significantly more than the abstract idea itself.
Dependent claims 2-11 and 13 recite the same abstract ideas as recited in the independent claims, and have been found to either recite additional details that are part of the abstract idea itself (when analyzed under Step 2A Prong One) along with, at most, additional elements that fail to integrate the abstract idea into a practical application or add significantly more. In particular, dependent claims 2-11 and 13 further narrow the abstract ideas recited in independent claim 1 by reciting additional details or steps that set forth mathematical relationships, formulas and calculations, which therefore fall under the “Mathematical Concepts” group; and also recite limitations that fall under the “Certain methods of organizing human activity” abstract idea grouping. For example, dependent claims 2-11 and 13 recite “(i) retrieving interactions transcripts and related dispositions during a preconfigured period; (ii) preprocessing the retrieved interactions transcripts and related disposition; (iii) providing the preprocessed interactions transcripts and related disposition to tokenize the preprocessed interactions transcripts into tokens and encode the tokens; and (iv) using the encoded tokens,” “wherein the related disposition is manually entered by an agent at the end of the interaction by selecting from a list of options,” “wherein the disposition confidence score related to the agent for the received interaction transcript of the interaction is calculated based on agents interactions transcripts and dispositions related to interactions conducted in a preconfigured period by the agent and the received interaction transcript and related disposition of the interaction between the agent and the customer,” “wherein the general disposition confidence score related to all agents for the received interaction transcript of the interaction is calculated…based on agents interactions transcripts and dispositions related to interactions conducted in a preconfigured period by all 
agents and the received interaction transcript and related disposition of the interaction between the agent and the customer,” “wherein the aggregated data related to the agent for the received interaction transcript of the interaction is agents sentiment score for the interaction, occupancy rate of the agent for a specified period, skills, ratings and duty cycle factor for a specified period,” “wherein the one or more follow-up actions based on the disposition truthfulness score is assigning a coaching program by an evaluator,” “wherein the one or more follow-up actions based on the disposition truthfulness score includes an optimized assignment to agents,” “displaying the disposition confidence score related to the agent on a supervisor dashboard,” “wherein the one or more follow-up actions based on the disposition truthfulness score includes a supervisor agent communication,” “wherein the DTS is calculated based on formula I: (I) DTS = DCS + AIS + AOF - DCF whereby: DCS is calculated based on formula II: (II) Disposition Confidence Score = ((MEDCS + GDCS) / 2) x F1 whereby: MEDCS is a Manually Entered DCS, which is the calculated disposition confidence score related to the agent, GDCS is a General DCS, which is the calculated disposition confidence score related to all agents, and F1 is a weight; AIS is calculated based on formula III: (III) Agent Interaction Specifics = ((AS + ASS)/2) x F2 whereby: AS is Agent's sentiments score for the interaction, ASS is Agent's skills score, F2 is a weight; AOF is calculated based on formula IV: (IV) Agent Other Factors = ((AOR + AR)/2) x F3 whereby: AOR is Agents Occupancy Rate for a specified period, and AR is Agent ratings; F3 is a weight; and DCF is calculated based on formula V: (V) Duty Cycle Factors = RDCF x F4 whereby: RDCF is Raw Duty Cycle Factor for a specified period, and F4 is a weight,” however, these steps can be accomplished via mathematical calculations and are also directed to “certain methods of organizing human 
activity.” As described above, dependent claims 2-11 and 13 further narrow the abstract ideas recited in independent claim 1 by reciting additional details or steps that set forth mathematical relationships, formulas and calculations and steps/details directly in support of organizing human activity by managing interactions between people by following rules or instructions. Dependent claims 2, 4-5, and 7-12 recite “the AI model is prebuilt,” “a Natural language Processing (NLP) module,” “to build and train the AI model,” “a Manually Entered Disposition Confidence Score (MEDCS) module,” “a General Disposition Confidence Score (GDCS) module,” “a Quality Management (QM) application,” “a Workforce Management (WFM) application,” “a supervisor application,” “a display unit,” however the use of an AI model, an NLP module, a Manually Entered Disposition Confidence Score (MEDCS) module, a General Disposition Confidence Score (GDCS) module, a Quality Management (QM) application, a Workforce Management (WFM) application, a supervisor application, and a display unit is recited at a high level of generality and fails to impose a meaningful limitation on the claim, which does not amount to a practical application. When evaluated under Step 2A Prong Two and Step 2B, these additional elements do not amount to a practical application or significantly more since they merely require generic computing devices (or computer-implemented instructions/code), which, as noted in the discussion of the independent claims above, is not enough to render the claims eligible. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. 
Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea itself.
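For ease of reference, the formulas I-V quoted from the dependent claims above may be restated in standard notation. This restatement is a paraphrase provided for readability only; the quoted claim language controls:

```latex
\begin{align*}
\text{DTS} &= \text{DCS} + \text{AIS} + \text{AOF} - \text{DCF} && \text{(I)}\\
\text{DCS} &= \tfrac{1}{2}\left(\text{MEDCS} + \text{GDCS}\right) \times F_1 && \text{(II)}\\
\text{AIS} &= \tfrac{1}{2}\left(\text{AS} + \text{ASS}\right) \times F_2 && \text{(III)}\\
\text{AOF} &= \tfrac{1}{2}\left(\text{AOR} + \text{AR}\right) \times F_3 && \text{(IV)}\\
\text{DCF} &= \text{RDCF} \times F_4 && \text{(V)}
\end{align*}
```

whereby MEDCS and GDCS are the agent-specific and all-agents disposition confidence scores, AS is the agent's sentiment score for the interaction, ASS is the agent's skills score, AOR is the agent's occupancy rate for a specified period, AR is the agent's ratings, RDCF is the raw duty cycle factor for a specified period, and F1-F4 are weights.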
For more information, see MPEP 2106.
Claim Rejections - 35 USC § 103
15. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
16. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
17. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
18. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
19. Claims 1, 3-5, 7-12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over DeFilippo et al., Pub. No.: US 2022/0366277 A1, [hereinafter DeFilippo], in view of Volkov et al., Pub. No.: US 2017/0185941 A1, [hereinafter Volkov], in further view of Tapuhi et al., Pub. No.: US 2018/0096617 A1, [hereinafter Tapuhi].
As per claim 1, DeFilippo teaches a computerized-method for identifying truthfulness of a disposition, in a contact center, the computerized-method (paragraph 0004) comprising:
(a) receiving an interaction transcript and related disposition of an interaction between an agent and a customer (paragraph 0054, discussing that a customer service agent may be handling multiple simultaneous customer service cases (for example, chats) at once. Even though the time is overlapping for each of the associated customers, the workforce analytics system can determine how much of their time is actually spent on each customer. The time that is tracked includes not only how much time the customer service agent is chatting with that customer, but how much time the customer service agent is spending working on that customer versus working on actions associated with another customer; paragraph 0107, discussing that the ML (Machine Learning) engine can also receive different types of parameters that can be used, for example, for processing of interaction logs by an interaction log processor; paragraph 0108, discussing that the interaction log processor can locate interaction data for cases that have been identified as being associated with a good or bad case result; paragraph 0137, discussing that the model builder can analyze interaction log entries received from the interaction log processor or an administrator that have been flagged as corresponding to a good or bad case result; paragraph 0150, discussing that the interaction log processor can determine which representatives are associated with those determined interaction log entries; paragraph 0152, discussing that the action engine can record various information about actions and recommendations, such as whether or not a representative performs a recommended action, which actions were performed automatically by the action engine or by one of the other action performers, how and when recommendations are presented, etc. 
Additionally, the time/interaction tracker can continue to record the remainder of the representative's actions for the case session, until a case outcome is eventually recorded; paragraphs 0039, 0068, 0126);
(b) providing the received interaction transcript and related disposition to an Artificial Intelligence (AI) model to calculate: (i) a score related to the agent (paragraph 0111, discussing that after the ML engine has been trained, an interaction analyzer/recommendation engine can receive interaction data for the support representative for interactions occurring in multiple software services used by the user during handling of a case. The interaction analyzer/recommendation engine can identify a machine learning model built by the model building and learning engine that includes learned model interaction behavior for a case type of the case. The interaction analyzer/recommendation engine can compare interaction data for the user to the learned model interaction behavior and generate action data that includes an interaction behavior improvement recommendation that is determined based on the comparing of the interaction data for the user to the learned model interaction behavior for the case type; paragraph 0126, discussing an example of performing learned coaching actions determined by a machine learning engine…An operator performs an example action of moving (e.g., changing) a case status to a value of “solved” even though the operator had not yet sent a message to the customer being serviced regarding resolution of the case…A ML engine, based on training data and feedback, determines that the combination of changing a case status to solved without having notified the customer being serviced is likely to result in a negative outcome of the customer re-contacting customer service about the status of the issue for which the case occurred…The ML engine performs corrective analysis to determine next best action(s) to perform to avoid the negative outcome. 
For example, the ML engine can determine that a next best set of actions includes changing the case status back to an unsolved state, sending an update message to the customer informing the customer that the case is solved (e.g., with details regarding the resolution), and then setting the case status to solved after the customer has been notified…Information describing the next best actions to perform is automatically displayed to the operator, so that the operator can perform the recommended actions, so that the negative outcome is avoided; paragraph 0129, discussing that located interaction log entries that correspond to either good or bad case results can be provided to the ML engine. The ML engine can use the interaction log entries that correspond to either good or bad (or a degree of either good or bad) case results when building interaction model(s)…In some implementations, the interaction log processor can perform other processing using the case result criteria as input. For example, the interaction log processor can determine or identify, from the located interaction log entries that correspond to either good or bad case results, best (or worst) representatives, which can be aggregated by team, site, or other grouping(s), for example; paragraph 0158, discussing that a machine learning model is identified that includes learned model interaction behavior for the case type. The machine learning model can be trained on one or more of specified interaction patterns that are specified as correlating to either a good case result or a bad case result for the case type, ground truth interaction data associated with cases that have been identified as having either a good case result or a bad case result, or ground truth interaction data associated with model users who have been identified as model user performers for the case type; paragraph 0154);
(c) operating a data aggregator module on a database to aggregate data related to the agent (paragraph 0039, discussing tracking operator behaviors within their digital environment and comparing the operator's behaviors to the behaviors of the same individual and/or other individuals performing similar tasks over time or to a prescribed expected set of input behaviors or outcome results for given tasks; paragraph 0068, discussing that the pseudocode links events (e.g., customer service agent actions) to corresponding cases and captures event information (e.g., clicks, customer service agent inputs) for the events, e.g., by stepping through a sequence of events that have occurred. Once the system has analyzed agent events and assigned those events to various cases, the system can provide a variety of useful functions; paragraph 0053, discussing that the workforce analytics system can insure that the customer service agent follows a proper procedure while collecting metadata from each system that the customer service agent accesses and linking the metadata together; paragraph 0170, discussing that the database can be an in-memory, conventional, or a database storing data consistent with the disclosure);
(e) take one or more follow-up actions (paragraph 0112, discussing that an action engine can take action based on the action data. For example, when the analyzed interaction data is real-time interaction data, a real-time recommendation presenter can present the behavior improvement recommendation to the support representative to direct handling of the case (e.g., to increase a likelihood of a good case outcome)…As yet another example, when the analyzed interaction data is historical interaction data, a recommendation report generator can generate a report that can include the behavior improvement recommendation (and possibly other improvement recommendations generated for other cases or other interactions). A feedback engine can be used to further update the machine learning model, based on new interaction data and new case results, performing of behavior recommendations and resultant effect on case results, deviations from or an ignoring of a recommendation, etc.; paragraph 0140, discussing that an interaction analyzer of the ML engine can analyze interaction data and compare the interaction data to an appropriate interaction model. For example, when analyzing historical interactions, the interaction analyzer can analyze historical interactions for a particular representative, team, or site for a particular workflow by comparing the historical interactions for the particular representative, team, or site to an interaction model for the workflow…The interaction analyzer can determine interaction feedback for the representative, team, or site, based on comparing interaction data to the interaction model. The interaction feedback, whether presented in real time or in a later reporting format, can be used for training representatives to use best processes and tools, to avoid known mistakes, to follow best known steps, to be more efficient, and/or to produce better quality results based on prescribed or learned patterns.),
wherein one application of the one or more applications is a supervisor application (paragraph 0141, discussing that for real-time interaction data analyzing, the interaction feedback can be real-time feedback that can be provided, for example, to a representative device and/or a supervisor device; paragraph 0161, discussing that action is taken based on the action data. When the received interaction data is real-time interaction data, taking action can include presenting the behavior improvement recommendation to the user. As another example, taking action can include automatically performing one or more interactions on behalf of the user that have been predicted by the machine learning model to have a positive effect on the case outcome for the case. When the received interaction data is historical interaction data, taking action can include including the behavior improvement recommendation in a report and providing the report to a user or a supervisor of the user), and
(f) configuring the supervisor application to initiate a supervisor agent communication based on the score as the one or more follow-up actions (paragraph 0142, discussing that in addition or alternatively to being presented on the representative device, real-time feedback can be presented on a supervisor device, for example, in a monitoring queue. For some cases, a supervisor can initiate an intervention, escalation, or change of action for the case; paragraph 0161, discussing that action is taken based on the action data. When the received interaction data is real-time interaction data, taking action can include presenting the behavior improvement recommendation to the user. As another example, taking action can include automatically performing one or more interactions on behalf of the user that have been predicted by the machine learning model to have a positive effect on the case outcome for the case. When the received interaction data is historical interaction data, taking action can include including the behavior improvement recommendation in a report and providing the report to a user or a supervisor of the user).
While DeFilippo teaches (b) providing the received interaction transcript and related disposition to an Artificial Intelligence (AI) model, it does not explicitly teach that the providing is to calculate: (i) a disposition confidence score related to the agent; and (ii) a general disposition confidence score related to all agents; (d) providing the disposition confidence score related to the agent, the general disposition confidence score related to all agents and the data related to the agent to a disposition truthfulness calculator module to calculate a Disposition Truthfulness Score (DTS); (e) sending the DTS to one or more applications, to take one or more follow-up actions based on the DTS, when the DTS is below a preconfigured disposition truthfulness threshold; and configuring the supervisor application to initiate a supervisor agent communication based on the disposition truthfulness score as the one or more follow-up actions. Volkov in the analogous art of worker evaluation systems teaches:
(b) providing the received interaction transcript and related disposition to an Artificial Intelligence (AI) model to calculate: (i) a disposition confidence score related to the agent; and (ii) a general disposition confidence score related to all agents (paragraph 0013, discussing that the adjudication module manages the results provided/submitted by a worker for a task. The adjudication module utilizes one or more adjudication rules or acceptance criteria to ensure that the best results of a task are identified and/or to provide a degree of confidence in the correctness of a result; paragraph 0014, discussing a worker fitness module for analyzing answers to flag potentially incorrect responses; paragraph 0123, discussing that the candidate for answer error detection module handles all first-pass evaluation of a worker answer for detection of possible fraudulent, out-of-characteristic, or unreliable behavior, and sends the question directly for extension if required. This module predicts the likelihood of the current answer being incorrect or being submitted with the worker in a spamming state without making any attempt to answer the question correctly. “Spamming” may refer to a worker behavior where the worker submits answers in order to obtain some benefit (e.g., monetary rewards for answering questions) without regard to the correctness of the provided answers; paragraph 0126, discussing that the specific procedure for making the prediction and learning the comparison between the current answer and historical data may be done in multiple ways. It may be unsupervised and compared against statistical information of correct/incorrect characteristic classes at the crowd and/or individual worker level. Or it may be supervised and be trained against labelled examples of both correct/incorrect data. 
It could also combine the supervised and unsupervised approaches into a single prediction using ensemble combination approaches; paragraph 0144, discussing that the candidate fraudulent answer detection module uses supervised model. The trained supervisor uses instances of confirmed fraud and worker behavioral features to automatically flag worker submissions as candidate fraud…; paragraph 0224, discussing that the Answer Confidence Estimation, or ACE, algorithm is used to keep track of the error rate of a stream of answers for either an individual worker or all workers for a given task to determine the overall quality of the data up to and including the current answer. The algorithm makes use of a combination approach, specifically IBCC or other Weak Classifier Aggregation Algorithm, to track the confidence probability of worker's answers over time. It works by checking if the last answer keeps the worker or task over the required AQL level. If the confidence does remain above AQL the last answer is marked as ready for delivery to the client...; paragraphs 0082, 0116, 0125, 0128);
(d) providing the disposition confidence score related to the agent, the general disposition confidence score related to all agents and the data related to the agent to a disposition truthfulness calculator module to calculate a Disposition Truthfulness Score (DTS) (paragraph 0126, discussing that the specific procedure for making the prediction and learning the comparison between the current answer and historical data may be done in multiple ways. It may be unsupervised and compared against statistical information of correct/incorrect characteristic classes at the crowd and/or individual worker level. Or it may be supervised and be trained against labelled examples of both correct/incorrect data. It could also combine the supervised and unsupervised approaches into a single prediction using ensemble combination approaches; paragraph 0129, discussing that the Candidate Answer Fraud (CAF) model determines the likelihood of a submitted answer as being wrong based on worker characteristics. This determines whether a specific answer to a worker assignment is to be marked as an outlier by identifying assignments that deviate in behavioral features from the “Peer Group”, the group of high performing workers. It uses the Peer Groups to compare the characteristics of the current answer with the historical behavior of known correct answers; paragraph 0224, discussing that the Answer Confidence Estimation, or ACE, algorithm is used to keep track of the error rate of a stream of answers for either an individual worker or all workers for a given task to determine the overall quality of the data up to and including the current answer. The algorithm makes use of a combination approach, specifically IBCC or other Weak Classifier Aggregation Algorithm, to track the confidence probability of worker's answers over time. It works by checking if the last answer keeps the worker or task over the required AQL level. 
If the confidence does remain above AQL the last answer is marked as ready for delivery to the client...; paragraph 0244, discussing that the integration of IBCC or WCA with the behavior-based accuracy prediction model will be the first existing model that integrates accuracy and behavior based information about a worker into a single (and consistent) scoring of answer accuracy; paragraph 0282); and
(e) sending the DTS to one or more applications, to take one or more follow-up actions based on the DTS, when the DTS is below a preconfigured disposition truthfulness threshold (paragraph 0131, discussing that the CAF model identifies out of character worker behavior and flags answers for extension. The CAF model additionally prevents flagged or fraudulent answers from being passed through the delivery. In certain embodiments, the CAF model provides a framework for statistical improvements to quality, and for analyst review of worker performance (number of candidate fraud, number of confirmed fraud) for evaluating a worker for a hire-fire decision; paragraphs 0230, 0254).
DeFilippo is directed towards a system and method for tracking operator behaviors within their digital environment. Volkov is directed towards a workforce management system. Therefore, they are deemed to be analogous as they both are directed towards employee management systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine DeFilippo with Volkov because the references are analogous art, both being directed to solutions for worker analysis, which falls within Applicant’s field of endeavor (contact center management), and because modifying DeFilippo to include Volkov’s features for providing the received interaction transcript and related disposition to an Artificial Intelligence (AI) model to calculate: (i) a disposition confidence score related to the agent; and (ii) a general disposition confidence score related to all agents, providing the disposition confidence score related to the agent, the general disposition confidence score related to all agents and the data related to the agent to a disposition truthfulness calculator module to calculate a Disposition Truthfulness Score (DTS), and sending the DTS to one or more applications, to take one or more follow-up actions based on the DTS, when the DTS is below a preconfigured disposition truthfulness threshold, in the manner claimed, would serve the motivation of improving the worker pool and efficiency (Volkov at paragraph 0261); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
The DeFilippo-Volkov combination does not explicitly teach configuring the supervisor application to initiate a supervisor agent communication based on the disposition truthfulness score as the one or more follow-up actions. However, Tapuhi in the analogous art of customer service and enterprise workflow management systems teaches this concept. Tapuhi teaches:
configuring the supervisor application to initiate a supervisor agent communication based on the disposition truthfulness score as the one or more follow-up actions (paragraph 0004, discussing systems and methods for automatically monitoring, evaluating, and managing the performance of agents of a contact center; paragraph 0102, discussing that a “repeating individual agent issue” refers to sub-par performance that is confined to a particular agent…and it is not the first time that the particular agent has had this issue. Because this may be a more serious issue, one action may be to generate a side-by-side coaching session with a supervisor or a trainer; paragraph 0105, discussing that the quality monitoring system collects and aggregates scores for every agent on a periodic basis; paragraph 0110, discussing that to determine if the problem is an individual agent issue, the quality monitoring system identifies an agent from the set of agents to analyze…The quality monitoring system identifies scores that are low in comparison with other agents or in comparison with previous performance. For example, in some embodiments, “low” can mean lower than a threshold score…In some embodiments, “low” may be compared with the score average of other agents, e.g., if for a certain score almost all agents regularly score 5, then an agent that scores 4.5 might need training. In some embodiments, “low” may refer to decreased performance compared to historical performance...If any of these conditions occur, agent coaching will be assigned to this agent, based on the scores that were low; paragraph 0122, discussing that each of these scores can then be compared to a threshold value to determine whether the agent's performance on the quality metric is satisfactory, or if the agent's performance is sub-par. In other words, one or more underperforming quality metrics can be determined from comparisons of the agents' aggregated quality metrics against the thresholds. 
A customized coaching session can then be generated for the agent by identifying coaching session “reasons” based on which quality metrics are underperforming, where each coaching session reason may be associated with a training module; paragraph 0091, discussing quality monitoring systems in which a manager can define when to take an action on a specific automatic question from the quality monitoring evaluation form. In one such embodiment, a manager may define a “trigger point” that specifies conditions for taking an action on a particular evaluation question from the evaluation form. For instance, a trigger point may be when a specified question receives a very low score… The manager may also define what message should be presented to the agent or what other action should be taken in response to reaching the trigger point; paragraph 0118).
The DeFilippo-Volkov combination describes features related to worker evaluation. Tapuhi
relates to the field of software for operating contact centers. Therefore, they are deemed to be analogous as they both are directed towards worker analysis systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the DeFilippo-Volkov combination with Tapuhi because the references are analogous art, both being directed to solutions for worker analysis, which falls within Applicant’s field of endeavor (contact center management), and because modifying the DeFilippo-Volkov combination to include Tapuhi’s features for configuring the supervisor application to initiate a supervisor agent communication based on the disposition truthfulness score as the one or more follow-up actions, in the manner claimed, would serve the motivation of providing training to agents shortly or immediately after problems occur, thereby tightening the feedback loop and improving the management of agent performance (Tapuhi at paragraph 0118); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 3, the DeFilippo-Volkov-Tapuhi combination teaches the computerized-method of claim 1. DeFilippo further teaches wherein the related disposition is manually entered by an agent at the end of the interaction by selecting from a list of options (paragraph 0126, discussing that an operator performs an example action of moving (e.g., changing) a case status to a value of “solved” even though the operator had not yet sent a message to the customer being serviced regarding resolution of the case…A ML engine, based on training data and feedback, determines that the combination of changing a case status to solved without having notified the customer being serviced is likely to result in a negative outcome of the customer re-contacting customer service about the status of the issue for which the case occurred…The ML engine performs corrective analysis to determine next best action(s) to perform to avoid the negative outcome. For example, the ML engine can determine that a next best set of actions includes changing the case status back to an unsolved state, sending an update message to the customer informing the customer that the case is solved (e.g., with details regarding the resolution), and then setting the case status to solved after the customer has been notified…Information describing the next best actions to perform is automatically displayed to the operator, so that the operator can perform the recommended actions, so that the negative outcome is avoided; paragraph 0152, discussing that the action engine can record various information about actions and recommendations, such as whether or not a representative performs a recommended action, which actions were performed automatically by the action engine or by one of the other action performers, how and when recommendations are presented, etc. 
Additionally, the time/interaction tracker can continue to record the remainder of the representative's actions for the case session (and possibly other case sessions), until a case outcome is eventually recorded).
As per claim 4, the DeFilippo-Volkov-Tapuhi combination teaches the computerized-method of claim 1. DeFilippo further teaches wherein the score related to the agent for the received interaction transcript of the interaction is calculated by the AI module by operating a Manually Entered Disposition Confidence Score (MEDCS) module based on agents interactions transcripts and dispositions related to interactions conducted in a preconfigured period by the agent and the received interaction transcript and related disposition of the interaction between the agent and the customer (paragraph 0135, discussing that an administrator can select, as training data and using the engine configuration application, certain selected cases as being either good or bad (or a degree of satisfactory/unsatisfactory). That is, the administrator can manually select certain cases as ground truth for good or bad case results. The interaction log processor can locate interaction log entries associated with the selected cases and provide the located interaction log entries to the ML (machine learning) engine. In some implementations, an administrator can, for example, using an interaction log browser tool, select certain interaction log entries as either being associated with a good or bad case result. Interaction log entries selected by an administrator can be provided to the ML engine).
Although not explicitly taught by DeFilippo, Volkov in the analogous art of worker evaluation systems teaches wherein the disposition confidence score related to the agent for the received interaction transcript of the interaction is calculated by the AI module by operating a Manually Entered Disposition Confidence Score (MEDCS) module based on agents interactions transcripts and dispositions related to interactions conducted in a preconfigured period by the agent and the received interaction transcript and related disposition of the interaction between the agent and the customer (paragraph 0126, discussing that the specific procedure for making the prediction and learning the comparison between the current answer and historical data may be done in multiple ways. It may be unsupervised and compared against statistical information of correct/incorrect characteristic classes at the crowd and/or individual worker level. Or it may be supervised and be trained against labelled examples of both correct/incorrect data. It could also combine the supervised and unsupervised approaches into a single prediction using ensemble combination approaches; paragraph 0129, discussing that the Candidate Answer Fraud (CAF) model determines the likelihood of a submitted answer as being wrong based on worker characteristics. This determines whether a specific answer to a worker assignment is to be marked as an outlier by identifying assignments that deviate in behavioral features from the “Peer Group”, the group of high performing workers. It uses the Peer Groups to compare the characteristics of the current answer with the historical behavior of known correct answers; paragraph 0145, discussing that potential instances of fraud can be confirmed either by assignment to additional automated process instances or by manual confirmation by a business analyst or similar human worker. 
The term “supervised” may refer to manual (e.g., human) supervision and review and/or automated review using trained algorithms that indicate example instances of candidate error answers or fraudulent answers. For example, a series of answers may include attributes that indicate that they may be fraudulent, a mistake, or otherwise incorrect. These answers can be either manually or programmatically reviewed to confirm that they are or are not verified instances of fraud. These confirmed instances of fraud can be used to train an automated supervisor model for flagging future answers as potential instances of fraud based on correlation with attributes of the confirmed fraud instances; paragraph 0224, discussing that the Answer Confidence Estimation, or ACE, algorithm is used to keep track of the error rate of a stream of answers for either an individual worker or all workers for a given task to determine the overall quality of the data up to and including the current answer. The algorithm makes use of a combination approach, specifically IBCC or other Weak Classifier Aggregation Algorithm, to track the confidence probability of worker's answers over time. It works by checking if the last answer keeps the worker or task over the required AQL level. If the confidence does remain above AQL the last answer is marked as ready for delivery to the client...; paragraph 0244, discussing that the integration of IBCC or WCA with the behavior-based accuracy prediction model will be the first existing model that integrates accuracy and behavior based information about a worker into a single (and consistent) scoring of answer accuracy).
DeFilippo is directed towards a system and method for tracking operator behaviors within their digital environment. Volkov is directed towards a workforce management system. Therefore, they are deemed to be analogous, as both are directed towards employee management systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine DeFilippo with Volkov because the references are analogous art, both being directed to solutions for worker analysis, which falls within applicant’s field of endeavor (contact center management), and because modifying DeFilippo to include Volkov’s feature wherein the disposition confidence score related to the agent for the received interaction transcript of the interaction is calculated by the AI module by operating a Manually Entered Disposition Confidence Score (MEDCS) module based on agents interactions transcripts and dispositions related to interactions conducted in a preconfigured period by the agent and the received interaction transcript and related disposition of the interaction between the agent and the customer, in the manner claimed, would serve the motivation of improving the worker pool and efficiency (Volkov at paragraph 0261); and further obvious because the claimed invention is merely a combination of old elements, in which each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 5, the DeFilippo-Volkov-Tapuhi combination teaches the computerized-method of claim 1. DeFilippo further teaches wherein the score is calculated by the AI module (paragraph 0111, discussing that after the ML engine has been trained, an interaction analyzer/recommendation engine can receive interaction data for the support representative for interactions occurring in multiple software services used by the user during handling of a case. The interaction analyzer/recommendation engine can identify a machine learning model built by the model building and learning engine that includes learned model interaction behavior for a case type of the case. The interaction analyzer/recommendation engine can compare interaction data for the user to the learned model interaction behavior and generate action data that includes an interaction behavior improvement recommendation that is determined based on the comparing of the interaction data for the user to the learned model interaction behavior for the case type; paragraph 0126, discussing an example of performing learned coaching actions determined by a machine learning engine…An operator performs an example action of moving (e.g., changing) a case status to a value of “solved” even though the operator had not yet sent a message to the customer being serviced regarding resolution of the case…A ML engine, based on training data and feedback, determines that the combination of changing a case status to solved without having notified the customer being serviced is likely to result in a negative outcome of the customer re-contacting customer service about the status of the issue for which the case occurred…The ML engine performs corrective analysis to determine next best action(s) to perform to avoid the negative outcome. 
For example, the ML engine can determine that a next best set of actions includes changing the case status back to an unsolved state, sending an update message to the customer informing the customer that the case is solved (e.g., with details regarding the resolution), and then setting the case status to solved after the customer has been notified…Information describing the next best actions to perform is automatically displayed to the operator, so that the operator can perform the recommended actions, so that the negative outcome is avoided; paragraph 0129, discussing that located interaction log entries that correspond to either good or bad case results can be provided to the ML engine. The ML engine can use the interaction log entries that correspond to either good or bad (or a degree of either good or bad) case results when building interaction model(s)…In some implementations, the interaction log processor can perform other processing using the case result criteria as input. For example, the interaction log processor can determine or identify, from the located interaction log entries that correspond to either good or bad case results, best (or worst) representatives, which can be aggregated by team, site, or other grouping(s), for example; paragraph 0158, discussing that a machine learning model is identified that includes learned model interaction behavior for the case type. The machine learning model can be trained on one or more of specified interaction patterns that are specified as correlating to either a good case result or a bad case result for the case type, ground truth interaction data associated with cases that have been identified as having either a good case result or a bad case result, or ground truth interaction data associated with model users who have been identified as model user performers for the case type; paragraph 0128).
Although not explicitly taught by DeFilippo, Volkov in the analogous art of worker evaluation systems teaches wherein the general disposition confidence score related to all agents for the received interaction transcript of the interaction is calculated by the AI module by operating a General Disposition Confidence Score (GDCS) module and wherein said GDCS module is based on agents interactions transcripts and dispositions related to interactions conducted in a preconfigured period by all agents and the received interaction transcript and related disposition of the interaction between the agent and the customer (paragraph 0126, discussing that the specific procedure for making the prediction and learning the comparison between the current answer and historical data may be done in multiple ways. It may be unsupervised and compared against statistical information of correct/incorrect characteristic classes at the crowd and/or individual worker level. Or it may be supervised and be trained against labelled examples of both correct/incorrect data. It could also combine the supervised and unsupervised approaches into a single prediction using ensemble combination approaches; paragraph 0129, discussing that the Candidate Answer Fraud (CAF) model determines the likelihood of a submitted answer as being wrong based on worker characteristics. This determines whether a specific answer to a worker assignment is to be marked as an outlier by identifying assignments that deviate in behavioral features from the “Peer Group”, the group of high performing workers. It uses the Peer Groups to compare the characteristics of the current answer with the historical behavior of known correct answers; paragraph 0145, discussing that potential instances of fraud can be confirmed either by assignment to additional automated process instances or by manual confirmation by a business analyst or similar human worker. 
The term “supervised” may refer to manual (e.g., human) supervision and review and/or automated review using trained algorithms that indicate example instances of candidate error answers or fraudulent answers. For example, a series of answers may include attributes that indicate that they may be fraudulent, a mistake, or otherwise incorrect. These answers can be either manually or programmatically reviewed to confirm that they are or are not verified instances of fraud. These confirmed instances of fraud can be used to train an automated supervisor model for flagging future answers as potential instances of fraud based on correlation with attributes of the confirmed fraud instances; paragraph 0224, discussing that the Answer Confidence Estimation, or ACE, algorithm is used to keep track of the error rate of a stream of answers for either an individual worker or all workers for a given task to determine the overall quality of the data up to and including the current answer. The algorithm makes use of a combination approach, specifically IBCC or other Weak Classifier Aggregation Algorithm, to track the confidence probability of worker's answers over time. It works by checking if the last answer keeps the worker or task over the required AQL level. If the confidence does remain above AQL the last answer is marked as ready for delivery to the client...; paragraph 0244, discussing that the integration of IBCC or WCA with the behavior-based accuracy prediction model will be the first existing model that integrates accuracy and behavior based information about a worker into a single (and consistent) scoring of answer accuracy).
DeFilippo is directed towards a system and method for tracking operator behaviors within their digital environment. Volkov is directed towards a workforce management system. Therefore, they are deemed to be analogous, as both are directed towards employee management systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine DeFilippo with Volkov because the references are analogous art, both being directed to solutions for worker analysis, which falls within applicant’s field of endeavor (contact center management), and because modifying DeFilippo to include Volkov’s feature wherein the general disposition confidence score related to all agents for the received interaction transcript of the interaction is calculated by the AI module by operating a General Disposition Confidence Score (GDCS) module and wherein said GDCS module is based on agents interactions transcripts and dispositions related to interactions conducted in a preconfigured period by all agents and the received interaction transcript and related disposition of the interaction between the agent and the customer, in the manner claimed, would serve the motivation of improving the worker pool and efficiency (Volkov at paragraph 0261); and further obvious because the claimed invention is merely a combination of old elements, in which each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 7, the DeFilippo-Volkov-Tapuhi combination teaches the computerized-method of claim 1. DeFilippo further teaches wherein one application of the one or more applications is a Quality Management (QM) application (paragraph 0136, discussing that a model builder of the ML engine can build interaction models, including the interaction model. The model builder can build the interaction model by using ML to learn which patterns of representative behavior correlate to either a good or bad case result. A good or bad case result can be a case for which case handling was efficient, inefficient, high quality, low quality, etc.; paragraph 0142, discussing the real-time feedback can take the form of written, verbal, or symbolic instruction sets comprised of any number of recommended actions, that can be presented in real time to direct representative behavior. The interaction analyzer can, for example, determine that a case or case session may result in a negative outcome, and determine recommended actions that are likely to prevent the negative outcome. Accordingly, when the representative acts on the recommended actions as a next best course of action, the predicted negative outcome can be avoided. The real-time feedback can serve as interventions for preventing undesirable case outcomes such as low quality or low efficiency. In addition or alternatively to being presented on the representative device, real-time feedback can be presented on a supervisor device, for example, in a monitoring queue. For some cases, a supervisor can initiate an intervention, escalation, or change of action for the case; paragraph 0140).
As per claim 8, the DeFilippo-Volkov-Tapuhi combination teaches the computerized-method of claim 7. DeFilippo further teaches wherein the one or more follow-up actions of the QM application based on the score is assigning a coaching program by an evaluator (paragraph 0040, discussing that personalized recommendations for improved behavior can be generated for the operator based on analyzing the operator's current behavior. The personalized recommendations can be presented to the operator in real time to direct operator behavior, after-the-fact as a review of coaching opportunities, or automatically used as input to automated systems which will perform actions on behalf of the operator. The personalized recommendations can take the form of written, verbal, or symbolic instruction sets comprised of a variety of actions; paragraph 0151, discussing that the reports and/or the other action data can be provided to the other action performers for performing (either as manual or automated actions), various other types of actions, such as personnel actions, training actions, or tool-change actions. Personnel actions can include generation and allocation of awards or recognition, for representatives selected or identified as model representatives. Training actions can include recommendations for training representatives in general, based on determining how the model representatives have been trained and recommending modeling of training of other representatives based on how the model representatives were trained. Training actions can include actions taken to train users how to interact with the learned coaching engine. Tool change actions can include recommendations for installing or reconfiguring different tools, based on interaction patterns that have been determined to correspond to good or bad case outcomes. 
For example, the ML engine may learn that certain patterns performed using a first tool lead to good outcomes, but that the first tool is not yet available team wide or site wide, for instance. A tool change recommendation can be to further deploy the first tool. As another example, the ML engine may learn that a certain interaction sequence performed with a second tool leads to bad outcomes, even though a third tool may be a preferred tool to use for performing a recommended, alternative interaction sequence. Accordingly, a tool change recommendation can be to remove or restrict access to the second tool).
DeFilippo does not explicitly teach based on the disposition truthfulness score. However, Volkov in the analogous art of worker evaluation systems teaches this concept (paragraph 0013, discussing that the adjudication module manages the results provided/submitted by a worker for a task. The adjudication module utilizes one or more adjudication rules or acceptance criteria to ensure that the best results of a task are identified and/or to provide a degree of confidence in the correctness of a result; paragraph 0123, discussing that the candidate for answer error detection module handles all first-pass evaluation of a worker answer for detection of possible fraudulent, out-of-characteristic, or unreliable behavior, and sends the question directly for extension if required. This module predicts the likelihood of the current answer being incorrect or being submitted with the worker in a spamming state without making any attempt to answer the question correctly. “Spamming” may refer to a worker behavior where the worker submits answers in order to obtain some benefit (e.g., monetary rewards for answering questions) without regard to the correctness of the provided answers; paragraph 0126, discussing that the specific procedure for making the prediction and learning the comparison between the current answer and historical data may be done in multiple ways. It may be unsupervised and compared against statistical information of correct/incorrect characteristic classes at the crowd and/or individual worker level. Or it may be supervised and be trained against labelled examples of both correct/incorrect data. It could also combine the supervised and unsupervised approaches into a single prediction using ensemble combination approaches; paragraph 0129, discussing that the Candidate Answer Fraud (CAF) model determines the likelihood of a submitted answer as being wrong based on worker characteristics. 
This determines whether a specific answer to a worker assignment is to be marked as an outlier by identifying assignments that deviate in behavioral features from the “Peer Group”, the group of high performing workers. It uses the Peer Groups to compare the characteristics of the current answer with the historical behavior of known correct answers; paragraph 0131, discussing that the CAF model identifies out of character worker behavior and flags answers for extension. CAF model additionally prevents flagged or fraudulent answers from being passed through the delivery. In certain embodiments, CAF model provides a framework for statistical improvements to quality, and for analyst review of worker performance (number of candidate fraud, number of confirmed fraud) for evaluating a worker for a hire-fire decision; paragraph 0224, discussing that the Answer Confidence Estimation, or ACE, algorithm is used to keep track of the error rate of a stream of answers for either an individual worker or all workers for a given task to determine the overall quality of the data up to and including the current answer. The algorithm makes use of a combination approach, specifically IBCC or other Weak Classifier Aggregation Algorithm, to track the confidence probability of worker's answers over time. It works by checking if the last answer keeps the worker or task over the required AQL level. If the confidence does remain above AQL the last answer is marked as ready for delivery to the client...; paragraphs 0144, 0254).
DeFilippo is directed towards a system and method for tracking operator behaviors within their digital environment. Volkov is directed towards a workforce management system. Therefore, they are deemed to be analogous, as both are directed towards employee management systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine DeFilippo with Volkov because the references are analogous art, both being directed to solutions for worker analysis, which falls within applicant’s field of endeavor (contact center management), and because modifying DeFilippo to include Volkov’s teaching of basing the one or more follow-up actions on the disposition truthfulness score, in the manner claimed, would serve the motivation of improving the worker pool and efficiency (Volkov at paragraph 0261); and further obvious because the claimed invention is merely a combination of old elements, in which each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 9, the DeFilippo-Volkov-Tapuhi combination teaches the computerized-method of claim 1. DeFilippo further teaches wherein one application of the one or more applications is a Workforce Management (WFM) application (paragraph 0008, discussing an example of a workforce analytics system; paragraph 0042, discussing a workforce analytics manager for recording and managing interactions; paragraphs 0055, 0097).
As per claim 10, the DeFilippo-Volkov-Tapuhi combination teaches the computerized-method of claim 9. DeFilippo further teaches wherein the one or more follow-up actions of the WFM application based on the score includes an optimized assignment to agents (paragraph 0129, discussing that the case result criteria can be provided to the interaction log processor. The interaction log processor can use the case result criteria to locate interaction log entries that match the case result criteria. For instance, the interaction log processor can locate interaction logs entries that correspond to good case results and bad case results, according to the case result criteria. Located interaction log entries that correspond to either good or bad case results can be provided to the ML engine. The ML engine can use the interaction log entries that correspond to either good or bad (or a degree of either good or bad) case results when building interaction model(s) (e.g., an interaction model), as described in more detail below. In some implementations, the interaction log processor can perform other processing using the case result criteria as input. For example, the interaction log processor can determine or identify, from the located interaction log entries that correspond to either good or bad case results, best (or worst) representatives, which can be aggregated by team, site, or other grouping(s), for example).
DeFilippo does not explicitly teach based on the disposition truthfulness score. However, Volkov in the analogous art of worker evaluation systems teaches this concept (paragraph 0013, discussing that the adjudication module manages the results provided/submitted by a worker for a task. The adjudication module utilizes one or more adjudication rules or acceptance criteria to ensure that the best results of a task are identified and/or to provide a degree of confidence in the correctness of a result; paragraph 0123, discussing that the candidate for answer error detection module handles all first-pass evaluation of a worker answer for detection of possible fraudulent, out-of-characteristic, or unreliable behavior, and sends the question directly for extension if required. This module predicts the likelihood of the current answer being incorrect or being submitted with the worker in a spamming state without making any attempt to answer the question correctly. “Spamming” may refer to a worker behavior where the worker submits answers in order to obtain some benefit (e.g., monetary rewards for answering questions) without regard to the correctness of the provided answers; paragraph 0126, discussing that the specific procedure for making the prediction and learning the comparison between the current answer and historical data may be done in multiple ways. It may be unsupervised and compared against statistical information of correct/incorrect characteristic classes at the crowd and/or individual worker level. Or it may be supervised and be trained against labelled examples of both correct/incorrect data. It could also combine the supervised and unsupervised approaches into a single prediction using ensemble combination approaches; paragraph 0129, discussing that the Candidate Answer Fraud (CAF) model determines the likelihood of a submitted answer as being wrong based on worker characteristics. 
This determines whether a specific answer to a worker assignment is to be marked as an outlier by identifying assignments that deviate in behavioral features from the “Peer Group”, the group of high performing workers. It uses the Peer Groups to compare the characteristics of the current answer with the historical behavior of known correct answers; paragraph 0131, discussing that the CAF model identifies out of character worker behavior and flags answers for extension. CAF model additionally prevents flagged or fraudulent answers from being passed through the delivery. In certain embodiments, CAF model provides a framework for statistical improvements to quality, and for analyst review of worker performance (number of candidate fraud, number of confirmed fraud) for evaluating a worker for a hire-fire decision; paragraph 0224, discussing that the Answer Confidence Estimation, or ACE, algorithm is used to keep track of the error rate of a stream of answers for either an individual worker or all workers for a given task to determine the overall quality of the data up to and including the current answer. The algorithm makes use of a combination approach, specifically IBCC or other Weak Classifier Aggregation Algorithm, to track the confidence probability of worker's answers over time. It works by checking if the last answer keeps the worker or task over the required AQL level. If the confidence does remain above AQL the last answer is marked as ready for delivery to the client...; paragraphs 0144, 0254).
DeFilippo is directed towards a system and method for tracking operator behaviors within their digital environment. Volkov is directed towards a workforce management system. Therefore, they are deemed to be analogous, as both are directed towards employee management systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine DeFilippo with Volkov because the references are analogous art, both being directed to solutions for worker analysis, which falls within applicant’s field of endeavor (contact center management), and because modifying DeFilippo to include Volkov’s teaching of basing the one or more follow-up actions on the disposition truthfulness score, in the manner claimed, would serve the motivation of improving the worker pool and efficiency (Volkov at paragraph 0261); and further obvious because the claimed invention is merely a combination of old elements, in which each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 11, the DeFilippo-Volkov-Tapuhi combination teaches the computerized-method of claim 1. DeFilippo further teaches wherein the computerized-method is further comprising displaying the score related to the agent on a supervisor dashboard of the supervisor application, via a display unit (paragraph 0161, discussing that action is taken based on the action data. When the received interaction data is real-time interaction data, taking action can include presenting the behavior improvement recommendation to the user. As another example, taking action can include automatically performing one or more interactions on behalf of the user that have been predicted by the machine learning model to have a positive effect on the case outcome for the case. When the received interaction data is historical interaction data, taking action can include including the behavior improvement recommendation in a report and providing the report to a user or a supervisor of the user).
DeFilippo does not explicitly teach the disposition confidence score. However, Volkov in the analogous art of worker evaluation systems teaches this concept (paragraph 0013, discussing that the adjudication module manages the results provided/submitted by a worker for a task. The adjudication module utilizes one or more adjudication rules or acceptance criteria to ensure that the best results of a task are identified and/or to provide a degree of confidence in the correctness of a result; paragraph 0123, discussing that the candidate for answer error detection module handles all first-pass evaluation of a worker answer for detection of possible fraudulent, out-of-characteristic, or unreliable behavior, and sends the question directly for extension if required. This module predicts the likelihood of the current answer being incorrect or being submitted with the worker in a spamming state without making any attempt to answer the question correctly. “Spamming” may refer to a worker behavior where the worker submits answers in order to obtain some benefit (e.g., monetary rewards for answering questions) without regard to the correctness of the provided answers; paragraph 0126, discussing that the specific procedure for making the prediction and learning the comparison between the current answer and historical data may be done in multiple ways. It may be unsupervised and compared against statistical information of correct/incorrect characteristic classes at the crowd and/or individual worker level. Or it may be supervised and be trained against labelled examples of both correct/incorrect data. It could also combine the supervised and unsupervised approaches into a single prediction using ensemble combination approaches; paragraph 0129, discussing that the Candidate Answer Fraud (CAF) model determines the likelihood of a submitted answer as being wrong based on worker characteristics. 
This determines whether a specific answer to a worker assignment is to be marked as an outlier by identifying assignments that deviate in behavioral features from the “Peer Group”, the group of high performing workers. It uses the Peer Groups to compare the characteristics of the current answer with the historical behavior of known correct answers; paragraph 0131, discussing that the CAF model identifies out of character worker behavior and flags answers for extension. CAF model additionally prevents flagged or fraudulent answers from being passed through the delivery. In certain embodiments, CAF model provides a framework for statistical improvements to quality, and for analyst review of worker performance (number of candidate fraud, number of confirmed fraud) for evaluating a worker for a hire-fire decision; paragraph 0224, discussing that the Answer Confidence Estimation, or ACE, algorithm is used to keep track of the error rate of a stream of answers for either an individual worker or all workers for a given task to determine the overall quality of the data up to and including the current answer. The algorithm makes use of a combination approach, specifically IBCC or other Weak Classifier Aggregation Algorithm, to track the confidence probability of worker's answers over time. It works by checking if the last answer keeps the worker or task over the required AQL level. If the confidence does remain above AQL the last answer is marked as ready for delivery to the client...; paragraphs 0144, 0254).
DeFilippo is directed towards a system and method for tracking operator behaviors within their digital environment. Volkov is directed towards a workforce management system. Therefore, they are deemed to be analogous as they are both directed towards employee management systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine DeFilippo with Volkov because the references are analogous art, as both are directed to solutions for worker analysis, which falls within applicant’s field of endeavor (contact center management), and because modifying DeFilippo to include Volkov’s disposition confidence score feature, in the manner claimed, would serve the motivation of improving the worker pool and efficiency (Volkov at paragraph 0261); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Claim 14 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 1, as discussed above. Further, as per claim 14, the DeFilippo-Volkov-Tapuhi combination teaches a computerized-system for identifying truthfulness of a disposition, in a contact center, the computerized-system comprising: one or more processors; a database; and a memory to store the database (DeFilippo, paragraphs 0162, 0163, 0169, 0170, 0171, 0190).
20. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over DeFilippo in view of Volkov, in view of Tapuhi, in further view of Sivasubramanian, Pub. No.: US 2021/0157834 A1, [hereinafter Sivasubramanian].
As per claim 2, the DeFilippo-Volkov-Tapuhi combination teaches the computerized-method of claim 1. DeFilippo further teaches wherein the AI model is prebuilt by: (i) retrieving interactions transcripts and related dispositions during a preconfigured period (paragraph 0108, discussing that the interaction log processor can locate interaction data for cases that have been identified as being associated with a good or bad case result, including interaction data associated with the model users and interaction data having at least one of the specified interaction patterns. A model builder and learning engine can build machine learning models that each include learned model interaction behavior for a given case type…Machine learning models can also be personalized for a given user; paragraph 0078, discussing customer service agent activity statistics over time; paragraph 0047); and
(ii) preprocessing the retrieved interactions transcripts and related disposition (paragraph 0108, discussing that the interaction log processor can locate interaction data for cases that have been identified as being associated with a good or bad case result, including interaction data associated with the model users and interaction data having at least one of the specified interaction patterns; paragraph 0111, discussing that after the ML engine has been trained, an interaction analyzer/recommendation engine can receive interaction data for the support representative for interactions occurring in multiple software services used by the user during handling of a case. The interaction analyzer/recommendation engine can identify a machine learning model built by the model building and learning engine that includes learned model interaction behavior for a case type of the case. The interaction analyzer/recommendation engine can compare interaction data for the user to the learned model interaction behavior and generate action data that includes an interaction behavior improvement recommendation that is determined based on the comparing of the interaction data for the user to the learned model interaction behavior for the case type; paragraph 0128, discussing that the case result criteria can include threshold values that indicate good or bad case results).
The DeFilippo-Volkov-Tapuhi combination does not explicitly teach (iii) providing the preprocessed interactions transcripts and related disposition to a Natural Language Processing (NLP) module to tokenize the preprocessed interactions transcripts into tokens and encode the tokens; and (iv) using the encoded tokens to build and train the AI model. However, Sivasubramanian in the analogous art of contact center analytics teaches these concepts. Sivasubramanian teaches:
(iii) providing the preprocessed interactions transcripts and related disposition to an NLP module to tokenize the preprocessed interactions transcripts into tokens and encode the tokens (paragraph 0031, discussing techniques utilized to implement systems and methods to analyze contacts data. Contacts data may refer to various types of communications that occur within the context of a contact center; paragraph 0074, discussing that text-based contacts data (e.g., transcripts generated by speech-to-text service or text-based contacts data obtained from client data store) are analyzed using a natural language processing (NLP) service. In at least one embodiment, NLP service 132 is a service of a computing resource service provider. In at least one embodiment…In at least one embodiment, NLP service 132 uses artificial intelligence and/or machine learning techniques to perform sentiment analysis, entity detection, key phrase detection, and various combinations thereof. In at least one embodiment, text-based contacts are organized by turns—for example, turns may alternate based on which party was speaking or typing on a contact. Each sentence spoken may correspond to a turn (e.g., successive turns may be from the same speaker). In at least some embodiments, each turn is analyzed separately for sentiment analysis, entity detection, key phrase detection, and various combinations thereof. In some embodiments, for the text of a turn, sentiment analysis, entity detection, key phrase detection, and various combinations thereof are processed in parallel by NLP service 132. In an embodiment, other natural language processing capabilities offered by NLP service 132 are utilized to analyze text-based contacts data. 
In at least one embodiment, sentiment analysis 120A, entity detection, key phrase detection, and various combinations thereof are executed as individual event-driven functions on a per-turn basis; paragraph 0241, discussing that the customer may create a role supported by a role management service through an interface console. The interface console may allow the customer to click an appropriate button or consent checkbox in the interface console, and the underlying system may create the role with the necessary permissions. The token service may provide the scaling service with session credentials based on a role or roles specified by the customer. These session credentials may be used by the scaling service to interact with the resource services on behalf of the customer. The token service may provide a token to the scaling service that the scaling service may include with requests that provide evidence that the scaling service has been granted the appropriate role to cause scalable dimensions of a resource in the resource services to be manipulated. The role may be utilized by the automatic scaling service to call a resource service's APIs on behalf of the customer; paragraph 0044); and
(iv) using the encoded tokens to build and train the AI model (paragraph 0044, discussing that contacts analytics service employs machine learning in an unsupervised manner and/or post-processing techniques to extract similar key phrases across conversations, perform intelligent grouping, and display result themes in a ranked order along with a count or severity value that indicates the magnitude of the issue; paragraph 0195, discussing that scaling policies may be stored with the database service by the scaling service backend, and scaling actions may be initiated through a scaling service workflow manager by the scaling service backend. The customer may specify, via a policy/role management service (not shown), a role to be assigned to the scaling service, and the scaling service may obtain a token from a token service as proof that the scaling service has been granted that role. Upon triggering a scaling policy, the scaling service may obtain a resource's current capacity and set the resource's capacity for its respective resource service of the resource services under the specified role; paragraph 0241, discussing that the customer may create a role supported by a role management service through an interface console. The interface console may allow the customer to click an appropriate button or consent checkbox in the interface console, and the underlying system may create the role with the necessary permissions. The token service may provide the scaling service with session credentials based on a role or roles specified by the customer. These session credentials may be used by the scaling service to interact with the resource services on behalf of the customer. The token service may provide a token to the scaling service that the scaling service may include with requests that provide evidence that the scaling service has been granted the appropriate role to cause scalable dimensions of a resource in the resource services to be manipulated.
The role may be utilized by the automatic scaling service to call a resource service's APIs on behalf of the customer; paragraph 0242, discussing that interruption of the token service may result in the scaling service being unable to assume a role supported by a role management service, with the scaling service thereby being unable to scale a resource of the customer. In some embodiments, the scaling service caches temporary credentials that the scaling service can use when assuming a role).
The DeFilippo-Volkov-Tapuhi combination describes features related to worker evaluation. Sivasubramanian is directed towards a workforce management system. Therefore, they are deemed to be analogous as they are both directed towards worker analysis systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the DeFilippo-Volkov-Tapuhi combination with Sivasubramanian because the references are analogous art, as both are directed to solutions for worker analysis, which falls within applicant’s field of endeavor (contact center management), and because modifying the DeFilippo-Volkov-Tapuhi combination to include Sivasubramanian’s features of providing the preprocessed interactions transcripts and related disposition to an NLP module to tokenize the preprocessed interactions transcripts into tokens and encode the tokens, and using the encoded tokens to build and train the AI model, in the manner claimed, would serve the motivation of improving operational efficiency of organizations' contact centers by extracting actionable insights from customer conversations (Sivasubramanian at paragraph 0034); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
21. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over DeFilippo in view of Volkov, in view of Tapuhi, in further view of Miller et al., Pub. No.: US 2018/0091654 A1, [hereinafter Miller].
As per claim 6, the DeFilippo-Volkov-Tapuhi combination teaches the computerized-method of claim 1, but it does not explicitly teach wherein the aggregated data related to the agent for the received interaction transcript of the interaction is agents sentiment score for the interaction, occupancy rate of the agent for a specified period, skills, ratings and duty cycle factor for a specified period. However, Miller in the analogous art of contact center management systems teaches this concept. Miller teaches:
wherein the aggregated data related to the agent for the received interaction transcript of the interaction is agents sentiment score for the interaction, occupancy rate of the agent for a specified period, skills, ratings and duty cycle factor for a specified period (paragraph 0002, discussing systems and methods for automatically computing scores of agent behavior based on analyzing interactions between customers and agents of a contact center and for managing contact center operations in accordance with the automatically computed scores; paragraph 0149, discussing that the call controller interacts with the routing server to find an appropriate agent for processing the interaction. The selection of an appropriate agent for routing an inbound interaction may be based, for example, on a routing strategy employed by the routing server, and further based on information about agent availability, skills, and other routing parameters provided, for example, by a statistics server; paragraph 0057, discussing that the contact center system may also include a reporting server configured to generate reports from data aggregated by the statistics server. Such reports may include near real-time reports or historical reports concerning the state of resources, such as, for example, average waiting time, abandonment rate, agent occupancy, and the like. The reports may be generated automatically or in response to specific requests from a requestor (e.g. agent/administrator, contact center application, and/or the like; paragraph 0063, discussing systems and methods for automating portions of a quality monitoring process in a contact center. The quality monitoring process may be used to monitor and to evaluate the quality of contact center agents' interactions with customers. 
The automatic analysis or automatic evaluation may be performed on metadata associated with the interaction (such as the length of the interaction in minutes and the number of transfers between different agents of the contact center) as well as the content of the interaction (e.g., an analysis of the text transcripts of the interaction to detect keywords or phrases), and the automatic evaluation of the interaction may be used to generate one or more evaluation scores representing the agent's performance during the interaction; paragraph 0081, discussing that the interaction features include sentiment analysis to provide information about the customer's emotions as well as the agent's emotions. For example, sentiment analysis may be used to detect whether the customer ended the interaction in a way that expressed pleasure (e.g., that a problem was resolved) or anger (e.g., that the problem could not be solved). Similarly, sentiment detection may be used to detect whether the agent's portion of the conversation was pleasant and confident or angry and defensive.).
The DeFilippo-Volkov-Tapuhi combination describes features related to worker evaluation. Miller is directed towards a system and method for automatic quality management in a contact center environment. Therefore, they are deemed to be analogous as they are both directed towards worker analysis systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the DeFilippo-Volkov-Tapuhi combination with Miller because the references are analogous art, as both are directed to solutions for worker analysis, which falls within applicant’s field of endeavor (contact center management), and because modifying the DeFilippo-Volkov-Tapuhi combination to include Miller’s feature wherein the aggregated data related to the agent for the received interaction transcript of the interaction is agents sentiment score for the interaction, occupancy rate of the agent for a specified period, skills, ratings and duty cycle factor for a specified period, in the manner claimed, would serve the motivation of detecting deviations in the agent's performance from history, resulting in the automatic scheduling of additional training or improving performance (Miller at paragraph 0120); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Allowable Subject Matter
22. Claim 13 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim 13 recites “The computerized-method of claim 1, wherein the DTS is calculated based on formula I: (I) DTS = DCS + AIS + AOF - DCF, whereby: DCS is calculated based on formula II: (II) Disposition Confidence Score = ((MEDCS + GDCS) / 2) × F1, whereby: MEDCS is a Manually Entered DCS, which is the calculated disposition confidence score related to the agent, GDCS is a General DCS, which is the calculated disposition confidence score related to all agents, and F1 is a weight; AIS is calculated based on formula III: (III) Agent Interaction Specifics = ((AS + ASS) / 2) × F2, whereby: AS is Agent's sentiments score for the interaction, ASS is Agent's skills score, and F2 is a weight; AOF is calculated based on formula IV: (IV) Agent Other Factors = ((AOR + AR) / 2) × F3, whereby: AOR is Agents Occupancy Rate for a specified period, AR is Agent ratings, and F3 is a weight; and DCF is calculated based on formula V: (V) Duty Cycle Factors = RDCF × F4, whereby: RDCF is Raw Duty Cycle Factor for a specified period, and F4 is a weight.” With respect to dependent claim 13, the closest prior art, Dwyer et al., Pub. No.: US 2021/0120126 A1 – describes a real-time automated monitoring system for monitoring and improving live communications, and providing feedback on communications performance.
While Dwyer describes a score builder for calculating an agent quality score based on a plurality of indicators (paragraph 0120), Dwyer does not teach the claim limitations directed to specific techniques for calculating a Disposition Truthfulness Score based on formula I: “(I) DTS = DCS + AIS + AOF - DCF, whereby: DCS is calculated based on formula II: (II) Disposition Confidence Score = ((MEDCS + GDCS) / 2) × F1, whereby: MEDCS is a Manually Entered DCS, which is the calculated disposition confidence score related to the agent, GDCS is a General DCS, which is the calculated disposition confidence score related to all agents, and F1 is a weight; AIS is calculated based on formula III: (III) Agent Interaction Specifics = ((AS + ASS) / 2) × F2, whereby: AS is Agent's sentiments score for the interaction, ASS is Agent's skills score, and F2 is a weight; AOF is calculated based on formula IV: (IV) Agent Other Factors = ((AOR + AR) / 2) × F3, whereby: AOR is Agents Occupancy Rate for a specified period, AR is Agent ratings, and F3 is a weight; and DCF is calculated based on formula V: (V) Duty Cycle Factors = RDCF × F4, whereby: RDCF is Raw Duty Cycle Factor for a specified period, and F4 is a weight.” Claim 13 is not allowable, however, because claim 13 remains rejected under 35 U.S.C. 101. Furthermore, even if the §101 rejection of claim 13 is overcome, claim 13 would be objected to as being dependent upon a rejected base claim (claim 1).
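For clarity of the record, the arithmetic recited in formulas I-V of claim 13 may be illustrated with the following sketch. The input values and the weights F1-F4 are hypothetical; the claim does not specify their values or ranges.

```python
def disposition_truthfulness_score(medcs, gdcs, sentiment, skills, aor, ar, rdcf,
                                   f1=1.0, f2=1.0, f3=1.0, f4=1.0):
    """Illustrative computation of the DTS per formulas I-V of claim 13.

    All inputs and the weights f1-f4 are hypothetical examples; the claim
    does not restrict their values or ranges.
    """
    dcs = ((medcs + gdcs) / 2) * f1        # formula II: Disposition Confidence Score
    ais = ((sentiment + skills) / 2) * f2  # formula III: Agent Interaction Specifics
    aof = ((aor + ar) / 2) * f3            # formula IV: Agent Other Factors
    dcf = rdcf * f4                        # formula V: Duty Cycle Factors
    return dcs + ais + aof - dcf           # formula I: DTS = DCS + AIS + AOF - DCF
```

With all weights left at 1.0, for example, inputs of MEDCS = 0.8, GDCS = 0.6, AS = 0.9, ASS = 0.7, AOR = 0.5, AR = 0.9, and RDCF = 0.2 yield DCS = 0.7, AIS = 0.8, AOF = 0.7, DCF = 0.2, and thus DTS = 2.0.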
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ellenbogen et al., Pub. No.: US 2017/0099200 A1 – describes a platform for gathering real-time analysis.
Rosenberg et al., Pub. No.: US 2022/0319496 A1 – describes systems and methods for training natural language processing (NLP) models in a contact center.
Ali III, Louis Franklin. "A call center simulation study: Comparing the reliability of cross-trained agents to specialized agents." (2010) – compares specialized agents to cross-trained agents through the use of simulation and determines which of the two is more efficient and reliable in its ability to service the customer.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARLENE GARCIA-GUERRA whose telephone number is (571) 270-3339. The examiner can normally be reached M-F 7:30a.m.-5:00p.m. EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian M. Epstein can be reached on (571) 270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Darlene Garcia-Guerra/
Primary Examiner, Art Unit 3625