DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim(s) 1-20 are pending for examination. Claim(s) 1, 8, and 15 have been amended. This action is Final.
Response to Arguments
Applicant's arguments filed 1/21/2026 with respect to the 35 U.S.C. 101 rejection have been fully considered but they are not persuasive.
Applicant Argues: Claim 1 is not directed to any of the judicial exceptions and therefore claim 1 is not directed to an abstract idea. MPEP §2106.04(a) states: [...].
The Office Action alleges that the claims fall under the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. See, Office Action at page 3. Applicant respectfully disagrees for at least the following reasons. Initially, the invention as a whole is related to "an AI-based auditing mechanism that processes transcripts of communications in real-time to extract relevant features for detecting agent fraud" (para. [0017]), which is not a method of organizing human activities. Particularly, claim 1 recites "detecting, based on a fraud detection model implemented with an artificial neural network, the agent fraud of the agent occurred during the batch period based on the batch feature vector to generate an agent fraud detection result." The above-quoted claim features are related to detecting frauds based on a model implemented with an artificial neural network. These claim features extend far beyond fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), and thus do not fall within the enumerated group of Certain methods of organizing human activity.
Also, the above-mentioned claim features cannot be performed in the human mind at least because the human mind cannot detect frauds based on a model implemented with an artificial neural network, and thus the claims do not fall within the "Mental processes" grouping of abstract ideas. Further, these claim features are not directed to mathematical concepts.
Accordingly, the claims do not fall into any of the abstract ideas exceptions provided by the Guidance, and thus the claims are patent eligible under Prong One of the Step 2A Analysis of the Guidance.
Examiner’s Response: The examiner respectfully disagrees. The examiner respectfully notes that Claim 1 is directed to an abstract idea. The claim limitations of extracting real-time features from a communication ...; computing a batch feature vector for the agent ... wherein the batch feature vector characterizes an agent fraud of the agent in terms of frequency and harm associated with the agent fraud; detecting, agent fraud of the agent occurred ... based on the batch feature vector to generate an agent fraud detection result; updating evaluation information associated with the agent based on the agent fraud detection result to generate updated evaluation information of the agent; and auditing service performance of the agent based on the updated evaluation information, as claimed in Claim 1, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas because they recite “commercial interactions" or "legal interactions" in the form of marketing or sales activities or behaviors and/or business relations. The examiner further notes that the feature argued by Applicant, “...fraud detection model implemented with an artificial neural network...”, is an additional element recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component, merely invoking the additional element as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, this additional element, even in combination with the other claim elements, does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Therefore, these arguments are not persuasive.
The examiner further notes that the 35 U.S.C. 101 rejection does not enumerate the claims under the "Mental Processes" grouping; therefore, this argument is not persuasive.
Applicant Argues: Moreover, claim 1 is patent eligible because the claimed concepts are integrated into a practical application. [...].
Initially, as mentioned previously, the Office Action has improperly analyzed claim 1 when determining whether claim 1 recites a judicial exception because claim 1 does not fall into any of the abstract idea exceptions - mathematical concepts, certain methods of organizing human activity, or mental processes.
Even assuming, for the sake of argument, that claim 1 does recite an abstract idea (with which the Applicant disagrees), Applicant respectfully submits that claim 1 is patent eligible under Prong Two of the Step 2A Analysis.
The recited features of claim 1 are clearly tied to a practical application, i.e., detecting frauds based on a fraud detection model implemented with an artificial neural network. The claims provide an improvement in the technical field of analysis of real-time communications to detect agent frauds (para. [0017]). "[T]he AI-based auditing platform according to the present teaching enables automated evidence gathering on-the-fly to facilitate evidence-supported detection, self-correction to consequences of agent fraud, and prevention of damages to customer relationships due to cramming activities" (para. [0021]).
Thus, Applicant respectfully submits that, under Prong Two of the Step 2A Analysis from the Guidance, the claimed concept is integrated into a practical application and therefore is not directed to a judicial exception. Therefore, Applicant respectfully submits that the claims are directed to patent eligible subject matter.
Examiner’s Response: The examiner respectfully disagrees. The examiner notes that the “...fraud detection model implemented with an artificial neural network...” is an element recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component, merely invoking the additional element as a tool to perform the abstract idea (i.e., detecting ... the agent fraud of the agent occurred during the batch period based on the batch feature vector to generate an agent fraud detection result). See MPEP 2106.05(f). Thus, such an element is not indicative of integration into a practical application because including this feature (i.e., “...fraud detection model implemented with an artificial neural network...”) amounts to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Therefore, this argument is not persuasive.
Applicant Argues: Further, claim 1 amounts to significantly more than the judicial exception. [...].
In the instant application, the Office Action appears to allege that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Applicant respectfully disagrees with the contentions, and further submits that claim 1 is patent eligible under Step 2B Analysis from the Guidance.
Berkheimer showed that it is not merely the additional elements that are to be viewed for eligibility, but the claimed concept described by the additional elements in conjunction with the non-additional elements. Furthermore, even assuming arguendo that each of the claim limitations individually is abstract, or is performed by or on a generic computer, so too are the BASCOM claim limitations (e.g., BASCOM Global Internet v. AT&T Mobility LLC, No. 2015-1763 (Fed. Cir. Jun. 27, 2016) ("BASCOM")).
In addition to failing to consider Applicant's claims as an ordered combination and as a whole, the Office Action has improperly analyzed the claims without considering the "additional element(s)" in combination with the non-additional elements. As a result, the Office Action has also incorrectly and improperly identified that the alleged "additional elements" do not amount to significantly more than the alleged judicial exception.
Accordingly, Applicant respectfully submits that claim 1 is patent eligible under the Step 2B Analysis of the Guidance.
Applicant respectfully submits that claim 1 is directed to patent eligible subject matter. Accordingly, no further analysis is necessary to find claim 1 patent eligible under 35 U.S.C.§ 101.
Examiner’s Response: The examiner respectfully disagrees. The examiner respectfully notes that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more to the exception. The examiner notes that the additional elements, as noted in the rejection below, amount to no more than mere instructions to apply the exception using a generic computer component and do not add anything that is not already present when they are considered individually or in combination. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Therefore, there are no meaningful limitations that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself. Therefore, this argument is not persuasive.
Applicant's arguments filed 1/21/2026 with respect to the 35 U.S.C. 103 rejection have been fully considered but they are not persuasive.
Applicant Argues: [...] The Applicant respectfully traverses the rejection. Amended claim 1 recites, in part: wherein the batch feature vector characterizes an agent fraud of the agent in terms of frequency and harm associated with the agent fraud. Neither Shaffer nor Cifarelli teaches or suggests the above-quoted claim features for at least the following reasons. The Office Action on page 7 alleges that Shaffer teaches "computing a batch feature vector for the agent with respect to a batch period based on real-time features extracted from communications involving the agent and accumulated during the batch period."
[...]
From the above, the Office Action appears to allege that Shaffer's interaction sequence vector associated with an interaction problem corresponds to the recited "batch feature vector." However, Shaffer does not teach or suggest that the interaction sequence vector "characterizes an agent fraud of the agent in terms of frequency and harm associated with the agent fraud," as required by claim 1. Thus, Shaffer does not teach or suggest claim 1's "wherein the batch feature vector characterizes an agent fraud of the agent in terms of frequency and harm associated with the agent fraud."
Also, Cifarelli's teachings of raising an alert to a supervisor if a fraud score exceeds a threshold do not teach or suggest a vector that "characterizes an agent fraud of the agent in terms of frequency and harm associated with the agent fraud," as required by claim 1. Thus, Cifarelli still does not teach or suggest claim 1's "wherein the batch feature vector characterizes an agent fraud of the agent in terms of frequency and harm associated with the agent fraud."
Claims 8 and 15 recite features similar to the features of claim 1 discussed above.
It is well settled that establishing a prima facie case of obviousness requires that all claim limitations recited in a claim be taught or suggested by the prior art. MPEP § 2143.03. Clearly, the cited references, either alone or in any combination, do not pass muster. Thus, a prima facie case of obviousness cannot be established with respect to claims 1, 8, and 15, and thus, claims 2-7, 9-14, and 16-20 are not rendered obvious by Shaffer in view of Cifarelli and are patentable.
[...]
Therefore, the Applicant respectfully requests that the rejection of claims 1-20 under 35 U.S.C. § 103 be withdrawn.
Examiner’s Response: The examiner respectfully disagrees. The examiner respectfully notes that Shaffer does in fact teach the amended limitation of "computing a batch feature vector for the agent with respect to a batch period based on real-time features extracted from communications involving the agent and accumulated during the batch period, wherein the batch feature vector characterizes an agent fraud of the agent in terms of frequency and harm associated with the agent fraud." More specifically, Shaffer depicts in FIG. 9 and discloses ⁋⁋ [0090] - The interaction sequence information includes information identifying devices (or a user ID) to which the interaction sequence corresponds, the contact interactions in the sequence and the time information which can be used to determine the order of and/or timing between interactions is stored in memory 312 of the contact center management system and [0091] - Operation proceed from step 915, in which an contact interaction is associated with an interaction sequence, to step 917, in which an interaction sequence vector for the interaction sequence with which an interaction was associated is generated or updated if the interaction sequence is an existing interaction sequence, based on the detected interaction. This may be done, for example, using the information shown in FIG. 6B).
Further, the examiner notes Shaffer in ⁋⁋ [0012] - In particular, methods and apparatus for classifying a contact as being a normal contact, a potential fraudulent contact from a malicious agent, or a contact from a person who requires support, are described herein and [0013] - In various embodiments during use, information exchange including interaction sequences between individual user (via a terminal) and at least one of automated contact handling and a live agent are analyzed to determine if they correspond to predetermined clusters of interaction sequences, with the interaction sequences being represented by interaction sequence vectors, known to correspond to a particular interactions problem and/or mitigation action. If a detected interaction sequence is determined to correspond to a cluster of interactions associated with not-normal service request, e.g., fraudulent contact or a contact from a novice user who needs help, a corresponding tagging of the contact takes place and a mitigation action is automatically taken by the method. Thus, as construed above, the interaction sequence vector (i.e., batch feature vector) characterizes an agent fraud based on time information (i.e., frequency) and the contact interaction sequence (i.e., harm). Therefore, the metes and bounds of the claim have been met based on a reasonable construction of Shaffer’s teaching, and a prima facie case of obviousness has been established. Therefore, this argument is not persuasive.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: claim(s) 1-20 are directed to a process, a manufacture, and/or a machine. Therefore, the claims are directed to statutory subject matter under Step 1 (Step 1: YES). See MPEP 2106.03.
Prong 1, Step 2A: claim 1, and similar claim(s) 8 and 15, taken as representative, recites at least the following limitations that recite an abstract idea:
A method, comprising:
extracting real-time features from a communication between a customer and an agent;
computing a batch feature vector for the agent with respect to a batch period based on real-time features extracted from communications involving the agent and accumulated during the batch period, wherein the batch feature vector characterizes an agent fraud of the agent in terms of frequency and harm associated with the agent fraud;
detecting the agent fraud of the agent occurred during the batch period based on the batch feature vector to generate an agent fraud detection result;
updating evaluation information associated with the agent based on the agent fraud detection result to generate updated evaluation information of the agent; and
auditing service performance of the agent based on the updated evaluation information.
The above limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in MPEP 2106.04(a)(2)(II), in that they recite "commercial interactions" or "legal interactions," including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. The broadest reasonable interpretation of these limitations for claim 1, and similar claim(s) 8 and 15, includes extracting real-time features from a communication ...; computing a batch feature vector for the agent ... wherein the batch feature vector characterizes an agent fraud of the agent in terms of frequency and harm associated with the agent fraud; detecting, agent fraud of the agent occurred ... based on the batch feature vector to generate an agent fraud detection result; updating evaluation information associated with the agent based on the agent fraud detection result to generate updated evaluation information of the agent; and auditing service performance of the agent based on the updated evaluation information. Thus, claim 1, and similar claim(s) 8 and 15, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas as they recite “commercial interactions" or "legal interactions" in the form of marketing or sales activities or behaviors and/or business relations.
Accordingly, these claims recite an abstract idea. (Prong 1, Step 2A: YES). The types of identified abstract ideas are considered together as a single abstract idea for analysis purposes.
Prong 2, Step 2A: Limitations that are not indicative of integration into a practical application include: (1) adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); and (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). Claim 1, and similar claim(s) 8 and 15, recite the following additional elements: a fraud detection model implemented with an artificial neural network, a machine readable/non-transitory medium, a machine, and a system with a CS platform and an AI-based auditing platform including processing. These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration (see Applicant’s Specification, ⁋⁋ [0047]-[0050]). These elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component, merely invoking such additional elements as a tool to perform the abstract idea. See MPEP 2106.05(f). Accordingly, these additional elements, even in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
As such, under Prong 2 of Step 2A, when considered both individually and as a whole, the limitations of claim 1, and for similar claim(s) 8 and 15 are not indicative of integration into a practical application (Prong 2, Step 2A: NO). See MPEP 2106.04(d).
Since claim 1, and similar claim(s) 8 and 15, recite an abstract idea and fail to integrate the abstract idea into a practical application, claim 1, and similar claim(s) 8 and 15, are “directed to” an abstract idea under Step 2A (Step 2A: YES). See MPEP 2106.04(d).
Step 2B: The recitation of the additional elements is acknowledged, as identified above with respect to Prong 2 of Step 2A. These additional elements do not add significantly more to the abstract idea for the same reasons as addressed above with respect to Prong 2 of Step 2A.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of claim 1, and similar claim(s) 8 and 15 (i.e., a fraud detection model implemented with an artificial neural network, a machine readable/non-transitory medium, a machine, and a system with a CS platform and an AI-based auditing platform including processing), amount to no more than mere instructions to apply the exception using a generic computer component and do not add anything that is not already present when they are considered individually or in combination. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Therefore, under Step 2B, there are no meaningful limitations in claim 1, and similar claim(s) 8 and 15, that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself (Step 2B: NO). See MPEP 2106.05.
Accordingly, under the Subject Matter Eligibility test, claim 1, and similar claim(s) 8 and 15, are ineligible.
Regarding Claims 2-7, 9-14, and 16-20: claims 2-7, 9-14, and 16-20 further define the abstract idea that is present in their respective independent claims and hence are abstract for at least the reasons presented above with respect to “Certain Methods of Organizing Human Activity,” as the claims recite further concepts of “commercial interactions" or "legal interactions," including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations, i.e., further features related to fraud detection. These dependent claims do not include any additional elements that integrate the abstract idea into a practical application, as such elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Even in combination, these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself. Thus, the aforementioned claims are not patent-eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shaffer et al. (US 2023/0120358 A1) in view of Cifarelli et al. (US 2022/0188827 A1).
Regarding Claim 1;
Shaffer discloses a method, comprising:
extracting real-time features from a communication between a customer and an agent ([0012] - In particular, methods and apparatus for classifying a contact as being a normal contact, a potential fraudulent contact from a malicious agent, or a contact from a person who requires support, are described herein and [0088]-[0089] - Operation proceeds from start step 905 to monitoring/observation step 910 in which the contact center management system monitors, e.g., observes, interaction corresponding to interaction sequences and optionally also timing of interactions... In at least some embodiments, as previously discussed, individual interaction sequences correspond to interactions between an individual user and one or more contact centers which are being used by the user to obtain information and/or control resources);
computing a batch feature vector for the agent with respect to a batch period based on real-time features extracted from communications involving the agent and accumulated during the batch period, wherein the batch feature vector characterizes an agent fraud of the agent in terms of frequency and harm associated with the agent fraud (FIG. 7 and FIG. 9 and [0012] - In particular, methods and apparatus for classifying a contact as being a normal contact, a potential fraudulent contact from a malicious agent, or a contact from a person who requires support, are described herein and [0013] - In various embodiments during use, information exchange including interaction sequences between individual user (via a terminal) and at least one of automated contact handling and a live agent are analyzed to determine if they correspond to predetermined clusters of interaction sequences, with the interaction sequences being represented by interaction sequence vectors, known to correspond to a particular interactions problem and/or mitigation action and [0028] and [0068] and [0089] - Operation proceeds from start step 905 to monitoring/observation step 910 in which the contact center management system monitors, e.g., observes, interaction corresponding to interaction sequences and optionally also timing of interactions and [0090] - The interaction sequence information includes information identifying devices (or a user ID) to which the interaction sequence corresponds, the contact interactions in the sequence and the time information which can be used to determine the order of and/or timing between interactions is stored in memory 312 of the contact center management system and [0091] - Operation proceed from step 915, in which an contact interaction is associated with an interaction sequence, to step 917, in which an interaction sequence vector for the interaction sequence with which an interaction was associated is generated or updated if the interaction sequence is an existing interaction sequence, based on the detected interaction. This may be done, for example, using the information shown in FIG. 6B);
detecting, based on a fraud detection model implemented with an artificial neural network, the agent fraud of the agent occurred during the batch period based on the batch feature vector to generate an agent fraud detection result (FIG. 9 and [0012] - In particular, methods and apparatus for classifying a contact as being a normal contact, a potential fraudulent contact from a malicious agent, or a contact from a person who requires support, are described herein and [0013] - In various embodiments during use, information exchange including interaction sequences between individual user (via a terminal) and at least one of automated contact handling and a live agent are analyzed to determine if they correspond to predetermined clusters of interaction sequences, with the interaction sequences being represented by interaction sequence vectors, known to correspond to a particular interactions problem and/or mitigation action. If a detected interaction sequence is determined to correspond to a cluster of interactions associated with not-normal service request, e.g., fraudulent contact or a contact from a novice user who needs help, a corresponding tagging of the contact takes place and a mitigation action is automatically taken by the method and [0092] and [0096] and [0105] - ...mapping interaction vectors into an interaction sequence vector using a recurrent neural network (RNN)).
Shaffer fails to explicitly disclose:
updating evaluation information associated with the agent based on the agent fraud detection result to generate updated evaluation information of the agent; and
auditing service performance of the agent based on the updated evaluation information.
However, in an analogous art, Cifarelli teaches concepts of:
updating evaluation information associated with the agent based on the agent fraud detection result to generate updated evaluation information of the agent ([0054] - Real-time fraud score manager 124 has access to the features data store and real time features being provided by collector 123 for a transaction currently being processed on a terminal 110 by a given operator. Fraud score manager 124 computes a fraud score for the real-time transaction and the associated operator based on the current features and historic features associated with the operator. Fraud score manager 124 derives a variety of data structures and metrics from the features. The data structures and metrics are processed to compute or calculate the fraud score. The fraud score is then compared against a threshold or against a deviation from a range and determines whether the current transaction should be inspected for fraud or not inspected for fraud. If a determination of fraud is made, the transaction details, fraud score, operator identifier, and a computed average of the operator's features are sent to fraud system 131 for inspection and an alert may also be raised by score manager 125 to a supervisory terminal or manager terminal for a supervisor or a manager to go inspect the transaction suspected of fraud. The fraud score can be computed in a number of manners); and
auditing service performance of the agent based on the updated evaluation information ([0054] and [0060] -This can be done to compute feature scores for purposes of operator training, operator counseling, and operator performance reviews and Claim 6 – ...audit the transaction...).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Cifarelli with the method of Shaffer to include updating evaluation information associated with the agent based on the agent fraud detection result to generate updated evaluation information of the agent; and auditing service performance of the agent based on the updated evaluation information.
One would have been motivated to combine the teachings of Cifarelli with Shaffer, as doing so allows the system to accurately produce fraud scores and/or identify frauds occurring in real time (Cifarelli, [0013]).
Regarding Claim 2;
Shaffer in view of Cifarelli disclose the method of Claim 1.
Shaffer further discloses wherein the extracting the real-time features comprises: analyzing the communication ([0013] - In various embodiments during use, information exchange including interaction sequences between individual user (via a terminal) and at least one of automated contact handling and a live agent are analyzed to determine if they correspond to predetermined clusters of interaction sequences, with the interaction sequences being represented by interaction sequence vectors, known to correspond to a particular interactions problem and/or mitigation action and [0089]); detecting one or more transactions performed by the agent and features associated with each of the one or more transactions ([0013] and [0089] - Operation proceeds from start step 905 to monitoring/observation step 910 in which the contact center management system monitors, e.g., observes, interaction corresponding to interaction sequences and optionally also timing of interactions. In step 910 the time an interaction is received or transmitted at the contact center can be documented thereby allowing the order and/or timing between consecutive contact interaction events/messages in a sequence to be determined from the timing information once an interaction is mapped, e.g., identified as corresponding to, a particular interaction sequence); setting a fraud flag, with respect to each of the one or more transactions, based on the features associated therewith ([0089] and [0113] - Note, too, that the mitigation actions may be performed on the ongoing contact interaction in real-time, or afterward (e.g., reporting, flagging, analyzing, etc.); and outputting the features associated with each of the one or more transactions and the corresponding fraud flags as the real-time features of the communication ([0016] - By monitoring individual interaction sequences between customers and contact centers, identifying individual interaction sequences corresponding to interaction sequence clusters known to correspond to fraudulent activities or to activities of inexperienced users, many problems can be identified and corresponding mitigation actions can be, and sometimes are, taken, e.g., in real time while an interaction session is still ongoing, without the need for an agent or a technician to communicate or report a problem and [0089] and [0096] - If the contact interaction sequence vector is determined in the decision step 935 that the contact is a fraudulent contact (Y branch), the contact is directed to step 940 where the fraudulent contact is handled, e.g., by disconnecting the contact or by directing the contact to agent group that specializes in dealing with such contacts, and the method loops back to step 910 where the contacts continue to be observed. However if operation 935 does not determined that the contact was initiated by a fraudulent user (N branch), operation proceeds to step 945 and [0113]).
Regarding Claim 3;
Shaffer in view of Cifarelli disclose the method of Claim 1.
Cifarelli further teaches wherein each of the transactions involves one of: adding a service to the customer; removing an existing service currently provided to the customer; and modifying a term of an existing service currently provided to the customer ([0018] - 4. Price overrides: A cashier manually overrides price for an item. If this happens a lot, an alert should be raised). As construed, a price for an item in a transaction is a term of an existing service, and manually overriding the price is noted to be modifying the term.
Similar rationale and motivation are noted for the combination of Cifarelli with Shaffer, as per claim 1 above.
Regarding Claim 4;
Shaffer in view of Cifarelli disclose the method of Claim 1.
Shaffer further discloses wherein the features associated with each of the transactions include at least one of: an entity corresponding to a service involved in the transaction; an inquiry of the customer regarding the service; a response of the customer regarding the service; and an intent of the customer detected from the communication (FIG. 6A – User entered ID... Request for information about balance... Request for historical information about balance and [0056] and [0063]).
Regarding Claim 5;
Shaffer in view of Cifarelli disclose the method of Claim 1.
Shaffer further discloses wherein the computing the batch feature vector comprises: retrieving the real-time features relating to the communications involving the agent occurred in the batch period ([0012] - In particular, methods and apparatus for classifying a contact as being a normal contact, a potential fraudulent contact from a malicious agent, or a contact from a person who requires support, are described herein and [0013] - In various embodiments during use, information exchange including interaction sequences between individual user (via a terminal) and at least one of automated contact handling and a live agent are analyzed to determine if they correspond to predetermined clusters of interaction sequences, with the interaction sequences being represented by interaction sequence vectors, known to correspond to a particular interactions problem and/or mitigation action and [0020] - It should be appreciated that once a user starts an interaction sequence, the interaction sequence may continue with interactions being sent to one or more contact center's devices prior to the individual user starting a new interaction sequence. 
The detected interaction sequences are stored.); aggregating the retrieved real-time features ([0012] - In particular, methods and apparatus for classifying a contact as being a normal contact, a potential fraudulent contact from a malicious agent, or a contact from a person who requires support, are described herein and [0013] - In various embodiments during use, information exchange including interaction sequences between individual user (via a terminal) and at least one of automated contact handling and a live agent are analyzed to determine if they correspond to predetermined clusters of interaction sequences, with the interaction sequences being represented by interaction sequence vectors, known to correspond to a particular interactions problem and/or mitigation action and [0020] - It should be appreciated that once a user starts an interaction sequence, the interaction sequence may continue with interactions being sent to one or more contact center's devices prior to the individual user starting a new interaction sequence. The detected interaction sequences are stored); obtaining batch features based on the aggregated real-time features (FIG. 6A and [0063]); and creating the batch feature vector based on the batch features (FIG. 6B and [0065]).
Regarding Claim 6;
Shaffer in view of Cifarelli disclose the method of Claim 1.
Shaffer further discloses wherein the batch features include at least one of: at least one aggregated fraud flag; an intent detected for each of communication sessions included in the batch period (FIG. 6A – Request for information about balance (i.e., Comment)); and information related to transactions detected in the communication sessions, comprising at least one of: a type of each of the transactions, an impact of each of the transactions, and a number of the transactions (FIG. 6A – Message/Event).
Regarding Claim 7;
Shaffer in view of Cifarelli disclose the method of Claim 1.
Shaffer further discloses wherein the detecting the agent fraud comprises: providing the batch feature vector as an input to a fraud detection model ([0018] and [0074] and [0111]); and outputting, by the fraud detection model, the agent fraud detection result, wherein the fraud detection model is pretrained via machine learning based on training data ([0074] and [0111]).
Regarding Claims 8-14; claims 8-14 are directed to a medium associated with the method claimed in claims 1-7. Claims 8-14 are similar in scope to claims 1-7, and are therefore rejected under similar rationale.
Regarding Claims 15-20; claims 15-20 are directed to a system associated with the method claimed in claims 1-7. Claims 15-20 are similar in scope to claims 1-7, and are therefore rejected under similar rationale. Further, regarding claim 15, Shaffer further discloses a customer service platform implemented by a processor and configured for extracting real-time features from a communication between a customer and an agent (FIG. 1 - Contact Management Server); and an artificial intelligence (AI)-based auditing platform implemented by a processor and configured for... (FIG. 1 – Contact Management Server and [0132] - For example, various components and modules may be distributed in manners not specifically described or illustrated herein, but that provide functionally similar results).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASFAND M SHEIKH whose telephone number is (571)272-1466. The examiner can normally be reached Mon-Fri: 7a-3p (MDT).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JESSICA LEMIEUX can be reached at (571)270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASFAND M SHEIKH/ Primary Examiner, Art Unit 3626