Prosecution Insights
Last updated: April 19, 2026
Application No. 18/467,381

AUTOMATED SECURITY MONITORING OF ONLINE AGENT-CUSTOMER INTERACTIONS USING MACHINE LEARNING

Non-Final OA (§101, §103)
Filed: Sep 14, 2023
Examiner: GODBOLD, DAVID GARRISON
Art Unit: 3628
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Twilio Inc.
OA Round: 3 (Non-Final)
Grant Probability: 22% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 1m
Grant Probability with Interview: 55%

Examiner Intelligence

Career Allow Rate: 22% (18 granted / 82 resolved; -30.0% vs TC avg). Grants only 22% of cases.
Interview Lift: +33.3% among resolved cases with interview (strong interview lift)
Avg Prosecution: 2y 1m (fast prosecutor); 34 applications currently pending
Total Applications: 116 across all art units (career history)
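The headline figures above are simple ratios over the examiner's 82 resolved cases. A minimal sketch of how the card values can be recomputed (helper names are ours, not the tool's; the card's +33.3% lift reflects its unrounded inputs, so a recomputation from the rounded 55% lands slightly off):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that granted, in percent."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Interview lift: percentage-point gap between the with-interview
    and without-interview allowance rates."""
    return rate_with - rate_without

career = allow_rate(18, 82)  # 18 granted / 82 resolved, as on the card
print(f"Career allow rate: {career:.0f}%")                         # 22%
print(f"Interview lift: {interview_lift(55.0, career):+.0f} pts")  # +33 pts
```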

Statute-Specific Performance

§101: 46.2% (+6.2% vs TC avg)
§103: 29.0% (-11.0% vs TC avg)
§102: 6.1% (-33.9% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 82 resolved cases.
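Each delta is the examiner's statute-specific rate minus the Tech Center average estimate; working backwards, all four rows imply the same estimated TC average of 40.0%. A quick check (the dict literal just restates the card):

```python
# Examiner's per-statute rate and delta vs Tech Center average, from the card above.
stats = {
    "§101": (46.2, +6.2),
    "§103": (29.0, -11.0),
    "§102": (6.1, -33.9),
    "§112": (17.7, -22.3),
}

# delta = rate - TC average, so the implied TC average is rate - delta.
for statute, (rate, delta) in stats.items():
    implied_tc_avg = round(rate - delta, 1)
    print(f"{statute}: {rate}% ({delta:+}% vs implied TC avg {implied_tc_avg}%)")
```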

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 4, 2025 has been entered.

Status of Claims

Claims 1-20 were previously pending and subject to a final rejection dated September 11, 2025. In the RCE submitted December 4, 2025, claims 1, 9, and 17 were amended. Therefore, claims 1-20 are currently pending and subject to the following non-final rejection.

Response to Arguments

Applicant's remarks on Pages 8-11 of the Response, regarding the previous rejection of the claims under 35 U.S.C. 101, have been fully considered and are not found persuasive.

On Pages 8-9 of the Response, Applicant argues "The amended claims are directed to specific technological solutions for automated security monitoring using machine learning models. … these limitations describe specific technological processes for automated threat detection and response, not abstract human activities. The claimed subject matter is analogous to the patent-eligible claims in McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299 (Fed. Cir. 2016), where the court found claims directed to automatic lip synchronization and facial expression animation to be patent-eligible because they improved existing technological processes through specific rules and techniques."

Examiner notes that the mere presence of additional elements (such as ML-readable feature vectors and ML models) does not preclude the claims from reciting an abstract idea at Step 2A, Prong One. In the instant case, the abstract ideas recited in the limitations collect and process data in order to determine if an "agent is placing [a] customer data at risk" (claim 1) and to further mitigate any determined risk. These limitations recite commercial interactions classified within the abstract idea category of "certain methods of organizing human activity". Examiner further notes that, in the case of McRO, "the court relied on the specification's explanation of how the particular rules recited in the claim enabled the automation of specific animation tasks that previously could only be performed subjectively by humans, when determining that the claims were directed to improvements in computer animation instead of an abstract idea." (MPEP 2106.05(a)). That is, McRO disclosed a specific technical problem within the field of animation that computers were incapable of solving prior to the invention, then provided a specific solution, reflected in the claimed invention, which allowed computers to perform tasks that were previously impossible for them to perform, thereby integrating the recited abstract idea into a practical application at Step 2A, Prong Two after having been determined to recite an abstract idea at Step 2A, Prong One. No such analogous improvements have been found or demonstrated in the instant case.

On Page 9 of the Response, Applicant argues "Even if the claims were directed to an abstract idea, they are integrated into a practical application that imposes meaningful limits on the abstract concept.
Claim 1 recites specific technological elements including 'collect agent activity data comprising one or more of: one or more stored files viewed by the agent in association with the live agent-customer interaction, or time of viewing of the one or more stored files' and 'process the agent activity data to generate one or more machine learning (ML)-readable feature vectors, each of the one or more ML-readable feature vectors corresponding to a point in a multi-dimensional feature space associated and representative of at least one pattern in the agent activity data.' These elements are not merely generic computer components but represent specific technological improvements to computer security systems."

Examiner notes, as discussed further in the detailed rejection below, that "collect agent activity data comprising one or more of: one or more stored files viewed by the agent in association with the live agent-customer interaction, or time of viewing of the one or more stored files" and "process the agent activity data to generate one or more … feature vectors, each of the one or more … feature vectors corresponding to a point in a multi-dimensional feature space associated and representative of at least one pattern in the agent activity data" are recitations of the abstract idea and unhelpful in bringing the claims to eligibility. The additional element of the one or more ML-readable feature vectors serves only to generally link the abstract idea to the field of machine learning. The specification broadly discloses the ML-readable feature vectors as "a digital … representation of an information … [that] may further include any suitable embedding and/or a similar data" (specification, para. 48); this high-level disclosure supports the findings of the analysis below that the ML-readable feature vectors fail to integrate the abstract idea into a practical application.

Examiner additionally notes that no technological improvements to computer security systems are presented other than in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art); thus the Examiner cannot determine that the claims improve the technology (see MPEP 2106.04(d)(1)).

On Page 9 of the Response, Applicant argues "The 'remotely causing an agent security application operating on a computing device accessible to the agent, to generate a re-authentication request to the agent,' as recited in claim 1, represents a specific technological solution that automatically responds to detected security threats in real-time. This is not a generic application of computing technology but a specific improvement to computer security systems."

Examiner notes, as discussed further in the detailed rejection below, that "remotely causing, accessible to the agent, to generate a re-authentication request to the agent," is a recitation of the abstract idea and unhelpful in bringing the claims to eligibility. The additional elements of "an agent security application" and "a computing device" receive broad levels of disclosure within the specification (the agent security application simply as "an agent security monitoring (ASM) application" (specification, para. 10), and the "computing device" by way of broad examples (specification, para. 14)), which supports the analysis findings that these additional elements are used merely as tools to perform the abstract idea (i.e., "apply it") and fail to integrate the abstract idea into a practical application. Examiner further notes that certain features upon which Applicant relies (i.e., "detect[ing] threats in real time" (emphasis added)) are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
On Page 10 of the Response, Applicant argues "Claims 2, 10, and 18 specifically recite 'a voice recognition ML model' that performs the specific function of determining 'that the voice sample of the agent collected during the live agent-customer interaction does not match one or more stored voice samples of the agent.' This represents a specific technological application of voice recognition technology for security purposes. Similarly, claims 3 and 11 recite 'an anomaly detection ML model' that performs the specific function of detecting 'an anomaly in the agent activity data,' and claims 4, 12, and 19 recite 'an anomaly evaluation ML model' that processes detected anomalies to generate security risk indications. These represent specific technological implementations, not abstract concepts."

Examiner notes that, while the "voice recognition ML model" and the "anomaly detection ML model" do recite technical elements and not abstract concepts, they do not inherently bring the entirety of the claims to eligibility. Here, the "voice recognition ML model" serves to generally link the abstract idea of "determining 'that the voice sample of the agent collected during the live agent-customer interaction does not match one or more stored voice samples of the agent'" to the field of machine learning, and the "anomaly detection ML model" similarly serves to generally link the abstract idea of "process[ing] detected anomalies to generate security risk indications" to the field of machine learning. Further, the broad disclosures of these additional elements within the specification support the findings that the additional elements fail to integrate the abstract idea into a practical application at Step 2A, Prong Two.

On Page 10 of the Response, Applicant argues "the claims include significantly more than the alleged abstract idea through their specific technological improvements to computer security systems. The combination of 'collecting agent activity data,' 'generating one or more machine learning (ML)-readable feature vectors representative of at least one pattern in the agent activity data,' and 'process the agent activity data to generate one or more machine learning (ML)-readable feature vectors,' as recited in claims 1, 9, and 17, represents a specific technological solution that improves the functioning of computer security systems."

Examiner notes, as discussed above and further discussed in the detailed rejection below, that "collecting agent activity data," "generating one or more … feature vectors representative of at least one pattern in the agent activity data," and "process[ing] the agent activity data to generate one or more … feature vectors" are recitations of the abstract idea and unhelpful in bringing the claims to eligibility. Similar to the analysis discussed above with regard to Step 2A, Prong Two, at Step 2B the additional element of the machine learning (ML)-readable feature vectors merely amounts to generally linking the performance of the abstract idea to the field of machine learning, and fails to amount to "significantly more". Examiner again notes that no technological improvements to the functioning of computer security systems are presented other than in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art); thus the Examiner cannot determine that the claims improve the technology (see MPEP 2106.04(d)(1)).

On Page 10 of the Response, Applicant argues "The automatic triggering of remedial actions, including 'remotely causing an agent security application operating on a computing device accessible to the agent, to generate a re-authentication request to the agent,' as recited in claims 1, 9, and 17, provides a technological improvement over conventional security systems by enabling real-time automated responses to detected threats. The voice recognition functionality recited in claims 2, 10, and 18, the anomaly detection functionality recited in claims 3 and 11, and the anomaly evaluation functionality recited in claims 4, 12, and 19 each represent specific technological improvements that enhance computer security through automated monitoring and threat detection."

Examiner directs Applicant to the discussion above addressing these same arguments from the perspective of Step 2A, Prong Two, as the same rationale applies here at Step 2B.

On Pages 10-11 of the Response, Applicant argues "These technological improvements are analogous to those found patent-eligible in Bascom Glob. Internet Servs. v. AT&T Mobility LLC, 827 F.3d 1341 (Fed. Cir. 2016), where the court found that specific technological implementations of content filtering provided sufficient inventive concept even when applied to abstract ideas. The dependent claims 5-8, 13-16, and 20 are patent-eligible for the same reasons as their respective independent claims, as they further specify the technological implementations recited in the independent claims."

Examiner notes that Bascom specifically finds eligibility when the claims are analyzed at Step 2B as a whole/ordered combination. Specifically, the claims in Bascom arranged the known components of the filtering system so that certain aspects resided at the ISP side rather than the device side, improving the system by preventing device users from bypassing the components, as they had previously been able to do when the components were located at the device. No analogous technical problems and technical solutions have been put forth in the instant claims or specification. Further, when analyzed as a whole/ordered combination, as discussed in the detailed rejection below, the additional elements still amount to merely "apply it" or generally linking the abstract idea to a field of use.
Applicant's remarks on Pages 11-13 of the Response, regarding the previous rejection of the claims under 35 U.S.C. 103, have been fully considered but are moot in light of the amended claims.

Claim Objections

Claim 17 is objected to because of the following informalities: limitation 3 recites "stored fi" and should recite "stored files". Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1

Claims 1-8 are directed to a server (i.e., a machine); claims 9-16 are directed to a method (i.e., a process); claims 17-20 are directed to a non-transitory storage medium (i.e., a machine). Therefore, claims 1-20 all fall within one of the four statutory categories of invention.
Step 2A, Prong One

Independent claim 1 substantially recites: storing customer data associated with the customer; collecting agent activity data comprising one or more of: one or more stored files viewed by the agent in association with the live agent-customer interaction, or duration of viewing of the one or more stored files; processing the agent activity data to generate one or more feature vectors, each of the one or more feature vectors corresponding to a point in a multi-dimensional feature space associated and representative of at least one pattern in the agent activity data; processing the one or more feature vectors to generate an indication that the agent is placing customer data at risk; and automatically causing, responsive to the indication that the agent is placing the customer data at risk, one or more remedial actions to be performed, the one or more remedial actions comprising at least: remotely causing, accessible to the agent, to generate a re-authentication request to the agent.

Independent claims 9 and 17 substantially recite: collecting agent activity data comprising one or more of: one or more stored files viewed by the agent in association with the live agent-customer interaction, or duration of viewing of the one or more stored files; processing the agent activity data to generate one or more feature vectors, each of the one or more feature vectors corresponding to a point in a multi-dimensional feature space associated and representative of at least one pattern in the agent activity data; processing the one or more feature vectors to generate an indication that the agent is placing the customer data at risk; and responsive to the indication that the agent is placing the customer data at risk, automatically causing one or more remedial actions to be performed, the one or more remedial actions comprising at least: remotely causing, accessible to the agent, to generate a re-authentication request to the agent.
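For readers less familiar with the recited pipeline (activity data, a feature vector as a point in a feature space, model-based scoring, an automated remedial action), the steps can be caricatured in a few lines. This is a purely illustrative sketch: every name and value is hypothetical, and the distance-based scoring merely stands in for the claimed ML models; it carries no weight on claim scope.

```python
import math

def to_feature_vector(files_viewed: int, view_seconds: float) -> list[float]:
    """Encode agent activity data as a point in a (here 2-D) feature space."""
    return [float(files_viewed), view_seconds]

def risk_indication(vec: list[float], baseline: list[float], threshold: float) -> bool:
    """Stand-in for the claimed ML models: flag risk when the activity point
    lies far from a baseline pattern in feature space."""
    return math.dist(vec, baseline) > threshold

def remedial_action(at_risk: bool) -> str:
    """Stand-in for remotely causing a re-authentication request to the agent."""
    return "re-authentication request" if at_risk else "no action"

# Hypothetical session: 40 files viewed for 900 s vs a 5-file / 120 s baseline.
vec = to_feature_vector(files_viewed=40, view_seconds=900.0)
print(remedial_action(risk_indication(vec, baseline=[5.0, 120.0], threshold=100.0)))
# prints "re-authentication request"
```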
The limitations stated above are processes/functions that, under the broadest reasonable interpretation, cover "certain methods of organizing human activity" (commercial or legal interactions) of ensuring security of interactions. Therefore, the claims recite an abstract idea.

Step 2A, Prong Two

The judicial exception is not integrated into a practical application. Claims 1, 9, and 17 as a whole amount to: (i) merely invoking generic components as a tool to perform the abstract idea or "apply it" (or an equivalent), and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use. The claims recite the additional elements of: (i) a memory device (claim 1), (ii) one or more processing devices/a processing device (claims 1, 9), (iii) one or more machine learning (ML)-readable feature vectors (claims 1, 9, 17), (iv) one or more ML models (claims 1, 9, 17), (v) an agent security application (claims 1, 9, 17), and (vi) a computing device (claims 1, 9, 17). The additional elements of (i) a memory device, (ii) one or more processing devices/a processing device, (v) an agent security application, and (vi) a computing device are recited at a high level of generality (see [0054] of Applicant's specification discussing the memory, [0046] discussing the one or more processing devices/a processing device, [0010] discussing the agent security application, and [0014] discussing the computing device) such that, when viewed as a whole/ordered combination, they amount to no more than mere instructions to apply the judicial exception using generic computer components, or "apply it" (see MPEP 2106.05(f)).
The additional elements of (iii) one or more machine learning (ML)-readable feature vectors and (iv) one or more ML models are recited at a high level of generality (see [0048] of Applicant's specification discussing the one or more machine learning (ML)-readable feature vectors, and [0019, 0026, 0029] discussing the one or more ML models) such that, when viewed as a whole/ordered combination, they do no more than generally link the use of the judicial exception to a particular technological environment or field of use (i.e., machine learning and mobile applications) (see MPEP 2106.05(h)). Accordingly, these additional elements, when viewed as a whole/ordered combination [see Figures 1 and 2 showing all the additional elements (i) a memory device, (ii) one or more processing devices/a processing device, (iii) one or more machine learning (ML)-readable feature vectors, (iv) one or more ML models, (v) an agent security application, and (vi) a computing device in combination], do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claims are directed to an abstract idea.

Step 2B

As discussed above with respect to Step 2A, Prong Two, the additional elements amount to no more than: (i) "apply it" (or an equivalent), and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use, and are not a practical application of the abstract idea. The same analysis applies here in Step 2B: (i) merely invoking the generic components as a tool to perform the abstract idea, or "apply it" (see MPEP 2106.05(f)), and (ii) generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)), does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B.
Thus, even when viewed as a whole/ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claims 1, 9, and 17 are ineligible.

Dependent claims 7, 8, 15, and 16 merely narrow the previously recited abstract idea limitations. For the reasons described above with respect to claims 1, 9, and 17, these judicial exceptions are not meaningfully integrated into a practical application or significantly more than the abstract idea. Thus, claims 7, 8, 15, and 16 are also ineligible.

Step 2A, Prong Two

Dependent claims 2, 10, and 18 further narrow the previously recited abstract idea limitations and further recite the additional abstract idea limitations of: wherein the feature vectors comprise a representation of a voice sample of the agent collected during the live agent-customer interaction, and determining that the voice sample of the agent collected during the live agent-customer interaction does not match one or more stored voice samples of the agent. Claims 2, 10, and 18 also recite the additional element of a voice recognition ML model, which is recited at a high level of generality (see [0026] of Applicant's PG Publication disclosing the voice recognition ML model) such that, when viewed as a whole/ordered combination, the additional element does no more than generally link the use of the judicial exception to a particular technological environment or field of use (i.e., machine learning) (see MPEP 2106.05(h)). Accordingly, the additional element, when viewed individually and as a whole/ordered combination, does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Thus, the claims are directed to an abstract idea.
Step 2B

As discussed above with respect to Step 2A, Prong Two, the additional element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use, and is not a practical application of the abstract idea. The same analysis applies here in Step 2B: generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)) does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Therefore, the additional element of a voice recognition ML model does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claims 2, 10, and 18 are ineligible.

Step 2A, Prong Two

Dependent claims 3 and 11 further narrow the previously recited abstract idea limitations and further recite the additional abstract idea limitation of: wherein to process feature vectors, processing feature vectors to detect an anomaly in the agent activity data. Claims 3 and 11 also recite the additional element of an anomaly detection ML model, which is recited at a high level of generality (see [0029] of Applicant's PG Publication disclosing the anomaly detection ML model) such that, when viewed as a whole/ordered combination, the additional element does no more than generally link the use of the judicial exception to a particular technological environment or field of use (i.e., machine learning) (see MPEP 2106.05(h)). Accordingly, the additional element, when viewed individually and as a whole/ordered combination, does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Thus, the claims are directed to an abstract idea.

Step 2B

As discussed above with respect to Step 2A, Prong Two, the additional element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use, and is not a practical application of the abstract idea. The same analysis applies here in Step 2B: generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)) does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Therefore, the additional element of an anomaly detection ML model does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claims 3 and 11 are ineligible.

Step 2A, Prong Two

Dependent claims 4, 12, and 19 further narrow the previously recited abstract idea limitations and further recite the abstract idea limitation of: wherein to process feature vectors, processing, responsive to the detected anomaly in the agent activity data, a representation of at least a portion of the agent activity data to generate the indication that the customer data is at risk. Claims 4, 12, and 19 also recite the additional element of an anomaly evaluation ML model, which is recited at a high level of generality (see [0019] of Applicant's PG Publication disclosing the anomaly evaluation ML model) such that, when viewed as a whole/ordered combination, the additional element does no more than generally link the use of the judicial exception to a particular technological environment or field of use (i.e., machine learning) (see MPEP 2106.05(h)).
Accordingly, the additional element, when viewed individually and as a whole/ordered combination, does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Thus, the claims are directed to an abstract idea.

Step 2B

As discussed above with respect to Step 2A, Prong Two, the additional element amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use, and is not a practical application of the abstract idea. The same analysis applies here in Step 2B: generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)) does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Therefore, the additional element of an anomaly evaluation ML model does not integrate the abstract idea into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, even when viewed as a whole/ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claims 4, 12, and 19 are ineligible.

Dependent claims 5, 6, 13, 14, and 20 merely narrow the previously recited abstract idea limitations. For the reasons described above with respect to claims 4, 12, and 19, these judicial exceptions are not meaningfully integrated into a practical application or significantly more than the abstract idea. Thus, claims 5, 6, 13, 14, and 20 are also ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3-7, 9, 11-15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Davis (US 20160065732) (hereafter Davis) in view of Daruna (US 20220207506) (hereafter Daruna), and further in view of Badhwar (US 20210194883) (hereafter Badhwar).
In regards to claim 1, Davis discloses a computing server of an interaction center that supports a live agent-customer interaction involving a customer and an agent of the interaction center, the server comprising: a memory device storing customer data associated with the customer, and (Para. 23-24, 33) ("agents of the contact center interact directly with the customers. For example, a call center illustrates one type of contact center and is configured to facilitate interactions and/or communications between agents and customers through a telephone network, such as handling incoming and outgoing calls. a contact center is able to handle any type of interaction and/or communication between an agent and a customer." "Computing system 100 broadly represents any single or multi-processor computing device or system capable of executing computer-readable instructions. Examples of computing system 100 include, without limitation, … servers," "the computing system may execute a client application that is configured to access a database containing information related to a customer (i.e. a memory device storing customer data associated with the customer).")

Davis discloses one or more processing devices communicatively coupled to the memory device, the one or more processing devices to: (Para. 33) ("an example of a computing system 100 capable of implementing embodiments of the present disclosure (i.e. one or more processing devices). Computing system 100 broadly represents any single or multi-processor computing device or system capable of executing computer-readable instructions." "the computing system may execute a client application that is configured to access a database containing information related to a customer (i.e. one or more processing devices communicatively coupled to the memory device).")

Davis discloses collect agent activity data comprising one or more of: one or more stored files viewed by the agent in association with the live agent-customer interaction, or duration of viewing of the one or more stored files; (Para. 76, 95) ("the agent's duration within identified forms or pages of the application (i.e. collect agent activity data comprising duration of viewing of the one or more stored files)" "determining that the first agent is accessing a client based resource at the first time (i.e. collect agent activity data comprising one or more of: one or more stored files viewed by the agent in association with the live agent-customer interaction). The resource is associated with a client. Again, this determination is made through monitoring of the agent and/or the activities performed on resources of the workstation. As previously described, the resource may be a database containing personal and identifiable information of employees, customers, partners, etc., all in association with a client.")

Davis discloses process the agent activity data to generate representative data of at least one pattern in the agent activity data; (Para. 68, 76) ("determining that a potential fraudulent activity is being conducted in a contact center, wherein the contact center comprises a plurality of workstations attended to by a plurality of agents." "potential fraudulent activity by an agent occurs when an agent is accessing forms and exhibits characteristics outside of a statistical average including actions of other agents or that particular agent (i.e. process the agent activity data to generate representative data of at least one pattern in the agent activity data). For example, the agent's duration within identified forms or pages of the application is compared to the normal baseline based upon statistical averages of all contact center agents to determine any deviation.")

Davis discloses process the representative data to generate an indication that the agent is placing customer data at risk; and (Para. 77) ("At 420, the method includes determining that the potential fraudulent activity occurs at a first workstation. … identifying information of a computing resource upon which the activity occurred may be cross referenced to determine the corresponding workstation within which the activity was performed. Upon identification, additional information related to the workstation may be gathered, such as determining the agent (i.e. process the representative data to generate an indication that the agent is placing customer data at risk).")

Davis discloses automatically cause, responsive to the indication that the agent is placing the customer data at risk, one or more remedial actions to be performed, (Para. 78) ("the method includes providing an event notification of the potential fraudulent activity. The event notification may include the information used to determine that potential fraudulent activity has occurred, and may include information identifying the workstation, and any additional information identifying the agent involved in the activity.")

Davis discloses a computing device accessible to the agent (Para. 57) ("each workstation (i.e. a computing device accessible to the agent) in a contact center may be organized as a scalable unit, and contains essential components to enable an attending agent to communicate with a customer, and to provide services to that customer on behalf of a client. … workstation (i.e. a computing device accessible to the agent) 300 includes a multi-purpose computing resource 310. In one implementation, computing resource 310 is configured to access client resources. For example, when handling or involved in an interaction with a customer")

Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses the representative data of at least one pattern in the agent activity data of Davis is one or more machine learning (ML)-readable feature vectors, each of the one or more ML-readable feature vectors corresponding to a point in a multi-dimensional feature space associated and representative of at least one pattern in the agent activity data of Davis; (Para. 112) ("the feature extraction engine 130 may encode an electronic activity and an associated user-specified data (i.e. the representative data of at least one pattern in the agent activity data of Davis) 212 (e.g., a user-specified value or quantity, or other data) into an activity feature vector 243 for use by a machine learning model (i.e. one or more machine learning (ML)-readable feature vectors, each of the one or more ML-readable feature vectors corresponding to a point in a multi-dimensional feature space associated and representative of at least one pattern in the agent activity data of Davis) to predict whether the user-specified data 212 is anomalous")

Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses process the representative data to generate an indication of Davis is process the one or more ML-readable feature vectors using one or more ML models to generate an indication of Davis (Para. 6, 112) ("an anomalous attribute classification model to ingest the feature vector to determine an anomaly classification based on learned model parameters, where the anomaly classification includes one of: i) an anomalous user-specified value classification, or ii) a non-anomalous user-specified value classification; generate a dispute graphical user interface (GUI) including an alert message (i.e.
process the one or more ML-readable feature vectors using one or more ML models to generate an indication of Davis) … where the alert message represents the anomalous user-specified value classification (i.e. process the representative data to generate an indication of Davis)” “an activity feature vector 243 for use by a machine learning model to predict whether the user-specified data 212 is anomalous”) Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis with the anomalous activity detection of Daruna in order to improve the system’s security and preemptive/proactive measures. (Daruna – Para. 22) Davis in view of Daruna does not explicitly disclose, however Badhwar, in the same field of endeavor, discloses the one or more remedial actions of Davis comprising at least: remotely causing an agent security application operating on a computing device accessible to the agent of Davis, to generate a re-authentication request to the agent of Davis. (Para. 55, 58) (“Once an action is detected, the server system assesses the risk of the action of the user on the web application to determine whether a step-up authentication is required to re-authenticate and authorize the user to perform the action (step 214).” “If the action is rejected, a step-up authorization is required and the server system passes the user's action to a step-up authentication workflow 222 (step 216). In this case, the server system also assesses which step-up authentication method to provide to the user device (i.e. remotely causing an agent security application operating on a computing device accessible to the agent of Davis, to generate a re-authentication request to the agent of Davis) after assessing whether or not the user (i.e. the agent of Davis) needs to complete the step-up authentication workflow 222. 
For example, if a cumulative risk score of the user action exceeds a threshold risk score, the system determines that a strong step-up authentication method (e.g., a multiple and dynamic step-up authentication method such as identity proofing, authenticators, client re-authentication, or bio-metric authentication) is required. In contrast, if a cumulative risk score of the user action is lower than a threshold risk score, the system determines that a weaker step-up authentication method (e.g., a singular step-up dynamic authentication method such as one time password (OTP) or Captcha) is sufficient to re-authenticate and authorize the user to perform the current action.”) Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis in view of Daruna with the step-up authentication of Badhwar in order to improve the invention’s ability to prevent exploitation of weak workflows and loopholes and increase security of customer data. (Badhwar – Para. 19) In regards to claim 3, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 1. Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses wherein to process the one or more ML-readable feature vectors, the one or more processing devices of Davis are to: process, using an anomaly detection ML model, the one or more ML-readable feature vectors to detect an anomaly in the agent activity data of Davis. (Para. 
111) (“the activity feature vector 243 may be configured for ingestion by a machine learning model to produce an anomaly classification”) Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis with the anomalous activity detection of Daruna in order to improve the system’s security and preemptive/proactive measures. (Daruna – Para. 22) In regards to claim 4, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 3. Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses wherein to process the one or more ML-readable feature vectors, the one or more processing devices of Davis are further to: process, responsive to the detected anomaly in the agent activity data of Davis, using an anomaly evaluation ML model, a representation of at least a portion of the agent activity data of Davis to generate the indication that the customer data is at risk of Davis. (Para. 5) (“an anomalous attribute classification model to ingest the feature vector to determine an anomaly classification based on learned model parameters … generating, by the at least one processor, a dispute graphical user interface (GUI) including an alert message and a dispute interface element, where the alert message represents the anomalous user-specified value classification of an incorrect user-specified value of the electronic activity verification”) Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis with the anomalous activity detection of Daruna in order to improve the system’s security and preemptive/proactive measures. (Daruna – Para. 22) In regards to claim 5, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 4. 
Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses wherein the anomaly evaluation ML model is trained using a training dataset comprising a training input and a target output, wherein the training input comprises a representation of a training activity data and the target output comprises a classification output indicative of whether the customer data is at risk of Davis. (Para. 90) (“the attributes may be encoded into a training feature vector 241 by a feature vector generator 240. In some embodiments, the training feature vector 241 may be configured for ingestion by a machine learning model to produce an anomaly classification that indicates whether the user-specified data 211 is anomalous or accurate.”) Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis with the anomalous activity detection of Daruna in order to improve the system’s security and preemptive/proactive measures. (Daruna – Para. 22) In regards to claim 6, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 4. Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses wherein the anomaly evaluation ML model is pre-trained using one or more initial training datasets prior to a deployment of the anomaly evaluation ML model and re-trained using one or more additional training datasets after the deployment of the anomaly evaluation ML model. (Para. 121) (“rather than updating the attribute accuracy model engine 140, the attribute accuracy model engine 140 may be retrained on a rolling window of electronic activities. 
By retraining rather than updating the model parameters, the attribute accuracy model engine 140 may be trained against a user's recent behaviors.”) Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis with the anomalous activity detection of Daruna in order to improve the system’s security and preemptive/proactive measures. (Daruna – Para. 22) In regards to claim 7, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 1. Davis discloses wherein the agent activity data comprises at least one or more of: a record of the agent accessing the customer data in association with the live agent-customer interaction, (Para. 58) (“Workstation 300 also includes monitoring unit 330, which is configured to track activity of components within workstation 300 and/or activity of the corresponding agent … monitoring unit 330 can be configured to track activity performed on the communication resource 320, … Monitoring unit 330 is also able to record conversations held through the communication resource 320. … when computing resource 310 is executing an application of a client, when computing resource 310 is accessing a particular resource of the client, and any other quantifiable activity that is conducted on the computing resource 310.”) In regards to claim 9, Davis discloses a method of automated protection of customer data associated with a customer by an interaction center that supports a live agent-customer interaction involving the customer and an agent of the interaction center, the method comprising: (Para. 9) (“a computer implemented method for detecting fraud is disclosed. 
In other embodiments, a non-transitory computer readable medium is disclosed having computer-executable instructions for causing a computer system to perform a method for detecting fraud.”) Davis discloses collecting, by a processing device, agent activity data comprising one or more of: one or more stored files viewed by the agent in association with the live agent-customer interaction, or duration of viewing of the one or more stored files (Para. 76, 95) (“the agent's duration within identified forms or pages of the application (i.e. collecting, by a processing device, agent activity data comprising duration of viewing of the one or more stored files)” “determining that the first agent is accessing a client based resource at the first time (i.e. collecting, by a processing device, agent activity data comprising one or more of: one or more stored files viewed by the agent in association with the live agent-customer interaction). The resource is associated with a client. Again, this determination is made through monitoring of the agent and/or the activities performed on resources of the workstation. As previously described, the resource may be a database containing personal and identifiable information of employees, customers, partners, etc., all in association with a client.”) Davis discloses processing, by the processing device, the agent activity data to generate representative data of at least one pattern in agent activity data; (Para. 68, 76) (“determining that a potential fraudulent activity is being conducted in a contact center, wherein the contact center comprises a plurality of workstations attended to by a plurality of agents.” “potential fraudulent activity by an agent occurs when an agent is accessing forms and exhibits characteristics outside of a statistical average including actions of other agents or that particular agent (i.e. processing the agent activity data to generate representative data of at least one pattern in the agent activity data). 
For example, the agent's duration within identified forms or pages of the application is compared to the normal baseline based upon statistical averages of all contact center agents to determine any deviation.”) Davis discloses processing the representative data to generate an indication that the agent is placing the customer data at risk; and (Para. 77) (“At 420, the method includes determining that the potential fraudulent activity occurs at a first workstation. … identifying information of a computing resource upon which the activity occurred may be cross referenced to determine the corresponding workstation within which the activity was performed. Upon identification, additional information related to the workstation may be gathered, such as determining the agent (i.e. process the representative data to generate an indication that the agent is placing customer data at risk).”) Davis discloses responsive to the indication that the agent is placing the customer data at risk, automatically causing, by the processing device, one or more remedial actions to be performed, (Para. 78) (“the method includes providing an event notification of the potential fraudulent activity. The event notification may include the information used to determine that potential fraudulent activity has occurred, and may include information identifying the workstation, and any additional information identifying the agent involved in the activity.”) Davis discloses a computing device accessible to the agent (Para. 57) (“each workstation (i.e. a computing device accessible to the agent) in a contact center may be organized as a scalable unit, and contains essential components to enable an attending agent to communicate with a customer, and to provide services to that customer on behalf of a client. … workstation (i.e. a computing device accessible to the agent) 300 includes a multi-purpose computing resource 310. 
In one implementation, computing resource 310 is configured to access client resources. For example, when handling or involved in an interaction with a customer”) Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses the representative data of at least one pattern in the agent activity data of Davis is one or more machine learning (ML)-readable feature vectors, each of the one or more ML-readable feature vectors corresponding to a point in a multi-dimensional feature space associated and representative of at least one pattern in the agent activity data of Davis; (Para. 112) (“the feature extraction engine 130 may encode an electronic activity and an associated user-specified data (i.e. the representative data of at least one pattern in the agent activity data of Davis) 212 (e.g., a user-specified value or quantity, or other data) into an activity feature vector 243 for use by a machine learning model (i.e. one or more machine learning (ML)-readable feature vectors, each of the one or more ML-readable feature vectors corresponding to a point in a multi-dimensional feature space associated and representative of at least one pattern in the agent activity data of Davis) to predict whether the user-specified data 212 is anomalous”) Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses process the representative data to generate an indication of Davis is process the one or more ML-readable feature vectors using one or more ML models to generate an indication of Davis (Para. 6, 112) (“an anomalous attribute classification model to ingest the feature vector to determine an anomaly classification based on learned model parameters, where the anomaly classification includes one of: i) an anomalous user-specified value classification, or ii) a non-anomalous user-specified value classification; generate a dispute graphical user interface (GUI) including an alert message (i.e. 
process the one or more ML-readable feature vectors using one or more ML models to generate an indication of Davis) … where the alert message represents the anomalous user-specified value classification (i.e. process the representative data to generate an indication of Davis)” “an activity feature vector 243 for use by a machine learning model to predict whether the user-specified data 212 is anomalous”) Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis with the anomalous activity detection of Daruna in order to improve the system’s security and preemptive/proactive measures. (Daruna – Para. 22) Davis in view of Daruna does not explicitly disclose, however Badhwar, in the same field of endeavor, discloses the one or more remedial actions of Davis comprising at least: remotely causing an agent security application operating on a computing device accessible to the agent of Davis, to generate a re-authentication request to the agent of Davis. (Para. 55, 58) (“Once an action is detected, the server system assesses the risk of the action of the user on the web application to determine whether a step-up authentication is required to re-authenticate and authorize the user to perform the action (step 214).” “If the action is rejected, a step-up authorization is required and the server system passes the user's action to a step-up authentication workflow 222 (step 216). In this case, the server system also assesses which step-up authentication method to provide to the user device (i.e. remotely causing an agent security application operating on a computing device accessible to the agent of Davis, to generate a re-authentication request to the agent of Davis) after assessing whether or not the user (i.e. the agent of Davis) needs to complete the step-up authentication workflow 222. 
For example, if a cumulative risk score of the user action exceeds a threshold risk score, the system determines that a strong step-up authentication method (e.g., a multiple and dynamic step-up authentication method such as identity proofing, authenticators, client re-authentication, or bio-metric authentication) is required. In contrast, if a cumulative risk score of the user action is lower than a threshold risk score, the system determines that a weaker step-up authentication method (e.g., a singular step-up dynamic authentication method such as one time password (OTP) or Captcha) is sufficient to re-authenticate and authorize the user to perform the current action.”) Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis in view of Daruna with the step-up authentication of Badhwar in order to improve the invention’s ability to prevent exploitation of weak workflows and loopholes and increase security of customer data. (Badhwar – Para. 19) In regards to claim 11, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 9. The remainder of this claim is rejected using the same rationale as claim 3. In regards to claim 12, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 11. The remainder of this claim is rejected using the same rationale as claim 4. In regards to claim 13, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 12. The remainder of this claim is rejected using the same rationale as claim 5. In regards to claim 14, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 12. The remainder of this claim is rejected using the same rationale as claim 6. In regards to claim 15, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 9. 
The remainder of this claim is rejected using the same rationale as claim 7. In regards to claim 17, Davis discloses a non-transitory computer-readable storage medium storing instructions that, when executed by a processing device of an interaction center that supports a live agent-customer interaction involving a customer and an agent of the interaction center, cause the processing device to: (Para. 9) (“a non-transitory computer readable medium is disclosed having computer-executable instructions for causing a computer system to perform a method for detecting fraud.”) Davis discloses collect agent activity data comprising one or more of: one or more stored files viewed by the agent in association with the live agent-customer interaction, or duration of viewing of the one or more stored files (Para. 76, 95) (“the agent's duration within identified forms or pages of the application (i.e. collect agent activity data comprising duration of viewing of the one or more stored files)” “determining that the first agent is accessing a client based resource at the first time (i.e. collect agent activity data comprising one or more of: one or more stored files viewed by the agent in association with the live agent-customer interaction). The resource is associated with a client. Again, this determination is made through monitoring of the agent and/or the activities performed on resources of the workstation. As previously described, the resource may be a database containing personal and identifiable information of employees, customers, partners, etc., all in association with a client.”) Davis discloses process the agent activity data to generate representative data of at least one pattern in the agent activity data; (Para. 
68, 76) (“determining that a potential fraudulent activity is being conducted in a contact center, wherein the contact center comprises a plurality of workstations attended to by a plurality of agents.” “potential fraudulent activity by an agent occurs when an agent is accessing forms and exhibits characteristics outside of a statistical average including actions of other agents or that particular agent (i.e. process the agent activity data to generate representative data of at least one pattern in the agent activity data). For example, the agent's duration within identified forms or pages of the application is compared to the normal baseline based upon statistical averages of all contact center agents to determine any deviation.”) Davis discloses process the representative data to generate an indication that the agent is placing customer data at risk; and (Para. 77) (“At 420, the method includes determining that the potential fraudulent activity occurs at a first workstation. … identifying information of a computing resource upon which the activity occurred may be cross referenced to determine the corresponding workstation within which the activity was performed. Upon identification, additional information related to the workstation may be gathered, such as determining the agent (i.e. process the representative data to generate an indication that the agent is placing customer data at risk).”) Davis discloses automatically cause, responsive to the indication that the agent is placing the customer data at risk, one or more remedial actions to be performed (Para. 78) (“the method includes providing an event notification of the potential fraudulent activity. 
The event notification may include the information used to determine that potential fraudulent activity has occurred, and may include information identifying the workstation, and any additional information identifying the agent involved in the activity.”) Davis discloses a computing device accessible to the agent (Para. 57) (“each workstation (i.e. a computing device accessible to the agent) in a contact center may be organized as a scalable unit, and contains essential components to enable an attending agent to communicate with a customer, and to provide services to that customer on behalf of a client. … workstation (i.e. a computing device accessible to the agent) 300 includes a multi-purpose computing resource 310. In one implementation, computing resource 310 is configured to access client resources. For example, when handling or involved in an interaction with a customer”) Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses the representative data of at least one pattern in the agent activity data of Davis is one or more machine learning (ML)-readable feature vectors, each of the one or more ML-readable feature vectors corresponding to a point in a multi-dimensional feature space associated and representative of at least one pattern in the agent activity data of Davis; (Para. 112) (“the feature extraction engine 130 may encode an electronic activity and an associated user-specified data (i.e. the representative data of at least one pattern in the agent activity data of Davis) 212 (e.g., a user-specified value or quantity, or other data) into an activity feature vector 243 for use by a machine learning model (i.e. 
one or more machine learning (ML)-readable feature vectors, each of the one or more ML-readable feature vectors corresponding to a point in a multi-dimensional feature space associated and representative of at least one pattern in the agent activity data of Davis) to predict whether the user-specified data 212 is anomalous”) Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses process the representative data to generate an indication of Davis is process the one or more ML-readable feature vectors using one or more ML models to generate an indication of Davis (Para. 6, 112) (“an anomalous attribute classification model to ingest the feature vector to determine an anomaly classification based on learned model parameters, where the anomaly classification includes one of: i) an anomalous user-specified value classification, or ii) a non-anomalous user-specified value classification; generate a dispute graphical user interface (GUI) including an alert message (i.e. process the one or more ML-readable feature vectors using one or more ML models to generate an indication of Davis) … where the alert message represents the anomalous user-specified value classification (i.e. process the representative data to generate an indication of Davis)” “an activity feature vector 243 for use by a machine learning model to predict whether the user-specified data 212 is anomalous”) Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis with the anomalous activity detection of Daruna in order to improve the system’s security and preemptive/proactive measures. (Daruna – Para. 
22) Davis in view of Daruna does not explicitly disclose, however Badhwar, in the same field of endeavor, discloses the one or more remedial actions of Davis comprising at least: remotely causing an agent security application operating on a computing device accessible to the agent of Davis, to generate a re-authentication request to the agent of Davis. (Para. 55, 58) (“Once an action is detected, the server system assesses the risk of the action of the user on the web application to determine whether a step-up authentication is required to re-authenticate and authorize the user to perform the action (step 214).” “If the action is rejected, a step-up authorization is required and the server system passes the user's action to a step-up authentication workflow 222 (step 216). In this case, the server system also assesses which step-up authentication method to provide to the user device (i.e. remotely causing an agent security application operating on a computing device accessible to the agent of Davis, to generate a re-authentication request to the agent of Davis) after assessing whether or not the user (i.e. the agent of Davis) needs to complete the step-up authentication workflow 222. For example, if a cumulative risk score of the user action exceeds a threshold risk score, the system determines that a strong step-up authentication method (e.g., a multiple and dynamic step-up authentication method such as identity proofing, authenticators, client re-authentication, or bio-metric authentication) is required. 
In contrast, if a cumulative risk score of the user action is lower than a threshold risk score, the system determines that a weaker step-up authentication method (e.g., a singular step-up dynamic authentication method such as one time password (OTP) or Captcha) is sufficient to re-authenticate and authorize the user to perform the current action.”) Therefore, it would be obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis in view of Daruna with the step-up authentication of Badhwar in order to improve the invention’s ability to prevent exploitation of weak workflows and loopholes and increase security of customer data. (Badhwar – Para. 19) Claims 2, 10, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Davis in view of Daruna and further in view of Badhwar and even further in view of Wasserblat (20060285665) (hereafter Wasserblat) and even further in view of Austraat (US 20240232765) (hereafter Austraat). In regards to claim 2, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 1. Davis in view of Daruna in view of Badhwar does not explicitly disclose, however Wasserblat, in the same field of endeavor, discloses wherein the one or more ML-readable feature vectors of Daruna comprise a representation of a voice sample of the agent collected during the live agent-customer interaction of Davis, and wherein to process the one or more ML-readable feature vectors of Daruna, the one or more processing devices of Davis are to: determine that the voice sample of the agent collected during the live agent-customer interaction does not match one or more stored voice samples of the agent. (Para. 
18, 20) (“data collected prior to, during, or subsequent to the occurrence of interactions in which the participants' voices are captured” “At step 62, the voice of the tested speaker is parameterized, by constructing a sequence of feature vectors, wherein each feature vector relates to a certain point in time, from the enhanced voice, wherein, each feature vector comprises a plurality of characteristics of the voice during a specific time frame within the interaction (i.e. wherein the one or more feature vectors comprise a representation of a voice sample of the agent collected during the live agent-customer interaction). At step 64, one or more previously constructed voice prints are selected from a collection, or a reservoir, such that the parameterized voice sample (i.e. the voice sample) of the tested speaker would be scored against these voice prints. The selected voice prints can include a voice print of the alleged speaker of the current voice sample (i.e. determine that the voice sample of the agent collected during the live agent-customer interaction does not match one or more stored voice samples of the agent), who is a legitimate speaker, in order to verify that the alleged customer is indeed the true customer as recorded”)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis in view of Daruna and further in view of Badhwar with the fraud detection of Wasserblat in order to improve the system’s security and response to detections. (Wasserblat – Para. 6)

Davis in view of Daruna in view of Badhwar and further in view of Wasserblat does not explicitly disclose, however Austraat, in the same field of endeavor, discloses wherein the one or more ML models of Daruna comprise a voice recognition ML model (Para.
45, 173) (“The input and output system 136 may be configured to obtain and process various forms of authentication via an authentication system to obtain authentication information of a user 110. … voice biometric systems may be used to authenticate a user using speech recognition associated with a word, phrase, tone, or other voice-related features of the user.” “The ASR-NLU system 700 may operate in real-time (e.g., at the same or at a similar rate or perceived by a human to be at the same or a similar rate as a typical conversational process). In particular, the ASR-NLU system 700 may apply one or more trained artificial intelligence models used to map input audio data using an ASR engine 710 and interpret speech patterns using an NLU engine 720. … the ASR engine 710 may be trained using training data 740. In one embodiment, the ASR-NLU system 700 may utilize an ASR engine 710 (i.e., ASR model) that maps utterances/natural language inputs that may include a feature extractor 712 to extract words and/or other features in order to recognize speech from the input audio data.”)

Davis in view of Daruna in view of Badhwar and further in view of Wasserblat does not explicitly disclose, however Austraat, in the same field of endeavor, discloses determine, using the voice recognition ML model, that the voice sample of Wasserblat (Para. 45) (“The input and output system 136 may be configured to obtain and process various forms of authentication via an authentication system to obtain authentication information of a user 110. … voice biometric systems may be used to authenticate a user using speech recognition associated with a word, phrase, tone, or other voice-related features of the user.”)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis in view of Daruna and further in view of Badhwar and even further in view of Wasserblat with the natural language processing of Austraat in order to improve the invention’s ability to “provide dynamic risk scoring based on natural language understanding of unstructured data from various communication channels.” (Austraat – Para. 4)

In regards to claim 10, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 9. The remainder of this claim is rejected using the same rationale as claim 2.

In regards to claim 18, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 17. The remainder of this claim is rejected using the same rationale as claim 2.

In regards to claim 19, Davis in view of Daruna and further in view of Badhwar and even further in view of Wasserblat and even further in view of Austraat disclose the limitations of claim 18. Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses wherein to process the one or more ML-readable feature vectors, the processing device is further to: process, using an anomaly detection ML model, the one or more ML-readable feature vectors to detect an anomaly in the agent activity data; and (Para.
111) (“the activity feature vector 243 may be configured for ingestion by a machine learning model to produce an anomaly classification”)

Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses process, responsive to the detected anomaly in the agent activity data, and using an anomaly evaluation ML model, a representation of at least a portion of the agent activity data to generate the indication that the customer data is at risk. (Para. 5) (“an anomalous attribute classification model to ingest the feature vector to determine an anomaly classification based on learned model parameters … generating, by the at least one processor, a dispute graphical user interface (GUI) including an alert message and a dispute interface element, where the alert message represents the anomalous user-specified value classification of an incorrect user-specified value of the electronic activity verification”)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis with the anomalous activity detection of Daruna in order to improve the system’s security and preemptive/proactive measures. (Daruna – Para. 22)

In regards to claim 20, Davis in view of Daruna and further in view of Badhwar and even further in view of Wasserblat and even further in view of Austraat disclose the limitations of claim 19. Davis does not explicitly disclose, however Daruna, in the same field of endeavor, discloses wherein the anomaly evaluation ML model is trained using a training dataset comprising a training input and a target output, wherein the training input comprises a representation of a training activity data and the target output comprises a classification output indicative of whether the customer data is at risk. (Para. 90) (“the attributes may be encoded into a training feature vector 241 by a feature vector generator 240.
In some embodiments, the training feature vector 241 may be configured for ingestion by a machine learning model to produce an anomaly classification that indicates whether the user-specified data 211 is anomalous or accurate.”)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis with the anomalous activity detection of Daruna in order to improve the system’s security and preemptive/proactive measures. (Daruna – Para. 22)

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Davis in view of Daruna and further in view of Badhwar and even further in view of Wasserblat.

In regards to claim 8, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 1. Davis in view of Daruna in view of Badhwar does not explicitly disclose, however Wasserblat, in the same field of endeavor, discloses wherein the one or more remedial actions of Davis further comprise one or more of: a warning to the agent of Davis, or a warning to a supervisor of the agent. (Para. 19) (“Preferably, the generated alert is sent to the source of the call or an associated authority, such as the agent who held the call or a supervisor.”)

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the anti-fraud detection of Davis in view of Daruna and further in view of Badhwar with the fraud detection of Wasserblat in order to improve the system’s security and response to detections. (Wasserblat – Para. 6)

In regards to claim 16, Davis in view of Daruna and further in view of Badhwar disclose the limitations of claim 9. The remainder of this claim is rejected using the same rationale as claim 8.
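The risk-threshold logic quoted from Badhwar above (a strong step-up authentication method when a cumulative risk score exceeds a threshold, a weaker single challenge otherwise) reduces to a simple branch. A minimal Python sketch follows; the function name, method labels, and threshold value are illustrative assumptions, not taken from the reference:

```python
# Illustrative sketch of Badhwar's risk-based step-up authentication
# selection. A cumulative risk score above a threshold calls for a
# strong (multiple/dynamic) re-authentication method; otherwise a
# weaker singular challenge such as OTP or Captcha is deemed sufficient.
# All names and the 0.7 threshold are assumptions for illustration.

STRONG_METHODS = ["identity_proofing", "authenticators",
                  "client_reauthentication", "biometric"]
WEAK_METHODS = ["otp", "captcha"]

def select_step_up_method(cumulative_risk: float,
                          threshold: float = 0.7) -> list[str]:
    """Return candidate re-authentication methods for a user action."""
    if cumulative_risk > threshold:
        return STRONG_METHODS  # multiple/dynamic step-up authentication
    return WEAK_METHODS        # singular step-up authentication

print(select_step_up_method(0.9))  # high-risk action -> strong methods
print(select_step_up_method(0.2))  # low-risk action -> weak methods
```

The key design point in the quoted passage is that the threshold comparison selects the *kind* of challenge, rather than merely gating access, which is what maps it onto the claimed re-authentication request.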
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID G GODBOLD whose telephone number is (571)272-5036. The examiner can normally be reached M-F 8-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shannon S Campbell, can be reached at 571-272-5587. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID G. GODBOLD/
Examiner, Art Unit 3628

Prosecution Timeline

Sep 14, 2023
Application Filed
May 06, 2025
Non-Final Rejection — §101, §103
Aug 06, 2025
Applicant Interview (Telephonic)
Aug 06, 2025
Examiner Interview Summary
Aug 13, 2025
Response Filed
Sep 07, 2025
Final Rejection — §101, §103
Nov 10, 2025
Response after Non-Final Action
Dec 04, 2025
Request for Continued Examination
Dec 17, 2025
Response after Non-Final Action
Jan 27, 2026
Non-Final Rejection — §101, §103
Mar 31, 2026
Applicant Interview (Telephonic)
Mar 31, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530653
APPARATUS, METHOD, AND SYSTEM FOR GENERATING TRANSPORT VEHICLE DRIVING PLANS
2y 5m to grant · Granted Jan 20, 2026
Patent 12488304
DELIVERY SCHEDULING ADJUSTMENT OF A REPLACEMENT DEVICE BASED ON NETWORK BACKUP OF EXCHANGED DEVICE
2y 5m to grant · Granted Dec 02, 2025
Patent 12474175
SYSTEM AND METHOD FOR OPTIMIZING DELIVERY ROUTE BASED ON MOBILE DEVICE ANALYTICS
2y 5m to grant · Granted Nov 18, 2025
Patent 12475431
Secure Pharmaceuticals Delivery Container and Service
2y 5m to grant · Granted Nov 18, 2025
Patent 12462615
SYSTEM AND METHOD FOR COMPUTER VISION ASSISTED PARKING SPACE MONITORING WITH AUTOMATIC FEE PAYMENTS
2y 5m to grant · Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
22%
Grant Probability
55%
With Interview (+33.3%)
2y 1m
Median Time to Grant
High
PTA Risk
Based on 82 resolved cases by this examiner. Grant probability derived from career allow rate.
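The note above says the grant probability is derived from the career allow rate, and the 55% with-interview figure is consistent with that base rate plus the +33.3-point interview lift. A short sketch of that arithmetic (the additive-lift formula is an assumption about this page's methodology, not something it documents):

```python
# Sketch of how the projection figures above appear to combine.
# Assumption: the interview lift is added in percentage points
# to the base career allow rate.

granted, resolved = 18, 82      # examiner's career record (from this page)
interview_lift_pts = 0.333      # +33.3 percentage points with interview

base_rate = granted / resolved  # career allow rate
with_interview = base_rate + interview_lift_pts

print(f"base: {base_rate:.0%}, with interview: {with_interview:.0%}")
# -> base: 22%, with interview: 55%
```

Under this reading, the 55% figure is simply 22% + 33.3 points, rounded to the nearest whole percent.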
