Prosecution Insights
Last updated: April 19, 2026
Application No. 18/765,784

Real-Time AI-Driven Fraud Detection and Prevention System for In-Person Transactions

Non-Final OA: §101, §103, §112
Filed
Jul 08, 2024
Examiner
PATEL, DIVESH
Art Unit
3696
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
BANK OF AMERICA CORPORATION
OA Round
1 (Non-Final)
53%
Grant Probability
Moderate
1-2
OA Rounds
3y 0m
To Grant
92%
With Interview

Examiner Intelligence

Grants 53% of resolved cases
53%
Career Allow Rate
64 granted / 120 resolved
+1.3% vs TC avg
Strong +39% interview lift
+39.1%
Interview Lift
resolved cases with interview
Typical timeline
3y 0m
Avg Prosecution
19 currently pending
Career history
139
Total Applications
across all art units
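The headline examiner figures above reduce to simple ratios over resolved cases. A minimal sketch of how a career allow rate and an interview lift could be computed; the record layout and the interview/no-interview split are illustrative stand-ins (shaped only to echo the 64 granted / 120 resolved figure above), not data from any real docket source:

```python
# Illustrative sketch: career allow rate and interview lift from a
# list of resolved-case records. The dict fields are assumed, not a
# real PEDS/PatentsView schema.

def allow_rate(cases):
    """Fraction of resolved cases that ended in a grant."""
    if not cases:
        return 0.0
    return sum(1 for c in cases if c["granted"]) / len(cases)

def interview_lift(cases):
    """Allow-rate difference (percentage points) with vs. without an interview."""
    with_iv = [c for c in cases if c["had_interview"]]
    without_iv = [c for c in cases if not c["had_interview"]]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data: 120 resolved cases, 64 grants (counts per bucket are assumed).
cases = (
    [{"granted": True, "had_interview": True}] * 35
    + [{"granted": False, "had_interview": True}] * 10
    + [{"granted": True, "had_interview": False}] * 29
    + [{"granted": False, "had_interview": False}] * 46
)
print(f"Career allow rate: {allow_rate(cases):.1%}")   # → 53.3%
print(f"Interview lift: {interview_lift(cases):+.1%}")  # → +39.1%
```

With this assumed split, the lift is the gap between a 77.8% allow rate with an interview and 38.7% without one, which is why the dashboard frames the interview as the single largest lever.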

Statute-Specific Performance

§101
42.6%
+2.6% vs TC avg
§103
38.7%
-1.3% vs TC avg
§102
3.4%
-36.6% vs TC avg
§112
11.2%
-28.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 120 resolved cases
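Each statute bar above is the examiner's allowance rate following that rejection type, shown as a delta against a Tech Center baseline. A minimal sketch of that delta computation; the flat 40% baseline is an assumption chosen for illustration (it happens to reproduce the displayed deltas), not an actual TC 3600 statistic:

```python
# Illustrative sketch: per-statute allowance rates vs. a Tech Center
# baseline. The examiner rates mirror the dashboard; the TC averages
# are assumed placeholders.

examiner_rates = {"101": 0.426, "103": 0.387, "102": 0.034, "112": 0.112}
tc_average     = {"101": 0.400, "103": 0.400, "102": 0.400, "112": 0.400}

for statute, rate in examiner_rates.items():
    delta = rate - tc_average[statute]
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```

Read against the rejections in this Office Action, the slightly above-baseline §101 rate and the deeply below-baseline §112 rate suggest where argument, versus amendment, is more likely to pay off.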

Office Action

§101 §103 §112
DETAILED ACTION

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This action is in reply to the application filed on July 8, 2024. Claims 1–20 are currently pending and have been examined.

Information Disclosure Statement

The Information Disclosure Statements filed on July 8, 2024 and February 5, 2026 have been considered. Initialed copies of the Forms 1449 are enclosed herewith.

Claim Objections

Claims 1, 19, and 20 are objected to because of the following informalities: In claims 1, 19, and 20, the term “AI/ML” is an acronym that is used without previously being defined in the claims. This can be corrected by amending the claims to read “artificial intelligence and machine learning (AI/ML)” as it is recited in claim 10. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1–18 are rejected under 35 U.S.C. 112(b), as being indefinite, for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Regarding claims 1 and 10, the phrase “and other relevant data” renders the claims indefinite because the claims include elements not actually disclosed (those encompassed by “and other relevant data”), thereby rendering the scope of these claims unascertainable. See MPEP § 2173.05(d). Claims 2–9 and claims 11–18 are also rejected due to their dependency on claims 1 and 10 respectively.

Claim Rejections - 35 USC § 101

The following is a quotation of 35 U.S.C.
101: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1–20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. First of all, claims must be directed to one or more of the following statutory categories: a process, a machine, a manufacture, or a composition of matter. Claims 1–9, 19, and 20 are directed to a process (“An information-security method”), and claims 10–18 are directed to a machine (“An information-security system”). Thus, claims 1–20 satisfy Step One because they are all within one of the four statutory categories of eligible subject matter. Claims 1–20, however, are directed to an abstract idea without significantly more. For claim 1, the specific limitations that recite an abstract idea are: entering a customer’s application details . . ., wherein the application details include personal identification information, account numbers, transaction requests, and other relevant data to initiate a fraud detection process; analyzing application data . . . to identify inconsistencies, falsified information, and unusual requests that might indicate potential fraud, by comparing the application data against a database of legitimate and fraudulent transactions to detect patterns such as mismatched information, unusually large transactions, and requests that deviate from customer typical banking behavior, ensuring comprehensive scrutiny of the application data; simultaneously running a real-time conversation analysis . . . to monitor live conversation between a bank associate and a customer for signs of deceit or fraudulent intent; converting spoken dialogue between the bank associate and the customer into text . . ., wherein . . . 
performs real-time transcription of the conversation to facilitate detailed examination of verbal interactions; analyzing conversation text . . . to detect suspicious speech patterns, hesitations, inconsistencies in a story, or the use of high-pressure tactics, and identifying keywords and phrases commonly associated with fraudulent activities, including urgent requests for immediate action and reluctance to provide certain information, thereby enhancing the ability to identify potential fraud through linguistic analysis; combining the analysis results from both the application data and the conversation to create a comprehensive risk assessment, wherein dual analysis ensures that both verbal and non-verbal cues are considered to provide a holistic view of the potential fraud, enhancing accuracy and reliability of a fraud detection system; triggering an alert if a high probability of fraud is detected, wherein the alert is sent to the bank associate, security personnel, and other relevant individuals within a bank, and includes detailed information about reasons for suspicion to help staff make informed decisions about how to proceed, ensuring timely and effective response to potential fraud; empowering associates to take immediate action based on the alert to prevent fraudulent transactions, including verifying additional details with the customer, consulting with security personnel, or denying the transaction if necessary, thereby mitigating the risk of fraud and protecting both the bank and the customer from potential financial losses, and enhancing overall security of banking operations; continuously updating the system with new data and threat patterns to enhance detection capabilities . . 
., improving its accuracy and detection capabilities over time through a continuous learning process that allows the system to adapt to new fraud tactics, ensuring the system remains effective against evolving fraud techniques; and monitoring subsequent activities on the account if a transaction is flagged but allowed to proceed, including tracking movement of funds, monitoring for unusual withdrawals, and analyzing further interactions with the bank, and alerting a security team if any additional suspicious activities are detected to ensure ongoing protection against fraud, thereby providing a multi-layered defense mechanism that extends beyond initial transaction to safeguard the customer’s account continuously. The claims, therefore, recite detecting and mitigating fraud for a transaction, which is the abstract idea of certain methods of organizing human activity because they recite a commercial interaction and the fundamental economic practice of mitigating risk. The claims also recite transcribing and analyzing conversations, which is the abstract idea of mental processes because it involves observations and evaluations that can be performed by the human mind. The judicial exception recited above is not integrated into a practical application. The additional elements of the claims are various generic technologies and computer components to implement this abstract idea (“AI/ML engine”, “conversation analysis engine”, “associate device”, “advanced speech recognition algorithms”, “supervised learning techniques”, “customer relationship management (CRM) system”, “encrypted communication channels”, “data input module”, “speech analysis module”, “risk assessment module”, “alert generation module”, “action module”, “continuous learning module”, “post-transaction monitoring module”, “centralized database”, “natural language processing (NLP)”, “end-to-end encryption”, and “secure messaging protocols”). 
The claims also recite “wherein the AI/ML engine is specifically trained” and “wherein the AI/ML engine learns from each interaction”. These additional elements are not integrated into a practical application because the invention merely applies the abstract idea to generic computer technology, using the computer to determine fraudulent activity and modify a transaction. Because the invention is using the computer simply as a tool to perform the abstract idea on, the judicial exception is not integrated into a practical application. Finally, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the additional elements in combination are at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea using generic components. Because merely “applying” the exception using generic computer components cannot provide an inventive concept, the additional elements do not recite significantly more than the judicial exception. Thus, claim 1 is not patent eligible. Independent claims 10 and 19 are rejected as ineligible subject matter under 35 U.S.C. 101 for substantially the same reasons as independent method claim 1. There are no additional elements recited in these claims other than the generic technology and computer parts discussed above (“AI/ML engine”, “conversation analysis engine”, “associate device”, “advanced speech recognition algorithms”, “data input module”, “speech analysis module”, “risk assessment module”, “alert generation module”, “action module”, “continuous learning module”, “post-transaction monitoring module”). The only differences are that the steps of claim 1 are performed by a system in claim 10 and implemented by a broader method in claim 19. Thus, because the same analysis should be used for all categories of claims, claims 10 and 19 are also not patent eligible. See Alice Corp. Pty. Ltd. v. 
CLS Bank Int’l, 134 S. Ct. 2347, 2354 (2014). Dependent claims 2–9, 11–18, and 20 have been given the full two part analysis, analyzing the additional limitations both individually and in combination. The dependent claims, when analyzed individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101. For claims 2, 5, 6, 11, 14, and 15, the additional recited limitations of these claims merely further narrow the abstract idea discussed above. These dependent claims only narrow the fraud detection recited in claims 1 and 10 by further specifying how the detection is improved—“uses supervised learning techniques trained on a dataset comprising known legitimate and fraudulent transactions”, “logs all alerts and actions taken by the bank associates for audit and review purposes”, and “feedback from bank associates on the effectiveness of the fraud detection and prevention measures”. The limitations of these claims fail to integrate the abstract idea into a practical application because these claims do not introduce additional elements other than the generic components discussed above (“AI/ML”). These claims do recite supervised learning techniques, but again, this is also merely being used as a tool to detect the fraud. These dependent claims, therefore, also amount to merely using a computer, in its ordinary capacity, as a tool to perform the abstract idea. Finally, the additional recited limitations of these dependent claims fail to establish that the claims provide an inventive concept because claims that merely use a computer, in its ordinary capacity, as a tool to perform the abstract idea cannot provide an inventive concept. For claims 3 and 12, the additional recited limitations of these claims merely further narrow the abstract idea discussed above. 
These dependent claims only narrow the fraud detection recited in claims 1 and 10 by further specifying how the conversation fraud is detected—“speech patterns associated with stress or nervousness”. The limitations of these claims fail to integrate the abstract idea into a practical application because these claims do not introduce additional elements other than the generic components discussed above (“conversation analysis engine”). These dependent claims, therefore, also amount to merely using a computer, in its ordinary capacity, as a tool to perform the abstract idea. Finally, the additional recited limitations of these dependent claims fail to establish that the claims provide an inventive concept because claims that merely use a computer, in its ordinary capacity, as a tool to perform the abstract idea cannot provide an inventive concept. For claims 4, 7, 13, and 16, the additional recited limitations of these claims merely further narrow the abstract idea discussed above. These dependent claims only narrow the fraud detection recited in claims 1 and 10 by further specifying the alert presented—“includes suggested actions” and “unified view of customer interactions and potential fraud alerts”. The limitations of these claims fail to integrate the abstract idea into a practical application because these claims do not introduce additional elements other than the generic components discussed above (“alert generation module”). These claims do recite a customer relationship management (CRM) system, but again, this is also merely being used as a tool to present information to a user. These dependent claims, therefore, also amount to merely using a computer, in its ordinary capacity, as a tool to perform the abstract idea. 
Finally, the additional recited limitations of these dependent claims fail to establish that the claims provide an inventive concept because claims that merely use a computer, in its ordinary capacity, as a tool to perform the abstract idea cannot provide an inventive concept. For claims 8, 9, 17, and 18, the additional recited limitations of these claims merely further narrow the abstract idea discussed above. These dependent claims only narrow the fraud detection recited in claims 1 and 10 by further specifying how information is accessed—“multi-factor authentication” and “encrypted communication channels”. The limitations of these claims fail to integrate the abstract idea into a practical application because these claims do not introduce additional elements other than the generic components discussed above. These claims do recite encrypted communication channels, but again, these are also merely being used as a tool to communicate information. These dependent claims, therefore, also amount to merely using a computer, in its ordinary capacity, as a tool to perform the abstract idea. Finally, the additional recited limitations of these dependent claims fail to establish that the claims provide an inventive concept because claims that merely use a computer, in its ordinary capacity, as a tool to perform the abstract idea cannot provide an inventive concept. For claim 20, the additional recited limitations of this claim merely further narrow the abstract idea discussed above. This dependent claim only narrows the fraud detection recited in claim 19 by further specifying the data aggregated—“from multiple branches and external sources”; how the detection is improved—“feedback from bank associates and security personnel”; how the conversation fraud is detected—“utilizing advanced natural language processing (NLP) techniques . . . 
to detect nuanced linguistic indicators of deception”; the alert presented—“detailed fraud analysis report”; how information is communicated—“secure communication channels . . . utilizing end-to-end encryption”; and how the fraud is mitigated—“conducting periodic training sessions for bank associates”, “dedicated fraud investigation unit . . . collaborate with external law enforcement agencies”, and “deploying automated fraud prevention measures”. The limitations of this claim fail to integrate the abstract idea into a practical application because this claim does not introduce additional elements other than the generic components discussed above (“AI/ML engine” and “conversation analysis engine”). This claim does recite a centralized database, natural language processing (NLP), end-to-end encryption, and secure messaging protocols, but again, these are also merely being used as tools to implement the abstract ideas above. The centralized database is merely being used as a tool to store data, the NLP techniques are merely being used as a tool to detect fraud, and the encryption and secure messaging are merely being used as tools to communicate information more securely. This dependent claim, therefore, also amounts to merely using a computer, in its ordinary capacity, as a tool to perform the abstract idea. Finally, the additional recited limitations of this dependent claim fail to establish that the claim provides an inventive concept because claims that merely use a computer, in its ordinary capacity, as a tool to perform the abstract idea cannot provide an inventive concept.

Claim Rejections - 35 USC § 103

In the event that the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for determining obviousness under 35 U.S.C. 103 are summarized as follows: (1) Determining the scope and contents of the prior art. (2) Ascertaining the differences between the prior art and the claims at issue. (3) Resolving the level of ordinary skill in the pertinent art. (4) Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1, 2, 10, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kramme et al., U.S. Patent App. No. 2023/0316285 (“Kramme”) in view of Laird et al., U.S. Patent App. No. 2021/0407514 (“Laird”). 
For claim 1, Kramme teaches: An information-security method for detecting and preventing in-person bank fraud, comprising the steps of (¶ 158: example method): entering a customer’s application details into an AI/ML engine, wherein the application details include personal identification information, account numbers, transaction requests, and other relevant data to initiate a fraud detection process (¶ 53–55, 195–196, 64–65: machine learning generator collects data including account and transaction information); analyzing application data using the AI/ML engine, wherein the AI/ML engine is specifically trained to identify inconsistencies, falsified information, and unusual requests that might indicate potential fraud, by comparing the application data against a database of legitimate and fraudulent transactions to detect patterns such as mismatched information, unusually large transactions, and requests that deviate from customer typical banking behavior, ensuring comprehensive scrutiny of the application data (¶ 67–69: determination if fraud has occurred; ¶ 182-183: inconsistencies with data; ¶ 173: unusual purchases or amounts; ¶ 154: information matched to determine forgeries; ¶ 171: typical purchases); . . . 
combining the analysis results from both the application data and the conversation to create a comprehensive risk assessment, wherein dual analysis ensures that both verbal and non-verbal cues are considered to provide a holistic view of the potential fraud, enhancing accuracy and reliability of a fraud detection system (¶ 65: various data sources combined to make determination); triggering an alert if a high probability of fraud is detected, wherein the alert is sent to the bank associate, security personnel, and other relevant individuals within a bank, and includes detailed information about reasons for suspicion to help staff make informed decisions about how to proceed, ensuring timely and effective response to potential fraud [“to help staff make informed decisions about how to proceed, ensuring timely and effective response to potential fraud” only recites intended use, and is therefore not given patentable weight.] (¶ 165, 192: fraud alert including transaction information; ¶ 168: reasons for fraud alert; ¶ 51: alerts sent to various individuals); empowering associates to take immediate action based on the alert to prevent fraudulent transactions, including verifying additional details with the customer, consulting with security personnel, or denying the transaction if necessary, thereby mitigating the risk of fraud and protecting both the bank and the customer from potential financial losses, and enhancing overall security of banking operations [“thereby mitigating the risk of fraud and protecting both the bank and the customer from potential financial losses, and enhancing overall security of banking operations” only recites intended use, and is therefore not given patentable weight.] 
(¶ 70, 79: additional analysis performed for flagged data to determine if fraud); continuously updating the system with new data and threat patterns to enhance detection capabilities of the AI/ML engine, wherein the AI/ML engine learns from each interaction, improving its accuracy and detection capabilities over time through a continuous learning process that allows the system to adapt to new fraud tactics, ensuring the system remains effective against evolving fraud techniques [“ensuring the system remains effective against evolving fraud techniques” only recites intended use, and is therefore not given patentable weight.] (¶ 112, 194: fraud classification rules updated by training machine learning to improve accuracy and precision of fraud analysis); and monitoring subsequent activities on the account if a transaction is flagged but allowed to proceed, including tracking movement of funds, monitoring for unusual withdrawals, and analyzing further interactions with the bank, and alerting a security team if any additional suspicious activities are detected to ensure ongoing protection against fraud, thereby providing a multi-layered defense mechanism that extends beyond initial transaction to safeguard the customer’s account continuously [“thereby providing a multi-layered defense mechanism that extends beyond initial transaction to safeguard the customer’s account continuously” only recites intended use, and is therefore not given patentable weight.] (¶ 70, 79: flagged situations undergo closer investigation; ¶ 168: analysis of fraud alert with additional data to confirm whether false positive or additional fraud is detected). 
Kramme does not teach: simultaneously running a real-time conversation analysis engine on an associate device, wherein the conversation analysis engine is equipped with advanced speech recognition algorithms to monitor live conversation between a bank associate and a customer for signs of deceit or fraudulent intent; converting spoken dialogue between the bank associate and the customer into text using the speech recognition algorithms, wherein the conversation analysis engine performs real-time transcription of the conversation to facilitate detailed examination of verbal interactions; and analyzing conversation text using the conversation analysis engine to detect suspicious speech patterns, hesitations, inconsistencies in a story, or the use of high-pressure tactics, and identifying keywords and phrases commonly associated with fraudulent activities, including urgent requests for immediate action and reluctance to provide certain information, thereby enhancing the ability to identify potential fraud through linguistic analysis.
Laird, however, teaches: simultaneously running a real-time conversation analysis engine on an associate device, wherein the conversation analysis engine is equipped with advanced speech recognition algorithms to monitor live conversation between a bank associate and a customer for signs of deceit or fraudulent intent (¶ 145: example conversation analyzed for deception); converting spoken dialogue between the bank associate and the customer into text using the speech recognition algorithms, wherein the conversation analysis engine performs real-time transcription of the conversation to facilitate detailed examination of verbal interactions (¶ 84: natural language transcript created through speech recognition); and analyzing conversation text using the conversation analysis engine to detect suspicious speech patterns, hesitations, inconsistencies in a story, or the use of high- pressure tactics, and identifying keywords and phrases commonly associated with fraudulent activities, including urgent requests for immediate action and reluctance to provide certain information, thereby enhancing the ability to identify potential fraud through linguistic analysis (¶ 155–160, 170–171: deception detected to determine fraud for conversation, for example through hesitation, negation, hedging, and uncertainty). It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme by adding the conversation analysis from Laird. One of ordinary skill in the art would have been motivated to make this modification for the purpose of automatically determining deception from interactions—a benefit explicitly disclosed by Laird (¶ 117: invention provides benefit of continuous assessment of deception). 
For claim 2, Kramme and Laird teach all the limitations of claim 1 above, and Kramme further teaches: The method of claim 1, wherein the AI/ML engine uses supervised learning techniques trained on a dataset comprising known legitimate and fraudulent transactions to improve the accuracy of fraud detection (¶ 58, 62–63: supervised machine learning techniques used). For claim 10, Kramme teaches: An information-security system for detecting and preventing in-person bank fraud, comprising (¶ 28: example system): a data input module configured to receive customer application details, including personal identification information, account numbers, transaction requests, and other relevant data (¶ 53–55, 195–196, 64–65: machine learning generator collects data including account and transaction information); an artificial intelligence and machine learning (AI/ML) engine configured to analyze application data for inconsistencies, falsified information, or unusual requests by comparing the application data against a vast database of legitimate and fraudulent transactions to detect patterns such as mismatched information, unusually large transactions, or requests that deviate from customer typical banking behavior (¶ 67–69: determination if fraud has occurred; ¶ 182-183: inconsistencies with data; ¶ 173: unusual purchases or amounts; ¶ 154: information matched to determine forgeries; ¶ 171: typical purchases); . . . 
a risk assessment module configured to combine the analysis results from both the application data and the conversation to create a comprehensive risk assessment, ensuring that both verbal and non-verbal cues are considered to provide a holistic view of a potential fraud (¶ 65: various data sources combined to make determination); an alert generation module configured to trigger an alert if a high probability of fraud is detected, wherein the alert is sent to the bank associate, security personnel, and other relevant individuals within the bank, including detailed information about reasons for suspicion to help staff make informed decisions about how to proceed [“to help staff make informed decisions about how to proceed” only recites intended use, and is therefore not given patentable weight.] (¶ 165, 192: fraud alert including transaction information; ¶ 168: reasons for fraud alert; ¶ 51: alerts sent to various individuals); an action module configured to empower bank associates to take immediate action based on the alert, including verifying additional details with the customer, consulting with security personnel, or denying the transaction if necessary (¶ 70, 79: additional analysis performed for flagged data to determine if fraud); a continuous learning module within the AI/ML engine, configured to update the system with new data and threat patterns, enhancing detection capabilities of the AI/ML engine by learning from each interaction to improve its accuracy and adapt to new fraud tactics (¶ 112, 194: fraud classification rules updated by training machine learning to improve accuracy and precision of fraud analysis); and a post-transaction monitoring module configured to monitor subsequent activities on the account if a transaction is flagged but allowed to proceed, tracking movement of funds, monitoring for unusual withdrawals, and analyzing further interactions with the bank, and alerting a security team if any additional suspicious activities are detected 
(¶ 70, 79: flagged situations undergo closer investigation; ¶ 168: analysis of fraud alert with additional data to confirm whether false positive or additional fraud is detected). Kramme does not teach: a real-time conversation analysis engine equipped with speech recognition algorithms, configured to run on an associate’s device to monitor live conversation between the bank associate and the customer, converting spoken dialogue into text for further analysis; and a speech analysis module within the conversation analysis engine, configured to detect suspicious speech patterns, hesitations, inconsistencies in a story, high-pressure tactics, and keywords or phrases commonly associated with fraudulent activities, such as urgent requests for immediate action or reluctance to provide certain information. Laird, however, teaches: a real-time conversation analysis engine equipped with speech recognition algorithms, configured to run on an associate’s device to monitor live conversation between the bank associate and the customer, converting spoken dialogue into text for further analysis (¶ 145: example conversation analyzed for deception; ¶ 84: natural language transcript created through speech recognition); and a speech analysis module within the conversation analysis engine, configured to detect suspicious speech patterns, hesitations, inconsistencies in a story, high-pressure tactics, and keywords or phrases commonly associated with fraudulent activities, such as urgent requests for immediate action or reluctance to provide certain information (¶ 155–160, 170–171: deception detected to determine fraud for conversation, for example through hesitation, negation, hedging, and uncertainty). It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme by adding the conversation analysis from Laird.
One of ordinary skill in the art would have been motivated to make this modification for the purpose of automatically determining deception from interactions—a benefit explicitly disclosed by Laird (¶ 117: invention provides benefit of continuous assessment of deception). For claim 11, Kramme and Laird teach all the limitations of claim 10 above, and Kramme further teaches: The system of claim 10, wherein the AI/ML engine uses supervised learning techniques trained on a dataset comprising known legitimate and fraudulent transactions to improve the accuracy of fraud detection (¶ 58, 62–63: supervised machine learning techniques used). For claim 19, Kramme teaches: An information-security method for detecting and preventing in-person bank fraud, comprising the steps of (¶ 158: example method): entering a customer’s application details into an AI/ML engine (¶ 53–55, 195–196, 64–65: machine learning generator collects data including account and transaction information); analyzing application data for inconsistencies, falsified information, or unusual requests using the AI/ML engine (¶ 67–69: determination if fraud has occurred; ¶ 182-183: inconsistencies with data; ¶ 173: unusual purchases or amounts; ¶ 154: information matched to determine forgeries; ¶ 171: typical purchases); . . . 
combining the analysis results from both the application data and the conversation to create a comprehensive risk assessment (¶ 65: various data sources combined to make determination); triggering an alert if a high probability of fraud is detected, notifying the associate, security personnel, and other relevant individuals (¶ 165, 192: fraud alert including transaction information; ¶ 168: reasons for fraud alert; ¶ 51: alerts sent to various individuals); empowering associates to take immediate action based on the alert to prevent fraudulent transactions (¶ 70, 79: additional analysis performed for flagged data to determine if fraud); continuously updating a system with new data and threat patterns to enhance detection capabilities of the AI/ML engine (¶ 112, 194: fraud classification rules updated by training machine learning to improve accuracy and precision of fraud analysis); and monitoring subsequent activities on an account if a transaction is flagged but allowed to proceed, and alerting a security team if additional suspicious activities are detected (¶ 70, 79: flagged situations undergo closer investigation; ¶ 168: analysis of fraud alert with additional data to confirm whether false positive or additional fraud is detected). Kramme does not teach: simultaneously running a real-time conversation analysis engine on an associate device; converting spoken dialogue between an associate and the customer into text using speech recognition algorithms; and analyzing conversation text for suspicious speech patterns, hesitations, or keywords often associated with scams. 
Laird, however, teaches: simultaneously running a real-time conversation analysis engine on an associate device (¶ 145: example conversation analyzed for deception); converting spoken dialogue between an associate and the customer into text using speech recognition algorithms (¶ 84: natural language transcript created through speech recognition); and analyzing conversation text for suspicious speech patterns, hesitations, or keywords often associated with scams (¶ 155–160, 170–171: deception detected to determine fraud for conversation, for example through hesitation, negation, hedging, and uncertainty). It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme by adding the conversation analysis from Laird. One of ordinary skill in the art would have been motivated to make this modification for the purpose of automatically determining deception from interactions—a benefit explicitly disclosed by Laird (¶ 117: invention provides benefit of continuous assessment of deception). Claims 3–9 and 12–18 are rejected under 35 U.S.C. 103 as being unpatentable over Kramme et al., U.S. Patent App. No. 2023/0316285 (“Kramme”) in view of Laird et al., U.S. Patent App. No. 2021/0407514 (“Laird”) and Pertrushin, U.S. Patent App. No. 2002/0010587 (“Pertrushin”). For claim 3, Kramme and Laird teach all the limitations of claim 1 above. The combination of Kramme and Laird does not teach: wherein the real-time conversation analysis engine identifies speech patterns associated with stress or nervousness, which may indicate deceitful behavior. Pertrushin, however, teaches: The method of claim 2, wherein the real-time conversation analysis engine identifies speech patterns associated with stress or nervousness, which may indicate deceitful behavior (¶ 150: stress and nervousness detected to prevent fraud; ¶ 167, 170: deception detected). 
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme and the conversation analysis in Laird by adding the stress detection from Pertrushin. One of ordinary skill in the art would have been motivated to make this modification for the purpose of applying emotion recognition to detect fraud—a benefit explicitly disclosed by Pertrushin (¶ 3: current methods do not utilize emotion recognition for business purposes; ¶ 4: invention detects nervousness in business environment to prevent fraud). Kramme, Laird, and Pertrushin are all related to fraud detection, so one of ordinary skill in the art would have been motivated to make this detection even more effective by combining these references together. For claim 4, Kramme, Laird, and Pertrushin teach all the limitations of claim 3 above, and Kramme further teaches: The method of claim 3, wherein the alert triggered by the system includes suggested actions for the bank associate to take in response to a suspected fraud, such as additional verification questions or requesting secondary identification (¶ 70, 79: manual or further automated review using additional data sources). For claim 5, Kramme, Laird, and Pertrushin teach all the limitations of claim 4 above, and Kramme further teaches: The method of claim 4, wherein the system logs all alerts and actions taken by the bank associates for audit and review purposes, allowing for continuous improvement of the fraud detection process (¶ 184: alerts and feedback inputted into machine learning for future improvements). 
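The audit-and-improvement scheme recited in claim 5 — logging every alert and the associate's responsive action, then using that record to refine the detection process — can be pictured with a minimal sketch. Everything below (class names, the feedback rule, the 50% false-positive cutoff that raises the alert threshold) is a hypothetical illustration and is not drawn from the application or any cited reference:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AlertRecord:
    """One fraud alert plus the associate's follow-up action (claim 5 logging)."""
    alert_id: int
    reason: str                # e.g. "hesitation detected in conversation"
    action_taken: str          # e.g. "requested secondary identification"
    confirmed_fraud: Optional[bool] = None  # filled in later by associate feedback
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class FraudAuditLog:
    """Logs alerts and actions for audit, and folds feedback back into detection."""

    def __init__(self, alert_threshold: float = 0.8):
        self.records: list[AlertRecord] = []
        self.alert_threshold = alert_threshold  # score needed to trigger an alert

    def log_alert(self, alert_id: int, reason: str, action_taken: str) -> AlertRecord:
        rec = AlertRecord(alert_id, reason, action_taken)
        self.records.append(rec)
        return rec

    def record_feedback(self, alert_id: int, confirmed_fraud: bool) -> None:
        for rec in self.records:
            if rec.alert_id == alert_id:
                rec.confirmed_fraud = confirmed_fraud
        # Naive continuous-improvement rule: if most resolved alerts were false
        # positives, raise the bar so fewer borderline cases trigger alerts.
        resolved = [r for r in self.records if r.confirmed_fraud is not None]
        false_pos = sum(1 for r in resolved if not r.confirmed_fraud)
        if resolved and false_pos / len(resolved) > 0.5:
            self.alert_threshold = min(0.95, self.alert_threshold + 0.05)
```

The single-threshold adjustment stands in for the retraining of fraud classification rules that the examiner cites at Kramme ¶ 184; a production system would retrain a model rather than nudge one scalar.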
For claim 6, Kramme, Laird, and Pertrushin teach all the limitations of claim 5 above, and Kramme further teaches: The method of claim 5, wherein the continuous updates to the AI/ML engine include feedback from bank associates on the effectiveness of the fraud detection and prevention measures, enhancing the engine’s learning capabilities (¶ 184: effectiveness of alerts or indication of false positives applied to learning). For claim 7, Kramme, Laird, and Pertrushin teach all the limitations of claim 6 above, and Kramme further teaches: The method of claim 6, wherein the system integrates with the bank’s existing customer relationship management (CRM) system to provide a unified view of customer interactions and potential fraud alerts (¶ 29, 208: bank system integrated with the fraud detection and alert system). For claim 8, Kramme, Laird, and Pertrushin teach all the limitations of claim 7 above, and Pertrushin further teaches: The method of claim 7, wherein the system employs multi-factor authentication for accessing the fraud detection system to ensure that only authorized personnel can respond to alerts and take action (¶ 309–310: two different forms of authentication performed to control access to a secured system). It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme and the conversation analysis in Laird by adding the authentication from Pertrushin. One of ordinary skill in the art would have been motivated to make this modification for the purpose of applying emotion recognition to detect fraud—a benefit explicitly disclosed by Pertrushin (¶ 3: current methods do not utilize emotion recognition for business purposes; ¶ 4: invention detects nervousness in business environment to prevent fraud). 
Kramme, Laird, and Pertrushin are all related to fraud detection, so one of ordinary skill in the art would have been motivated to make this detection even more effective by combining these references together. For claim 9, Kramme, Laird, and Pertrushin teach all the limitations of claim 8 above, and Pertrushin further teaches: The method of claim 8, wherein the system uses encrypted communication channels to transmit alerts and sensitive customer data to prevent unauthorized access and ensure data integrity (¶ 88: secure communication protocol; ¶ 378: data encoded for transmission). It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme and the conversation analysis in Laird by adding the authentication from Pertrushin. One of ordinary skill in the art would have been motivated to make this modification for the purpose of applying emotion recognition to detect fraud—a benefit explicitly disclosed by Pertrushin (¶ 3: current methods do not utilize emotion recognition for business purposes; ¶ 4: invention detects nervousness in business environment to prevent fraud). Kramme, Laird, and Pertrushin are all related to fraud detection, so one of ordinary skill in the art would have been motivated to make this detection even more effective by combining these references together. For claim 12, Kramme and Laird teach all the limitations of claim 11 above. The combination of Kramme and Laird does not teach: wherein the real-time conversation analysis engine identifies speech patterns associated with stress or nervousness, which may indicate deceitful behavior. Pertrushin, however, teaches: The system of claim 11, wherein the real-time conversation analysis engine identifies speech patterns associated with stress or nervousness, which may indicate deceitful behavior (¶ 150: stress and nervousness detected to prevent fraud; ¶ 167, 170: deception detected). 
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme and the conversation analysis in Laird by adding the stress detection from Pertrushin. One of ordinary skill in the art would have been motivated to make this modification for the purpose of applying emotion recognition to detect fraud—a benefit explicitly disclosed by Pertrushin (¶ 3: current methods do not utilize emotion recognition for business purposes; ¶ 4: invention detects nervousness in business environment to prevent fraud). Kramme, Laird, and Pertrushin are all related to fraud detection, so one of ordinary skill in the art would have been motivated to make this detection even more effective by combining these references together. For claim 13, Kramme, Laird, and Pertrushin teach all the limitations of claim 12 above, and Kramme further teaches: The system of claim 12, wherein the alert generation module includes suggested actions for the bank associate to take in response to a suspected fraud, such as additional verification questions or requesting secondary identification (¶ 70, 79: manual or further automated review using additional data sources). For claim 14, Kramme, Laird, and Pertrushin teach all the limitations of claim 13 above, and Kramme further teaches: The system of claim 13, wherein the system logs all alerts and actions taken by the bank associates for audit and review purposes, allowing for continuous improvement of a fraud detection process (¶ 184: alerts and feedback inputted into machine learning for future improvements). 
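The speech-pattern analysis mapped for claim 12 above — hesitation, hedging, and urgency indicators treated as signs of possible deception — reduces, in its simplest form, to counting marker terms in a conversation transcript. The sketch below is purely illustrative: the marker lists and the scoring formula are assumptions for this example and are not disclosed by Kramme, Laird, or Pertrushin.

```python
import re

# Hypothetical marker lists; the cited references do not disclose these exact terms.
HESITATION = {"um", "uh", "er", "hmm"}
HEDGING = {"maybe", "possibly", "i think", "sort of", "kind of"}
URGENCY = {"immediately", "right now", "urgent", "today only"}

def deception_score(transcript: str) -> float:
    """Score a transcript 0..1 by the density of naive deception markers."""
    text = transcript.lower()
    words = re.findall(r"[a-z']+", text)
    if not words:
        return 0.0
    # Single-word hesitation markers are matched per token; hedging and urgency
    # markers may be multi-word phrases, so they are counted as substrings.
    hits = sum(w in HESITATION for w in words)
    hits += sum(text.count(phrase) for phrase in HEDGING | URGENCY)
    # Normalize against transcript length so long, calm conversations score low.
    return min(1.0, hits / max(len(words) * 0.1, 1))
```

A real engine of the kind attributed to Laird would combine acoustic features (tempo, volume, per Pertrushin ¶ 169) with language-model analysis rather than raw keyword counts; this sketch only makes the claimed signal concrete.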
For claim 15, Kramme, Laird, and Pertrushin teach all the limitations of claim 14 above, and Kramme further teaches: The system of claim 14, wherein the continuous learning module incorporates feedback from bank associates on effectiveness of the fraud detection and prevention measures, enhancing the engine’s learning capabilities (¶ 184: effectiveness of alerts or indication of false positives applied to learning). For claim 16, Kramme, Laird, and Pertrushin teach all the limitations of claim 15 above, and Kramme further teaches: The system of claim 15, wherein the system integrates with the bank’s existing customer relationship management (CRM) system to provide a unified view of customer interactions and potential fraud alerts (¶ 29, 208: bank system integrated with the fraud detection and alert system). For claim 17, Kramme, Laird, and Pertrushin teach all the limitations of claim 16 above, and Pertrushin further teaches: The system of claim 16, wherein the system employs multi-factor authentication for accessing a fraud detection system to ensure that only authorized personnel can respond to alerts and take action (¶ 309–310: two different forms of authentication performed to control access to a secured system). It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme and the conversation analysis in Laird by adding the authentication from Pertrushin. One of ordinary skill in the art would have been motivated to make this modification for the purpose of applying emotion recognition to detect fraud—a benefit explicitly disclosed by Pertrushin (¶ 3: current methods do not utilize emotion recognition for business purposes; ¶ 4: invention detects nervousness in business environment to prevent fraud). 
Kramme, Laird, and Pertrushin are all related to fraud detection, so one of ordinary skill in the art would have been motivated to make this detection even more effective by combining these references together. For claim 18, Kramme, Laird, and Pertrushin teach all the limitations of claim 17 above, and Pertrushin further teaches: The system of claim 17, wherein the system uses encrypted communication channels to transmit alerts and sensitive customer data to prevent unauthorized access and ensure data integrity (¶ 88: secure communication protocol; ¶ 378: data encoded for transmission). It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme and the conversation analysis in Laird by adding the authentication from Pertrushin. One of ordinary skill in the art would have been motivated to make this modification for the purpose of applying emotion recognition to detect fraud—a benefit explicitly disclosed by Pertrushin (¶ 3: current methods do not utilize emotion recognition for business purposes; ¶ 4: invention detects nervousness in business environment to prevent fraud). Kramme, Laird, and Pertrushin are all related to fraud detection, so one of ordinary skill in the art would have been motivated to make this detection even more effective by combining these references together. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Kramme et al., U.S. Patent App. No. 2023/0316285 (“Kramme”) in view of Laird et al., U.S. Patent App. No. 2021/0407514 (“Laird”); Pertrushin, U.S. Patent App. No. 2002/0010587 (“Pertrushin”); and Taylor, U.S. Patent App. No. 2003/0182214 (“Taylor”). 
Kramme and Laird teach all the limitations of claim 15 above, and Kramme further teaches: integrating the AI/ML engine with a centralized database that aggregates data from multiple branches and external sources, including data from other financial institutions, regulatory bodies, and public records, to enhance the comprehensiveness and accuracy of a fraud detection process by providing a broader context for analyzing customer application details and transactional behavior [“to enhance the comprehensiveness and accuracy of a fraud detection process by providing a broader context for analyzing customer application details and transactional behavior” only recites intended use, and is therefore not given patentable weight.] (¶ 53–55, 195–196, 64–65: data collected including account and transaction information; ¶ 55: account records database from financial institution); implementing a feedback loop within the AI/ML engine, wherein the engine receives and processes real-time feedback from bank associates and security personnel regarding the accuracy and effectiveness of detected fraud alerts, allowing the system to continuously refine its algorithms and improve its predictive capabilities [“allowing the system to continuously refine its algorithms and improve its predictive capabilities” only recites intended use, and is therefore not given patentable weight.] (¶ 184: alerts and feedback inputted into machine learning for future improvements, and effectiveness of alerts or indication of false positives applied to learning); . . . 
providing a detailed fraud analysis report to the bank associate and security personnel when an alert is triggered, wherein the report includes a summary of detected inconsistencies in the application data, the suspicious speech patterns identified in the conversation, and any relevant historical data on customer previous interactions and transaction history, enabling a more informed decision-making process [“enabling a more informed decision-making process” only recites intended use, and is therefore not given patentable weight.] (¶ 165, 192: fraud alert including transaction information; ¶ 168: reasons for fraud alert; ¶ 51: alerts sent to various individuals). The combination of Kramme and Laird does not teach: utilizing advanced natural language processing (NLP) techniques within the real-time conversation analysis engine to detect nuanced linguistic indicators of deception, such as specific syntactic patterns, emotional undertones, and changes in speech tempo or volume, thereby increasing sensitivity and specificity of the fraud detection system; enabling secure communication channels for transmitting fraud alerts and reports to ensure that sensitive information is protected from unauthorized access and tampering, utilizing end-to-end encryption and secure messaging protocols to maintain data integrity and confidentiality; conducting periodic training sessions for bank associates on the latest fraud detection techniques and system updates, wherein the training includes hands-on exercises with simulated fraud scenarios to improve an ability the associate to recognize and respond to potential fraud in real-time; establishing a dedicated fraud investigation unit within a security team, equipped with advanced analytical tools and access to the centralized database, to conduct in-depth investigations of flagged transactions and collaborate with external law enforcement agencies when necessary to address and mitigate fraud risks; and deploying automated fraud 
prevention measures that can be triggered by the system, such as temporarily freezing the customer’s account or placing a hold on suspicious transactions, to prevent further fraudulent activity while the alert is being reviewed and investigated by the security team. Pertrushin, however, teaches: utilizing advanced natural language processing (NLP) techniques within the real-time conversation analysis engine to detect nuanced linguistic indicators of deception, such as specific syntactic patterns, emotional undertones, and changes in speech tempo or volume, thereby increasing sensitivity and specificity of the fraud detection system [“thereby increasing sensitivity and specificity of the fraud detection system” only recites intended use, and is therefore not given patentable weight.] (¶ 150: stress and nervousness detected to prevent fraud; ¶ 167, 170: deception detected; ¶ 169: speaking rate, volume, and other factors indicating stress); and enabling secure communication channels for transmitting fraud alerts and reports to ensure that sensitive information is protected from unauthorized access and tampering, utilizing end-to-end encryption and secure messaging protocols to maintain data integrity and confidentiality (¶ 88: secure communication protocol; ¶ 378: data encoded for transmission; ¶ 309–310: controlled access to a secured system). It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme and the conversation analysis in Laird by adding the stress detection from Pertrushin. One of ordinary skill in the art would have been motivated to make this modification for the purpose of applying emotion recognition to detect fraud—a benefit explicitly disclosed by Pertrushin (¶ 3: current methods do not utilize emotion recognition for business purposes; ¶ 4: invention detects nervousness in business environment to prevent fraud). 
Kramme, Laird, and Pertrushin are all related to fraud detection, so one of ordinary skill in the art would have been motivated to make this detection even more effective by combining these references together. The combination of Kramme, Laird, and Pertrushin does not teach: conducting periodic training sessions for bank associates on the latest fraud detection techniques and system updates, wherein the training includes hands-on exercises with simulated fraud scenarios to improve an ability the associate to recognize and respond to potential fraud in real-time; establishing a dedicated fraud investigation unit within a security team, equipped with advanced analytical tools and access to the centralized database, to conduct in-depth investigations of flagged transactions and collaborate with external law enforcement agencies when necessary to address and mitigate fraud risks; and deploying automated fraud prevention measures that can be triggered by the system, such as temporarily freezing the customer’s account or placing a hold on suspicious transactions, to prevent further fraudulent activity while the alert is being reviewed and investigated by the security team. 
Taylor, however, teaches: conducting periodic training sessions for bank associates on the latest fraud detection techniques and system updates, wherein the training includes hands-on exercises with simulated fraud scenarios to improve an ability the associate to recognize and respond to potential fraud in real-time (¶ 33: bank tellers and representatives provided training to utilize system efficiently and effectively); establishing a dedicated fraud investigation unit within a security team, equipped with advanced analytical tools and access to the centralized database, to conduct in-depth investigations of flagged transactions and collaborate with external law enforcement agencies when necessary to address and mitigate fraud risks (¶ 45–47: additional investigation based on fraudulent activity response code including contacting law enforcement); and deploying automated fraud prevention measures that can be triggered by the system, such as temporarily freezing the customer’s account or placing a hold on suspicious transactions, to prevent further fraudulent activity while the alert is being reviewed and investigated by the security team (¶ 45: transaction stopped until additional investigation or review is completed). It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the fraud detection in Kramme, the conversation analysis in Laird, and the stress detection in Pertrushin by adding the fraud response from Taylor. One of ordinary skill in the art would have been motivated to make this modification for the purpose of better tracking and responding to suspicious activity—a benefit explicitly disclosed by Taylor (¶ 30–32: invention addresses need for identifying and responding to suspicious actions by users) and desired by Kramme (¶ 4: need for improving accuracy of fraud determinations). 
Kramme, Laird, Pertrushin, and Taylor are all related to fraud detection, so one of ordinary skill in the art would have been motivated to make this detection even more effective by combining these references together. Prior Art Not Relied Upon The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure. Those prior art references are as follows: Cousins, U.S. Patent App. No. 2022/0245639, discloses fraud detection through natural language processing. Motaharian et al., U.S. Patent App. No. 2020/0320619, discloses detecting and preventing fraud using machine learning models. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIVESH PATEL whose telephone number is (571) 272–3430. The examiner can normally be reached on Monday and Thursday 10:00 AM–8:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Gart can be reached on (571) 272–3955. The fax phone number for the organization where this application or proceeding is assigned is 571–273–8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DIVESH PATEL/Examiner, Art Unit 3696

Prosecution Timeline

Jul 08, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §101, §103, §112
Mar 26, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597064
AUTOMATIC INTERACTIVE ELEMENT VISUALIZATIONS IN CONNECTION WITH SERVER OPERATION
2y 5m to grant Granted Apr 07, 2026
Patent 12548002
SYSTEMS AND METHODS FOR AUTOMATED BILL SPLITTING
2y 5m to grant Granted Feb 10, 2026
Patent 12488387
PROFILE BASED VIDEO CREATION
2y 5m to grant Granted Dec 02, 2025
Patent 12456122
FRAUD DETECTION SYSTEM, FRAUD DETECTION DEVICE, FRAUD DETECTION METHOD, AND PROGRAM
2y 5m to grant Granted Oct 28, 2025
Patent 12417434
JAILED ENVIRONMENT RESTRICTING PROGRAMMATIC ACCESS TO MULTI-TENANT DATA
2y 5m to grant Granted Sep 16, 2025
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
53%
Grant Probability
92%
With Interview (+39.1%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 120 resolved cases by this examiner. Grant probability derived from career allow rate.
