Prosecution Insights
Last updated: April 19, 2026
Application No. 18/752,310

GENERATING A FRAUD PREDICTION UTILIZING A FRAUD-PREDICTION MACHINE-LEARNING MODEL

Non-Final OA §101
Filed: Jun 24, 2024
Examiner: COBB, MATTHEW
Art Unit: 3661
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Chime Financial Inc.
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72%, above average (142 granted / 198 resolved; +19.7% vs TC avg)
Interview Lift: strong, +36.2% across resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 33 applications currently pending
Career History: 231 total applications across all art units

Statute-Specific Performance

§101: 29.5% (-10.5% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 9.6% (-30.4% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 198 resolved cases.

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/21/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Status of Claims

This Office action is in reply to the filing by applicant on 06/24/2024. Claims 21 – 40 are new. Claims 1 – 20 are cancelled. Claims 21 – 40 are currently pending and have been examined.

THIS ACTION IS MADE NON-FINAL

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21 – 40 are rejected pursuant to 35 USC 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claim 21 (method), independent claim 30 (non-transitory computer readable medium), and independent claim 36 (system) are mirrored. The independent claims are directed to eligible statutory categories of invention pursuant to 35 USC 101. (Step 1: YES, the independent claims all fall within a statutory category.)
Independent claims 21, 30, and 36 recite: receiving, from a client device associated with a user account, a digital dispute claim indicating a disputed network transaction associated with the user account; identifying one or more features associated with the digital dispute claim; providing the one or more features to a fraud detection machine-learning model that was trained using training fraud predictions corresponding to training digital dispute claims, wherein the fraud detection machine-learning model is trained by comparing the training fraud predictions to fraud action labels using a loss function to generate losses that are used to adjust one or more parameters of the fraud detection machine-learning model; generating, utilizing the fraud detection machine-learning model, a fraud prediction indicating a probability that the digital dispute claim is fraudulent; receiving a fraud action label for the digital dispute claim verifying whether the digital dispute claim was fraudulent or not fraudulent; and updating the one or more parameters of the fraud detection machine-learning model using the fraud action label by comparing the fraud action label to the fraud prediction.

Several dependent claims further refine the abstract idea of claims 21, 30, and 36:

providing, for display, within a graphical user interface of an administrator device, a fraud risk level indicator in relation to the user account based on a fraud risk level corresponding to the fraud prediction (claim 22);

determining the fraud risk level for the fraud prediction by comparing the fraud prediction to a low-risk fraud prediction threshold, a moderate-risk fraud prediction threshold, and a high-risk fraud prediction threshold (claim 23);

issuing a credit to the user account associated with the disputed network transaction based on the fraud prediction indicating the digital dispute claim as not fraudulent (claims 24, 31, and 37);

identifying one or more additional user accounts associated with the user account, and generating one or more fraud-propensity labels for the one or more additional user accounts and the user account based on the fraud prediction indicating the digital dispute claim as fraudulent (claims 25 and 32);

suspending the user account based on the fraud prediction indicating the digital dispute claim as fraudulent (claims 26, 33, and 38);

identifying one or more additional digital dispute claims related to the user account or one or more user accounts associated with the user account, and generating potentially fraudulent activity labels for the one or more additional digital dispute claims based on the fraud prediction indicating the digital dispute claim as fraudulent (claims 27, 34, and 39);

suspending one or more network transactions of the user account based on the fraud prediction indicating the digital dispute claim as fraudulent (claims 28, 35, and 40);

identifying the one or more features associated with the digital dispute claim by determining one or more of: an account feature, an automated-clearing-house feature, a computing device feature, a demand draft feature, a transaction feature, a peer-to-peer-payment feature, an identity verification feature, a shared device feature, a shared-internet-protocol-address feature, a customer-service-contact feature, a failed login feature, a password reset feature, a personal identifiable-information-change feature, a linked-claim-dispute feature, a dispute history feature, or a merchant feature (claim 29).

The claims as above recite the abstract idea of detecting a potentially fraudulent transaction. The above claim limitations, under their broadest reasonable interpretation, cover performance of the limitation(s) as certain methods of organizing human activity, which include a fundamental economic principle or practice and/or a commercial/legal interaction.
As such, the limitations fall within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. (Step 2A - Prong 1: YES. The claims recite an abstract idea.)

The above referenced independent claim limitations do not integrate the abstract idea into a practical application, as those limitations simply apply generic computer components to the recited abstract idea, without more; they otherwise attempt to use a computer as a tool to perform the abstract idea. The CRM, processor, computing device, system, network, machine learning model, and fraud prediction(s) limitations of the independent claims are simply being applied as tools against the abstract idea, and they additionally do no more than generally link the abstract idea to computer technology. The recitation of a generic computer component(s) in a claim does not necessarily preclude that claim from reciting an abstract idea. The computer related limitations of the above analyzed claims are moreover all recited at a high level of generality (e.g., a generic computer performing a generic computer function). These additional elements do not integrate the articulated abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Other possible computer or non-computer related elements in the claims relevant to the present 35 USC 101 analysis include: dependent claim elements necessarily arising out of their dependency on the independent claims (CRM, processor, computing device, system, network, machine learning model, fraud prediction(s)); additional computer related elements found in the dependent claims themselves (an automated-clearing-house feature, a computing device feature, a shared device feature, a shared-internet-protocol-address feature, a failed login feature, and a linked-claim-dispute feature); non-computer related elements in the dependent claims (an account feature, a demand draft feature, a transaction feature, a peer-to-peer-payment feature, an identity verification feature, a customer-service-contact feature, a password reset feature, a clearing-house feature, a personal identifiable-information-change feature, a dispute history feature, or a merchant feature); and non-computer related elements in the independent claims (none).

The examiner notes that the sum total of all of the above detailed claim elements still fails to integrate the articulated abstract idea into a practical application, because those elements do not impose any meaningful limits on practicing said abstract idea, whether considered separately or as an ordered combination. Any additional computer related elements in the claims are set forth at a high level of generality. The claims are directed to an abstract idea without a practical application. (Step 2A - Prong 2: NO. The additionally claimed elements do not integrate the abstract idea into a practical application.)

The claims also lack additional elements sufficient to amount to significantly more than the abstract idea (judicial exception), either separately or as an ordered combination. This lack of significantly more than the judicial exception is also referred to as the claims lacking an "inventive concept." As discussed above with respect to integration into a practical application, the additional computer related elements, consisting here of the various pieces of computer hardware detailed above, amount to no more than mere instructions to perform the judicial exception by applying generic computers / computer components to it, and/or generally linking the abstract idea to computer technology. Mere instructions to perform a judicial exception by applying generic computer components, thereby automating the process, cannot provide an inventive concept. See MPEP 2106.05(f): applying a computer as a tool to the abstract idea is not indicative of significantly more. The above detailed non-computer related elements likewise fail to amount to significantly more, either separately or as an ordered combination, as they simply further limit the ways in which the abstract idea may be performed. (Step 2B: NO. The claims do not provide significantly more than the judicial exception.)

Claims 21 – 40 are not patent-eligible pursuant to 35 USC 101.

Allowable Subject Matter

Claims 21 – 40 would be allowable if rewritten or amended to overcome the rejections herein pursuant to 35 U.S.C. 101.
The following is a statement of reasons for the indication of allowable subject matter: while the claims' limitations most recently set forth herein may individually be disclosed by the prior art, the claims as a whole are not obvious because the examiner would have to improperly use their separate limitations as a road map to combine them.

CONCLUSION

The following prior art, made of record and not relied upon, is considered pertinent to applicant's disclosure. Please see attached form 892.

Beckman (US10872341B1) – Systems and methods for secondary fraud detection during transaction verifications are disclosed. A payment system may transmit a fraud protection notification to a user in response to potential fraud being detected as part of a primary transaction fraud detection process. In response to the user interacting with the fraud protection notification to confirm that the transaction was not fraudulent, the system may capture user device data from the user's device. The system may perform a secondary fraud detection process on the captured user device data to determine whether the verification of the transaction has a risk of being fraudulent.

Kramme (US20210374764A1) – In a computer-implemented method of facilitating a fraud dispute resolution process, types of information historically indicative of fraud (or its absence) may be identified by training a machine learning program using transaction data associated with financial transactions and fraud determinations for those transactions. An indication that fraud is suspected for a first transaction may be received, and transaction data may be retrieved. Based upon at least one of the identified types of information and the transaction data, a first set of one or more queries designed to ascertain whether the first transaction was fraudulent may be generated. The first set of queries may be transmitted to a remote computing device for display to the customer, and a first set of one or more customer responses may be received. Based upon the first set of customer responses, it may be determined whether the first transaction was fraudulent.

Ramakrishnan (US20200034842A1) – A system for predicting a non-fraud dispute using an artificial intelligence (AI) based communications system is disclosed. The system may comprise a data access interface to receive historical transaction and disputes data from at least one data source associated with an account issuer. The data access interface may also receive incoming transaction data associated with a transaction from at least one data source associated with an account holder. The system may comprise a processor to predict a likelihood of a non-fraud dispute associated with the transaction by: examining the historical transaction and disputes data; retrieving non-fraud dispute attributes; parsing the incoming transaction data; applying predictive analytics to the incoming transaction data to yield a prediction value; determining that the prediction value meets a predetermined threshold; and generating a prediction for the likelihood of a non-fraud dispute associated with the transaction, to be outputted via an output interface to a user device.

Kramme (US20210374753A1) – A method of identifying a potential chargeback scenario includes generating or updating chargeback candidate detection rules, at least by training a machine learning program. The machine learning program may be trained using transaction data associated with financial transactions, and using chargeback determinations, for the financial transactions, that were made in accordance with chargeback rules associated with a card network entity. The method also includes receiving an indication that fraud has been confirmed for a financial transaction associated with a merchant and a financial account, and retrieving transaction data associated with the financial transaction. The method may further include determining, by applying the chargeback candidate detection rules, that a chargeback may be warranted for the transaction, and causing an indication of such to be displayed to one or more people via one or more computing device user interfaces.

Pranav (US20220358507A1) – Embodiments provide methods and systems for predicting chargeback behavioral data of an account holder. The method performed by a server system includes accessing payment transaction data associated with the account holder from a transaction database. The payment transaction data includes a set of transaction indicators corresponding to payment transactions performed by the account holder within a predetermined time period. The method further includes generating a set of transaction features based on the set of transaction indicators. Furthermore, the method includes computing, via a chargeback risk prediction model, a set of chargeback risk probability scores corresponding to one or more time intervals associated with the account holder based, at least in part, on the set of transaction features. The method also includes transmitting a notification to an issuer server associated with the account holder based, at least in part, on the set of chargeback risk probability scores.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW COBB, whose telephone number is (571) 272-3850. The examiner can normally be reached 9 - 5, M - F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to call examiner Cobb as above, or to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Peter Nolan, can be reached at (571) 270-7016. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.

/MATTHEW COBB/
Examiner, Art Unit 3661

/PETER D NOLAN/
Supervisory Patent Examiner, Art Unit 3661
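For orientation, the training loop recited in independent claims 21, 30, and 36 (generate a fraud prediction, compare it to a fraud action label via a loss function, adjust the model parameters) and the three-threshold risk bucketing of claim 23 can be sketched in a few lines. This is a minimal illustration only: it assumes a logistic-regression stand-in for the fraud detection machine-learning model, and all feature names, feature values, and threshold values below are made up; the application as summarized here does not disclose a specific architecture or thresholds.

```python
import math

def predict(weights, bias, features):
    """Fraud prediction: probability that a dispute claim is fraudulent."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def update(weights, bias, features, fraud_action_label, lr=0.1):
    """Compare the fraud action label to the fraud prediction and adjust
    the parameters: one gradient step on binary cross-entropy loss."""
    p = predict(weights, bias, features)
    error = p - fraud_action_label  # gradient of the loss w.r.t. the logit
    weights = [w - lr * error * x for w, x in zip(weights, features)]
    return weights, bias - lr * error

def risk_level(p, low=0.25, moderate=0.50, high=0.75):
    """Claim 23-style comparison against low/moderate/high thresholds
    (threshold values are illustrative, not from the application)."""
    if p >= high:
        return "high"
    if p >= moderate:
        return "moderate"
    if p >= low:
        return "low"
    return "minimal"

# Toy per-claim features, e.g. [failed logins, password resets,
# linked dispute claims] -- purely illustrative numbers.
labeled_claims = [([3.0, 2.0, 1.0], 1.0),   # verified fraudulent
                  ([0.0, 0.0, 0.0], 0.0)]   # verified not fraudulent
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(200):
    for feats, label in labeled_claims:
        w, b = update(w, b, feats, label)
```

After training, the model scores the fraud-like feature vector above 0.5 and the clean one below it, and `risk_level` maps the score to the claim 23 buckets.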

Prosecution Timeline

Jun 24, 2024: Application Filed
Jan 15, 2025: Response after Non-Final Action
Nov 08, 2025: Non-Final Rejection — §101
Dec 16, 2025: Interview Requested
Jan 20, 2026: Applicant Interview (Telephonic)
Jan 20, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602728: CONNECTED HOME SYSTEM WITH RISK UNITS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597010: Machine Learning Model for Combining Device Presence Data with Physical Venue Transaction Data (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586073: Controlling Transactions on a Network (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586045: PAYMENT PROCESSING METHOD AND APPARATUS USING AN INTERMEDIARY PLATFORM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12583361: BATTERY TEMPERATURE CONTROL METHOD AND BATTERY TEMPERATURE CONTROL SYSTEM (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72% (99% with interview, +36.2%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 198 resolved cases by this examiner. Grant probability derived from career allow rate.
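The 72% headline appears to be the simple ratio of granted to resolved cases shown in the examiner statistics above, rounded to the nearest percent. A quick check of that assumption:

```python
# Assumed derivation of the headline grant probability: granted / resolved,
# rounded to the nearest whole percent. 142 and 198 are the figures
# reported on this page for examiner Cobb's career.
granted, resolved = 142, 198
allow_rate_pct = round(100 * granted / resolved)
print(allow_rate_pct)  # 71.71... rounds to 72
```

The interview-adjusted 99% figure is not reproducible from the numbers shown here; it presumably comes from the subset of resolved cases that included an interview.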
