Prosecution Insights
Last updated: April 19, 2026
Application No. 18/977,355

SYSTEMS AND METHODS FOR TRAINING AND APPLYING MACHINE LEARNING SYSTEMS IN FRAUD DETECTION

Non-Final OA: §101, §103
Filed: Dec 11, 2024
Examiner: PINSKY, DOUGLAS W
Art Unit: 3626
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: The PNC Financial Services Group, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 26% (At Risk)
OA Rounds: 1-2
To Grant: 2y 12m
With Interview: 41%

Examiner Intelligence

Career Allow Rate: 26% (29 granted / 112 resolved; -26.1% vs TC avg)
Interview Lift: +15.5% (based on resolved cases with interview)
Avg Prosecution: 2y 12m
Currently Pending: 39
Total Applications: 151 (across all art units)

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 31.2% (-8.8% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 26.8% (-13.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 112 resolved cases

Office Action

§101 §103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Acknowledgments

The application and preliminary amendment filed on 12/11/2024 and the IDSs filed on 02/13/2025, 11/05/2025, and 01/23/2026 are acknowledged.

Status of Claims

Claims 22-41 are pending. In the preliminary amendment filed on 12/11/2024, claims 1-21 were cancelled and claims 22-41 were added. Claims 22-41 are rejected.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) and/or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Specifically, Applicant claims priority under 35 U.S.C. 120 to U.S. application no. 18/189,952, filed on 03/24/2023, and under 35 U.S.C. 119(e) to provisional application no. 63/404,868, filed on 09/08/2022.

Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 and 35 U.S.C. 119(e), as follows: the later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or earlier-filed nonprovisional application or provisional application for which benefit is claimed). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed applications, U.S. Patent Application No. 18/189,952 and provisional application no. 63/404,868, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application.
Specifically, claims 23 and 33 recite: "wherein the three-digit risk indicator is generated based on a comparison with at least one threshold." Adequate support or enablement for the quoted language is not found in the prior-filed applications.

As for U.S. Patent Application No. 18/189,952, as best understood, the portions of the disclosure most closely related to the quoted language are the specification at 0031 and 0054-0056. 0031 teaches that "risk indicators represent thresholds," not that they are generated based on comparison with a threshold. 0031 teaches that "the term 'indicator' may refer to a value corresponding to the risk of fraud," that "an indicator is an alternative terminology that may describe risk scoring," and that "if the risk indicator is [above/below] a [certain] threshold, then the risk indicator may be assigned as [high/low] risk." As such, 0031 teaches that the already generated risk indicator is compared with a threshold, not that the risk indicator is generated based on a comparison with a threshold. 0056 teaches that "alert 1023 may be generated based on a comparison of the risk indicators to one or more thresholds." 0054-0055 do not disclose subject matter more relevant than that noted above. Thus, per the most closely related portions of the disclosure described above, no subject matter is seen in U.S. Patent Application No. 18/189,952 suggesting that a risk indicator is generated based on a comparison with a threshold.

As for provisional application no. 63/404,868, as best understood, the portion of the disclosure most closely related to the quoted language is the specification at 004, which states in pertinent part: "In some embodiments, financial transactions are rated with a three-digit risk score by the model." As explained above, no disclosure is seen in provisional application no. 63/404,868 suggesting that a risk indicator is generated based on a comparison with a threshold.

Specifically, claims 26 and 36 recite: "wherein the machine learning model is periodically tuned based on at least one of the transactional data, the customer characteristic, or the historical data." Similarly, claims 28 and 38 recite: "wherein the machine learning model is periodically tuned based on information associated with at least one of the processed action of the user, the probability, the three-digit risk indication, or the outcome." Adequate support or enablement for the quoted language is not found in the prior-filed applications.

As for U.S. Patent Application No. 18/189,952, as best understood, the portion of the disclosure most closely related to the quoted language is the specification at 0071. 0071 teaches tuning the machine learning model but does not teach or suggest that the tuning is performed periodically. Thus, per the most closely related portion of the disclosure described above, no subject matter is seen in U.S. Patent Application No. 18/189,952 suggesting tuning the machine learning model periodically.

As for provisional application no. 63/404,868, as best understood, the portions of the disclosure most closely related to the quoted language are the specification at 001 and 009. 001 and 009 teach tuning the machine learning model, including auto tuning, but do not teach or suggest that tuning is performed periodically. As explained above, no disclosure is seen in provisional application no. 63/404,868 suggesting tuning the machine learning model periodically.

Applicant states that this application is a continuation application of the prior-filed application. A continuation application cannot include new matter.
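The distinction the examiner draws above, between a risk indicator that is generated first and only afterward compared with a threshold, and one whose generation is itself based on a threshold comparison, can be made concrete in code. This is a hypothetical sketch: the function names, the 0-to-999 scaling, and the branching logic are illustrative assumptions, not taken from the application.

```python
# Hypothetical sketch; names, scaling, and logic are illustrative assumptions.

def indicator_then_threshold(probability: float, threshold: int = 500):
    """Pattern the specification (paras. 0031, 0056) is said to teach:
    the risk indicator is generated first, and the threshold comparison
    happens afterward (to assign a label or trigger an alert)."""
    indicator = round(probability * 1000)  # generate the indicator
    label = "high risk" if indicator > threshold else "low risk"  # then compare
    return indicator, label

def indicator_from_threshold(probability: float, threshold: float = 0.5) -> int:
    """Pattern claims 23 and 33 recite: the indicator's value is itself
    generated based on a comparison with a threshold."""
    if probability >= threshold:  # the comparison comes first
        return 500 + round((probability - threshold) * 1000)
    return round(probability * 1000)
```

In the first pattern the threshold never influences the indicator's value, only how it is labeled; in the second, the same probability can yield a different indicator under a different threshold, which is the reading the examiner finds unsupported in the parent disclosures.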
Applicant is required to delete the benefit claim or change the relationship (continuation application) to continuation-in-part because this application contains subject matter not disclosed in the prior-filed application, as per claims 23, 26, 28, 33, 36 and 38, as explained above.

Drawings/Specification

The specification and the drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because:

- In respect of Fig. 17A, specification paragraph 0083 refers to "graph 1703A" and "graph 1700," thus using two different reference numerals to refer to the same singular item. Also, as per Fig. 17A, it would appear that both reference numerals 1700 and 1703A refer to the graph, thus using two different reference numerals to refer to the same singular item.
- Specification paragraph 0085 refers to "y-axis 1703A," but specification paragraph 0083 refers to the "y-axis 1702A," thus using two different reference numerals to refer to the same singular item.
- Specification paragraph 0085 refers to "y-axis 1703A," but specification paragraph 0083 refers to "graph 1703A," thus using the same reference numeral to refer to two different items.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) and amendment to the specification, as applicable, are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 22-41 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 22-41 are directed to a computer-implemented method, computing system, or non-transitory computer-readable medium, each of which falls within one of the statutory categories of invention. (Step 1: YES)

Claims 22, 32 and 41 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a computer-implemented method, computing system, and non-transitory computer-readable medium for determining a likelihood of fraud of a transaction (see specification 0043-0044 for clarification of the term "class"). For claims 22, 32 and 41 (claim 32 being deemed representative), the limitations of:

at least one processor configured to: receive, by the at least one processor, a processed action of a user; determine, by a machine learning model, a probability that the processed action belongs to a class; based on the probability, generate, using the machine learning model, a risk indicator, wherein the risk indicator is a three-digit number associated with unauthorized activity; and predict, using the machine learning model, an outcome based on the three-digit risk indicator.
as drafted, constitute a process that, under the broadest reasonable interpretation, covers "certain methods of organizing human activity," specifically, "fundamental economic practices or principles" and/or "commercial or legal interactions," but for recitation of generic computer components and generally linking the use of a judicial exception to a particular technological environment or field of use. The Examiner notes that "fundamental economic practices" or "fundamental economic principles" describe concepts relating to the economy and commerce, including hedging, insurance, and mitigating risks, and "commercial interactions" or "legal interactions" include agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. MPEP 2106.04(a)(2)II.A.,B. If a claim limitation, under its broadest reasonable interpretation, covers "fundamental economic practices or principles" and/or "commercial or legal interactions," but for recitation of generic computer components and generally linking the use of a judicial exception to a particular technological environment or field of use, then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, claims 22, 32 and 41 recite an abstract idea. (Step 2A - Prong 1: YES. The claims recite an abstract idea.) This judicial exception is not integrated into a practical application. 
Claims 22, 32 and 41 recite the additional elements of at least one processor, a machine learning model (the foregoing recited in claims 22, 32 and 41), and a non-transitory computer-readable medium storing a set of instructions for identifying unauthorized activity in a computing system including at least one processor, the set of instructions comprising one or more instructions that, when executed by one or more processors of the computing system, cause the computing system to [perform operations] (the foregoing recited in claim 41), that implement the abstract idea. These additional elements are not described by the applicant and they are recited at a high level of generality (i.e., one or more generic computer elements performing generic computer functions, or generally linking the use of a judicial exception to a particular technological environment or field of use), such that they amount to no more than mere instructions to apply the exception using generic computer elements (namely, at least one processor, a machine learning model, and a non-transitory computer-readable medium storing a set of instructions for identifying unauthorized activity in a computing system including at least one processor, the set of instructions comprising one or more instructions that, when executed by one or more processors of the computing system, cause the computing system to [perform operations]), or such that they amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (namely, a machine learning model). Accordingly, even in combination these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (Step 2A - prong 2: NO. The additional elements do not integrate the abstract idea into a practical application.) 
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception itself. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of at least one processor, a machine learning model (the foregoing recited in claims 22, 32 and 41), and a non-transitory computer-readable medium storing a set of instructions for identifying unauthorized activity in a computing system including at least one processor, the set of instructions comprising one or more instructions that, when executed by one or more processors of the computing system, cause the computing system to [perform operations] (the foregoing recited in claim 41), to perform the noted steps amount to no more than mere instructions to apply the exception using generic computer elements or generally linking the use of a judicial exception to a particular technological environment or field of use. Mere instructions to apply an exception using generic computer elements or generally linking the use of a judicial exception to a particular technological environment or field of use cannot provide an inventive concept ("significantly more"). Accordingly, even in combination, these additional elements do not provide significantly more. As such, claims 22, 32 and 41 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.) Dependent claims 23-31 and 33-40 are similarly rejected because they further define/narrow the abstract idea of independent claims 22, 32 and 41 as discussed above, and/or do not integrate the abstract idea into a practical application or provide an inventive concept such as would render the claims eligible, whether each is considered individually or as an ordered combination. 
As for further defining/narrowing the abstract idea:

Dependent claims 23 and 33 merely further describe wherein the three-digit risk indicator is generated based on a comparison with at least one threshold.
Dependent claims 24 and 34 merely further describe wherein the at least one threshold is determined by the … model based on at least one of a deposit type, a deposit amount, a deposit location, or a fraud history.
Dependent claims 25 and 35 merely further describe wherein the probability that the processed action belongs to the class is based on at least one of transactional data, a customer characteristic, or historical data.
Dependent claims 26 and 36 merely further describe wherein the … model is periodically tuned based on at least one of the transactional data, the customer characteristic, or the historical data.
Dependent claims 27 and 37 merely further describe wherein the three-digit risk indicator is derived from a model probability.
Dependent claims 28 and 38 merely further describe wherein the … model is periodically tuned based on information associated with at least one of the processed action of the user, the probability, the three-digit risk indication, or the outcome.
Dependent claims 29 and 39 merely further describe wherein the … model is benchmarked based on a relative precision.
Dependent claims 30 and 40 merely further describe wherein the class is generated by the … model.
Dependent claim 31 merely further describes wherein the processed action of the user is enriched in real time.

As for additional elements:

Dependent claims 24, 26, 28-30, 34, 36 and 38-40 recite "machine learning." This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element or generally linking the use of a judicial exception to a particular technological environment or field of use.
Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself. Dependent claims 23, 25, 27, 31, 33, 35 and 37 do not recite any additional elements, and accordingly, for the reasons provided above with respect to the independent claims, are not patent eligible. Therefore, dependent claims 23-31 and 33-40 are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 22, 25, 27, 30, 32, 35, 37, 40 and 41 are rejected under 35 U.S.C. 103 as being unpatentable over Beckman et al. (U.S. Patent No. 10,872,341 B1), hereafter Beckman, in view of Poduval et al. (U.S. Patent Application Publication No. 2022/0358507 A1), hereafter Poduval.

Regarding Claims 22, 32 and 41

Beckman teaches:

(claim 22) the method being performed by the at least one processor and comprising: (4:20-48 payment system 130 may include processor(s), software, etc. to perform operations; 5:66-6:20 risk assessment engine 150 may include processor(s), software, etc. to perform operations)

(claim 32) at least one processor configured to: (4:20-48 payment system 130 may include processor(s), software, etc. to perform operations; 5:66-6:20 risk assessment engine 150 may include processor(s), software, etc. to perform operations)

(claim 41) A non-transitory computer-readable medium storing a set of instructions for identifying unauthorized activity in a computing system including at least one processor, the set of instructions comprising: one or more instructions that, when executed by one or more processors of the computing system, cause the computing system to: (4:20-48 payment system 130 may include processor(s), software, etc. to perform operations, including as per 4:49-5:7 performing authorization and authentication; 5:66-6:20 risk assessment engine 150 may include processor(s), software, etc. to perform operations, including as per 6:21-58 determining whether communications/transactions are fraudulent)

receive, by the at least one processor, a processed action of a user; (8:13-17, Fig. 2, 204 payment system 130 receives transaction authorization request)

determine, by a machine learning model, a probability that the processed action belongs to a class; (8:13-43, Fig.
2, 208 payment server 130 "[determines] whether the transaction may be fraudulent," "using any suitable technique and fraud detection process," based on risk factors as described at 8:20-27. The result of this process is a determination that the transaction is not fraudulent (8:28-37) or a determination that the transaction is fraudulent (8:38-64), i.e., a determination that the processed action belongs to a class, either the class of fraudulent transactions or the class of not fraudulent transactions. As the determination is made by any suitable fraud detection process, e.g., based on the risk factors at 8:20-27, it is understood that such a determination is a determination of a probability, not an absolute determination (e.g., the fact that "the account … has been recently flagged for fraud" or not is a risk factor that makes it more likely that the transaction is fraudulent or not; such factors do not yield absolute determinations). Nonetheless, since it is not explicitly stated that the determination of belonging to a class is a determination of a probability of belonging to a class, it would be obvious to combine embodiments and incorporate a probabilistic determination such as (1) that described as capable of being performed by transaction verification service 140 (5:11-26 "Transaction verification service 140 may comprise software-based services, APIs, SDKs, or the like configured to perform various fraud detection operations, as discussed further herein. For example, transaction verification service 140 may comprise a mobile device authentication and fraud prevention software solution, such as the INAUTH SECURITY PLATFORM™ offered by InAuth, Inc. The mobile device authentication and fraud prevention software solution may provide additional fraud detection services, such as, for example, the generation of a fraud score based on captured user device data. 
The fraud score may be configured to provide an analysis of the likelihood that user device 110, or an email account for user 101, has been compromised by a third party.") or again such as (2) that described as being performed by risk assessment engine 150, based on comparable risk factors (6:21-7:44, 9:58-10:33 e.g., "the captured device data may be input into the statistical model, the machine learning model, or the artificial intelligence model to determine a risk of fraud. … Based on the data consumption [e.g., device data, historical transaction fraud data, non-device related attributes], the model may be leveraged to predict whether the verification is coming from a fraudulent device."), because determinations as to whether a transaction is fraudulent, based on risk data, in the context of a payment system/payment processor, are generally probability determinations, as such determinations do not generally admit of absolute certainty, rather a probability/likelihood is a more reliable/plausible determination, hence affording more accurate, useful, and effective, fraud detection and mitigation/prevention; regarding by a machine learning model: 6:59-76 "risk assessment engine 150 may implement statistical models, machine learning, artificial intelligence, and the like to aid in identifying possible fraud. In that regard, the captured user device data may be input into the statistical model, the machine learning model, or the artificial intelligence model to determine a risk of fraud."; 15:7-16 "any of the operations may be conducted or enhanced by … machine learning") based on the probability, generate, using the machine learning model, a risk indicator, wherein the risk indicator is … associated with unauthorized activity; and (9:58-10:33, Fig. 
3, 318-324; the risk indicator is the output/fraud determination classification of "low risk," "medium risk" or "high risk," as taught by 7:7-30, see also 10:34-11:3; regarding based on the probability: note that step 318 (determining a secondary fraud risk (e.g., 9:61-67), based on captured device data, historical data, and non-device attributes (e.g., 10:25-32)) (generating a risk indicator) is based on step 208 (Fig. 2) (determining whether the transaction is fraudulent) (determining a probability that the processed action belongs to a class), via a series of intermediate steps (namely, Fig. 2, 212, 214, Fig. 3, 302-306, 314, 316); regarding using the machine learning model: 6:59-76, 15:7-16 see quotations in previous bullet point immediately above)

predict, using the machine learning model, an outcome based on the … risk indicator. (2:54-58, 7:7-30, 10:30-11:3 the fraud determination classification, e.g., "high risk" or "low risk," constitutes a prediction that the transaction is fraudulent or not fraudulent and/or the user's device/email/account/etc. has been compromised) (compare the description of "outcome" in Applicant's specification, see 0057-0058 (Fig. 10); 0059, 0086 (Fig. 11))

Beckman does not explicitly disclose that the risk indicator is a three-digit number, but Poduval teaches:

… wherein the risk indicator (chargeback (fraud) risk probability score) is a three-digit number associated with unauthorized activity; and (0097; note that although Poduval refers to "chargeback risk probability score" and the like terminology, Poduval's disclosure (and terminology) are deemed to deal with (and refer to) -- and in any event are applicable to -- fraud risk determination, in view of Poduval's teachings, such as: 0034 "One of the most common reasons for the chargeback is fraud."; 0037 "The set of transaction indicators includes, … fraud risk features, ….
" (note the transaction indicators are used to generate the transaction features that are inputted into the machine learning models to predict chargeback/fraud, see 0037, 0067-0075, 0079-0080, 00083, 0087, 0108, 0111-0112); 0038 "Furthermore, the server system is configured to implement or run a chargeback risk prediction model to compute a set of chargeback risk probability scores corresponding to one or more time intervals associated with the account holder based, at least in part, on the set of transaction features. … Moreover, the server system is configured to transmit a notification to an issuer server associated with the account holder based, at least in part, on the set of chargeback risk probability scores. In an example, the issuer server may analyze the set of chargeback risk probability scores to perform one or more downstream tasks (e.g., prediction of fraudulent payment transactions, etc.)."; 0070 "The set of transaction features may be determined from or engineered from the payment transaction data of the past payment transactions. … The set of transaction features includes at least one of: spend transaction features, merchant features of a plurality of merchants involved in the payment transactions, and fraud risk features. … In an example, the fraud risk features are generated based on the payment transactions performed due to fraud."; 0073 "In one example, the fraud risk features may include total fraud amount for card-not-present cross-border payment transactions performed in 1 month, 3 months, and so on, chargeback amount for fraudulent payment transactions performed at a merchant in 1 month, 3 months, and so on, and the like.") predict, using the machine learning model, an outcome based on the three-digit risk indicator. 
(0097, 0109-0111 the chargeback risk probability score indicates a percentage, e.g., 0097 "the chargeback risk probability score of 876 for the time interval 0-12 months indicates that there is a probability of 87.6% that the account holder 104 will raise the chargeback request in the next 12 months." -- this teaches predicting a chargeback request with an 87.6% probability (outcome); or 0109 "a chargeback risk probability score for fourth time interval i.e., next 12 months is calculated as 990. If a threshold value for the chargeback risk probability score is 800, the first account holder is expected to raise chargeback requests within the time interval 0-12 months." -- this teaches predicting a chargeback request (outcome); alternatively, if the chargeback risk probability score classifies an account holder as risky, i.e., risk score exceeds threshold (as per the example from 0109 given here) (see Fig. 3, 226, 310, 312 for context), then, per 0039-0040, 0091, 0103-0104, 0111-0112 and Fig. 3, 312, 314, chargeback amount prediction model 228 predicts probable chargeback amount band and corresponding chargeback risk level (outcome); regarding using the machine learning model: 0033, 0038, 0040, 0075, 0084, 0108, 0112, 0124, 0132 machine learning model, GBDT model)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Beckman's systems and methods for determining fraud risk, by incorporating therein these teachings of Poduval regarding use of a three-digit number as a risk score and predicting an outcome based thereon, because it would provide for more comprehensive, fine-tuned risk scoring and consequent responsive actions compared to Beckman (Beckman merely teaches a tripartite risk classification of "high risk," "medium risk," and "low risk" and mentions in another context numerical scores). This more comprehensive, fine-tuned process would allow for more precise risk scores/classifications and thus would be more likely to treat transactions appropriately according to their actual risk level, leading to more effective and satisfactory outcomes, and thus amounts to an improvement upon Beckman.

Regarding Claims 25 and 35

Beckman in view of Poduval teaches the limitations of base claims 22 and 32 as set forth above. Beckman further teaches: wherein the probability that the processed action belongs to the class is based on at least one of transactional data, a customer characteristic, or historical data. (8:13-27 determination as to whether transaction is fraudulent (determination of probability that the processed action belongs to the class) may be based on "data regarding the transaction account associated with user 101 to check if the account is active, has been recently flagged for fraud, and/or the like. Payment system 130 may retrieve recent purchases and determine whether the geographical codes align with the geographical code of the transaction authorization request (e.g., user 101 purchases goods in Spain and Brazil on the same day)" -- this data teaches transactional data, a customer characteristic, or historical data.)

Regarding Claims 27 and 37

Beckman in view of Poduval teaches the limitations of base claims 22 and 32 as set forth above. Poduval further teaches: wherein the three-digit risk indicator is derived from a model probability. (0097 "In one implementation, each chargeback risk probability score is a three-digit numeric value ranging from 001 to 999, indicative of the probability of chargeback to be experienced for the future payment transactions to be performed by the account holders. In an example, the chargeback risk probability score of 234 for the time interval 0-6 months indicates that there is a probability of 23.4% that the account holder 104 will raise the chargeback request in the next 6 months.
In another example, the chargeback risk probability score of 876 for the time interval 0-12 months indicates that there is a probability of 87.6% that the account holder 104 will raise the chargeback request in the next 12 months." -- these scores (risk indicators) 234 and 876 are merely representations of the probabilities of 23.4% and 87.6%, respectively; regarding a model probability: as per 0033, 0038, 0040, 0075, 0084, 0108, 0124, 0132, the probability is determined by a machine learning/statistical/AI/GBDT model, hence the probability is a model probability) It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Beckman's systems and methods for determining fraud risk, as modified by Poduval's teachings regarding use of a three-digit number as a risk score and predicting an outcome based thereon, by incorporating therein these further teachings of Poduval regarding the three-digit number being derived from a model probability, because it is appropriate for a fraud detection/prevention system, and will yield proper results, if the risk score / risk of fraud represents, and hence is derived from, the probability of fraud, and because where the fraud detection/prevention system uses a model to determine probability / risk of fraud, the probability will be generated by the model. Regarding Claims 30 and 40 Beckman in view of Poduval teaches the limitations of base claims 22 and 32 as set forth above. Beckman further teaches: wherein the class is generated by the machine learning model. (8:13-64, Fig. 2, 208 payment server 130 "[determines] whether the transaction may be fraudulent," "using any suitable technique and fraud detection process," based on risk factors as described at 8:20-27. 
The output of this process is a determination that the transaction is not fraudulent (8:28-37) or a determination that the transaction is fraudulent (8:38-64), i.e., an assignment of the transaction to a class (either the class of fraudulent transactions or the class of not fraudulent transactions), in other words, the process outputs or generates the class (or generates the output, which is the class) to which the transaction is assigned; regarding by the machine learning model: 6:59-76 "risk assessment engine 150 may implement statistical models, machine learning, artificial intelligence, and the like to aid in identifying possible fraud. In that regard, the captured user device data may be input into the statistical model, the machine learning model, or the artificial intelligence model to determine a risk of fraud."; 15:7-16 "any of the operations may be conducted or enhanced by … machine learning") Claims 23, 24, 26, 28, 33, 34, 36 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Beckman et al. (U.S. Patent No. 10,872,341 B1), hereafter Beckman, in view of Poduval et al. (U.S. Patent Application Publication No. 2022/0358507 A1), hereafter Poduval, and further in view of Melul et al. (U.S. Patent Application Publication No. 2022/0198470 A1), hereafter Melul. Regarding Claims 23 and 33 Beckman in view of Poduval teaches the limitations of base claims 22 and 32 as set forth above. Poduval further teaches: … the three-digit risk indicator …. (0097) Beckman in view of Poduval does not explicitly disclose but Melul teaches: wherein the … risk indicator is generated based on a comparison with at least one threshold. (0022 Melul incorporates by reference Amitai (U.S. Patent Application Publication No. 2022/0044248 A1; U.S. Application No. 
16/985,773); Amitai, 0026-0034, teaches that the (fraud risk) score (risk indicator) for a transaction is optimized/minimized (generated) (0031-0033) based on comparison with a threshold (0033, the score is determined based on the inequalities shown here, which are a comparison with a threshold T and a comparison with a threshold of 0); under broadest reasonable interpretation, the optimizing/minimizing teaches "is generated") It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Beckman's systems and methods for determining fraud risk, as modified by Poduval's teachings regarding use of a three-digit number as a risk score and predicting an outcome based thereon, by incorporating therein these teachings of Melul/Amitai regarding a model tuning process, including optimizing risk scores, that serves to increase model accuracy and quality by minimizing false negatives and positives, because incorporating such a model tuning process, including optimizing risk scores, would improve model accuracy/quality and hence model results and, as such, would improve upon a system/method that does not perform such tuning. Melul, 0023 (Amitai, 0032, 0043, 0048; note Amitai, 0043, 0048 are continuations/elaborations of the content of Amitai, 0026-0034, cited in the rejection, as explained at Amitai, 0034 ("The tuning software 103 is described in more detail in the discussion of FIG. 2 below [namely, 0040-0049].")). Regarding Claims 24 and 34 Beckman in view of Poduval and Melul teaches the limitations of base claims 22 and 32 and intervening claims 23 and 33 as set forth above. Melul further teaches: wherein the at least one threshold is determined by the machine learning model based on at least one of a deposit type, a deposit amount, a deposit location, or a fraud history. (0022 Melul incorporates by reference Amitai (U.S. Patent Application Publication No. 
2022/0044248 A1; U.S. Application No. 16/985,773); Amitai, 0026-0034, 0040-0049: Amitai, 0026, teaches that a rule set has thresholds. As such, different rule sets have different thresholds, and changing a rule set changes the thresholds. Changing a threshold amounts to setting or determining a new threshold. Amitai, 0028, 0031, teaches that a draft rule set is refined, by tuning the model, and so transformed into a model rule set. (Alternatively, Amitai, 0048, teaches that a new rule set may be adopted in the model tuning process.) Therefore, Amitai teaches that thresholds are changed, i.e., new thresholds are determined, by tuning the model. Further, per Amitai, 0040-0049 (describing Fig. 2), tuning the model includes calculating an "accuracy metric … by calculating the percentage of False Positives and False Negatives" (0043, 0048) -- as such, since the false positives and negatives constitute an aspect of the fraud history, the tuning of the model is based on fraud history. Thus, the transformation of the rules, e.g., of the draft rules into model rules, and the concomitant changing of thresholds or determination of new thresholds, is based on a fraud history. 
Regarding by the machine learning model: Melul 0050 "the constant threshold [i.e., the threshold to which confidence (fraud risk) scores are compared] is adjusted through machine learning techniques"; note, under broadest reasonable interpretation, adjustment of a threshold teaches setting or determining a new threshold) It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Beckman's systems and methods for determining fraud risk, as modified by Poduval's teachings regarding use of a three-digit number as a risk score and predicting an outcome based thereon, and as further modified by Melul's/Amitai's teachings regarding a model tuning process, including optimizing risk scores, that serves to increase model accuracy and quality by minimizing false negatives and positives, by incorporating therein these further teachings of Melul/Amitai regarding a model tuning process, including determining new thresholds based on a fraud history, that serves to increase model accuracy and quality by minimizing false negatives and positives, because, as per Melul's/Amitai's previous teachings, incorporating such a model tuning process, including determining new thresholds based on a fraud history, would improve model accuracy/quality and hence model results and, as such, would improve upon a system/method that does not perform such tuning. Melul, 0023 (Amitai, 0032, 0043, 0048; note Amitai, 0043, 0048 are continuations/elaborations of the content of Amitai, 0026-0034, cited in the rejection, as explained at Amitai, 0034 ("The tuning software 103 is described in more detail in the discussion of FIG. 2 below [namely, 0040-0049].")). Regarding Claims 26 and 36 Beckman in view of Poduval teaches the limitations of base claims 22 and 32 and intervening claims 25 and 35 as set forth above. 
Beckman in view of Poduval does not explicitly disclose but Melul teaches: wherein the machine learning model is periodically tuned based on at least one of the transactional data, the customer characteristic, or the historical data. (0022 "… once the artificial neural network model is generated, transactions seen on the rail 106 [the transactional data] are used to tune the production model, …. In some cases, … the production model 104 is re-tuned 103 periodically.", 0023 "The model tuning software 103 outputs a production model 104 that is tuned by the latest transaction received from the rail 106 [the transactional data].") It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Beckman's systems and methods for determining fraud risk, as modified by Poduval's teachings regarding use of a three-digit number as a risk score and predicting an outcome based thereon, by incorporating therein these teachings of Melul regarding periodically tuning a machine learning model used to detect fraudulent transactions, because periodically tuning a machine learning model is necessary to keep the model up to date with the latest transaction data and therefore periodic tuning maintains the good performance of the model and the accuracy of the model results and, as such, would improve upon a system/method that does not perform tuning. Melul, 0023. Regarding Claims 28 and 38 Beckman in view of Poduval teaches the limitations of base claims 22 and 32 as set forth above. Beckman in view of Poduval does not explicitly disclose but Melul teaches: wherein the machine learning model is periodically tuned based on information associated with at least one of the processed action of the user, the probability, the three-digit risk indication, or the outcome. 
(0022 "… once the artificial neural network model is generated, transactions seen on the rail 106 [the processed action of the user] are used to tune the production model, …. In some cases, … the production model 104 is re-tuned 103 periodically.", 0023 "The model tuning software 103 outputs a production model 104 that is tuned by the latest transaction received from the rail 106 [the processed action of the user].") It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Beckman's systems and methods for determining fraud risk, as modified by Poduval's teachings regarding use of a three-digit number as a risk score and predicting an outcome based thereon, by incorporating therein these teachings of Melul regarding periodically tuning a machine learning model used to detect fraudulent transactions, because periodically tuning a machine learning model is necessary to keep the model up to date with the latest transaction data and therefore periodic tuning maintains the good performance of the model and the accuracy of the model results and, as such, would improve upon a system/method that does not perform tuning. Melul, 0023. Claims 29 and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Beckman et al. (U.S. Patent No. 10,872,341 B1), hereafter Beckman, in view of Poduval et al. (U.S. Patent Application Publication No. 2022/0358507 A1), hereafter Poduval, and further in view of Lu (U.S. Patent Application Publication No. 2019/0197442 A1). Regarding Claims 29 and 39 Beckman in view of Poduval teaches the limitations of base claims 22 and 32 as set forth above. Beckman in view of Poduval does not explicitly disclose but Lu teaches: wherein the machine learning model is benchmarked based on a relative precision. 
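Applicant's specification (paragraph 0086) defines "relative precision" as the number of true positives divided by the sum of the true positives and false negatives — the same formula the art calls recall. A minimal sketch of that metric, with illustrative names only:

```python
def relative_precision(true_positives: int, false_negatives: int) -> float:
    """Recall, i.e., Applicant's "relative precision": TP / (TP + FN)."""
    if true_positives + false_negatives == 0:
        raise ValueError("no actual positives to evaluate against")
    return true_positives / (true_positives + false_negatives)
```

For example, a model that catches 87 of 100 actual fraudulent transactions has a relative precision (recall) of 0.87, regardless of how many false positives it raises.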
(0054 "data preprocessor 210 includes implementation of … performance benchmarking techniques to compare the performance measures such as accuracy, precision and recall"; note Applicant's specification 0086 defines "relative precision" as "[t]he number of true positives divided by the sum of the true positives and false negatives"; Lu 0054 teaches, inter alia, benchmarking based on "recall"; the definition of "recall" in the art is the same as Applicant's definition of "relative precision"1; therefore, Lu's teaching of "recall" teaches Applicant's recitation of "relative precision") It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Beckman's systems and methods for determining fraud risk, as modified by Poduval's teachings regarding use of a three-digit number as a risk score and predicting an outcome based thereon, by incorporating therein these teachings of Lu regarding benchmarking a model based on performance measures such as accuracy, precision and recall, because it would improve model performance/results by having the model meet a standard/criterion/threshold. Claim 31 is rejected under 35 U.S.C. 103 as being unpatentable over Beckman et al. (U.S. Patent No. 10,872,341 B1), hereafter Beckman, in view of Poduval et al. (U.S. Patent Application Publication No. 2022/0358507 A1), hereafter Poduval, and further in view of Chisholm (U.S. Patent Application Publication No. 2014/0351137 A1). Regarding Claim 31 Beckman in view of Poduval teaches the limitations of base claim 22 as set forth above. Beckman in view of Poduval does not explicitly disclose but Chisholm teaches: wherein the processed action of the user is enriched in real time. (0042, 0101, claims 8 and 14; regarding in real time: 0024, 0041, 0042, 0044, 0049, 0054, 0101, the enrichment is performed by the decisioning platform (e.g., 0042, 0101, Fig. 
4, preprocessor 204), which performs real-time fraud scoring/fraud prediction of transactions, such that the fraud scoring/fraud prediction can be and is used to decide whether to approve or decline a transaction (the candidate transaction currently being evaluated by the system (0049)) in real time) It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Beckman's systems and methods for determining fraud risk, as modified by Poduval's teachings regarding use of a three-digit number as a risk score and predicting an outcome based thereon, by incorporating therein these teachings of Chisholm regarding enriching a transaction in real time to facilitate a process of determining whether a transaction is fraudulent, based on the following reasoning: Beckman (e.g., 10:25-31) teaches using other data (risk factors) in assessing risk/generating a risk score, which other data (risk factors) include data comparable to Chisholm's historical and cardholder data with which a transaction is enriched, but Beckman does not teach "enriching" the transaction (the transaction record that is being analyzed for risk of fraud) with this other data (risk factors). Thus, while Beckman uses this other data (risk factors) in performing the risk assessment, Beckman does not provide implementation detail as to how this other data (risk factors) (e.g., "historical data and non-device related attributes" (10:25-31)) is obtained so that it is available for use in performing the fraud risk assessment. However, Chisholm's enrichment process provides implementation detail appropriate to implement this under-specified aspect of Beckman's systems and methods, as Chisholm's enrichment constitutes a known way of having other supplementary data at hand together with the primary data under test, for use in analyzing the primary data, as Beckman requires. 
In addition, the combination (incorporation of Chisholm's teaching of enrichment into Beckman) would have predictable results, e.g., the enrichment can be incorporated into Beckman in a mechanical-like manner without adversely affecting any other relevant aspects of Beckman. Thus, combining Chisholm's teaching of enrichment with Beckman provides implementation detail (namely, for having the risk factors at hand for use in the risk analysis) that permits Beckman to actually perform its intended function of assessing fraud risk of transactions so as to allow/provide for appropriate remedial actions. Note although Beckman's operations are not explicitly described as being performed in real-time, as best understood Beckman's fraud detection/risk scoring is performed in real-time such that the fraud score/prediction is used to decide whether to approve or decline a current transaction. On this understanding, the fact that Chisholm's enrichment is performed in real-time aligns with and serves Beckman's requirements. On the other hand, if this understanding is incorrect, Chisholm's real-time enrichment would facilitate improved performance by Beckman, by permitting transactions to be enriched in real time, thus permitting fraud scoring / prediction that is both more accurate (on account of the presence/inclusion of the enriched data) and that can be performed in real-time rather than after the fact. Conclusion The prior art made of record and not relied upon, as set forth in the accompanying Notice of References Cited (PTO-892), is considered pertinent to applicant's disclosure. 
Comeaux (US-11669844-B1) teaches evaluating a transaction for fraud, including generating an alert probability score (fraud risk score), based on a wide variety of risk factors (e.g., behavioral profile including user personal data, financial data, and user social network data; historical user financial data), generating an alert, and taking action to prevent processing of a fraudulent transaction, including using and training a machine learning model. Comeaux (US-10567402-B1) and Comeaux (US-11722502-B1) teach fraud detection/prevention similar to Comeaux (US-11669844-B1) but to greater depth in certain aspects. Phatak (US-2022/0006899-A1) and Anderson (US-12136096-B1) teach a fraud alert queue that prioritizes fraud alerts based on fraud importance. Vaswani (US-2022/0377090-A1) teaches fraud detection/prevention (including risk scores and alerts) similar to Comeaux (US-11669844-B1). Karpovsky (US-2022/0191173-A1) teaches determining fraud risk based on VPN and/or proprietary knowledge and periodic monitoring. Pavlovic ("Log-normal Distribution - A simple explanation”) teaches content about log-normal distribution similar to that of Applicant's disclosure (specification paragraph 0045). Vimal (US-2023/0186311-A1) (qualifying as prior art based on Indian priority date) teaches, inter alia, benchmarking a machine learning model based on precision, recall, F1, and/or F2 scores, see 0097. Thomas (US-10997596-B1) teaches appending a fraud accuracy tag to a declined transaction, where the fraud accuracy tag is indicative of whether the decline of the transaction is a true positive decline or a false positive decline, whereby the fraud accuracy tag is suitable to provide insight into accuracy of a fraud strategy implemented in connection with the declined transaction. 
Beckman (US-10872341-B1) teaches secondary fraud detection during transaction verification, where the transaction verification process with the user itself is evaluated and scored for likelihood of fraud, using machine learning, and including assigning designations of high risk, medium risk, and low risk. Selway (US-2013/0013491-A1) teaches evaluating a transaction for fraud, including generating a risk score, based on risk factors, generating an alert, and taking remedial action to prevent processing of a fraudulent transaction, including using neural models, and where the transaction may be received from a specified one of several specified transaction channels (e.g., bank, merchant, ATM, remote, etc.) and the alert/notification may be sent to the same transaction channel or to any of the transaction channels. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOUGLAS W PINSKY whose telephone number is (571) 272-4131. The examiner can normally be reached on 8:30 am - 5:30 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux can be reached on 571-270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. 
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DWP/ Examiner, Art Unit 3626
/JESSICA LEMIEUX/ Supervisory Patent Examiner, Art Unit 3626
1 See, e.g., Wilber ("Precision and Recall"), p. 9.

Prosecution Timeline

Dec 11, 2024
Application Filed
Feb 02, 2026
Non-Final Rejection — §101, §103
Apr 16, 2026
Applicant Interview (Telephonic)
Apr 16, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12481976
ENCODED TRANSFER INSTRUMENTS
2y 5m to grant Granted Nov 25, 2025
Patent 12450588
METHOD FOR PROCESSING A SECURE FINANCIAL TRANSACTION USING A COMMERCIAL OFF-THE-SHELF OR AN INTERNET OF THINGS DEVICE
2y 5m to grant Granted Oct 21, 2025
Patent 12450591
SYSTEMS AND METHODS FOR CONTACTLESS CARD ACTIVATION VIA UNIQUE ACTIVATION CODES
2y 5m to grant Granted Oct 21, 2025
Patent 12406309
Auto Filing of Insurance Claim Via Connected Car
2y 5m to grant Granted Sep 02, 2025
Patent 12254516
NETWORK-BASED JOINT INVESTMENT PLATFORM
2y 5m to grant Granted Mar 18, 2025
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
26%
Grant Probability
41%
With Interview (+15.5%)
2y 12m
Median Time to Grant
Low
PTA Risk
Based on 112 resolved cases by this examiner. Grant probability derived from career allow rate.
