Prosecution Insights
Last updated: April 19, 2026
Application No. 18/472,401

SYSTEM AND METHOD FOR GENERATING FRAUD RULE CRITERIA

Status: Final Rejection §101
Filed: Sep 22, 2023
Examiner: SHAIKH, MOHAMMAD Z
Art Unit: 3694
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: The Toronto-Dominion Bank
OA Round: 4 (Final)

Grant Probability: 52% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 3y 6m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 52% (285 granted / 544 resolved) — at TC average
Interview Lift: strong, +31.3% for resolved cases with interview
Typical Timeline: 3y 6m average prosecution; 29 currently pending
Career History: 573 total applications across all art units

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 33.7% (-6.3% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center average estimates based on career data from 544 resolved cases

Office Action

§101
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

1. This office action is in response to an amendment received on 1/14/26 for patent application 18/472,401.
2. Claims 1, 11, 20 are amended.
3. Claims 1-9, 11-21 are pending.

RESPONSE TO ARGUMENTS

Applicant argues #1: Step 2A, Prong One

Applicant respectfully submits that the claimed subject matter is not directed to a judicial exception under Step 2A, Prong One. Applicant's claimed subject matter is directed towards how a computer system processes data through a tiered execution pipeline in which results of computational evaluation control how subsequent computational stages are executed. The claims as amended require that the system execute a series of operations including, after selecting a first cutoff value, excluding the at least one first group that is flagged as risky from further processing such that the at least one first group is not evaluated against additional cutoff values. This feature clarifies that the claimed subject matter is directed to internal execution-flow control within the computer system, and not organizing human activity or a commercial interaction.
The Examiner has stated that the features of creating a first set of data that includes data flagged as fraud and data flagged as not fraud; identifying a subset of the first set of data as having at least one geographic region, the at least one geographic region being risky; removing the subset of the first set of data from the first set of data; categorizing the first set of data into a number of first groups; for each first group, calculating at least one metric; comparing the at least one metric to a number of first cutoff values; automatically selecting a first cutoff value that generates a maximum performance output as a first threshold, the maximum performance output comprising a highest false positive rate allowed with a maximum incremental gain on overall fraud captured; flagging at least one first group that has the at least one metric below the first threshold as risky; generating fraud rule criteria based on the at least one first group; and output that defines the fraud rule criteria for identifying fraud (see Office Action, pages 22 and 23) represent a commercial interaction as steps for generating fraud rule criteria, yet the Examiner has not identified the commercial interaction allegedly represented.

These steps do not organize, manage, or regulate any commercial interaction. They do not regulate relationships between people or institutions, nor do they prescribe or coordinate human conduct. They do not facilitate transactions, determine transaction terms, coordinate commercial participants, or approve or deny transactions. Rather, the recited steps are internal computer-implemented data processing operations performed on historical transaction data for the purpose of generating executable fraud rule criteria. Applicant's claimed subject matter relies on the automated calculation of metrics, iterative evaluation against cutoff values, and dynamic termination of further evaluation across multiple processing stages.
These operations cannot reasonably be characterized as organizing human activity because they do not prescribe how humans interact, transact, or make decisions, nor do they disclose economic principles or commercial interactions. Rather, they describe how the computer system manages internal data flow and execution paths during runtime of a fraud rule generation process. As described in the specification, "[r]esponsive to selecting the first cutoff value that generates the maximum performance output as the first threshold, any first group that has the at least one metric below the first threshold is flagged as risky and may be filtered or assigned to a data bucket 420" (see specification, paragraph [0083]). This ensures that only data requiring further processing and evaluation against additional cutoff values is processed, thereby reducing redundant computation, rather than reprocessing already flagged data.

Examiner Response

Examiner respectfully disagrees. Examiner has identified the abstract elements in the claims that recite the identified abstract idea; see the section 101 rejection below. The limitations (exclude the at least one first group flagged as risky from further processing, preventing the first group from being evaluated against one or more additional cutoff values) are part of the identified abstract idea. The Federal Circuit in Bascom stated that the concept of filtering data is an abstract concept. Filtering of data before it is fed into a computer-based learning system is both commonly understood and explained in the specification (see spec. paras. 63-64). Furthermore, the concept of fraud mitigation by generating fraud rule criteria is a commercial interaction. Furthermore, the Federal Circuit in the Alice decision stated that fraud detection is a long-standing business practice. The rejection is maintained.
Applicant argues #2: Step 2A, Prong Two

Even if the claimed subject matter were deemed to recite an abstract idea, the claimed subject matter is integrated into a practical application. As amended, the claims now specify how the computer system achieves the fraud rule generation by controlling execution of a tiered evaluation pipeline. In particular, the claims require that groups flagged as risky are excluded from further processing such that those groups are not evaluated against additional cutoff values. This feature dictates which computational stages are executed and which are terminated, ensuring that subsequent cutoff evaluations occur only for data that advances beyond prior tiers, and clarifies that the claimed subject matter applies any alleged abstract idea in a concrete manner that affects how the computer system operates at runtime. The specification describes this tiered evaluation pipeline, explaining that only data that is not filtered at one tier is passed forward to the next tier for additional cutoff evaluation (see specification, paragraphs [0084], [0085], and [0094] to [0096]). This approach is a technical mechanism for structuring and controlling computation.

Examiner Response

Examiner respectfully disagrees. The spec paragraphs that applicant refers to, and other paragraphs, are reproduced below:

[0083] Responsive to selecting the first cutoff value that generates the maximum performance output as the first threshold, any first group that has the at least one metric below the first threshold is flagged as risky and may be filtered or assigned to a data bucket 420. The method 400 continues to the second tier.

[0084] The method 400 includes a second tier (step 430).

[0085] The second tier includes generating a second set of training data and performing operations to select a second cutoff value that generates a maximum performance output as a second threshold.

[0094] The at least one metric is compared to a number of second cutoff values.
In one or more embodiments, the second cutoff values may include a number of predetermined false positive rate cutoffs and this may be performed similar to step 540 of the method 500. One or more of the second cutoff values may be the same as one or more of the first cutoff values.

[0095] The method 600 includes selecting a second cutoff value that generates a maximum performance output as a second threshold (step 650).

[0096] The second cutoff values are evaluated to determine a cutoff value that generates a maximum performance output. In one or more embodiments, the maximum performance output may include a highest false positive rate allowed with a maximum incremental gain on overall fraud captured. As such, the second cutoff value that generates the highest false positive rate allowed with the maximum incremental gain on overall fraud captured is selected as the second threshold.

[0112] Referring back to FIG. 3, responsive to completion of the third tier, the fraud detection engine may generate fraud rule criteria for identifying or detecting fraud (step 340). Specifically, the fraud detection engine may generate fraud rule criteria based on what groups have been assigned into the data buckets 420, 440, 460. For example, the fraud detection engine may generate a query that may be used for fraud rule criteria and the query may include variables used to generate the group that was assigned into one of the data buckets. In one or more embodiments, the query may be generated by concatenating all criteria (variables, thresholds, etc.) used to generate any group flagged or deemed to be "risky".

The limitations (exclude the at least one first group flagged as risky from further processing, preventing the at least one first group from being evaluated against one or more additional cutoff values) are part of the identified abstract idea.
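Paragraph [0112]'s description of concatenating the criteria (variables, thresholds, etc.) behind each risky group into a query can be sketched as follows. The dict shape, the `op` field, and joining clauses with `AND` are illustrative assumptions, not details from the specification:

```python
# Illustrative sketch only: build a fraud-rule query string by concatenating
# the criteria (variable, comparison operator, threshold) that produced each
# group flagged as risky, per the concatenation approach in para [0112].

def generate_fraud_rule(risky_criteria):
    """risky_criteria: list of {"variable", "op", "threshold"} dicts (assumed).
    Returns a single query string combining all criteria."""
    clauses = [
        f"({c['variable']} {c['op']} {c['threshold']})" for c in risky_criteria
    ]
    return " AND ".join(clauses)
```

For example, two risky-group criteria on a hypothetical transaction amount and region score would concatenate into one conjunctive rule string.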
The tiered evaluation pipeline disclosed in the specification (see spec paragraphs reproduced above) is recited at a high level of generality and is operating in its ordinary capacity (where a tiered evaluation pipeline is a multistage assessment process that filters data using different criteria). Applicant's claims do not improve technology; the underlying technology remains unaffected by the claims. Applicant is addressing a business problem (fraud detection) with a business solution. Applicant is merely using existing technology (for its intended purpose) to implement the business solution. Any improvements lie in the abstract idea itself, not in the underlying technology. Therefore, there are no additional elements that are indicative of integration into a practical application. The rejection is maintained.

Applicant argues #3

Applicant's claimed subject matter further integrates into a practical application by requiring that fraud rule criteria be output as computer-readable executable code that defines the fraud rule criteria in a format compatible with an external system for use by the external system for identifying fraud. The fraud detection engine may "format the fraud rule criteria into Total System Services, Inc. (TSYS) code," producing "a production code version of the fraud rule criteria that can be directly used by a TSYS system" (see specification, paragraph [0113]). This demonstrates that the claimed system automatically deploys executable rules within a computing environment without human intervention.

The Examiner appears to treat the recited retraining as a generic or routine machine-learning step; however, the retraining includes the tiered execution pipeline. Retraining is not an unconstrained re-optimization over all data but requires the described tiered execution in which data is evaluated against cutoff values, flagged as risky, and excluded from further processing, terminating execution flow for flagged groups across multiple tiers.
This approach defines how the retraining is carried out at the computational execution level, rather than being a conventional background process. The retraining refines the generated fraud rule criteria based on the outcomes of the tiered execution pipeline to maintain system performance over time, ensuring the system remains relevant and effective despite rapidly evolving fraud patterns (see specification, paragraph [0003]). This periodic retraining therefore meaningfully limits any alleged abstract idea by dictating how the computer system updates fraud rule logic through constrained execution paths, rather than merely applying the idea through generic or routine model training.

Examiner Response

Examiner respectfully disagrees. Furthermore, the Python code and TSYS code are behaving as programmed. In this case, these codes are both commonly understood and explained in the spec: "the computer program may utilize Python modules such as for example pyodbc, numpy, pandas, etc. The output of the computer program may include a text or 'txt' file that contains a production code version of the fraud rule criteria that can be directly used by a TSYS system." The TSYS system may automatically implement the fraud rule using the fraud detection engine implemented by a computer processor recited at a high level of generality. The argument pertaining to the retraining and the tiered execution pipeline has been addressed above with respect to Applicant argues #2. The rejection is maintained.

Applicant argues #4: Step 2B

Applicant's claimed subject matter recites an inventive concept and is not a generic application of known ideas using conventional computer functions. The exclusion of groups flagged as risky from further cutoff evaluation is a computer-execution control mechanism, not a conventional economic practice.
The exclusion of groups flagged as risky does not merely annotate or label data but prevents those groups from being subjected to further cutoff evaluation, thereby altering the execution path of the fraud rule generation process. The specification emphasizes that this exclusion mechanism occurs within the tiered fraud detection architecture, where groups are "filtered or assigned to a data bucket" (see specification, paragraph [0083]) and the process continues without those groups. When considered as an ordered combination, Applicant's claimed subject matter recites a technical solution that improves automated fraud rule generation by controlling execution flow and reducing unnecessary computation.

Even if Step 2B is reached, the Examiner's conclusion that the claimed exclusion of groups from further cutoff evaluation is merely conventional rests on an unsupported factual assumption. The claims recite a specific execution-flow control mechanism that terminates further evaluation of flagged groups across multiple cutoff tiers. Under Berkheimer, whether such an ordered combination was well-understood, routine, or conventional is a factual question. Applicant respectfully submits that the Examiner has not provided evidence supporting such a finding.

In addition, under recent USPTO examiner guidance (Aug. 4, 2025), claims should be analyzed as a whole and examiner characterizations of abstract ideas should not expand judicial groupings beyond what a human could reasonably perform in the mind. The claimed exclusion of groups from further cutoff evaluation is an execution-level constraint imposed on the computing system that cannot be practically performed mentally, and represents an improvement in how the computer operates with respect to training data processing and multi-tier evaluation.

Examiner Response

Examiner respectfully disagrees.
The limitations (exclude the at least one first group flagged as risky from further processing, preventing the first group from being evaluated against one or more additional cutoff values) are part of the identified abstract idea. Applicant misapprehends when a Berkheimer analysis is required under current examination policy. Simply put, Examiner is not required under current examination policy to evaluate under Step 2B whether additional elements constitute "well-understood, routine, and conventional activities" ["WURC activities"] unless an additional element(s) were found to be insignificant extra-solution activity in Step 2A, Prong 2. MPEP § 2106.05(d)(I). Here, the condition precedent was not met, and the current Final Office Action determined the additional elements were no more than mere instructions to apply the abstract idea exception using a computer. MPEP § 2106.05(f). Thus, Examiner was not required to perform a Berkheimer analysis. MPEP § 2106.05(d)(I). (See Section 101 rejection below.)

Furthermore, as far as evidence is concerned regarding well-understood, routine, and conventional activities, applicant is pointed to MPEP 2106.07(a)(III): "At Step 2A Prong Two or Step 2B, there is no requirement for evidence to support a finding that the exception is not integrated into a practical application or that the additional elements do not amount to significantly more than the exception unless the examiner asserts that additional limitations are well-understood, routine, conventional activities in Step 2B." The rejection is maintained.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

1. Claims 1-9, 11-21 are rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 11, 20 are directed to a system, method and computer readable medium, which are statutory categories of invention. (Step 1: YES.)

Claim 1 recites the limitations of:

A computer system comprising: at least one processor; and a memory coupled to the at least one processor and storing processor-executable instructions which, when executed by the at least one processor, configure the at least one processor to:
create a first set of training data that includes data flagged as fraud and data flagged as not fraud;
identify a subset of the first set of training data as having at least one geographic region, the at least one geographic region being risky;
remove the subset of the first set of training data from the first set of training data;
categorize the first set of training data into a number of first groups;
for each first group, calculate at least one metric;
compare the at least one metric to a number of first cutoff values;
automatically select, by the at least one processor, a first cutoff value that generates a maximum performance output as a first threshold, the maximum performance output comprising a highest false positive rate allowed with a maximum incremental gain on overall fraud captured;
flag at least one first group that has the at least one metric below the first threshold as risky;
exclude the at least one first group flagged as risky from further processing, preventing the at least one first group from being evaluated against one or more additional cutoff values;
generate fraud rule criteria based on the at least one first group;
output computer program code that defines the fraud rule criteria in a format compatible with an external system for use by the external system for identifying fraud; and
retrain, at predetermined intervals, based on recent historical transaction data.
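For context, the exclusion behavior recited in the claim (groups flagged as risky are never evaluated against later cutoff values) can be sketched as a loop that shrinks the working set at each tier. All names, the per-tier threshold list, and the metric representation are assumptions for illustration:

```python
# Illustrative sketch only: a tiered pipeline in which groups flagged as
# risky at one tier are placed in that tier's bucket and excluded from every
# later tier, so they are never evaluated against later cutoff values.

def run_tiers(groups, tier_thresholds):
    """groups: {group_id: metric}; tier_thresholds: one threshold per tier.
    Returns (list of per-tier risky buckets, groups surviving all tiers)."""
    buckets, remaining = [], dict(groups)
    for threshold in tier_thresholds:
        flagged = {g for g, m in remaining.items() if m < threshold}
        buckets.append(flagged)
        # Flagged groups drop out of the working set here, terminating
        # their execution path for all subsequent tiers.
        remaining = {g: m for g, m in remaining.items() if g not in flagged}
    return buckets, remaining
```

The design point at issue in the prosecution is visible in the loop: each tier's cutoff comparison runs only over `remaining`, not over the full original set.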
These limitations, under their broadest reasonable interpretation, cover performance of the limitation as certain methods of organizing human activity. The claim recites elements which cover performance of the limitation as a commercial interaction, steps for generating fraud rule criteria (e.g., create a first set of data that includes data flagged as fraud and data flagged as not fraud; identify a subset of the first set of data as having at least one geographic region, the at least one geographic region being risky; remove the subset of the first set of data from the first set of data; categorize the first set of data into a number of first groups; for each first group, calculate at least one metric; compare the at least one metric to a number of first cutoff values; automatically select a first cutoff value that generates a maximum performance output as a first threshold, the maximum performance output comprising a highest false positive rate allowed with a maximum incremental gain on overall fraud captured; flag at least one first group that has the at least one metric below the first threshold as risky; exclude the at least one first group flagged as risky from further processing, preventing the at least one first group from being evaluated against one or more additional cutoff values; generate fraud rule criteria based on the at least one first group; output that defines the fraud rule criteria for identifying fraud).

If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a commercial interaction, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. Claims 11, 20 are abstract for similar reasons. (Step 2A, Prong 1: YES. The claims are abstract.)

This judicial exception is not integrated into a practical application.
Limitations that are not indicative of integration into a practical application include: (1) adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)).

Claims 1, 11, 20 include the following additional elements:
- a processor
- a memory coupled to the at least one processor
- training data
- retraining the system
- computer program code in a format compatible with an external system
- a non-transitory computer readable medium

The one or more processors, memory coupled to the at least one processor, training data, retraining the system, computer program code in a format compatible with an external system, and non-transitory computer readable medium are recited at a high level of generality, are being used in their ordinary capacity, and are being used as a tool for implementing the steps of the identified abstract idea; see MPEP 2106.05(f), where applying a computer or using a computer as a tool to perform the abstract idea is not indicative of a practical application. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore claims 1, 11, 20 are directed to an abstract idea without a practical application. (Step 2A, Prong 2: NO.
The additional claimed elements are not integrated into a practical application.)

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an "inventive concept") to the exception. As discussed above with respect to integration of the abstract idea into a practical application, there are no additional elements recited in the claim beyond the judicial exception. Mere instructions to implement an abstract idea, on or with the use of generic computer components, or even without any computer components, cannot provide an inventive concept, rendering the claim patent ineligible. Thus claims 1, 11, 20 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)

Dependent claims 2-9, 12-19, 21 further define the abstract idea that is present in their respective independent claims 1, 11 and thus correspond to Certain Methods of Organizing Human Activity and hence are abstract for the reasons presented above. Claim 21 further defines the identified abstract idea as recited in claim 1. The additional element of the classifier is recited at a high level of generality, operating in its ordinary capacity, and is being used as a tool to implement the steps of the identified abstract idea; see MPEP 2106.05(f). Therefore, the dependent claims do not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Therefore, the dependent claims (2-9, 12-19, 21) are directed to an abstract idea. Thus, claims 1-9, 11-21 are not patent-eligible.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD Z SHAIKH whose telephone number is (571) 270-3444. The examiner can normally be reached M-T, 9-6:00; Fri, 8-11 and 3-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BENNETT SIGMOND, can be reached at 303-297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMAD Z SHAIKH/
Primary Examiner, Art Unit 3694
3/17/2026

Prosecution Timeline

Sep 22, 2023: Application Filed
Feb 07, 2025: Non-Final Rejection — §101
Apr 30, 2025: Applicant Interview (Telephonic)
Apr 30, 2025: Examiner Interview Summary
May 06, 2025: Response Filed
Jun 03, 2025: Final Rejection — §101
Aug 01, 2025: Response after Non-Final Action
Aug 21, 2025: Request for Continued Examination
Aug 27, 2025: Response after Non-Final Action
Oct 16, 2025: Non-Final Rejection — §101
Jan 14, 2026: Response Filed
Mar 19, 2026: Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602729
SYSTEMS AND METHODS FOR BUILDING, UTILIZING, AND/OR MAINTAINING AN AUTONOMOUS VEHICLE-RELATED EVENT DISTRIBUTED LED
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12586074
MODEL UTILIZATION SYSTEM, MODEL UTILIZATION METHOD, AND COMPUTER PROGRAM PRODUCT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579537
DIGITAL WALLET BALANCE DISPLAY IN AN ELECTRONIC DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12547991
SYSTEMS, METHODS, AND APPARATUS FOR CONSOLIDATING A SET OF LOANS
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12548084
INDIVIDUALIZED REAL-TIME USER INTERFACE FOR EVENTS
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 52%
With Interview: 84% (+31.3%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 544 resolved cases by this examiner. Grant probability derived from career allow rate.
