Prosecution Insights
Last updated: April 19, 2026
Application No. 18/529,956

TARGETED HEURISTIC RULE GENERATION TOOLS

Status: Final Rejection (§103)
Filed: Dec 05, 2023
Examiner: JIMENEZ, JUSTIN ABEL
Art Unit: 3697
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Stripe, Inc.
OA Round: 2 (Final)

Grant Probability: 25% (At Risk)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 25% (2 granted / 8 resolved; -27.0% vs TC avg)
Interview Lift: +85.7% (among resolved cases with an interview)
Avg Prosecution: 2y 10m
Total Applications: 44 across all art units (36 currently pending)

Statute-Specific Performance

§101: 32.4% (-7.6% vs TC avg)
§103: 38.8% (-1.2% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 8 resolved cases.
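Each "vs TC avg" delta above is simply the examiner's career allowance rate for that statute minus the Tech Center average, which the displayed figures imply is roughly 40.0% for every statute. A minimal sketch that reproduces the deltas (the 40.0% averages are inferred from the numbers shown here, not pulled from raw PTO data):

```python
# Illustrative only: reproduces the "vs TC avg" deltas shown above.
# The examiner rates and the implied ~40.0% Tech Center averages come
# from the dashboard figures; they are not computed from raw PTO data.
examiner_rate = {"101": 32.4, "103": 38.8, "102": 14.1, "112": 14.4}
tc_average = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

def delta_vs_tc(statute: str) -> float:
    """Examiner allowance rate minus Tech Center average, in points."""
    return round(examiner_rate[statute] - tc_average[statute], 1)

for statute in ("101", "103", "102", "112"):
    print(f"§{statute}: {examiner_rate[statute]}% ({delta_vs_tc(statute):+.1f} vs TC avg)")
```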

Office Action

§103
Detailed Action

Claims 1, 2-12, and 14-20 are pending and are examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1, 3-4, 12, 14, and 17-19 are currently amended. Claims 2 and 13 are cancelled.

Response to Remarks

35 U.S.C. § 112: Applicant’s amendments to the claims have overcome the previous rejections. Accordingly, the previous rejections are withdrawn.

35 U.S.C. § 101: Applicant’s amendments to the claims have overcome the previous rejections. Accordingly, the previous rejections are withdrawn.

35 U.S.C. § 103

Remark 1: Applicant argues “Even combined, the references fail to teach or suggest a method and system in which a user can interact with a graphical user interface (GUI) to select a predictive attribute, set a threshold for that attribute, and then view simulated results indicating which transactions would be blocked or unblocked based on the chosen threshold. Jiang describes analyzing performance of gradient boosting machine models by constructing lookup tables that encode the contribution of each feature to a model's decision. While Jiang may be designed for explanation of model outcomes and the generation of reason codes, Jiang does not provide a user interface for interactively selecting predictive attributes, setting thresholds, or simulating the effect of those thresholds on transaction outcomes (see Jiang, paragraphs 0035-0037), and therefore fails to disclose receiving a user selection of one of these predictive attributes to generate a blocking rule, much less simulating the blocking rule and providing displayed feedback result of this blocking rule in action. Gai describes a dynamic rule strategy and fraud detection system that uses tree-based classification to generate fraud rules, but Gai's process is largely automated.
Although analysts may interact with the system to define watch segments or adjust sample rates, Gai is silent as to a user interface that supports interactive selection of predictive attributes, threshold setting, and real-time simulation of rules and blocked or unblocked transactions (Gai, col. 4, lines 6-24; col. 5, lines 43-56). As such, Jiang fails to cure the deficiencies noted above in Jiang.” (Applicant Arguments, 2025-09-02).

Response to Remark 1: Examiner respectfully disagrees, as the cited references (e.g., Gai, Jiang, Crawford, and Gopinathan) still teach the currently amended independent claims, as shown at least in paragraphs 22-23, 28, and 45 of Crawford, and as further outlined in paragraphs 13-15 of this action. Indeed, Gai teaches ‘pulling all approved transactions’ and the ARIE ‘models a high-risk population to transform observed fraud trends to fraud rules’ where the model chooses the best attribute and the best splitting points, and its output can identify a group of transactions that will be potentially declined. Moreover, Crawford teaches ‘a user interface that supports interactive selection of predictive attributes’ as it discloses interactive GUI screens where the user uses GUI ‘boxes’ to select particular fields/attributes (e.g., First Name Field, Last Name Field, Credit Card) from among multiple available attributes to construct matching rules/patterns. Accordingly, this contention is unpersuasive.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8, and 10-19 are rejected under 35 U.S.C. 103 as being unpatentable over Gai et al. (US 10,607,228) (hereinafter “Gai”) in view of Jiang et al. (US20200218987A1) (hereinafter “Jiang”) further in view of Crawford et al. (US20090044279A1) (hereinafter “Crawford”).

As per Claims 1, 12, and 17, Gai teaches:

A computer-implemented method for generating transaction processing rules, the method comprising: transmitting, by the server system via a network, data that configures . . .; (“As illustrated in FIG. 2, the host system may include a UNIX Server environment 100 or other type of server environment hosting the dynamic rule strategy and fraud detection system 200. The dynamic rule strategy fraud detection system 200 may include an automated trigger identification engine 210, an adaptive rule interaction engine (ARIE) 220, and a control engine 230. The dynamic rule strategy fraud detection system 200 may include or access a model warehouse database 240 and may send output to a model output database 250. Thus, any time the dynamic rule strategy executes, three major steps are executed using the automated trigger identification engine 210, adaptive rule induction engine 220, and control engine 230.” (col. 4, ln. 35-47); “The card processing systems 50 may be or include systems utilized for processing credit card transactions as currently known in the art.
The analyst systems 40 may be computing devices similar to those described above. The analyst systems 40 preferably allow analysts to interact with the dynamic rule strategy and fraud detection system through user interfaces providing for interactivity between the systems.” (col. 3, ln. 47-55); “The outputs are dynamic rule strategy score. The dynamic rule strategy score represents the probability (between 0.0 and 1.0) that POS transaction is fraudulent.” (col. 10, ln. 2-4) identifying, by a server system, a set of transactions, wherein each transaction in the set of transactions includes a set of values for a set of attributes and a portion of the set of transactions include a target label; (“Thus, the automated trigger identification engine 210 pulls all approved transactions and analyzes them to identify a high risk population in terms of fraud concentration based on customized trigger criteria. Once the high risk population is identified, the Adaptive Rule Induction Engine (ARIE) 220 automatically executes and models the high risk population to transform observed fraud trends to fraud rules.” (Col. 5, ln. 17-23); “Inputs for development of the dynamic rule strategy may be derived from analysis of point of sale (POS) authorization data or other data in accordance with the particular fraud channel. Inputs are valued as of the time that an authorization was received and a decision was made. Inputs may include for example, data relevant to all of the approved transactions during the course of a day or other selected time period.” (col. 6, ln. 13-20); “Multiple metrics by segment are calculated for fraud reporting and trigger identification throughout the target period. The length of all periods can be user defined. The metrics of the target period may be used for high risk segment detection based on trigger criteria, and the metrics of a historical transaction window will be used for comparison with the target period to monitor fraud trend changes over time. 
The output of this procedure is high risk segments identified throughout the target period”. (col. 4, ln. 63-67)) processing, . . . of the server system, the set of transactions to determine a plurality of predictive attributes in the set of attributes, wherein the plurality of predictive attributes are indicative of transactions in the portion of the set of transactions that include the target label; (“The algorithm of the ARIE 220 may include a tree-based classification method which is a flexible and simple technique to provide clear output. The algorithm is particularly powerful in fraud modeling and strategy analysis given its data-driven nature. It can handle missing values and is not sensitive to outliers. Given the underlying nature of fraud prevention where multiple risk drivers are integrated to represent the target, the tree-based classification method is a good tool for direct variable interaction. Thus, the tree-based classification method is leveraged to differentiate the risk of card transactions or other applicable types of transactions. With tree-based classification, the ARIE 220 can rank the likelihood of fraudulent transactions based on the probability, which will be used to facilitate the generation of fraud rules.” (col. 5, ln. 43-56); “The dynamic rule strategy fraud detection system 200 is capable of quickly detecting fraud trends and combatting fraud through rule development and subsequent implementation. Details of this system are further described with respect to FIG. 2. The system automatically identifies new fraud trends and/or emerging fraud attacks, automatically constructs associated development data for strategy creation, and automatically conducts rule induction. In embodiments of the invention, the rule induction may be conducted using a tree-based classification method. 
Ultimately, the dynamic rule strategy and fraud detection system may be implemented in a production system to fully realize the power of combining advanced methodology, automation, and big data analytics in fraud space. The dynamic rule strategy provides an automatic framework to develop fraud rules. During each execution, the dynamic rule strategy pulls recent transactional data and performs fraud rule development. Therefore, new attributes may be selected and new rules developed every time the dynamic rule strategy executes.” (col. 4, ln. 6-24) displaying, by the server system via the network, the one or more predictive attributes to the . . .; (“As illustrated in FIG. 2, the host system may include a UNIX Server environment 100 or other type of server environment hosting the dynamic rule strategy and fraud detection system 200. The dynamic rule strategy fraud detection system 200 may include an automated trigger identification engine 210, an adaptive rule interaction engine (ARIE) 220, and a control engine 230. The dynamic rule strategy fraud detection system 200 may include or access a model warehouse database 240 and may send output to a model output database 250. Thus, any time the dynamic rule strategy executes, three major steps are executed using the automated trigger identification engine 210, adaptive rule induction engine 220, and control engine 230.” (col. 4, ln. 35-47); “The card processing systems 50 may be or include systems utilized for processing credit card transactions as currently known in the art. The analyst systems 40 may be computing devices similar to those described above. The analyst systems 40 preferably allow analysts to interact with the dynamic rule strategy and fraud detection system through user interfaces providing for interactivity between the systems.” (col. 3, ln. 47-55); “The outputs are dynamic rule strategy score. 
The dynamic rule strategy score represents the probability (between 0.0 and 1.0) that POS transaction is fraudulent.” (col. 10, ln. 2-4) receiving, by the server system, a one or more selected predictive attributes of the plurality of predictive attributes based on a first user input received through the . . .; (“In S319, the system denotes segments, for example as user defined watch segments or as segments with high fraud concentration. User defined segments may include segments that user wants to monitor and model continuously using ARIE regardless of whether the segments are identified as risky by the automated trigger engine. These segments may be designated, for example, if a user believes that these segments are generally at risk. Those segments as well as model ready data (S3 23) associated will also be sent to ARIE for rule induction.” (col. 6, ln. 64 – col. 7, ln. 6); “High risk segments are a subset of all the predefined segments, and they are selected through automated trigger identification for rule induction based on the analysis of the most recent historical transaction window.” (col. 5, ln. 13-15); “The automated trigger identification engine 210 achieves identification through segmentation. Segmentation is intended to capture potential concentrations of ongoing fraud attacks at a relatively granular level. There are multiple ways to segment the portfolio of transactions. In embodiments of the invention, the segments may be user defined such that segments uniquely vulnerable to fraud can be isolated.” (col. 4, ln. 48-55) receiving, by the server system, through a control of the . . ., a second user input indicating a corresponding threshold for the selected one or more predictive attributes; (“Example criteria may be (Fraud unit rate >=2%, or Fraud amount rate >=2%, or increased $200K+ in weekly fraud compare to historical average excluding the highest) which represents a fairly concentrated fraud trend for each alerted segment. 
The criteria of potential concentration of fraud attacks/alert could be customized based on business input. The output of this automated trigger notification procedure is high risk segments with fraud concentration based on trigger event criteria. High risk segments are a subset of all the predefined segments, and they are selected through automated trigger identification for rule induction based on the analysis of the most recent historical transaction window.” (col. 5, ln. 4-16); “User interfaces may additionally be provided to enable the analyst model target variables, trigger event criteria, and adjust the development data sample rate and ratio of development to validation within a cycle as well as customizing selections of divisions or segments.” (col. 6, ln. 47-53); “The ARIE 220 executes and models on high risk segments identified by automated trigger identification, and/or on the user defined segments which are designed to directly call ARIE. In embodiments of the invention, the ARIE 220 is developed in C++ but various alternative languages can be used. The ARIE 220 is specially designed and fine-tuned to enhance results for fast fraud detection analytics. It supports high-speed pervasive deployment and is highly reconfigurable with user-defined customization, allowing prediction and scoring on a massive scale” (col. 5, ln. 27-37); “an analyst system may provide a user interface transmitted over a network allowing the analyst to utilize single button actuation to initiate the dynamic rule strategy fraud detection process”. (col. 6, ln. 38-42); “User interfaces may additionally be provided to enable the analyst model target variables, trigger event criteria, and adjust the development data sample rate and ratio of development to validation within a cycle as well as customizing selections of divisions or segments”. (col. 6, ln. 47-52); see also Fig. 3, “Parameter Settings: Time Period and Data Location. . . Trigger Criteria. . . Fraud Report Setting. 
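The claim elements mapped so far describe an interactive loop: the user selects one or more predictive attributes, sets a threshold for each through a GUI control, and the server simulates which transactions the resulting heuristic rule would block. A minimal sketch of that flow, with entirely hypothetical names (illustrative only; this is not code from the application or from any cited reference):

```python
# Hypothetical sketch of the claimed flow: user-selected predictive
# attributes plus thresholds yield a heuristic blocking rule, which is
# then simulated against a set of transactions. All names are
# illustrative, not drawn from the application or the cited references.
from dataclasses import dataclass

MAX_ATTRIBUTES = 3  # illustrative "maximum limit" on attributes per rule

@dataclass
class Transaction:
    attributes: dict   # attribute name -> value
    fraud_label: bool  # True if the transaction carries the target label

def simulate_rule(transactions, thresholds):
    """Block a transaction when every selected attribute meets its threshold."""
    if len(thresholds) > MAX_ATTRIBUTES:
        raise ValueError("too many attributes selected for one heuristic rule")
    blocked, unblocked = [], []
    for txn in transactions:
        hits = all(txn.attributes.get(attr, 0) >= limit
                   for attr, limit in thresholds.items())
        (blocked if hits else unblocked).append(txn)
    return blocked, unblocked

def false_positive_rate(blocked):
    """Share of blocked transactions that lack the target (fraud) label."""
    if not blocked:
        return 0.0
    return sum(not t.fraud_label for t in blocked) / len(blocked)

# Example: a single "amount >= 500" rule over three sample transactions.
sample = [Transaction({"amount": 900}, True),
          Transaction({"amount": 120}, False),
          Transaction({"amount": 950}, False)]
blocked, unblocked = simulate_rule(sample, {"amount": 500})
# blocked holds the 900 and 950 transactions; false_positive_rate(blocked)
# is 0.5, the kind of displayed feedback the dependent claims recite.
```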
determining that a number of the one or more selected predictive attributes in the heuristic rule is below a maximum limit; (“the tree growing or splitting phase continues until one of the following stopping criteria is hit: 1) All transactions in the node belong to a single target class; 2) If the node were split, the number of transactions in one or both of the child nodes would be less than a preset threshold” (col. 9, ln. 6-11); “the default threshold is 0.01 % of the total transactions in the train dataset, windowed within [5, 5000]. This threshold may be specified through a tunable parameter, which can be tuned to fit individual application needs or any changing pattern of data” (col. 9, ln. 12-16); “The best split is not greater than a certain pre-set threshold” (col. 9, ln. 16-17)) in response to the number being below the maximum limit, generating, by the server system, a heuristic rule that checks each of the one or more selected predictive attributes against the corresponding threshold; and (“The dynamic rule strategy provides an automatic framework to develop fraud rules. During each execution, the dynamic rule strategy pulls recent transactional data and performs fraud rule development. Therefore, new attributes may be selected and new rules developed.” (col. 4, ln. 19-24); “Thus, the tree-based classification method is leveraged to differentiate the risk of card transactions or other applicable types of transactions. With tree-based classification, the ARIE 220 can rank the likelihood of fraudulent transactions based on the probability, which will be used to facilitate the generation of fraud rules.” (col. 5, ln. 51-58); “Once the dynamic rule strategy is implemented in the server, the fraud rules generated can be implemented by business and incorporated in production systems, and fraud rule implementation can be accomplished through a separate implementation process. 
Overall, the dynamic rule strategy is a framework/process to create fraud rules by detecting fraud concentrations and developing a decision tree model. The output of the model can be used to identify a group of transactions that will be potentially declined based on estimated probability of fraud.” (col. 6, ln. 1-10); “the ARIE proceeds to blocks 532, 534, 536, and 538 for splitting metrics at block 534 to choose the best attribute at block 532, the best splitting point” (col. 8, ln. 50-54); “The set of rules corresponds to the decision tree leaf nodes” (col. 10, ln. 9-10)) applying, by the server system, the heuristic rule to the set of transactions which determines a set of blocked transactions and a set of unblocked transactions based on whether each of the one or more selected predictive attributes for each transaction of the set of transactions satisfies the corresponding threshold value; and (“Once the dynamic rule strategy is implemented in the server, the fraud rules generated can be implemented by business and incorporated in production systems, and fraud rule implementation can be accomplished through a separate implementation process. Overall, the dynamic rule strategy is a framework/process to create fraud rules by detecting fraud concentrations and developing a decision tree model. The output of the model can be used to identify a group of transactions that will be potentially declined based on estimated probability of fraud.” (col. 6, ln. 1-10); “In block 446, once rules are selected, fraud rule implementation testing may be performed. Once implementation testing is successful, businesses may implement selected rules in the production system. Finally, the rule is implemented in the production system in block 448. Transfer of validated rules to the production system may occur automatically in order to further reduce the reaction time in response to emerging fraud trends.” (col. 8, ln. 
17- 24); “The control engine 230 plays an important role in enhancing the control process of the dynamic rule strategy model. It takes the output fraud rules of the ARIE 220 to validate the model using full validation data output through automated trigger identification. The control engine 230 may further utilize the Monte Carlo Cross Validation techniques for stability tests and send the rules for analyst review.” (col. 5, ln. 56-64); “The output of the model can be used to identify a group of transactions that will be potentially declined” (col. 6, ln. 8-10); “Once implementation testing is successful, businesses may implement selected rules in the production system.” (col. 8, ln. 18-20); “Transfer of validated rules to the production system may occur automatically in order to further reduce the reaction time in response to emerging fraud trends.” (col. 8, ln. 21-25)) displaying, by the server system via the network, an indication of the set of blocked transaction and the set of unblocked transactions. (“The fraud report may be in a spreadsheet format. . . its dataset may include, for example: the total number and . . . amount of approved transactions by segment,” (col. 6, ln. 58-62); “outputs a summary and rule selection in S341, which are displayable on a user interface of the analyst system at 306.” (col. 7, ln. 13-15); “analyst systems 40 preferably allow analysts to interact with the dynamic rule strategy and fraud detection system through user interfaces providing for interactivity” (col. 3, ln. 52-55)). Gai does not disclose: • “a graphical user interface (GUI) to display” (claim 1). However, as per Claim 1, Crawford in the analogous art of internet-based fraud detection, teaches: “a graphical user interface (GUI) to display”. (See “The user then can right click on screen 20 to show expand-on box 204. The user then selects “credit card” for further expansion.” (para. 0028); “The user then brings up pattern match generator 1300 as shown in FIG. 
13 and begins to create a pattern matcher.” (para. 0045); “The user begins by using box 1301 and selecting what the first part of the pattern will be, in this case the user selects the word “first.” Then using box 1302, the user selects N (which would mean the first N characters) and another box pops up to allow the user to select the specific value for N. In our case, the user selects “1.” The user would then go to box 1303 and select where those characters are from. In this case, the user would select “First Name Field” and then using box 1304 would select the “followed by” notation. The user would then press the “Next Phrase” button and then would repeat back at box 1301 to select the word “exactly” followed by the “2” from box 1302, followed by “the integers” from box 1303. Then the user would select “followed by” from box 1304, then press the “Next Phrase” button again, then would repeat back at box 1301 and select the words “all” from box 1301, and then “Last Name Field” from box 1303.” (para. 0045); “The suspicious account activity is defined by a set of rules that describe the attributes of accounts that are members of the cluster.” (para. 0023); “These membership rules can be modified, if desired, by the user via rule editor 107.” (para. 0022)) It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the method of Gai with the technique of Crawford to include a graphical user interface (GUI) where the user is providing input in terms of selecting particular attributes of the plurality of attributes within a transaction authorization process. Therefore, the incentives of providing an enriched user experience and greater degree of UI control provided a reason to make an adaptation, and the invention resulted from application of the prior knowledge in a predictable manner. Gai does not disclose: • “by a machine learning (ML) model” (claim 1). 
However, as per Claim 1, Jiang in the analogous art of securing financial transactions, teaches: “by a machine learning (ML) model”. (See “In an embodiment, the disclosure provides a method for constructing a lookup table for determining outcomes of a tree base machine learning prediction model, including: determining all possible outcomes for each hierarchy of the prediction model; determining contributions of a set of features for each hierarchy of the prediction model; and constructing the lookup table based on all possible outcomes and contributions of each of the set of features.” (para. 0003); “At 302, the server builds a GBM model based on a set of features. The GBM model is built using one or more hierarchy of prediction models, e.g., decision trees. Each decision tree is considered a hierarchical level. Traversing each decision tree from a parent node to a leaf node relies on decisions made regarding the set of features at each parent node. Each hierarchical level is combined with a previous level using weighting parameters. The weighting parameters are optimized using machine learning techniques.” (para. 0029); “GBM modeling combines multiple decision trees for better decision making or prediction. The GBM model is trained iteratively, constructing trees to focus on errors. Subsequent trees in GBM models are used to predict errors left after previous trees.” (Para. 0056); “Starting from the parent node 502, a FICO score is first checked to determine whether it is less than or equal to 700. If the FICO score is greater than 700, then at node 504, a debt-to-income ratio is checked to determine whether it is less than or equal to 0.3. Leaf nodes of the decision tree in FIG. 5, i.e., nodes 506, 510, and 508, indicate terminal states where a decision is made—the decision being an approval or a rejection. A threshold is used at each parent node and each intermediate node to determine which branch to take. 
Rule-based prediction can be a powerful tool, exhibiting non-linear characteristics. Decision trees, as shown in FIG. 5, can hierarchically encode one or more rules.” (para. 0055) It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the method of Gai with the technique of Jiang to include machine learning modeling within a transaction authorization process. Therefore, the incentives of providing increased data analysis for the user provided a reason to make an adaptation, and the invention resulted from application of the prior knowledge in a predictable manner. As per Claim 3, Gai teaches: The computer-implemented method of claim 2, . . . (“Thus, the automated trigger identification engine 210 pulls all approved transactions and analyzes them to identify a high risk population in terms of fraud concentration based on customized trigger criteria. Once the high risk population is identified, the Adaptive Rule Induction Engine (ARIE) 220 automatically executes and models the high risk population to transform observed fraud trends to fraud rules.” (Col. 5, ln. 17-23); “Inputs for development of the dynamic rule strategy may be derived from analysis of point of sale (POS) authorization data or other data in accordance with the particular fraud channel. Inputs are valued as of the time that an authorization was received and a decision was made. Inputs may include for example, data relevant to all of the approved transactions during the course of a day or other selected time period.” (col. 6, ln. 13-20); “Multiple metrics by segment are calculated for fraud reporting and trigger identification throughout the target period. The length of all periods can be user defined. The metrics of the target period may be used for high risk segment detection based on trigger criteria, and the metrics of a historical transaction window will be used for comparison with the target period to monitor fraud trend changes over time. 
The output of this procedure is high risk segments identified throughout the target period”. (col. 4, ln. 63-67)) Gai in view of Jiang does not disclose: • “wherein the control comprises a slider and the corresponding threshold is received by the server system based on a position of a slider in the graphical user interface and the slider is movable over a range of potential values for the corresponding threshold” (claim 3). However, as per Claim 3, Crawford in the analogous art of internet-based fraud detection, teaches: “wherein the control comprises a slider and the corresponding threshold is received by the server system based on a position of a slider in the graphical user interface and the slider is movable over a range of potential values for the corresponding threshold”. (See “Fraud detection is facilitated by using matching rules to uncover clusters of entities, by then generating cluster membership rules and converting those rules to database queries. The cluster membership rules are based upon an accumulation of links of various types and strengths between entities . In one embodiment, the entities are website accounts, clusters are identified, and the system then constructs cluster membership rules for identifying subsequent accounts that match the attributes of those clusters. The cluster membership rules are designed to define the parameters of the identified clusters. When the rules are deployed in a transaction blocking system, for example, when a rule that describes an identified cluster is triggered, the transaction blocking system blocks the transaction with respect to new users who enter the website.” (Para. 0008); “At a certain point, the user begins to identify what might be a cluster and then the user can add or remove accounts from the cluster as desired using cluster editor 105. 
When the user is satisfied with a cluster, the cluster can be automatically characterized by cluster explainer 106, with that characterization being represented by a decision tree. That decision tree can then be transformed to a corresponding SQL expression which can be applied to the database for later retrieval of additional matching accounts. Cluster explainer 106 is used to automatically induce a set of cluster membership rules that identify the parameters that caused an account to be part of the identified cluster. For example, the rules might indicate that “to be a member of the cluster, the e-mail address must follow a certain pattern and the security answer must follow another pattern, and the account holder must be a resident in Bakersfield, and so on and so forth. These membership rules can be modified, if desired, by the user via rule editor 107.” (para. 0021-0022); See Fig. 1, “generated SQL query”. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the method of Gai in view of Jiang with the technique of Crawford to include user-controlled adjustment of rule parameters within a transaction authorization process. Therefore, the incentives of providing increased data accessibility for the user provided a reason to make an adaptation, and the invention resulted from application of the prior knowledge in a predictable manner. As per Claim 4, Gai teaches: The computer-implemented method of claim 1, further comprising: determining, by the server system, a false positive rate based on a number of transactions in the set of blocked transactions without the target label; and transmitting, by the server system via a network, data that configures the . . . to render an indication of the false positive rate. (“Business computing systems may take all the rules from the ARIE in 442, look at result of out of time validation, and review and select rules. 
Selection may be based on combination of individual rule performance (False Positive Rate, rule benefit in dollar amount) and operation capacity (# of expected out sorts). In block 446, once rules are selected, fraud rule implementation testing may be performed. Once implementation testing is successful, businesses may implement selected rules in the production system. Finally, the rule is implemented in the production system in block 448. Transfer of validated rules to the production system may occur automatically in order to further reduce the reaction time in response to emerging fraud trends.” (col. 8, ln. 11-24); “The rule refinement process then selects the final tree from this subtree sequence created with CCP by picking the one with the lowest misclassification error rate on a prune dataset. The estimated misclassification rate of tree T on a prune dataset is designated by R's(T). Experiments have shown that usually there is quite a long subsequence of trees with error rates close to each other. Experiments have also shown that the tree size that achieves the minimum within this long flat valley is quite sensitive to the transaction data selected for training. To reduce this instability, the smallest tree with R's within one standard error of the minimum is selected per the 1-SE rule (Breiman et al. [1984]).” (col. 9, ln. 40-53); “In block 446, once rules are selected, fraud rule implementation testing may be performed. Once implementation testing is successful, businesses may implement selected rules in the production system. Finally, the rule is implemented in the production system in block 448. Transfer of validated rules to the production system may occur automatically in order to further reduce the reaction time in response to emerging fraud trends.” (col. 8, ln. 16-24)). Gai does not disclose: • “a graphical user interface (GUI) to display” (claim 4). 
However, as per Claim 4 , Crawford in the analogous art of internet-based fraud detection, teaches: “a graphical user interface (GUI) to display”. (See “The user then can right click on screen 20 to show expand-on box 204. The user then selects “credit card” for further expansion.” (para. 0028); “The user then brings up pattern match generator 1300 as shown in FIG. 13 and begins to create a pattern matcher.” (para. 0045); “The user begins by using box 1301 and selecting what the first part of the pattern will be, in this case the user selects the word “first.” Then using box 1302, the user selects N (which would mean the first N characters) and another box pops up to allow the user to select the specific value for N. In our case, the user selects “1.” The user would then go to box 1303 and select where those characters are from. In this case, the user would select “First Name Field” and then using box 1304 would select the “followed by” notation. The user would then press the “Next Phrase” button and then would repeat back at box 1301 to select the word “exactly” followed by the “2” from box 1302, followed by “the integers” from box 1303. Then the user would select “followed by” from box 1304, then press the “Next Phrase” button again, then would repeat back at box 1301 and select the words “all” from box 1301, and then “Last Name Field” from box 1303.” (para. 0045); “The suspicious account activity is defined by a set of rules that describe the attributes of accounts that are members of the cluster.” (para. 0023); “These membership rules can be modified, if desired, by the user via rule editor 107.” (para. 0022)) It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the method of Gai with the technique of Crawford to include a graphical user interface (GUI) where the user is providing input in terms of selecting particular attributes of the plurality of attributes within a transaction authorization process. 
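Crawford's phrase-by-phrase pattern builder (para. 0045) amounts to assembling a matcher from user-selected phrases such as "first N characters of a field" and "exactly N integers." A minimal sketch of that idea as regular-expression construction follows; the tuple encoding, field names, and function name are hypothetical, not taken from the reference:

```python
import re

# Hypothetical encoding of Crawford's phrase-by-phrase pattern builder.
# Each phrase is (quantifier, count, source); the layout is illustrative.
def compile_pattern(phrases, record):
    parts = []
    for quant, count, source in phrases:
        if quant == "first":
            # "first N characters of <field>", matched literally
            parts.append(re.escape(record[source][:count]))
        elif quant == "exactly" and source == "the integers":
            # "exactly N integers"
            parts.append(r"\d" * count)
        elif quant == "all":
            # "all of <field>", matched literally
            parts.append(re.escape(record[source]))
    return re.compile("^" + "".join(parts) + "$")

# Crawford's example: first 1 character of the First Name Field, followed
# by exactly 2 integers, followed by all of the Last Name Field.
record = {"first_name": "John", "last_name": "Smith"}
matcher = compile_pattern(
    [("first", 1, "first_name"),
     ("exactly", 2, "the integers"),
     ("all", None, "last_name")],
    record,
)
matcher.match("J42Smith")     # matches
matcher.match("John42Smith")  # no match
```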
Therefore, the incentives of providing an enriched user experience and greater degree of UI control provided a reason to make an adaptation, and the invention resulted from application of the prior knowledge in a predictable manner. As per Claim 5, Gai teaches: The computer-implemented method of claim 1, wherein the target label is associated with fraudulent transactions. (“Fraud analysts look for potential fraud trends using several separate fraud reports, individually pulling data for identified target sections, segments, divisions, or areas, exploring data in more detail and potentially further conducting data analytics through multi-dimensional division and selecting key fraud drivers/attributes for rule/strategy development. After the rule/strategy is developed, analysts send the rule strategy to management for approval and to information technology experts for implementation.” (col. 1, ln. 30-38); “The system automatically identifies new fraud trends and/or emerging fraud attacks, automatically constructs associated development data for strategy creation, and automatically conducts rule induction. In embodiments of the invention, the rule induction may be conducted using a tree-based classification method. Ultimately, the dynamic rule strategy and fraud detection system may be implemented in a production system to fully realize the power of combining advanced methodology, automation, and big data analytics in fraud space. The dynamic rule strategy provides an automatic framework to develop fraud rules. During each execution, the dynamic rule strategy pulls recent transactional data and performs fraud rule development.” (col. 4, ln. 10-22); “The criteria of potential concentration of fraud attacks/alert could be customized based on business input. The output of this automated trigger notification procedure is high risk segments with fraud concentration based on trigger event criteria. 
High risk segments are a subset of all the predefined segments, and they are selected through automated trigger identification for rule induction based on the analysis of the most recent historical transaction window.” (col. 5, ln. 8-16)) As per Claim 6, Gai teaches: The computer-implemented method of claim 1, wherein the target label is applied to the portion of the set of transactions based on user input. (“The automated trigger identification engine 210 achieve identification through segmentation. Segmentation is intended to capture potential concentrations of ongoing fraud attacks at a relatively granular level. There are multiple ways to segment the portfolio of transactions. In embodiments of the invention, the segments may be user defined such that segments uniquely vulnerable to fraud can be isolated. For each dynamic rule strategy execution, a period of transactional data is used in automated trigger identification.” (col. 4, ln. 47-57); “In S319, the system denotes segments, for example as user defined watch segments or as segments with high fraud concentration. User defined segments may include segments that user wants to monitor and model continuously using ARIE regardless of whether the segments are identified as risky by the automated trigger engine. These segments may be designated, for example, if a user believes that these segments are generally at risk. Those segments as well as model ready data (S323) associated will also be sent to ARIE for rule induction.” (col. 6, ln. 64 – col. 7, ln. 6); “The criteria of potential concentration of fraud attacks/alert could be customized based on business input. The output of this automated trigger notification procedure is high risk segments with fraud concentration based on trigger event criteria.
High risk segments are a subset of all the predefined segments, and they are selected through automated trigger identification for rule induction based on the analysis of the most recent historical transaction window.” (col. 5, ln. 8-16)) As per Claim 7, Gai teaches: The computer-implemented method of claim 1, wherein the portion of the set of transactions that include the target label include a first portion of the set of transactions and a second portion of the set of transactions include a non-target label wherein the one or more predictive attributes determined by processing the set of transactions by the ML model of the server system are more indicative of transactions in the first portion of the set of transactions that include the target label and less indicative of transactions in the second portion of the set of transactions that include the non-target label. (“The algorithm of the ARIE 220 may include a tree-based classification method which is a flexible and simple technique to provide clear output. The algorithm is particularly powerful in fraud modeling and strategy analysis given its data-driven nature. It can handle missing values and is not sensitive to outliers. Given the underlying nature of fraud prevention where multiple risk drivers are integrated to represent the target, the tree-based classification method is a good tool for direct variable interaction. Thus, the tree-based classification method is leveraged to differentiate the risk of card transactions or other applicable types of transactions. With tree-based classification, the ARIE 220 can rank the likelihood of fraudulent transactions based on the probability, which will be used to facilitate the generation of fraud rules.” (col. 5, ln. 43-56); “Multiple metrics by segment are calculated for fraud reporting and trigger identification throughout the target period. The length of all periods can be user defined. 
The metrics of the target period may be used for high risk segment detection based on trigger criteria, and the metrics of a historical transaction window will be used for comparison with the target period to monitor fraud trend changes over time. The output of this procedure is high risk segments identified throughout the target period.” (col. 5, ln. 59-67); “Thus, the automated trigger identification engine 210 pulls all approved transactions and analyzes them to identify a high risk population in terms of fraud concentration based on customized trigger criteria. Once the high risk population is identified, the Adaptive Rule Induction Engine (ARIE) 220 automatically executes and models the high risk population to transform observed fraud trends to fraud rules. Once the ARIE 220 generates fraud rules, the control engine 230 implements out of time validation, implementation testing, business review, and potential analyst review before the each fraud rule is implemented in the production system” (col. 5, ln. 16-28) As per Claim 8, Gai teaches: The computer-implemented method of claim 7, wherein the target label is associated with fraudulent transactions and the non-target label is associated with non-fraudulent transactions. (“The algorithm of the ARIE 220 may include a tree-based classification method which is a flexible and simple technique to provide clear output. The algorithm is particularly powerful in fraud modeling and strategy analysis given its data-driven nature. It can handle missing values and is not sensitive to outliers. Given the underlying nature of fraud prevention where multiple risk drivers are integrated to represent the target, the tree-based classification method is a good tool for direct variable interaction. Thus, the tree-based classification method is leveraged to differentiate the risk of card transactions or other applicable types of transactions. 
With tree-based classification, the ARIE 220 can rank the likelihood of fraudulent transactions based on the probability, which will be used to facilitate the generation of fraud rules.” (col. 5, ln. 43-56); “Multiple metrics by segment are calculated for fraud reporting and trigger identification throughout the target period. The length of all periods can be user defined. The metrics of the target period may be used for high risk segment detection based on trigger criteria, and the metrics of a historical transaction window will be used for comparison with the target period to monitor fraud trend changes over time. The output of this procedure is high risk segments identified throughout the target period.” (col. 5, ln. 59-67); “Thus, the automated trigger identification engine 210 pulls all approved transactions and analyzes them to identify a high risk population in terms of fraud concentration based on customized trigger criteria. Once the high risk population is identified, the Adaptive Rule Induction Engine (ARIE) 220 automatically executes and models the high risk population to transform observed fraud trends to fraud rules. Once the ARIE 220 generates fraud rules, the control engine 230 implements out of time validation, implementation testing, business review, and potential analyst review before the each fraud rule is implemented in the production system” (col. 5, ln. 16-28). As per Claim 10, Gai teaches: The computer-implemented method of claim 1, . . .. (“Thus, the automated trigger identification engine 210 pulls all approved transactions and analyzes them to identify a high risk population in terms of fraud concentration based on customized trigger criteria. Once the high risk population is identified, the Adaptive Rule Induction Engine (ARIE) 220 automatically executes and models the high risk population to transform observed fraud trends to fraud rules.” (Col. 5, ln. 
17-23); “Inputs for development of the dynamic rule strategy may be derived from analysis of point of sale (POS) authorization data or other data in accordance with the particular fraud channel. Inputs are valued as of the time that an authorization was received and a decision was made. Inputs may include for example, data relevant to all of the approved transactions during the course of a day or other selected time period.” (col. 6, ln. 13-20); “Multiple metrics by segment are calculated for fraud reporting and trigger identification throughout the target period. The length of all periods can be user defined. The metrics of the target period may be used for high risk segment detection based on trigger criteria, and the metrics of a historical transaction window will be used for comparison with the target period to monitor fraud trend changes over time. The output of this procedure is high risk segments identified throughout the target period”. (col. 4, ln. 63-67)) Gai in view of Jiang does not disclose: • “wherein the contact information includes an email address” (claim 10). However, as per Claim 10, Crawford in the analogous art of internet-based fraud detection, teaches: “wherein the contact information includes an email address”. (See Fig. 2-6 with node labels “stilgoing13@domain.name5.”; “The user inspects the display, looking for similarities across the three nodes being displayed, and notices that the e-mail addresses for all of these nodes are similar. The user brings up expand-on box 304 and checks the “e-mail” box. This instructs the system to link to additional accounts that have email addresses that match any of the email addresses of the three visible nodes according to the matching rules that have been established for email addresses” (para. 
0030)) It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the method of Gai in view of Jiang with the technique of Crawford to include email addresses as part of user information within a transaction authorization process. Therefore, the incentives of providing increased data accessibility for the user provided a reason to make an adaptation, and the invention resulted from application of the prior knowledge in a predictable manner. As per Claim 11, Gai teaches: The computer-implemented method of claim 1, wherein values for a first attribute in the set of attributes include numerical values and values for a second attribute in the set of attributes include Boolean values. (“With respect to data cleansing, categorical predictors with over a pre-set number of levels are excluded from the development dataset. Entry of the data to the ARIE occurs in block 508. The data is processed in blocks 510, 512, 514, and 516 through reading of the csv (comma separated values) header, encoding of categorical levels, imputation of missing values and memory mapping. For a categorical input variable, a new category representing missing values is created for the duration of the tree growing. For a continuous input variable, missing values are imputed with a constant value, such as DBL_MAX, which denotes the maximum finite representable floating-point number for the duration of the tree growing. The ARIE performs stratified random sampling in block 520.” (Col. 8, ln. 31-45); “The algorithm is particularly powerful in fraud modeling and strategy analysis given its data-driven nature. It can handle missing values and is not sensitive to outliers.” (col. 5, ln.
45-47)) As per Claim 15, Gai teaches: The non-transitory computer readable storage medium of claim 12, the operations further comprising: determining, by the server system, a false positive rate based on a number of transactions in the set of blocked transactions without the target label; and transmitting, by the server system via a network, data that configures a graphical user interface to render information including the false positive rate. (“Business computing systems may take all the rules from the ARIE in 442, look at result of out of time validation, and review and select rules. Selection may be based on combination of individual rule performance (False Positive Rate, rule benefit in dollar amount) and operation capacity (# of expected out sorts). In block 446, once rules are selected, fraud rule implementation testing may be performed. Once implementation testing is successful, businesses may implement selected rules in the production system. Finally, the rule is implemented in the production system in block 448. Transfer of validated rules to the production system may occur automatically in order to further reduce the reaction time in response to emerging fraud trends.” (col. 8, ln. 11-24); “The rule refinement process then selects the final tree from this subtree sequence created with CCP by picking the one with the lowest misclassification error rate on a prune dataset. The estimated misclassification rate of tree T on a prune dataset is designated by R's(T). Experiments have shown that usually there is quite a long subsequence of trees with error rates close to each other. Experiments have also shown that the tree size that achieves the minimum within this long flat valley is quite sensitive to the transaction data selected for training. To reduce this instability, the smallest tree with R's within one standard error of the minimum is selected per the 1-SE rule (Breiman et al. [1984]).” (col. 9, ln. 
40-53); “In block 446, once rules are selected, fraud rule implementation testing may be performed. Once implementation testing is successful, businesses may implement selected rules in the production system. Finally, the rule is implemented in the production system in block 448. Transfer of validated rules to the production system may occur automatically in order to further reduce the reaction time in response to emerging fraud trends.” (col. 8, ln. 16-24)) As per Claim 16, Gai teaches: The non-transitory computer readable storage medium of claim 12, wherein the portion of the set of transactions that include the target label include a first portion of the set of transactions and a second portion of the set of transactions include a non-target label wherein the one or more predictive attributes determined by processing the set of transactions by the ML model of the server system are more indicative of transactions in the first portion of the set of transactions that include the target label and less indicative of transactions in the second portion of the set of transactions that include the non-target label. (“The algorithm of the ARIE 220 may include a tree-based classification method which is a flexible and simple technique to provide clear output. The algorithm is particularly powerful in fraud modeling and strategy analysis given its data-driven nature. It can handle missing values and is not sensitive to outliers. Given the underlying nature of fraud prevention where multiple risk drivers are integrated to represent the target, the tree-based classification method is a good tool for direct variable interaction. Thus, the tree-based classification method is leveraged to differentiate the risk of card transactions or other applicable types of transactions. With tree-based classification, the ARIE 220 can rank the likelihood of fraudulent transactions based on the probability, which will be used to facilitate the generation of fraud rules.” (col. 5, ln. 
43-57); “Both development and validation data may be pulled in this procedure. In S327, in order to ensure pure model saving, pipeline rule impact is excluded. The output of this procedure will be the input to the ARIE. In S330, the ARIE operates as will further describe below and outputs a summary and rule selection in S341, which are displayable on a user interface of the analyst system at 306.” (col. 7, ln. 8-14) As per Claim 19, Gai teaches: The server system of claim 17, wherein the portion of the set of transactions that include the target label include a first portion of the set of transactions and a second portion of the set of transactions include a non-target label wherein the one or more predictive attributes determined by processing the set of transactions by the ML model of the server system are more indicative of transactions in the first portion of the set of transactions that include the target label and less indicative of transactions in the second portion of the set of transactions that include the non-target label. (“The algorithm of the ARIE 220 may include a tree-based classification method which is a flexible and simple technique to provide clear output. The algorithm is particularly powerful in fraud modeling and strategy analysis given its data-driven nature. It can handle missing values and is not sensitive to outliers. Given the underlying nature of fraud prevention where multiple risk drivers are integrated to represent the target, the tree-based classification method is a good tool for direct variable interaction. Thus, the tree-based classification method is leveraged to differentiate the risk of card transactions or other applicable types of transactions. With tree-based classification, the ARIE 220 can rank the likelihood of fraudulent transactions based on the probability, which will be used to facilitate the generation of fraud rules.” (col. 5, ln. 43-57); “Both development and validation data may be pulled in this procedure. 
In S327, in order to ensure pure model saving, pipeline rule impact is excluded. The output of this procedure will be the input to the ARIE. In S330, the ARIE operates as will further describe below and outputs a summary and rule selection in S341, which are displayable on a user interface of the analyst system at 306.” (col. 7, ln. 8-14); “The control engine 230 plays an important role in enhancing the control process of the dynamic rule strategy model. It takes the output fraud rules of the ARIE 220 to validate the model using full validation data output through automated trigger identification. The control engine 230 may further utilize the Monte Carlo Cross Validation techniques for stability tests and send the rules for analyst review.” (col. 5, ln. 56-64) As per Claim 14, Gai teaches: The non-transitory computer readable storage medium of claim 12, . . . (“Thus, the automated trigger identification engine 210 pulls all approved transactions and analyzes them to identify a high risk population in terms of fraud concentration based on customized trigger criteria. Once the high risk population is identified, the Adaptive Rule Induction Engine (ARIE) 220 automatically executes and models the high risk population to transform observed fraud trends to fraud rules.” (Col. 5, ln. 17-23); “Inputs for development of the dynamic rule strategy may be derived from analysis of point of sale (POS) authorization data or other data in accordance with the particular fraud channel. Inputs are valued as of the time that an authorization was received and a decision was made. Inputs may include for example, data relevant to all of the approved transactions during the course of a day or other selected time period.” (col. 6, ln. 13-20); “Multiple metrics by segment are calculated for fraud reporting and trigger identification throughout the target period. The length of all periods can be user defined. 
The metrics of the target period may be used for high risk segment detection based on trigger criteria, and the metrics of a historical transaction window will be used for comparison with the target period to monitor fraud trend changes over time. The output of this procedure is high risk segments identified throughout the target period”. (col. 4, ln. 63-67)) Gai in view of Jiang does not disclose: • “wherein the control comprises a slider and the corresponding threshold is received by the server system based on a position of a slider in the graphical user interface and the slider is movable over a range of potential values for the corresponding threshold” (claim 14). However, as per Claim 14, Crawford in the analogous art of internet-based fraud detection, teaches: “wherein the control comprises a slider and the corresponding threshold is received by the server system based on a position of a slider in the graphical user interface and the slider is movable over a range of potential values for the corresponding threshold”. (See “Fraud detection is facilitated by using matching rules to uncover clusters of entities, by then generating cluster membership rules and converting those rules to database queries. The cluster membership rules are based upon an accumulation of links of various types and strengths between entities. In one embodiment, the entities are website accounts, clusters are identified, and the system then constructs cluster membership rules for identifying subsequent accounts that match the attributes of those clusters. The cluster membership rules are designed to define the parameters of the identified clusters. When the rules are deployed in a transaction blocking system, for example, when a rule that describes an identified cluster is triggered, the transaction blocking system blocks the transaction with respect to new users who enter the website.” (Para.
0008); “At a certain point, the user begins to identify what might be a cluster and then the user can add or remove accounts from the cluster as desired using cluster editor 105. When the user is satisfied with a cluster, the cluster can be automatically characterized by cluster explainer 106, with that characterization being represented by a decision tree. That decision tree can then be transformed to a corresponding SQL expression which can be applied to the database for later retrieval of additional matching accounts. Cluster explainer 106 is used to automatically induce a set of cluster membership rules that identify the parameters that caused an account to be part of the identified cluster. For example, the rules might indicate that “to be a member of the cluster, the e-mail address must follow a certain pattern and the security answer must follow another pattern, and the account holder must be a resident in Bakersfield, and so on and so forth. These membership rules can be modified, if desired, by the user via rule editor 107.” (para. 0021-0022); See Fig. 1, “generated SQL query”. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the method of Gai in view of Jiang with the technique of Crawford to include user-controlled adjustment of rule parameters within a transaction authorization process. Therefore, the incentives of providing increased data accessibility for the user provided a reason to make an adaptation, and the invention resulted from application of the prior knowledge in a predictable manner. Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gai in view of Jiang and Crawford, and further in view of Gopinathan et al. (US5819226) (hereinafter “Gopinathan”). As per Claim 9, Gai teaches: The computer-implemented method of claim 1, . . .
(“Thus, the automated trigger identification engine 210 pulls all approved transactions and analyzes them to identify a high risk population in terms of fraud concentration based on customized trigger criteria. Once the high risk population is identified, the Adaptive Rule Induction Engine (ARIE) 220 automatically executes and models the high risk population to transform observed fraud trends to fraud rules.” (Col. 5, ln. 17-23); “Inputs for development of the dynamic rule strategy may be derived from analysis of point of sale (POS) authorization data or other data in accordance with the particular fraud channel. Inputs are valued as of the time that an authorization was received and a decision was made. Inputs may include for example, data relevant to all of the approved transactions during the course of a day or other selected time period.” (col. 6, ln. 13-20); “Multiple metrics by segment are calculated for fraud reporting and trigger identification throughout the target period. The length of all periods can be user defined. The metrics of the target period may be used for high risk segment detection based on trigger criteria, and the metrics of a historical transaction window will be used for comparison with the target period to monitor fraud trend changes over time. The output of this procedure is high risk segments identified throughout the target period”. (col. 4, ln. 63-67)) Gai in view of Jiang does not disclose: • “wherein the set of attributes include a first attribute and a respective value of the first attribute for a respective transaction indicates an amount of time since contact information associated with the respective transaction was first identified by the server system” (claim 9). 
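The claimed first attribute, an elapsed time since the contact information associated with a transaction was first identified by the server system, can be sketched in a few lines. The store and function names below are hypothetical illustrations, not drawn from the application or the cited references:

```python
from datetime import date

# Minimal sketch of the claim-9 attribute: elapsed days since the contact
# information tied to a transaction was first seen by the server system.
first_seen = {}  # e-mail address -> date first observed by the system

def days_since_first_identified(email, txn_date):
    """Record the address on first sight, then return its age in days
    as of the transaction date."""
    first = first_seen.setdefault(email, txn_date)
    return (txn_date - first).days

days_since_first_identified("buyer@example.com", date(2023, 1, 1))   # → 0
days_since_first_identified("buyer@example.com", date(2023, 1, 15))  # → 14
```

A brand-new address yields an age of zero, which is the kind of signal a blocking rule could threshold on.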
However, as per Claim 9, Gopinathan in the analogous art of securing financial transactions, teaches: “wherein the set of attributes include a first attribute and a respective value of the first attribute for a respective transaction indicates an amount of time since contact information associated with the respective transaction was first identified by the server system”. (See “Module roll15.sas 1113 generates a 15-day rolling window of data. This data has multiple records for each accountday listed in sample.ssd. The current day and 14 preceding days are listed for each sample account. Module roll15to7.sas 1117 takes the roll15 data set and filters out days eight to 15 to produce roll7, a 7-day rolling window data set 1119. Days eight to 15 are ignored. Module genrolv.sas 1118 generates input variables for a rolling window of the previous 15 days of transactions. It processes a data set with multiple and variable numbers of records per account and produces a data set with one record per account. The result is called rollv.ssd. Module roll15to1.sas 1114 takes the roll15 data set and filters out days except the current day to produce roll1. Module gencurv.sas 1115 uses roll1 to generate current day variables 1116 describing transactions occurring during the current day” (col. 6, ln. 61 – col. 7, ln. 10); “Module genprof.sas generates profile variables which form the profile records 1111” (col. 7, ln. 11-12)) It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the method of Gai in view of Jiang with the technique of Gopinathan to include duration since initial identification as an additional step within a transaction authorization process. Therefore, the incentives of providing increased data security for the user provided a reason to make an adaptation, and the invention resulted from application of the prior knowledge in a predictable manner. As per Claim 20, Gai teaches: The server system of claim 17, . . ..
(“Thus, the automated trigger identification engine 210 pulls all approved transactions and analyzes them to identify a high risk population in terms of fraud concentration based on customized trigger criteria. Once the high risk population is identified, the Adaptive Rule Induction Engine (ARIE) 220 automatically executes and models the high risk population to transform observed fraud trends to fraud rules.” (Col. 5, ln. 17-23); “Inputs for development of the dynamic rule strategy may be derived from analysis of point of sale (POS) authorization data or other data in accordance with the particular fraud channel. Inputs are valued as of the time that an authorization was received and a decision was made. Inputs may include for example, data relevant to all of the approved transactions during the course of a day or other selected time period.” (col. 6, ln. 13-20); “Multiple metrics by segment are calculated for fraud reporting and trigger identification throughout the target period. The length of all periods can be user defined. The metrics of the target period may be used for high risk segment detection based on trigger criteria, and the metrics of a historical transaction window will be used for comparison with the target period to monitor fraud trend changes over time. The output of this procedure is high risk segments identified throughout the target period”. (col. 4, ln. 63-67)) Gai in view of Jiang does not disclose: • “wherein the set of attributes include a first attribute and a respective value of the first attribute for a respective transaction indicates an amount of time since contact information associated with the respective transaction was first identified by the server system” (claim 20). 
However, as per Claim 20, Gopinathan, in the analogous art of securing financial transactions, teaches: “wherein the set of attributes include a first attribute and a respective value of the first attribute for a respective transaction indicates an amount of time since contact information associated with the respective transaction was first identified by the server system”.

(See “Module roll15.sas 1113 generates a 15-day rolling window of data. This data has multiple records for each account-day listed in sample.ssd. The current day and 14 preceding days are listed for each sample account. Module roll15to7.sas 1117 takes the roll15 data set and filters out days eight to 15 to produce roll7, a 7-day rolling window data set 1119. Days eight to 15 are ignored. Module genrolv.sas 1118 generates input variables for a rolling window of the previous 15 days of transactions. It processes a data set with multiple and variable numbers of records per account and produces a data set with one record per account. The result is called rollv.ssd. Module roll15to1.sas 1114 takes the roll15 data set and filters out days except the current day to produce roll1. Module gencurv.sas 1115 uses roll1 to generate current day variables 1116 describing transactions occurring during the current day” (col. 6, ln. 61 – col. 7, ln. 10); “Module genprof.sas generates profile variables which form the profile records 1111” (col. 7, ln. 11-12).)

It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the method of Gai in view of Jiang with the technique of Gopinathan to include duration since initial identification as an additional step within a transaction authorization process. The incentive of providing increased data security for the user supplied a reason to make the adaptation, and the invention resulted from application of the prior knowledge in a predictable manner.

Conclusion

THIS ACTION IS MADE FINAL.
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US20210081948A1 (Kala), discussing “. . . At 108, the risk management server 85 may analyze the fraud data to filter out fraud transaction parameters that may be frequently identified as a trigger for a probable fraudulent transaction. . . . In some embodiments, the one or more fraud transaction parameters may be identified using machine learning techniques, such as decision trees, ensemble learning methods, random forest, etc. Random forest may be used as an algorithm to identify a list of fraud transaction parameters (categorical and non-categorical) that may have a relatively high influence on identifying fraudulent transactions” (Para. 0026); “At 312, the method may include providing a risk management graphical user interface (GUI) that may include selection options for fraud rules. In some embodiments, the available fraud rules may be based on the fraud parameters identified based on the fraud transaction data and/or the determination of high risk or high frequency merchants. An exemplary risk management GUI is shown in FIG. 4, but other embodiments are also contemplated.” (Para. 0043)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Justin A. Jimenez, whose telephone number is (571) 270-3080. The examiner can normally be reached 8:30 AM - 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John W. Hayes, can be reached at 571-272-6708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Justin Jimenez/
Patent Examiner, Art Unit 3697

/JOHN W HAYES/
Supervisory Patent Examiner, Art Unit 3697
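As a rough illustration of the rolling-window feature generation that Gopinathan is cited for above (a 15-day window filtered down to 7-day and current-day views, then collapsed to one record per account), the pipeline can be sketched in Python. The function names mirror the quoted SAS modules; the record layout and the aggregates computed are assumptions for illustration, not the patent's actual implementation.

```python
from collections import defaultdict
from datetime import date, timedelta

def roll15(transactions, current_day):
    """Keep the current day and the 14 preceding days (15-day window)."""
    start = current_day - timedelta(days=14)
    return [t for t in transactions if start <= t["day"] <= current_day]

def roll15to7(window15, current_day):
    """Filter out days eight to fifteen, leaving a 7-day window."""
    start = current_day - timedelta(days=6)
    return [t for t in window15 if t["day"] >= start]

def roll15to1(window15, current_day):
    """Keep only the current day's transactions."""
    return [t for t in window15 if t["day"] == current_day]

def genrolv(window):
    """Collapse multiple records per account into one record per account."""
    out = defaultdict(lambda: {"count": 0, "total": 0.0})
    for t in window:
        rec = out[t["account"]]
        rec["count"] += 1
        rec["total"] += t["amount"]
    return dict(out)

# Hypothetical data: 20 daily transactions for one account.
today = date(2023, 12, 5)
txns = [{"account": "A", "day": today - timedelta(days=d), "amount": 10.0}
        for d in range(20)]

w15 = roll15(txns, today)        # 15 records survive the 15-day window
w7 = roll15to7(w15, today)       # 7 records in the 7-day window
w1 = roll15to1(w15, today)       # 1 record for the current day
profile = genrolv(w15)           # one summary record per account
```

The point of the sketch is the claim-mapping question, not the code: a per-account profile built this way carries time-since-first-observation information only implicitly, which is the gap the applicant can press on.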

Prosecution Timeline

Dec 05, 2023
Application Filed
Apr 28, 2025
Non-Final Rejection — §103
Aug 07, 2025
Interview Requested
Sep 02, 2025
Response Filed
Dec 22, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591889
BLOCKCHAIN-BASED SOURCE IDENTIFIER
2y 5m to grant · Granted Mar 31, 2026
Patent 12591881
METHOD AND SYSTEM FOR BLOCKCHAIN SERVICE ORCHESTRATION
2y 5m to grant · Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
25%
Grant Probability
99%
With Interview (+85.7%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
