Prosecution Insights
Last updated: April 19, 2026
Application No. 19/025,281

REDUCING FALSE POSITIVES USING CUSTOMER FEEDBACK AND MACHINE LEARNING

Non-Final OA: §101 §103 §DP
Filed: Jan 16, 2025
Examiner: ANDERSON, SCOTT C
Art Unit: 3694
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: State Farm Mutual Automobile Insurance Company
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 89%

Examiner Intelligence

Grants 58% of resolved cases.

Career Allow Rate: 58% (595 granted / 1024 resolved; +6.1% vs TC avg)
Interview Lift: +30.9% for resolved cases with interview (strong)
Typical Timeline: 2y 7m avg prosecution; 38 currently pending
Career History: 1062 total applications across all art units
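As a quick sanity check, the headline figures above can be reproduced from the career counts. This is a minimal sketch; the with-interview allowance rate (89%) is taken as reported, since its underlying case counts are not shown here.

```python
# Reproduce the dashboard's headline examiner statistics from the
# career counts shown above (595 granted out of 1024 resolved).
granted, resolved = 595, 1024

career_allow_rate = granted / resolved          # ~0.581
with_interview_rate = 0.89                      # reported figure; counts not shown
interview_lift = with_interview_rate - career_allow_rate

print(f"Career allow rate: {career_allow_rate:.1%}")  # 58.1%
print(f"Interview lift:    {interview_lift:+.1%}")    # +30.9%
```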

Statute-Specific Performance

§101: 36.2% (-3.8% vs TC avg)
§103: 31.5% (-8.5% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)
Based on career data from 1024 resolved cases; comparisons are to Tech Center average estimates.

Office Action

§101 §103 §DP
DETAILED ACTION

This Office action is in reply to application no. 19/025,281, filed 16 January 2025. Claims 1-20 are pending and are considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 1 is objected to because of the following informalities: in the first line of the claim, “of determine” is grammatically incorrect. The Examiner suggests “for determining”. Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,236,439 (“reference patent”). Although the claims at issue are not identical, they are not patentably distinct from each other. What follows is a reproduction of claim 1 of the present application and claim 1 of the reference patent.
Language in the present application but not the reference patent is in boldface; language in the reference patent but not the present application is underlined, and a discussion follows.

Application: A computer-implemented method of determine a cause of a false positive fraud alert, the method comprising:
Patent: A computer-implemented method of using a machine-learned model to determine a cause of a false positive fraud alert, the method comprising:

Application: receiving, by one or more processors, transaction data associated with a financial transaction initiated by a customer;
Patent: receiving, by one or more processors, transaction data associated with a financial transaction initiated by a customer;

Application: determining, using a rules engine applying one or more fraud detection rules, a fraud alert associated with the financial transaction;
Patent: determining, using a rules engine applying one or more fraud detection rules, a fraud alert associated with the financial transaction;

Application: receiving, by the one or more processors and from a customer computing device, customer feedback indicating that the fraud alert is a false positive fraud alert;
Patent: receiving, by the one or more processors and from a customer computing device, customer feedback indicating that the fraud alert is a false positive fraud alert;

Patent: in response to receiving the customer feedback, providing the transaction data as input to a machine-learned model trained to output a piece of data within the transaction data that caused the false positive fraud alert;

Application: determining, by the one or more processors, and based at least in part on the transaction data and the customer feedback, a cause associated with the false positive fraud alert; and
Patent: determining, based on the output of the machine-learned model, a cause associated with the false positive fraud alert;

Application: modifying, by the one or more processors and based at least in part on the cause, a first fraud detection rule of the rules engine including a threshold associated with determining the false positive fraud alert.
Patent: modifying, based on the cause, a first fraud detection rule of the rules engine including a threshold associated with the piece of data that caused the false positive fraud alert.

As can be seen from the above, claim 1 of the reference patent is in several ways narrower than the present claim 1, such that one practicing the invention of the reference patent would necessarily infringe the corresponding portions of the present claim. The only distinctions in the present claim over the claim of the reference patent are: (1) the source of data used in a determination is different, transaction data and customer feedback rather than the output of a machine-learned model; however, in the reference patent, that same data are used to train the model, so in the patent, the same data are used for the determination, just indirectly; (2) the application is explicit that the determining is performed by a processor, but this is the case in the patent, just implicitly; (3) the application is broader as to the basis for the determination, as it only requires it be based “at least in part” on certain data, while the patent requires it be based on the output of the model, which was based on the data.

Distinction (1) is an obvious modification as it is broader; in data-flow terms, the application requires data -> determination where the patent requires data -> machine-learned model -> determination, so the application omits a step in the data flow. Distinction (2) is an obvious modification as it is simply a scrivener’s choice as to what extent to reiterate “by the processor”. Distinction (3) is an obvious modification as it is simply broader in the application. The corresponding dependent claims are either identical, or narrower in the reference patent.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to statutory categories of invention, as each is directed to a method (process) or system (machine). The claim(s) recite(s) a data gathering step (receiving transaction data), determining a fraud alert using rules, further data gathering (receiving data about the alert), determining a cause in no particular manner but merely based on the available data, and changing a number in no particular manner based on all of this.

First, as the alert is based on a "financial transaction initiated by a customer", it is directed to determination of information related to fraud in a financial transaction, which is a commercial interaction, one of the "certain methods of organizing human activity" deemed abstract. Second, these are steps that can be performed by humans in the mind or with pen and paper. Before computers, it was quite routine for human clerks to receive financial transaction data and to check such data for indicia of fraud while being alert to the possibility of false-positive fraud indicia. A person can use such steps to mentally work out a reason or cause for an alert and can write a new rule on a sheet of paper. None of this would present any practical difficulty, and none requires any technology beyond pens and paper.

This judicial exception is not integrated into a practical application because aside from the bare inclusion of a generic computer, discussed below, nothing is done beyond what was set forth above, which does not go beyond generally linking the abstract idea to the technological environment of generic, networked computers.
See MPEP § 2106.05(h). As the claims only manipulate data related to financial transactions, fraud alerts and the like, they do not improve the "functioning of a computer" or of "any other technology or technical field". See MPEP § 2106.05(a). They do not apply the abstract idea "with, or by use of a particular machine", MPEP § 2106.05(b), as the below-cited Guidance is clear that a generic computer is not the particular machine envisioned. They do not effect a "transformation or reduction of a particular article to a different state or thing", MPEP § 2106.05(c). First, such data, being intangible, are not a particular article at all. Second, the claimed manipulation is neither transformative nor reductive; as the courts have pointed out, in the end, data are still data. They do not apply the abstract idea "in some other meaningful way beyond generally linking [it] to a particular technological environment", MPEP § 2106.05(e), as the lack of technical and algorithmic detail in the claims is so as not to go beyond such a general linkage.

The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional claim limitations, considered individually and as an ordered combination, are insufficient to elevate an otherwise-ineligible claim. Claim 12, which has the most, includes a processor and media storing instructions. These elements are recited at a high degree of generality and the specification does not meaningfully limit them, such that a generic computer will suffice. It only performs generic computer functions of nondescriptly manipulating data and sharing data with persons and/or other computers. Generic computers performing generic computer functions, without an inventive concept, do not amount to significantly more than the abstract idea. The type of information being manipulated does not impose meaningful limitations or render the idea less abstract.
The claim elements when considered as an ordered combination - a generic computer performing a chronological sequence of abstract steps - do nothing more than when they are analyzed individually. The other independent claim is simply a different embodiment but is likewise directed to a generic computer performing, essentially, the same process.

The dependent claims further do not amount to significantly more than the abstract idea: claims 2, 5, 8, 9, 13, 16 and 19 are simply further descriptive of the type of information being manipulated. Claims 3 and 14 simply require iteration. Claims 4, 10 and 15 simply recite further, abstract manipulation of data. Claims 6, 7, 17 and 20 simply require additional sharing of data. Claims 11 and 18 are a nullity, simply disclosing that a step is not performed. The claims are not patent eligible.

For further guidance please see MPEP § 2106.03 – 2106.07(c) (formerly referred to as the “2019 Revised Patent Subject Matter Eligibility Guidance”, 84 Fed. Reg. 50, 55 (7 January 2019)).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 7, 8, 11-14 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Zang et al. (U.S. Patent No. 8,413,234) in view of Geckle et al. (U.S. Publication No. 2015/0170147). In-line citations are to Zang.
With regard to Claim 1: Zang teaches: A computer-implemented method [Col. 1, lines 26-27; a computer executes “a set of computer-useable instructions”] of determine a cause of a false positive fraud alert, the method comprising:

receiving, by one or more processors, transaction data associated with a financial transaction… [Col. 3, lines 63-64; Col. 4, lines 1-4; a “communications record” is received involving “activity that took place over a given period of time” such as a “billing cycle” which reads on it being associated with one or more financial transactions]

determining, using a rules engine applying one or more fraud detection rules, a fraud alert associated with the financial transaction; [Col. 4, lines 53-54; based on the source of a transaction a “fraud alert” may be “generated”]

receiving, by the one or more processors… feedback indicating that the fraud alert is a false positive fraud alert; [Col. 6, lines 14-15; “obtaining feedback on fraud alerts and adjusting the threshold” in order to mitigate the problem that, lines 8-9, “too many false positives are being generated”]

determining, by the one or more processors, and based at least in part on the transaction data and the customer feedback, a cause associated with the false positive fraud alert; [Claim 8; it may be that a particular communications device is “classified as a fraudulent source”] and

modifying, by the one or more processors and based at least in part on the cause, a first fraud detection rule of the rules engine including a threshold associated with determining the false positive fraud alert. [Col. 6, lines 14-15 as cited above]

Zang does not explicitly teach that a transaction is initiated by a customer or that feedback is customer feedback received from a customer computing device, but it is known in the art. Geckle teaches a system for cancelling a transaction based on a determination of a “fraud-score” which also measures a “false positive” fraud transaction rate. [abstract] A user may interact with the system using a “laptop computer”, “mobile phone” or “tablet computer” to make, for example, a “purchase” of a selected item; the purchase is then confirmed. [0015] “Customer feedback” may be used to identify a transaction as a “false positive”. [Claim 10] This may be done after someone asks the customer to confirm whether or not a transaction was fraudulent. [0023] The fraud models may be “based on historical transactions”, comparing them to present transactions. [0018] Geckle and Zang are analogous art as each is directed to electronic means for making fraud determinations.

It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Geckle with that of Zang in order to reduce the complications of identifying fraudulent transactions, as taught by Geckle; [0001] further, it is simply a substitution of one known part for another with predictable results, simply obtaining feedback and initiating a transaction in the manner of Geckle rather than, or in addition to, that of Zang; the substitution produces no new and unexpected result.

It is notoriously old and well-known in the software arts that a developer can divide an application into subcomponents as she chooses and can name them whatever she likes. Therefore, in this and the subsequent claims, referring to software components by name, such as a “rules engine”, is considered mere labeling and given no patentable weight.
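The claim 1 limitations mapped above describe a feedback loop: a rules engine raises an alert, customer feedback marks it a false positive, and a rule threshold is modified. A minimal sketch of that loop follows; the amount-based rule and every name in it are invented for illustration and come from neither the cited references nor the application.

```python
# Hypothetical sketch of the claimed feedback loop; the amount-based rule
# and all identifiers are invented for illustration only.

class RulesEngine:
    def __init__(self, amount_threshold):
        self.amount_threshold = amount_threshold

    def check(self, txn):
        # One fraud detection rule: alert on amounts above the threshold.
        return txn["amount"] > self.amount_threshold

engine = RulesEngine(amount_threshold=500)
txn = {"amount": 800}

if engine.check(txn):                            # fraud alert determined
    false_positive = True                        # customer feedback: not fraud
    if false_positive:
        cause = "amount"                         # cause from data + feedback
        engine.amount_threshold = txn["amount"]  # modify the rule's threshold

assert engine.check(txn) is False                # same transaction no longer alerts
```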
With regard to Claim 2: The computer-implemented method of claim 1, wherein determining the cause comprises identifying, as the cause associated with the false positive fraud alert, at least one of: a transaction amount of the financial transaction; a merchant associated with the financial transaction; a purchased product or service associated with the financial transaction; a transaction time of the financial transaction; or a transaction type associated with the financial transaction. [Col. 4, lines 53-54 as cited above in regard to claim 1; a purchase made with a suspected fraudulent device reads on a transaction type; Geckle, 0015 as cited above in regard to claim 1; an item selected for a purchase which is then confirmed reads on the claimed purchased product or service]

With regard to Claim 3: The computer-implemented method of claim 2, further comprising: determining, as a second cause associated with the false positive fraud alert, the first fraud detection rule of the rules engine. [Geckle, 0028; an “amount of false positive fraud transactions” may be used; 0046; a suspected social security number may be used; this would have been obvious to one then of ordinary skill in the art, as it is simply a substitution of known data for other data with predictable results]

With regard to Claim 7: The computer-implemented method of claim 1, wherein receiving the customer feedback comprises: based on determining the fraud alert, transmitting an electronic fraud alert to the customer computing device; and receiving the customer feedback from the customer computing device, responsive to the electronic fraud alert, indicating that the fraud alert is a false positive. [Geckle, 0023, 0015 as cited above in regard to claim 1]

With regard to Claim 8: The computer-implemented method of claim 1, wherein determining the cause associated with the false positive fraud alert comprises identifying one or more inconsistencies between a fact pattern associated with the financial transaction, and a historical transaction usage pattern of the customer. [Geckle, 0018 as cited above in regard to claim 1]

With regard to Claim 11: The computer-implemented method of claim 1, further comprising: causing at least one of an electronic fraud alert or an account freeze associated with the financial transaction not to be performed, based on receiving the indication that the fraud alert is a false positive. This claim is not patentably distinct from claim 1 as it essentially requires nothing more than conditionally doing nothing.

With regard to Claim 12: Zang teaches: A computer system for determining a cause of a false positive fraud alert, the computer system comprising… one or more non-transitory computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations [Col. 2, lines 32-33; “nontransitory computer-readable media” store, line 40, “computer-useable instructions”; the following paragraphs detail the method of carrying out the instructions] comprising:

receiving transaction data associated with a financial transaction… [Col. 3, lines 63-64; Col. 4, lines 1-4; a “communications record” is received involving “activity that took place over a given period of time” such as a “billing cycle” which reads on it being associated with one or more financial transactions]

determining, using a rules engine applying one or more fraud detection rules, a fraud alert associated with the financial transaction; [Col. 4, lines 53-54; based on the source of a transaction a “fraud alert” may be “generated”]

receiving … feedback indicating that the fraud alert is a false positive fraud alert; [Col. 6, lines 14-15; “obtaining feedback on fraud alerts and adjusting the threshold” in order to mitigate the problem that, lines 8-9, “too many false positives are being generated”]

determining, based at least in part on the transaction data and the customer feedback, a cause associated with the false positive fraud alert; [Claim 8; it may be that a particular communications device is “classified as a fraudulent source”] and

modifying, based at least in part on the cause, a first fraud detection rule of the rules engine including a threshold associated with determining the false positive fraud alert. [Col. 6, lines 14-15 as cited above]

Zang does not explicitly teach that a transaction is initiated by a customer or that feedback is customer feedback received from a customer computing device, nor does Zang explicitly point out that his computer must necessarily include one or more processors, but it is known in the art. Geckle teaches a system for cancelling a transaction based on a determination of a “fraud-score” which also measures a “false positive” fraud transaction rate. [abstract] A user may interact with the system using a “laptop computer”, “mobile phone” or “tablet computer” to make, for example, a “purchase” of a selected item; the purchase is then confirmed. [0015] The system includes a “processor and memory”. [id.] “Customer feedback” may be used to identify a transaction as a “false positive”. [Claim 10] This may be done after someone asks the customer to confirm whether or not a transaction was fraudulent. [0023] The fraud models may be “based on historical transactions”, comparing them to present transactions. [0018] Geckle and Zang are analogous art as each is directed to electronic means for making fraud determinations.
It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Geckle with that of Zang in order to reduce the complications of identifying fraudulent transactions, as taught by Geckle; [0001] further, it is simply a substitution of one known part for another with predictable results, simply obtaining feedback and initiating a transaction in the manner of Geckle rather than, or in addition to, that of Zang; the substitution produces no new and unexpected result.

With regard to Claim 13: The computer system of claim 12, wherein determining the cause comprises identifying, as the cause associated with the false positive fraud alert, at least one of: a transaction amount of the financial transaction; a merchant associated with the financial transaction; a purchased product or service associated with the financial transaction; a transaction time of the financial transaction; or a transaction type associated with the financial transaction. [Col. 4, lines 53-54 as cited above in regard to claim 1; a purchase made with a suspected fraudulent device reads on a transaction type; Geckle, 0015 as cited above in regard to claim 1; an item selected for a purchase which is then confirmed reads on the claimed purchased product or service]

With regard to Claim 14: The computer system of claim 13, the operations further comprising: determining, as a second cause associated with the false positive fraud alert, the first fraud detection rule of the rules engine. [Geckle, 0028; an “amount of false positive fraud transactions” may be used; 0046; a suspected social security number may be used; this would have been obvious to one then of ordinary skill in the art, as it is simply a substitution of known data for other data with predictable results]

With regard to Claim 17: The computer system of claim 12, wherein receiving the customer feedback comprises: based on determining the fraud alert, transmitting an electronic fraud alert to the customer computing device; and receiving the customer feedback from the customer computing device, responsive to the electronic fraud alert, indicating that the fraud alert is a false positive. [Geckle, 0023, 0015 as cited above in regard to claim 1]

With regard to Claim 18: The computer system of claim 12, the operations further comprising: causing at least one of an electronic fraud alert or an account freeze associated with the financial transaction not to be performed, based on receiving the indication that the fraud alert is a false positive. This claim is not patentably distinct from claim 1 as it essentially requires nothing more than conditionally doing nothing.

With regard to Claim 19: The computer system of claim 12, wherein determining the cause associated with the false positive fraud alert comprises identifying one or more inconsistencies between a fact pattern associated with the financial transaction, and a historical transaction usage pattern of the customer. [Geckle, 0018 as cited above in regard to claim 12]

Claim(s) 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zang et al. in view of Geckle et al. further in view of Siddens et al. (U.S. Publication No. 2013/0282583). These claims are similar so are analyzed together.
With regard to Claim 4: The computer-implemented method of claim 1, wherein modifying the first fraud detection rule of the rules engine comprises adding a fraud detection rule or subtracting a fraud detection rule from a set of rules applied by the rules engine.

With regard to Claim 15: The computer system of claim 12, wherein modifying the first fraud detection rule of the rules engine comprises adding a fraud detection rule or subtracting a fraud detection rule from a set of rules applied by the rules engine.

Zang and Geckle teach the method of claim 1 and system of claim 12 but do not explicitly teach adding or removing a rule; however, it is known in the art. Siddens teaches a fraud detection rule interaction system. [title] It may “add to the rule profile or to store newly created fraud detection rules created by the user for the rule profile”. [0074] Siddens and Zang are analogous art as each is directed to electronic means for making determinations regarding fraud. It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Siddens with that of Zang in order to ameliorate the difficulty in identifying interactions between fraud rules, as taught by Siddens; [0003] further, it is simply a substitution of one known part for another with predictable results, simply managing rules in the manner of Siddens in place of, or in addition to, that of Zang; the substitution produces no new and unexpected result.

Claim(s) 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zang et al. in view of Geckle et al. further in view of Adjaoute (U.S. Publication No. 2016/0086185).
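Claims 4/15, treated above, cover "modifying" the rules engine by adding or subtracting a rule from the applied set. In code terms that is just mutation of a rule collection; the rule names and predicates below are invented for illustration and appear in none of the cited references.

```python
# Hypothetical rule set keyed by name; predicates take a transaction dict.
rules = {
    "large_amount": lambda t: t["amount"] > 500,
}

# "Adding a fraud detection rule" to the set applied by the engine ...
rules["foreign_merchant"] = lambda t: t["merchant_country"] != "US"
# ... or "subtracting" one identified as a false-positive cause.
del rules["large_amount"]

txn = {"amount": 800, "merchant_country": "US"}
alert = any(rule(txn) for rule in rules.values())
assert alert is False  # the removed amount rule no longer fires
```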
With regard to Claim 5: The computer-implemented method of claim 1, wherein modifying the first fraud detection rule comprises modifying at least one of: a transaction amount criteria of the first fraud detection rule; a merchant criteria of the first fraud detection rule; a purchased product or service criteria of the first fraud detection rule; or a transaction type criteria of the first fraud detection rule.

With regard to Claim 16: The computer system of claim 12, wherein modifying the first fraud detection rule comprises modifying at least one of: a transaction amount criteria of the first fraud detection rule; a merchant criteria of the first fraud detection rule; a purchased product or service criteria of the first fraud detection rule; or a transaction type criteria of the first fraud detection rule.

Zang and Geckle teach the method of claim 1 and system of claim 12 but do not explicitly teach modifying one of these; however, it is known in the art. Adjaoute teaches a financial risk alert system [title] that performs “fraud detection” using a “computer model”. [0258] A threshold may include a “threshold amount” of dollars in a transaction, [0198] and the system may update the thresholds. [0288] Adjaoute and Zang are analogous art as each is directed to electronic means for managing fraud determinations. It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Adjaoute with that of Zang and Geckle in order to limit losses, as taught by Adjaoute; [0002] further, it is just a substitution of one known part for another with predictable results, simply modifying the element of Adjaoute rather than that of Zang; the substitution produces no new and unexpected result.

Claim(s) 6 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zang et al. in view of Geckle et al. further in view of Dhurandhar et al. (U.S. Publication No. 2015/0242856).
With regard to Claim 6: The computer-implemented method of claim 1, wherein modifying the first fraud detection rule comprises: providing the cause associated with the false positive fraud alert to a machine-learned model; and determining, based at least in part on an output of the machine-learned model, a modification to the first fraud detection rule.

With regard to Claim 20: The computer system of claim 12, wherein modifying the first fraud detection rule comprises: providing the cause associated with the false positive fraud alert to a machine-learned model; and determining, based at least in part on an output of the machine-learned model, a modification to the first fraud detection rule.

Zang and Geckle teach the method of claim 1 and system of claim 12, including modifying a rule as cited above, but do not explicitly teach using a machine-learned model; however, it is known in the art. Dhurandhar teaches a fraud identification system [title] that uses “machine learning” to identify “true or false positive labels/flags of fraud”. [0011] It may then “update business rules” using the results. [0012] Dhurandhar and Zang are analogous art as each is directed to electronic means for making determinations related to fraud. It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Dhurandhar with that of Zang and Geckle in order to reduce the difficulty of identifying certain types of fraud, as taught by Dhurandhar; [0002] further, it is simply a substitution of one known part for another with predictable results, simply making a determination in the manner of Dhurandhar rather than that of Zang; the substitution produces no new and unexpected result.

Claim(s) 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Zang et al. in view of Geckle et al. further in view of Boding et al. (U.S. Publication No. 2014/0089193).
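Claims 6/20, treated above, feed the cause into a machine-learned model and take the rule modification from the model's output. A sketch of that shape follows, with a trivial placeholder standing in for a trained model; no real model, library API, or claimed implementation is implied.

```python
# Placeholder for a trained model mapping a false-positive cause to a
# suggested rule modification; a real model would be learned from feedback.
def model_predict(cause):
    return {"rule": "amount_rule", "new_threshold": 900}

# Hypothetical rules engine state: one rule with a modifiable threshold.
rules = {"amount_rule": {"threshold": 500}}

# Provide the cause to the model; apply the modification from its output.
modification = model_predict(cause="transaction_amount")
rules[modification["rule"]]["threshold"] = modification["new_threshold"]

assert rules["amount_rule"]["threshold"] == 900
```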
With regard to Claim 9: The computer-implemented method of claim 1, wherein determining the cause associated with the false positive fraud alert comprises outputting a fact pattern associated with the false positive fraud alert, wherein the fact pattern includes a combination of two or more of: a card type; a card issuer; a card number; a cardholder name; a merchant name; a merchant location; a transaction location; a transaction amount; and a transaction type.

Zang and Geckle teach the method of claim 1 but do not explicitly teach these data items; however, in addition to being of no patentable significance as explained below, they are known in the art. Boding teaches a system for using fraud detection rules [0004] in which the data considered include a “consumer name”, a “merchant name”, and a “location”. [0030] Determinations are made involving “false positive transactions”. [0076] If a transaction is not determined to be fraudulent, the “transaction may be completed” and the consumer notified as such. [0056] Data are output. [0007] Rules are used for transactions based on historical transactions in making fraud determinations. [0004] Boding and Zang are analogous art, as each is directed to electronic means for managing data related to fraud and false positive indicia. It would have been obvious to one of ordinary skill in the art, just prior to the filing of the claimed invention, to combine the teaching of Boding with that of Zang and Geckle in order to assist merchants in evaluating the effectiveness of rules, as taught by Boding; [0004] further, it is merely a substitution of one known part for another with predictable results, providing Boding’s data in place of, or in addition to, that of Zang; the substitution produces no new and unexpected result.
This claim is not patentably distinct from claim 1, as it consists entirely of nonfunctional printed matter: it discloses at most the content of output, which bears no functional relation to the substrate and so is considered but given no patentable weight. The reference is provided for the purpose of compact prosecution.

With regard to Claim 10: The computer-implemented method of claim 9, further comprising: receiving second transaction data associated with a second financial transaction; determining, using the rules engine, that the second financial transaction corresponds to the fact pattern associated with the false positive fraud alert; and causing, by the rules engine, based at least in part on the determining that the second financial transaction corresponds to the fact pattern, at least one of: deferring transmission of an electronic fraud alert associated with the second financial transaction; or deferring an account freeze associated with the second financial transaction. [Boding, 0004, 0056, as cited above in regard to claim 9.]

“Deferring” an activity requires nothing more than simply not performing it; not performing a step is a nullity which does not distinguish over the art. In any event, notifying a customer of a completed transaction without providing a fraud alert sufficiently reads on deferring the fraud alert. The reference is provided for the purpose of compact prosecution.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT C ANDERSON, whose telephone number is (571) 270-7442. The examiner can normally be reached M-F, 9:00 to 5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bennett Sigmond, can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SCOTT C ANDERSON/
Primary Examiner, Art Unit 3694
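For orientation on the technology under rejection: the claims describe a concrete feedback loop in which a rules engine flags a transaction, customer feedback marks the alert as a false positive, a criterion of the triggering rule is modified (claims 5 and 16), and the associated fact pattern is remembered so that similar later transactions defer the alert (claims 9 and 10). Below is a minimal Python sketch of that loop. Every name, threshold, and data field is a hypothetical illustration; nothing here is drawn from the application or the cited references.

```python
# Hypothetical sketch of the claimed feedback loop: a rules engine flags
# transactions, customer feedback marks an alert as a false positive, and
# the rule criterion that caused the alert is adjusted.

from dataclasses import dataclass, field


@dataclass
class FraudRule:
    """A fraud detection rule keyed on a transaction-amount criterion."""
    name: str
    amount_threshold: float              # flag transactions at or above this amount
    deferred_patterns: list = field(default_factory=list)

    def matches(self, txn: dict) -> bool:
        return txn["amount"] >= self.amount_threshold


class RulesEngine:
    def __init__(self, rules: list):
        self.rules = rules

    def evaluate(self, txn: dict):
        """Return the first rule that flags the transaction, unless the
        transaction matches a fact pattern recorded from a prior false
        positive, in which case the alert is deferred (cf. claim 10)."""
        for rule in self.rules:
            if rule.matches(txn):
                pattern = (txn["merchant"], txn["type"])
                if pattern in rule.deferred_patterns:
                    return None          # defer: known false-positive pattern
                return rule
        return None

    def record_false_positive(self, rule: FraudRule, txn: dict, bump: float = 1.10):
        """Customer feedback says the alert was a false positive: raise the
        rule's amount criterion (cf. claim 5) and remember the fact pattern
        (cf. claim 9) so similar transactions defer the alert."""
        rule.amount_threshold *= bump
        rule.deferred_patterns.append((txn["merchant"], txn["type"]))
```

As a usage illustration: flagging a $600 transaction against a $500 threshold, recording the false positive, then re-evaluating the same fact pattern yields no alert, because the pattern is now deferred even though the raised threshold would still match.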

Prosecution Timeline

Jan 16, 2025
Application Filed
Mar 25, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602730
Machine-Learning Driven Data Analysis Based on Demographics, Risk, and Need
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12603165
PRESCRIPTION DRUG PRICING AND ADJUDICATION SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597031
METHODS AND SYSTEMS FOR DETECTING SUSPICIOUS OR NON-SUSPICIOUS ACTIVITIES INVOLVING A MOBILE DEVICE USE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12585844
REACH AND FREQUENCY PREDICTION FOR DIGITAL COMPONENT TRANSMISSIONS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586135
SYSTEMS AND METHODS FOR LIGHT DETECTION AND RANGING (LIDAR) BASED GENERATION OF A HOMEOWNERS INSURANCE QUOTE
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
89%
With Interview (+30.9%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 1024 resolved cases by this examiner. Grant probability derived from career allow rate.
