Prosecution Insights
Last updated: April 19, 2026
Application No. 19/046,068

ADVANCED FRAUD DETECTION SYSTEM

Non-Final OA — §101, §103
Filed: Feb 05, 2025
Examiner: CHISM, STEVEN R
Art Unit: 3692
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Wells Fargo Bank N.A.
OA Round: 1 (Non-Final)

Grant Probability: 30% (At Risk)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 71%

Examiner Intelligence

Career Allow Rate: 30% (39 granted / 132 resolved; -22.5% vs TC avg)
Interview Lift: +41.1% (resolved cases with interview vs. without)
Avg Prosecution: 3y 5m
Currently Pending: 41
Total Applications: 173 (across all art units)
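As a rough consistency check on the examiner statistics above, the headline figures can be recomputed from the raw counts. This is a hypothetical sketch: the figures come from the dashboard text, but the assumption that the interview lift is additive in percentage points is ours.

```python
# Recompute the dashboard's headline figures from the raw counts above.
# Assumption (ours, not the dashboard's): the +41.1% interview lift is
# additive in percentage points on top of the career allow rate.

granted, resolved = 39, 132
career_allow_rate = 100 * granted / resolved
print(f"Career allow rate: {career_allow_rate:.1f}%")        # 29.5%, shown as 30%

interview_lift_pp = 41.1                                     # per the dashboard
with_interview = career_allow_rate + interview_lift_pp
print(f"Implied rate with interview: {with_interview:.1f}%")  # 70.6%, shown as 71%
```

Under that additive reading, 29.5% + 41.1 points ≈ 71%, matching the "With Interview" figure in the header.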

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 27.3% (-12.7% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 30.7% (-9.3% vs TC avg)

TC average is an estimate. Figures based on career data from 132 resolved cases.
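Assuming each delta is a simple difference in percentage points (examiner rate minus Tech Center average), the TC baselines implied by the per-statute figures above can be back-calculated; this is a sketch of that arithmetic, not part of the dashboard.

```python
# Back-calculate the Tech Center average implied by each per-statute figure,
# assuming delta = examiner_rate - tc_average (in percentage points).

examiner = {"101": 33.2, "103": 27.3, "102": 8.1, "112": 30.7}
delta    = {"101": -6.8, "103": -12.7, "102": -31.9, "112": -9.3}

for statute in examiner:
    tc_avg = examiner[statute] - delta[statute]
    print(f"§{statute}: examiner {examiner[statute]:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

Notably, every statute implies the same ~40.0% baseline, suggesting the dashboard's "TC average estimate" is a single 40% figure applied across statutes.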

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

The following is a non-final Office Action in response to application number 19/046,068, filed on February 05, 2025. Claims 1-20 are currently pending and have been examined.

Claim Objections

Claim 7 is objected to because of the following informalities: “…, a WFA cookie, or an IP address of DDA origination”. For the acronyms “WFA” and “DDA”, the contemporary rule is to write out the full name at first mention, followed by the acronym in parentheses, and to use only the acronym thereafter. Similar language is recited in claim 17.

Claim Interpretation – Intended Use

Regarding claim 1, Examiner notes that the limitation “identifying, …, a subset of transactions … that trigger at least one suspect condition” is an intended use of “a subset of transactions” and therefore carries limited patentable weight. Similar language is recited in claim 11. (MPEP § 2103 I C).

Regarding claim 1, Examiner notes that the limitation “outputting the graph for display” is an intended use of “outputting the graph” and therefore carries limited patentable weight. Similar language is recited in claim 11. (MPEP § 2103 I C).

Regarding claim 4, Examiner notes that the limitation “… using a k-nearest neighbor evaluation to determine …” is an intended use of “using a k-nearest neighbor evaluation” and therefore carries limited patentable weight. Similar language is recited in claim 14. (MPEP § 2103 I C).

Regarding claim 6, Examiner notes that the limitation “… using the graph to determine …” is an intended use of “using the graph” and therefore carries limited patentable weight. Similar language is recited in claim 16. (MPEP § 2103 I C).
Claim Rejections - 35 USC § 101

35 U.S.C. § 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

In the instant case, claims 1-10 are directed to a “method” and claims 11-20 are directed to a “non-transitory machine-readable medium”. Therefore, these claims are directed to one of the four statutory categories of invention.

Claim 1 recites “financial fraud detection”, which is a form of commercial or legal interactions (i.e., organizing human activity), and an abstract idea. Specifically, the claim recites: “receiving transaction data including metadata related to a plurality of transactions with a plurality of accounts; identifying, using the transaction data, a subset of transactions of the plurality of transactions that trigger at least one suspect condition; determining, from respective metadata of the subset of transactions, at least one related feature of a portion of the subset of transactions; generating a graph of the portion of the subset of transactions based on the at least one related feature, the graph identifying respective accounts of the plurality of accounts corresponding to the subset of transactions; and outputting the graph for display”, where the italicized claim language represents the abstract idea “financial fraud detection”. (MPEP §2106.04 II.A.1.).

This judicial exception is not integrated into a practical application because, when analyzed under prong two of step 2A (MPEP §2106.04 II.A.2.), the additional element of the claim (the bolded claim language), such as “outputting the graph for display”, represents the use of a computer as a tool to perform an abstract idea.
Therefore, the additional element does not integrate the abstract idea into a practical application as it does no more than represent a computer performing functions that correspond to implementing the acts of “financial fraud detection”.

With respect to “receiving transaction data including metadata related to a plurality of transactions with a plurality of accounts”, “identifying, using the transaction data, a subset of transactions of the plurality of transactions that trigger at least one suspect condition”, “determining, from respective metadata of the subset of transactions, at least one related feature of a portion of the subset of transactions”, “generating a graph of the portion of the subset of transactions based on the at least one related feature, the graph identifying respective accounts of the plurality of accounts corresponding to the subset of transactions”, and “outputting the graph for display”, the claim lacks technological details regarding how these steps are performed, and does not integrate the judicial exception into a practical application or provide significantly more, because the recitation is equivalent to the words “apply it”. (MPEP §2106.05 (f)(1)).
The limitations of “receiving transaction data including metadata related to a plurality of transactions with a plurality of accounts” and “outputting the graph for display”, as additional elements, are recited at a high level of generality (i.e., as a general means of gathering transaction data, metadata, and a plurality of accounts, and of outputting data, such as the graph data for display), and amount to mere data gathering and data output, which are forms of insignificant extra-solution activity. Accordingly, these additional elements, even in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

When analyzed under step 2B (MPEP 2106.05 I.A.), the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception itself. Viewed as a whole, the combination of elements recited in the claim merely describes the concept of “financial fraud detection” using computer technology (e.g., “outputting the graph for display”). Therefore, as this additional element does no more than employ a computer as a tool to implement the abstract idea, it does not improve computer functionality or improve another technology or technical field. (MPEP 2106.05 I A (f) & (h)). Therefore, claim 1 is non-statutory.

Claim 11 also recites the abstract idea of “financial fraud detection”, as well as the additional elements of “a non-transitory machine-readable medium” and “processing circuitry”, which represent the use of a computer as a tool to perform an abstract idea. Therefore, the additional elements do not integrate the abstract idea into a practical application as they do no more than represent a computer performing functions that correspond to implementing the acts of “financial fraud detection”.
The limitations of “receiving transaction data including metadata related to a plurality of transactions with a plurality of accounts” and “outputting the graph for display”, as additional elements, are recited at a high level of generality (i.e., as a general means of gathering transaction data, metadata, and a plurality of accounts, and of outputting data, such as the graph data for display), and amount to mere data gathering and data output, which are forms of insignificant extra-solution activity. Accordingly, these additional elements, even in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

When analyzed under step 2B (MPEP 2106.05 I.A.), the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception itself. Viewed as a whole, the combination of elements recited in the claim merely describes the concept of “financial fraud detection” using computer technology (e.g., “a non-transitory machine-readable medium” and “processing circuitry”). Therefore, as these additional elements do no more than employ a computer as a tool to implement the abstract idea, they do not improve computer functionality nor improve another technology or technical field. (MPEP 2106.05 I A (f) & (h)). Therefore, claim 11 is non-statutory.

Dependent claims 2-10 and 12-20 further describe the abstract idea of “financial fraud detection”, which is insufficient to overcome the rejections of claims 1 and 11. When analyzed under Step 2A, Prong Two, dependent claims 2-6, 8-10, 12-16, and 18-20 do not recite any new additional elements that integrate the abstract idea into a practical application, and do no more than represent a computer performing functions that correspond to implementing the acts of “financial fraud detection”.
Dependent claims 7 and 17 recite the new additional elements of a “WFA cookie” and “an IP address of DDA origination”, which do no more than employ a computer as a tool to implement the abstract idea; accordingly, they do not improve computer functionality nor improve another technology or technical field. Hence, claims 1-20 are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. § 102 and § 103 (or as subject to pre-AIA 35 U.S.C. § 102 and § 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows: Determining the scope and contents of the prior art. Ascertaining the differences between the prior art and the claims at issue. Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. § 103 as being unpatentable over Gittino et al. (U.S. Patent No. 12,147,986 B1), herein referred to as Gittino, in view of Tate et al. (U.S. Patent No. 12,008,573 B2), herein referred to as Tate.

Regarding claims 1 and 11, Gittino discloses a method for graph-based anomaly detection, the method comprising: … identifying, using the transaction data, a subset of transactions of the plurality of transactions that trigger at least one suspect condition (FIG. 2, 4, items 200, 202, 404, 406, 408, 410; C/L 8/2-6, “… FIG. 2 illustrates an example of a graphic visualization of a community of interest 200 from the network of entities of interest 100. One or more community-finding algorithms may be employed in identifying such communities of interest …”; 8/14-27, “… As illustrated in FIG. 2, a result, … one or more community-finding algorithms may be generation of a visualization of one or more distinguishable groups of closely connected entities, such as community 202. A community, such as community 202, may include, … entities 102, 104, 106, 108, 110, 112 shown in FIG. 1 that may be further analyzed individually. Each of the identified communities may be consecutively analyzed as a single new object, and new attributes may be derived from a network structure of the particular community, such as community 202, including, for example a total number of entities reported to authorities, a path length of flow of funds, and clustering and centrality measures …”; 10/47-59, “… FIG. 4 illustrates an example of a graphic visualization of communities of entities defined by connections which the entities have to one another identified using automated network analysis for embodiments of the invention. Referring to FIG.
4, automated network analysis algorithms for embodiments of the invention may be employed to detect and visualize communities of entities defined by the connections that entities have to one another. Referring to FIG. 4, the visualization may include a relatively large number of connectors representing transactions between the entities. Further, anomalous connectivity patterns may be found in certain of the communities, such as the encapsulated entities forming communities 402, 404, 406, 408, 410 …”); … generating a graph of the portion of the subset of transactions based on the at least one related feature, the graph identifying respective accounts of the plurality of accounts corresponding to the subset of transactions (FIG. 1, 6, items 102, 104, 106, 108, 110, 112, 602, 604; C/L 7/21-35, “… The machine learning system may map each user, or a created network model of a user, as a unique node that retains the correspondent attributes of the entity that is represented by the node. Each node may be shown as an icon, such as a square or circle or some other suitable device. Referring to FIG. 1, entities that, for example, have been reported to authorities for unexplained anomalous behavior, such as entities 102, 104, 106, 108, 110, 112, may be represented in the visualization 100 by icons of a particular color or shape, and all other entities, such as entities 114, 116 may be represented by icons of a different color or shape. The machine learning system may prepare the particular colors, shapes, or configurations of nodes to be more distinct for a human viewer based on a feedback of actions into the machine learning system …”; 11/44-12/5, “… Embodiments of the invention enable actual visualization of the network and suspected fraudulent activity in which movement of money may suggest an intent to commit a crime. Referring to FIG.
6, the visualization of multiple transactions greater than $1,000 sent by a commercial entity with a history of anomalous activity, such as commercial entity 602, to multiple different consumers may merit investigation. Referring further to FIG. 6, the 'triangle' of transactions between commercial entity 602, a consumer with no history of anomalous activity, such as consumer 604, and a consumer with a history of anomalous activity, such as consumer 606, may suggest an existence of funneling activity and raise questions as to the role of consumer 604. According to embodiments of the invention, networks have attributes that can be mathematically described, and the mathematic components may be used to analyze and visualize properties and characteristics of the networks. These network properties define network models, which aid in understanding how a network may evolve and what the network may be expected to look like at a certain time. While this example is directed to money laundering, any other type of fraudulent interaction may be identified by a machine learning system … interactions between users that are accessing multiple secure locations to retrieve data and then forwarding the data to a third party, may cause the machine learning system to identify the third party as a potential fraudulent actor. Any type of fraudulent activity and any type of interaction between fraudulent actors and others may be monitored and investigated …”); and outputting the graph for display (FIG. 1, items 102, 104, 106, 108, 110, 112; C/L 6/36-56, “… Following data retrieval, data results may be encoded as visual objects in a graphic display, an example of which is illustrated in FIG. 1. FIG. 1 illustrates an example of a graphic visualization 100 of a network of entities of interest-based transactions of those entities according to embodiments of the invention. Referring to FIG.
1, each entity may be mapped as a unique node that retains the correspondent attributes of the entity that is represented by the node. Each node may be shown as an icon, such as a square or circle or some other suitable device. Referring to FIG. 1, entities that, for example, have been reported to authorities for unexplained anomalous behavior, such as entities 102, 104, 106, 108, 110, 112, may be represented in the visualization 100 by icons of a particular color or shape, and all other entities, such as entities 114, 116 may be represented by icons of a different color or shape. Further, transactions between entities may be represented in the visualization as links activated between nodes corresponding to the respective entities. Such links may be in the form, for example, of arrows that indicate a direction of flow of transactions between entities …”; 14/63-15/16, “…various display and graphical user interface components to enable selection and display of a specific network of interest, such as a network or community identified using network analysis metrics for embodiments … such display and graphical user interface components may graphically depict key insights such as transaction amount and count reflected, for example, in a size, such as a length or width of a connection, and segmentation values reflected in a shape, such as a circle, star, or octagon, of a symbol or icon representing a network entity … such display and graphical user interface components for embodiments of the invention may graphically depict other key insights such as highlighting suspicious transaction patterns with symbols or icons, such as triangles or squares, and showing relative risks of entities and connections by depicting symbols or icons representing such entities and connections, for example, in sizes proportionate to a level of risk or colors in colors of an intensity proportionate to a level of risk …”). 
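The entity graph Gittino describes — entities as nodes, some flagged for anomalous behavior, transactions as directed links — can be sketched as a plain adjacency structure. This is a minimal illustration, not the patent's implementation: the entity IDs echo the figure callouts, while the amounts and the two-flagged-senders rule are invented for the example.

```python
# Sketch of an entity/transaction graph in the style of Gittino's FIG. 1:
# entities are nodes (some flagged for anomalous behavior), transactions
# are directed edges. IDs echo the figure callouts; amounts are invented.
from collections import defaultdict

flagged = {"E102", "E104"}          # entities reported for anomalous behavior
edges = [                           # (sender, receiver, amount)
    ("E102", "E114", 1200.0),
    ("E104", "E114", 980.0),
    ("E114", "E116", 2100.0),
]

graph = defaultdict(list)
for src, dst, amt in edges:
    graph[src].append((dst, amt))

# One signal from the quoted passages: an unflagged entity receiving funds
# from multiple flagged entities may merit investigation.
inbound_from_flagged = defaultdict(int)
for src, dst, _ in edges:
    if src in flagged and dst not in flagged:
        inbound_from_flagged[dst] += 1

suspects = [e for e, n in inbound_from_flagged.items() if n >= 2]
print(suspects)   # ['E114']
```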
Gittino does not specifically disclose, however, Tate discloses receiving transaction data including metadata related to a plurality of transactions with a plurality of accounts (FIG. 1, items 100, 136, 138; C/L 9/9-28, “The e-commerce platform 100 may provide a financial facility 120 for secure financial transactions with customers, such as through a secure card server environment. The e-commerce platform 100 may store credit card information, such as in payment card industry data (PCI) environments (e.g., a card server), to reconcile financials, bill merchants, perform automated clearing house (ACH) transfers between an e-commerce platform 100 financial institution account and a merchant's bank account (e.g., when using capital), and the like. These systems may have Sarbanes-Oxley Act (SOX) compliance and a high level of diligence required in their development and operation. The financial facility 120 may also provide merchants with financial support, such as through the lending of capital (e.g., lending funds, cash advances, and the like) and provision of insurance. In addition, the e-commerce platform 100 may provide for a set of marketing and partner services and control the relationship between the e-commerce platform 100 and partners. They also may connect and onboard new merchants with the e-commerce platform 100 …”, 9/33-55, “… online store 138 may support a great number of independently administered storefronts and process a large volume of transactional data on a daily basis for a variety of products. Transactional data may include customer contact information, billing information, shipping information, information on products purchased, information on services rendered, and any other information associated with business through the e-commerce platform 100 … the e-commerce platform 100 may store this data in a data facility 134. 
The transactional data may be processed to produce analytics 132, which in turn may be provided to merchants or third-party commerce entities, such as providing consumer trends, marketing and sales insights, recommendations for improving sales, evaluation of customer behaviors, marketing and sales modeling, trends in fraud, and the like, related to online commerce, and provided through dashboard interfaces, through reports, and the like. The e-commerce platform 100 may store information about business and merchant transactions, and the data facility 134 may have many ways of enhancing, contributing, refining, and extracting data, where over time the collected data may enable improvements to aspects of the e-commerce platform 100 …”); … determining, from respective metadata of the subset of transactions, at least one related feature of a portion of the subset of transactions (FIG. 5, item 309; C/L 21/6-39, “… FIG. 5 illustrates trust graph 309 including users and attributes known to the fraud detector 302 … Trust graph 309 stores a plurality of users, Users A, B, C, D, and F, and a plurality of attributes, Attributes G, H, I, and J, that have associated untrustworthiness values that have been previously determined from interaction of each of the users with the e-commerce platform. In trust graph 309, each user is represented by a user node in the form of a circle. Each user has one or more attributes, and each attribute is represented by an attribute node in the form of a box. The relationship between a user and an attribute is indicated by a line linking a user node and an attribute node, and an attribute may be associated with more than one user. Each user node is assigned a probability that the user corresponding to that user node is a fraudulent user. The assigned probability is called a user untrustworthiness value. A fraudulent user is a user that is associated with fraudulent activity. Users A and B, i.e. 
Jane and Fred, have been previously determined by the fraud detector 302 to be fraudulent users and have thus been assigned an untrustworthiness value of 1. In trust graph 309, User A has two associated attributes, Attribute G (i.e., the bank account number #55699344458 provided by User A) and Attribute H (i.e., the IP address 172.16.254.1 of the user device used by User A). These attributes were determined from previous interaction of User A with the e-commerce platform. User B is also known to be associated with two attributes, Attribute H (i.e., the IP address 172.16.254.1 of the user device used by User B) and Attribute I (i.e., the e-mail address culprit@company.com provided by User B). In trust graph 309, both known fraudulent users, User A and User B, share the same attribute, Attribute H, which here is the same IP address …”); … With respect to claim 11, Tate further discloses at least one non-transitory machine-readable medium, including instructions for graph-based anomaly detection, which when executed by processing circuitry (FIG. 3, 4, items 100, 136, 202, 302, 308; C/L 17/58-18/15, “… FIG. 3 illustrates the e-commerce platform 100 of FIG. 1, but with a fraud detector 202 in the commerce management engine 136. The fraud detector 202 performs fraud detecting methods disclosed herein, e.g., determining the likelihood that a user interacting with an e-commerce platform 100 is engaging in fraudulent activity … the fraud detector 202 may analyze attributes associated with a user interacting with online store 138, and determine whether any of these attributes are linked to fraudulent activity based on previous interactions of other users with the e-commerce platform 100, e.g. via a trust graph in the manner described herein.
The fraud detector 202 may be used to determine whether a user is likely to commit fraudulent activity and prevent fraudulent activity from being performed before the user is able to complete a transaction, e.g., … The fraud detector 202 may be implemented by one or more general-purpose processors that execute instructions stored in a memory or stored in another non-transitory computer-readable medium. The instructions, when executed, cause the fraud detector 202 to perform the operations of the fraud detector 202. Alternatively, some or all of the fraud detector 202 may be implemented using dedicated circuitry, such as an application specific integrated circuit (ASIC), a graphics processing unit (GPU), or a programmed field programmable gate array (FPGA) …”; 18/49-64, “… The fraud detector 302 includes a processor 304 for implementing the operations described herein that are performed by the fraud detector 302, e.g. operations such as assigning a value associated with the untrustworthiness of the user and generating web content that provides friction to the user if the fraud detector 302 determines the user may be fraudulent. The processor 304 may be implemented by one or more general purpose processors that execute instructions stored in a memory (e.g. in memory 308) or stored in another non-transitory computer-readable medium. The instructions, when executed, cause the processor 304 to directly perform, or instruct the fraud detector 302 to perform, the operations of the fraud detector 302 described herein. In other embodiments, the processor 304 may be implemented using dedicated circuitry, such as a programmed FPGA, a GPU, or an ASIC …”), cause the processing circuitry to perform operations comprising: … Tate discloses computer-implemented systems and methods for detecting fraudulent activity. 
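The user/attribute trust graph quoted from Tate above — users and attributes as linked nodes carrying untrustworthiness values, with shared attributes connecting users — can be sketched in a few lines. The user/attribute labels follow the quoted example; the scoring rule (a new user inherits the maximum value over the attributes it presents) is our simplification, not Tate's stated method.

```python
# Sketch of Tate's trust graph 309: Users A and B are known fraudulent
# (value 1); Attributes G, H, I link to them. The max-over-attributes
# scoring rule below is an illustrative simplification.

users = {"A": 1.0, "B": 1.0}                 # known fraudulent users
attributes = {"G": 0.0, "H": 0.0, "I": 0.0}  # bank account, IP address, e-mail
links = [("A", "G"), ("A", "H"), ("B", "H"), ("B", "I")]

# An attribute shared with a fraudulent user becomes suspect.
for user, attr in links:
    attributes[attr] = max(attributes[attr], users[user])

def score_new_user(presented):
    """A new user inherits the worst untrustworthiness of its attributes."""
    return max((attributes[a] for a in presented), default=0.0)

# A new user presenting Attribute H (the IP shared by Users A and B):
print(score_new_user(["H"]))   # 1.0
```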
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include computer-implemented systems and methods for detecting fraudulent activity, as in Tate, to improve and/or enhance the technology of machine learning creations and usage of cluster visualizations, as in Gittino, because it would amount to combining elements that in the combination would perform the same function as they functioned separately. One of ordinary skill in the art before the effective filing date of the invention would have been motivated to combine the references to provide computer-implemented systems and methods for estimating the likelihood of a fraudulent user before or as an e-commerce transaction takes place by searching for signals that might be fraudulent activity, e.g., a surge of orders from the same IP address, in real time as the e-commerce transaction is taking place.

Regarding claims 2 and 12, Gittino and Tate disclose the limitations of claims 1 and 11. Gittino does not specifically disclose, however, Tate discloses the method of claim 1, wherein the at least one related feature includes a zip code (FIG. 6, item 309; C/L 3/11-16, “…the first attribute may be one of the following associated with the user: a user name, an e-mail address, a phone number, an IP address, a postal address, a domain, financial information, an indication of a product previously purchased by the user, an indication of an online store the user previously purchased from, or browser details …”; 22/30-41, “… User E 502 is associated with a new attribute, Attribute Q 504, which is the address provided by User E 502 "77 Main St.
New York" … User E 502 shares the IP address of Attribute J with User F and the e-mail address of Attribute I with Users B and D … in the interaction with the e-commerce platform thus far, User E 502 has provided a postal address that is new, but the email address provided by User E 502 is the same as that provided by Users B and D, and the IP address of user device 320 used by User E 502 is the same as the IP address that was previously used by the user device of User F. Attribute Q 504 is new (i.e. not shared with another attribute already in the graph) and has therefore been initialized to have an a priori untrustworthiness value of 1/10000 … new attributes, such as Attribute Q 504, may be initialized to a generic untrustworthiness value based on the attribute type. In other implementations, new attributes might not be assigned a value and/or might not be considered when assigning an untrustworthiness value to a new user…”). Tate discloses computer-implemented systems and methods for detecting fraudulent activity. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include computer-implemented systems and methods for detecting fraudulent activity, as in Tate, to improve and/or enhance the technology of machine learning creations and usage of cluster visualizations, as in Gittino, because it would amount to combining elements that in the combination would perform the same function as they functioned separately. One of ordinary skill in the art before the effective filing date of the invention would have been motivated to combine the references in providing an e-commerce platform with users being linked to each other via common attributes, where an attribute is information associated with a user, e.g., an email address of the user, an IP address used by the user, the user’s contact address, etc. 
When two users share an attribute, the users may be connected through that attribute, e.g., in the form of a graph, where an untrustworthiness value indicative of fraudulent activity may be assigned to each user and/or each attribute, and how connected the user/attribute is to another untrustworthy user/attribute may be used to identify potentially fraudulent activities.

Regarding claims 3 and 13, Gittino and Tate disclose the limitations of claims 1 and 11. Gittino further discloses the method of claim 1, wherein the graph illustrates a set of entities that are related via the at least one related feature, the set of entities including at least one of customers (FIG. 4, 5, items 404, 502, 504, 506, 508, 510, 512, 514, 516, 518; C/L 10/60-11/11, “… FIG. 5 illustrates an enlarged view of largest of the five encapsulated entities shown on the example visualization of FIG. 4. Referring to FIG. 5, the visualization for embodiments of the invention may show, for example, connections between entities without a history of anomalous activity and other entities with a history of such activity that may uncover connectivity between such entities indicative of anomalous behavior … multiple interactions performed by an entity without a history of anomalous activity, such as entity 502, to an entity with a history of such activity, such as entity 504, may merit investigation … multiple entities with a history of such activity, such as entities 504, 506, 508, 510, 512, 514, 516, 518, transacting with an entity without a history of anomalous activity, such as entity 520, may likewise merit investigation. Moreover, an entity without a history of anomalous activity, such as entity 520, which is common to multiple entities with a history of such activity, may merit a focused investigation …”), ATMs, or bank branches.

Regarding claims 4 and 14, Gittino and Tate disclose the limitations of claims 1 and 11.
Gittino further discloses the method of claim 1, wherein determining the at least one related feature includes using a k-nearest neighbor evaluation to determine at least one chain of connection among the subset of transactions (C/L 19/52-63, “Different machine-learning algorithms have been contemplated to carry out the embodiments … linear regression (LiR), logistic regression (LoR), Bayesian networks (for example, naive-bayes), random forest (RF) (including decision trees), neural networks (NN) (also known as artificial neural networks), matrix factorization, a hidden Markov model (HMM), support vector machines (SVM), K-means clustering (KMC), K-nearest neighbor (KNN), a suitable statistical machine learning algorithm, and/or a heuristic machine learning system for classifying or evaluating whether a user is likely to conduct a fraudulent interaction …”; 26/6-53, “… user interaction histories are used to train the linear equation or kernel function of the SVM machine learning module, which, after training, is used to estimate whether a user is likely to conduct a fraudulent interaction. K-Means Clustering, K-means clustering is implemented. KMC assumes data points have implicit shared characteristics and "clusters" data within a centroid or "mean" of the clustered data points. During training, KMC adds a number of k centroids and optimizes its position around clusters. This process is iterative, where each centroid, initially positioned at random, is re-positioned towards the average point of a cluster. This process concludes when the centroids have reached an optimal position within a cluster. Training of a KMC module is typically unsupervised … user interaction histories are used to train the centroids of a KMC machine learning module, which, after training, is used to estimate whether a user is likely to conduct a fraudulent interaction. 
K-Nearest Neighbor, K-nearest neighbor is implemented … KNN shares similar characteristics to KMC … KNN assumes data points near each other share similar characteristics and computes the distance between data points to identify those similar characteristics but instead of k centroids, KNN uses k number of neighbors. The k in KNN represents how many neighbors will assign a data point to a class, for classification, or object property value, for regression. Selection of an appropriate number of k is integral to the accuracy of KNN … a large k may reduce random error associated with variance in the data but increase error by ignoring small but significant differences in the data … a careful choice of k is selected to balance overfitting and underfitting. Concluding whether some data point belongs to some class or property value k, the distance between neighbors is computed. Common methods to compute this distance are Euclidean, Manhattan or Hamming to name a few … neighbors are given weights depending on the neighbor distance to scale the similarity between neighbors to reduce the error of edge neighbors of one class "outvoting" near neighbors of another class … k is 1 and a Markov model approach is utilized … user interaction histories are used to train a KNN machine learning module, which, after training, is used to estimate whether a user is likely to conduct a fraudulent interaction …”). Regarding claims 5 and 15, Gittino and Tate disclose the limitations of claims 1, 4, 11, and 14. Gittino does not specifically disclose, however, Tate discloses the method of claim 4, wherein the at least one chain of connection includes a connection among entities that do not share an address, zip code, or last name (FIG. 5, 6, items 309, 502, 504; C/L 21/65-22/11, “… User F, Abe, in trust graph 309 is only associated with Attribute J (i.e., the IP address 164.12.128.1 of the user device used by User F). 
User F does not share any attributes with any other users and therefore is not associated with any of the other user nodes in the trust graph 309. As the fraud detector 302 has determined that User F and Attribute J are not associated with fraudulent activity and are not linked to other nodes in the trust graph 309, they have each been assigned a small a priori untrustworthiness value of 1/10000 assigned by default. The value of a default a priori untrustworthiness value may be a function of the attribute type. User F has an associated low probability of being a fraudulent user because it has no connection to a known fraudulent user …”; 22/42-50, “… Attribute Q 504 is new (i.e. not shared with another attribute already in the graph) and has therefore been initialized to have an a priori untrustworthiness value of 1/10000. In some implementations, new attributes, such as Attribute Q 504, may be initialized to a generic untrustworthiness value based on the attribute type. In other implementations, new attributes might not be assigned a value and/or might not be considered when assigning an untrustworthiness value to a new user …”; 23/25-33, “… because the attribute Q is not connected to another user (i.e. it is a "dangling edge"), the attribute Q may be ignored in the computation or incorporated in another way, e.g. by assuming it is connected to a "dummy" non-existent user having a predetermined a priori probability of being fraudulent. f(x)·f(M) is multiplication of f(x) by f(M), which may be a matrix multiplication, depending upon the implementation …”). Tate discloses computer-implemented systems and methods for detecting fraudulent activity. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include computer-implemented systems and methods for detecting fraudulent activity, as in Tate, to improve and/or enhance the technology of machine learning creations and usage of cluster visualizations, as in Gittino, because it would amount to combining elements that in the combination would perform the same function as they functioned separately. One of ordinary skill in the art before the effective filing date of the invention would have been motivated to combine the references to provide an e-commerce platform with users being linked to each other via common attributes, where an attribute is information associated with a user, e.g., an email address of the user, an IP address used by the user, the user’s contact address, etc. When two users share an attribute, the users may be connected through that attribute, e.g., in the form of a graph, where an untrustworthiness value indicative of fraudulent activity may be assigned to each user and/or each attribute based on how connected the user/attribute is to another untrustworthy user/attribute, in order to identify potentially fraudulent activities. Regarding claims 6 and 16, Gittino and Tate disclose the limitations of claims 1 and 11. Gittino further discloses the method of claim 1, further comprising using the graph to determine at least one root user causing at least two fraud trees within the graph, and outputting an indication of the at least one root user (FIG. 9, items 900, 902, 904, 906, 908; C/L 12/52-13/11, “… FIG. 9 … reveals evidence of a high coefficient of clustering, also referred to as high local clustering between entities. Referring to FIG. 
9 … monetary instruments and wire transactions between entities may be aggregated as either less than or greater than $1000, and the entities may be identified as either the subject of a suspicious activity report or not the subject of such a report … machine learning algorithms for embodiments of the invention may be employed to indicate the risk level of the entity and to visualize a size of node representing an entity as proportional to a level of risk associated with the entity. Referring further to FIG. 9, it may be noted that the network visualization 900 discloses a significant number of commercial and non-commercial entities with histories of anomalous behavior transacting with a commercial entity 902 at the center of the network that does not have a history of anomalous behavior. It may likewise be noted that the network visualization 900 also discloses multiple non-commercial entities transferring money to a commercial entity with a history of anomalous behavior that may be an indication of funneling. Finally, it may be noted that the network visualization 900 also discloses the occurrence of high local clustering at 904 that may indicate a high probability that commercial entities 906, 908, both with a history of anomalous behavior, may transact in the future …”). Regarding claims 7 and 17, Gittino and Tate disclose the limitations of claims 1 and 11. 
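The FIG. 9 pattern cited for the root-user limitation, a clean entity at the hub of multiple counterparties with anomalous histories, reduces to a simple neighborhood count. The entity labels follow the quoted figure; the function name and the min_links cutoff are assumptions for illustration, not Gittino's stated method.

```python
from collections import defaultdict

def find_root_entities(pairs, anomalous, min_links=2):
    """Flag 'root' nodes connected to at least min_links entities that
    have a history of anomalous activity: the FIG. 9 pattern where a
    clean entity at the center of many flagged counterparties merits a
    focused investigation."""
    counterparties = defaultdict(set)
    for a, b in pairs:
        counterparties[a].add(b)
        counterparties[b].add(a)
    return {
        node
        for node, nbrs in counterparties.items()
        if node not in anomalous and len(nbrs & anomalous) >= min_links
    }

# Entity 902 (no anomalous history) transacts with several flagged entities.
pairs = [("902", "904"), ("902", "906"), ("902", "908"), ("604", "906")]
roots = find_root_entities(pairs, anomalous={"904", "906", "908"})
print(roots)  # -> {'902'}
```

Entity 604 touches only one flagged counterparty and is not surfaced, matching the quoted emphasis on entities common to *multiple* anomalous counterparties.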
Gittino does not specifically disclose, however, Tate discloses the method of claim 1, wherein the at least one related feature includes at least one of a physical address, a WFA cookie, or an IP address of DDA origination (C/L 3/11-16, “…the first attribute may be one of the following associated with the user: a user name, an e-mail address, a phone number, an IP address, a postal address, a domain, financial information, an indication of a product previously purchased by the user, an indication of an online store the user previously purchased from, or browser details …”; 17/13-25, “… A user may attempt to perform a transaction related to online store 138 of the e-commerce platform 100. The user might not have previously interacted with the e-commerce platform 100, and it might not be known whether the user is a fraudulent user. Information relating to the user may be analyzed to determine whether the user may be a fraudulent user. The information may be identifying information (e.g., contact information such as a name, e-mail address, shipping address, or phone number), and/or financial information (e.g., a method of payment, such as a credit card number), and/or browser information and/or an IP address, etc. Each item of information will be referred to as an attribute of the user …”; 20/66-21/5, “… There might be different attribute node types, e.g. "email address", "IP address", etc. … the attribute "jane@example.com" is an attribute of the attribute type "email address". Attributes of a user are represented by attribute nodes that are connected to the user node of that user …”; 22/30-41, “… User E 502 is associated with a new attribute, Attribute Q 504, which is the address provided by User E 502 "77 Main St. New York". 
As well, User E 502 shares the IP address of Attribute J with User F and the e-mail address of Attribute I with Users B and D … in the interaction with the e-commerce platform thus far, User E 502 has provided a postal address that is new, but the email address provided by User E 502 is the same as that provided by Users B and D, and the IP address of user device 320 used by User E 502 is the same as the IP address that was previously used by the user device of User F …”). Tate discloses computer-implemented systems and methods for detecting fraudulent activity. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include computer-implemented systems and methods for detecting fraudulent activity, as in Tate, to improve and/or enhance the technology of machine learning creations and usage of cluster visualizations, as in Gittino, because it would amount to combining elements that in the combination would perform the same function as they functioned separately. One of ordinary skill in the art before the effective filing date of the invention would have been motivated to combine the references to provide an e-commerce platform with users being linked to each other via common attributes, where an attribute is information associated with a user, e.g., an email address of the user, an IP address used by the user, the user’s contact address, etc. When two users share an attribute, the users may be connected through that attribute, e.g., in the form of a graph, where an untrustworthiness value indicative of fraudulent activity may be assigned to each user and/or each attribute based on how connected the user/attribute is to another untrustworthy user/attribute, in order to identify potentially fraudulent activities. Regarding claims 8 and 18, Gittino and Tate disclose the limitations of claims 1 and 11. 
Gittino further discloses the method of claim 1, wherein the at least one suspect condition includes at least one of a money transfer above a threshold amount (FIG. 6, items 600, 602, 604, 900, 902, 904, 906, 908; C/L 11/37-56, “… Referring to FIG. 6 … a visualization of a network of money movement between entities 600 may be generated. In such visualization, monetary instruments and wire transactions between entities may be aggregated as either less than or greater than $1000, and the entities may be identified as either the subject of a suspicious activity report or not the subject of such a report. … actual visualization of the network and suspected fraudulent activity in which movement of money may suggest an intent to commit a crime. Referring to FIG. 6, the visualization of multiple transactions greater than $1,000 sent by a commercial entity with a history of anomalous activity, such as commercial entity 602, to multiple different consumers may merit investigation. Referring further to FIG. 6, the 'triangle' of transactions between commercial entity 602, a consumer with no history of anomalous activity, such as consumer 604, and a consumer with a history of anomalous activity, such as consumer 606, may suggest an existence of funneling activity and raise questions as to the role of consumer 604 …”; 12/52-13/11, “… FIG. 9 illustrates an example of network analysis and visualization according to embodiments of the invention that reveals evidence of a high coefficient of clustering, also referred to as high local clustering between entities. Referring to FIG. 
9… monetary instruments and wire transactions between entities may be aggregated as either less than or greater than $1000, and the entities may be identified as either the subject of a suspicious activity report or not the subject of such a report … machine learning algorithms for embodiments of the invention may be employed to indicate the risk level of the entity and to visualize a size of node representing an entity as proportional to a level of risk associated with the entity. Referring further to FIG. 9, it may be noted that the network visualization 900 discloses a significant number of commercial and non-commercial entities with histories of anomalous behavior transacting with a commercial entity 902 at the center of the network that does not have a history of anomalous behavior … the network visualization 900 also discloses multiple non-commercial entities transferring money to a commercial entity with a history of anomalous behavior that may be an indication of funneling … the network visualization 900 also discloses the occurrence of high local clustering at 904 that may indicate a high probability that commercial entities 906, 908, both with a history of anomalous behavior, may transact in the future …”), a subscription to a credit monitor service, a lack of retail activity, a bill pay transaction, or a wire transfer from a business account … Regarding claims 9 and 19, Gittino and Tate disclose the limitations of claims 1 and 11. 
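The "money transfer above a threshold amount" suspect condition, with the above/below-$1,000 aggregation described for FIG. 6, amounts to a threshold filter grouped by sender. The field names and function below are assumptions for illustration only.

```python
THRESHOLD = 1000  # dollar cutoff used in the cited FIG. 6 aggregation

def flag_suspect_transfers(transactions, threshold=THRESHOLD):
    """Return transactions triggering the 'money transfer above a
    threshold amount' suspect condition, grouped by sending entity."""
    flagged = {}
    for tx in transactions:
        if tx["amount"] > threshold:
            flagged.setdefault(tx["sender"], []).append(tx)
    return flagged

txs = [
    {"sender": "entity_602", "receiver": "c1", "amount": 2500},
    {"sender": "entity_602", "receiver": "c2", "amount": 1800},
    {"sender": "entity_604", "receiver": "c3", "amount": 400},
]
flagged = flag_suspect_transfers(txs)
print(len(flagged["entity_602"]))   # -> 2: multiple large transfers, merits review
print("entity_604" in flagged)      # -> False: below-threshold activity only
```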
Gittino further discloses the method of claim 1, wherein nodes of the graph include a color corresponding to a dollar amount (C/L 3/33-47, “… the element disposed between the plurality of pairs of said plurality of icons representing transactions between transacting entities may comprise … an element having an appearance indicating a transaction amount and count of transactions between said transacting entities … the element having an appearance indicating a transaction amount and count of transactions between said transacting entities may comprise … an element having a size indicating a transaction amount and count of transactions between said transacting entities … the element disposed between each of the plurality of pairs of said plurality of icons representing transactions between transacting entities may comprise … an element having an appearance indicating a level of risk of unlawful activity associated with each transaction …”; 15/1-16, “… such display and graphical user interface components may graphically depict key insights such as transaction amount and count reflected … in a size, such as a length or width of a connection, and segmentation values reflected in a shape, such as a circle, star, or octagon, of a symbol or icon representing a network entity … such display and graphical user interface components for embodiments of the invention may graphically depict other key insights such as highlighting suspicious transaction patterns with symbols or icons, such as triangles or squares, and showing relative risks of entities and connections by depicting symbols or icons representing such entities and connections … in sizes proportionate to a level of risk or colors in colors of an intensity proportionate to a level of risk …”). Regarding claims 10 and 20, Gittino and Tate disclose the limitations of claims 1 and 11. 
Gittino does not specifically disclose, however, Tate discloses the method of claim 1, further comprising determining at least two rings of fraudulent activity within the graph and determining that the at least two rings are related based on a feature of a node connected to both of the at least two rings (FIG. 5, item 309; C/L 21/6-39, “… FIG. 5 illustrates trust graph 309 including users and attributes known to the fraud detector 302, according to one embodiment. Trust graph 309 stores a plurality of users, Users A, B, C, D, and F, and a plurality of attributes, Attributes G, H, I, and J, that have associated untrustworthiness values that have been previously determined from interaction of each of the users with the e-commerce platform. In trust graph 309, each user is represented by a user node in the form of a circle. Each user has one or more attributes, and each attribute is represented by an attribute node in the form of a box. The relationship between a user and an attribute is indicated by a line linking a user node and an attribute node, and an attribute may be associated with more than one user. Each user node is assigned a probability that the user corresponding to that user node is a fraudulent user. The assigned probability is called a user untrustworthiness value. A fraudulent user is a user that is associated with fraudulent activity. Users A and B, i.e. Jane and Fred, have been previously determined by the fraud detector 302 to be fraudulent users and have thus been assigned an untrustworthiness value of 1. In trust graph 309, User A has two associated attributes, Attribute G (i.e., the bank account number #55699344458 provided by User A) and Attribute H (i.e., the IP address 172.16.254.1 of the user device used by User A). These attributes were determined from previous interaction of User A with the e-commerce platform. 
User B is also known to be associated with two attributes, Attribute H (i.e., the IP address 172.16.254.1 of the user device used by User B) and Attribute I (i.e., the e-mail address culprit@company.com provided by User B). In trust graph 309, both known fraudulent users, User A and User B, share the same attribute, Attribute H, which here is the same IP address …”). Tate discloses computer-implemented systems and methods for detecting fraudulent activity. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include computer-implemented systems and methods for detecting fraudulent activity, as in Tate, to improve and/or enhance the technology of machine learning creations and usage of cluster visualizations, as in Gittino, because it would amount to combining elements that in the combination would perform the same function as they functioned separately. One of ordinary skill in the art before the effective filing date of the invention would have been motivated to combine the references to provide computer-implemented systems and methods for estimating the likelihood of a fraudulent user before or as an e-commerce transaction takes place by searching for signals that might indicate fraudulent activity, e.g., a surge of orders from the same IP address, in real time as the e-commerce transaction is taking place. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Kloke et al. (U.S. Patent No. 
10318584 B2) – Outcome Analysis For Graph Generation Kloke discloses an example method of determining a point from a data set closest to a particular data point using a particular metric and scoring a particular data point based on whether the closest point shares a similar characteristic, selecting a subset of metrics based on the metric score to generate a subset of metrics, evaluating a metric-lens combination by calculating a metric-lens score based on entropy of shared characteristics across subspaces of a reference map generated by the metric-lens combination, selecting a metric-lens combination based on the metric-lens score, generating topological representations using the received data set, associating each node with at least one shared characteristic based on member data points of that particular node sharing the shared characteristic, scoring groups within each topological representation based on entropy, scoring each topological representation based on the group scores, and providing a visualization of at least one topological representation based on the graph scores. Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN CHISM whose telephone number is (571) 272-5915. The examiner can normally be reached 9:00 AM – 3:00 PM, Monday – Thursday, EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan D. Donlon, can be reached at (571) 270-3602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. 
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /STEVEN CHISM/ Examiner, Art Unit 3692 /RYAN D DONLON/Supervisory Patent Examiner, Art Unit 3692
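The f(x)·f(M) step quoted earlier from Tate describes spreading untrustworthiness over the trust graph via a matrix multiplication against the graph's connectivity. A loose analogue can be sketched in plain Python over an adjacency matrix; the damping factor, averaging scheme, and round count here are invented for illustration and are not Tate's actual formula.

```python
def propagate(untrust, adj, rounds=2, damping=0.5):
    """Spread untrustworthiness over a graph: each node keeps part of
    its score and averages in its neighbors', loosely analogous to the
    f(x)*f(M) matrix product in the cited passage."""
    x = list(untrust)
    n = len(x)
    for _ in range(rounds):
        nxt = []
        for i in range(n):
            nbrs = [x[j] for j in range(n) if adj[i][j]]
            spread = sum(nbrs) / len(nbrs) if nbrs else 0.0
            nxt.append((1 - damping) * x[i] + damping * spread)
        x = nxt
    return x

# Nodes: [User A (fraudulent), Attribute H, User B], forming an A-H-B chain.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
scores = propagate([1.0, 0.0001, 0.0001], adj)
print(scores[2] > 0.0001)  # -> True: User B inherits risk through shared Attribute H
```

After two rounds, risk has flowed from User A through the shared IP attribute to User B, which is the qualitative behavior the trust graph passages describe.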

Prosecution Timeline

Feb 05, 2025
Application Filed
Jan 23, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597066
FEDERATED DATA ROOM SERVER AND METHOD FOR USE IN BLOCKCHAIN ENVIRONMENTS
2y 5m to grant Granted Apr 07, 2026
Patent 12591882
METHODS AND SYSTEMS FOR SHARING A CONSENT TOKEN ASSOCIATED WITH A USER CONSENT AMONG APPLICATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12572943
DIGITAL AUTHORIZATION SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12555092
IOT DEVICES
2y 5m to grant Granted Feb 17, 2026
Patent 12450660
CHAT SUPPORT PLATFORM WITH CHAT ROUTING BASED ON GEOGRAPHIC LOCATION
2y 5m to grant Granted Oct 21, 2025
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
30%
Grant Probability
71%
With Interview (+41.1%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 132 resolved cases by this examiner. Grant probability derived from career allow rate.
