Prosecution Insights
Last updated: April 19, 2026
Application No. 18/629,309

MONEY MULE DETECTION USING LINK PREDICTION

Final Rejection: §101, §103
Filed: Apr 08, 2024
Examiner: BUNKER, WILLIAM B
Art Unit: 3691
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Actimize Ltd.
OA Round: 2 (Final)

Grant Probability: 79% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% — above average (171 granted / 216 resolved; +27.2% vs TC avg)
Interview Lift: +94.5% (resolved cases with an interview vs. without)
Typical Timeline: 2y 11m avg prosecution; 24 applications currently pending
Career History: 240 total applications across all art units

Statute-Specific Performance

§101: 42.4% (+2.4% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 2.9% (-37.1% vs TC avg)
§112: 3.4% (-36.6% vs TC avg)

Baseline is the Tech Center average estimate. Based on career data from 216 resolved cases.

Office Action

§101 §103
DETAILED ACTION

1. The present application, filed on or after March 13, 2013, is being examined under the first inventor to file provisions of the AIA. This is a regular utility application with no claim of priority. Claims 1 - 18 are pending and examined as follows:

Response to Amendment

2. An Amendment was filed October 8, 2025 (hereinafter “Amendment”) and has been entered into the record and fully considered. The Amendment was filed in response to a Non-Final Rejection dated July 9, 2024. Despite the Amendment to the Claims and Applicant’s remarks, the Rejections set forth in the Non-Final Rejection are hereby maintained, although the Rejection under §103 is on new grounds necessitated by the Amendment. An explanation of the maintained Rejections and a response to Applicant’s arguments are set forth below. Please see the “Conclusion” section of this Action below for important information regarding responding to this Action. The previous Non-Final Rejection is repeated below for completeness of the record. An Appendix section setting forth the previous Actions in this case is also set forth below for completeness of the record.

Status of Claims: Claims 1 - 18 remain pending in this Application. None have been cancelled. Claims 1 and 10 are the only independent claims, and they were amended in the Amendment in substantially identical fashion. None of the dependent Claims were amended. Therefore, the following explanation of the maintained rejections with regard to Claim 1 is considered explanatory of the Rejection as a whole.

OFFICE NOTE: Interviews are always welcome at any stage of prosecution. Please use the AIR form for scheduling an interview if such is desired. The link for the AIR form is found at the end of this Action.
With regard to the Amendment: Claim 1 was amended as follows:

[Images of the amended claim text omitted.]

Summary of the Amendment and Broadest Reasonable Interpretation: Claim terminology is to be given its plain and ordinary meaning to a person of ordinary skill in the art, consistent with the specification. This is true unless the terms are given a special meaning. See MPEP §2111.01. Here, no special meaning is detected. The amendments to the Claim were very minor, almost trivial. As noted in the Amendment, a financial institution server executes the method of detecting suspected mules and an automatic action is taken – such as delaying or declining the transaction.

With regard to §101: Respectfully, the Amendment does not substantially advance prosecution. Thus, the amendments to the Claim do not alter the analysis set forth in the Non-Final Rejection regarding §101. The only changes are summarized above. The above-quoted recitations merely relate to the addition of a financial institution (FI) server. This addition does not add materially to the specificity of the Claim since it already recited a processor and a computer-readable medium (CRM). Taking some form of automated action – such as that added to the Claim – would be entirely predictable by a person of ordinary skill in the art. That is the purpose of these systems – to detect fraud and then take some action. The recited limitations relate to very common economic activity. These limitations are recited at a very high – extremely high – level of generality. There is nothing concrete or substantive about these recitations. The Claims lack the specificity required for eligibility. For example: There is no specificity around the “types” of seeds.
In any suspected money laundering scheme there could be literally dozens of “seed types.” How is a seed entity determined or “considered”? How was the “at least one mule account” previously identified? There is no specificity around the “similarity score.” What is it about the score that renders one account “similar” to another? There is no specificity around the transition – in the Claim – from clusters to pairs of accounts. “How” is the pre-training dataset used to “define” a relation between the pairs of accounts? How are these labels related to the earlier labels relating to “mule account clusters”? The Claim is not clear.

Thus, the Claim provides little specificity in terms of how the model is trained or how the training data is prepared for training (e.g., how dimensionality reduction is accomplished) – only the mere outcome or result that a mule list is generated. No special functionality is recited. No new computerized components are recited. These limitations recite results or “outcomes” of computer processing without specifying “how” a technical problem is solved. That is, the solution of a technical problem is not reflected in the Claim.

Taking the claim elements separately, the function performed by the computer elements at each step of the process is purely typical of processing data, especially financial transactional data. Using a computer to receive information, cluster it, calculate scores, establish linked pairs, and the like are among the most basic functions of a computer. Without greater specificity as to “how” certain functions solve a technical problem, the currently recited limitations can be achieved by any general-purpose computer without special programming. In short, each step does no more than require a generic computer to perform generic computer functions. Considered as an ordered combination, the computer components of the Claim add nothing that is not already present when the steps are considered separately.
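The “similarity score” the Action finds unspecified admits many concrete realizations. One common choice, offered here only as an illustrative assumption and not as the application's actual method, is cosine similarity over per-account feature vectors:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical per-account features, e.g. [tx count, avg amount / 100, intl tx count]
acct_a = [12.0, 3.0, 0.0]
acct_b = [10.0, 4.0, 1.0]
score = cosine_similarity(acct_a, acct_b)  # near 1.0 means behaviorally similar
```

A claim amendment that pinned the score to a specific, well-defined feature set and metric of this kind is the sort of specificity the Action appears to be asking for.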
Claim 1 does not, for example, purport to improve the functioning of the computer elements, nor does the claim reflect how an improvement in any other technology or technical field is achieved. Thus, Claim 1 amounts to nothing significantly more than instructions to “apply” the abstract idea of generating a list of suspected mule accounts using some unspecified, generic algorithm and computer components. Such is not sufficient to integrate the abstract idea into a practical application. Accordingly, the Rejection is maintained.

With regard to §103: It is respectfully submitted that the Amendment required a closer look at prior art related to servers that generate a list of suspected mules which serve as intermediaries – usually intermediary accounts – in money laundering operations. More specifically, the amendments related to “link predictions” between “pairs” of accounts. Thus, out of an abundance of caution and for the avoidance of doubt, new grounds of Rejection are established.

NEW GROUNDS OF REJECTION: Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 - 18 are rejected under 35 U.S.C. §103 as being unpatentable over U.S. Patent Publication No. 2021/0334822 to Pati et al. (hereinafter “Pati”) in view of U.S. Patent Publication No. 2021/0264318 to Butvinik (hereinafter “Butvinik”) and further in view of U.S. Patent Publication No. 2020/0394707 to Guo (hereinafter “Guo”) and still further in view of U.S. Patent Publication No. 2022/0405860 to Juban et al. (hereinafter “Juban”).

Juban is directly on point with the claimed invention and in the same field of endeavor, anti-money laundering (“AML”) systems. It teaches the use of a “server” for executing the machine learning system ([0137]) and the system is operated by a financial institution such as a bank ([0067] - [0068]). Juban teaches the generation of a list of suspected money launderers:

[Image of the cited Juban passage omitted.]

Furthermore, Juban utilizes a link or “relationship” between one account and another, such as described in [0008]: “In some embodiments, the method further comprises generating a weighted priority score for each of the plurality of account holders based at least on the money laundering risk score of the account holder and a quantitative measure of the account holder or of a transaction of the account holder.
In some embodiments, the quantitative measure comprises one or more of the following: a quantity of at-risk assets, a quantity of total assets, a net worth, a number or a total value of suspicious transactions, a length of time of a suspicious transaction or activity, a quantitative measure related to the account holder's relationship to a set of accounts (e.g., a length of time, a number of transactions), a quantitative measure related to the account holder's relationship to one or more other account holders,” (Emphasis Added)

The stark similarities between the claimed invention and Juban – in terms of account similarities and links/paired relationships – are illustrated in the following quotation:

“[0116] The machine learning model may apply natural language processing (NLP) to transactions to derive important information, such as identifying similarities in accounts, account holders, and account information, as shown in FIG. 15. Such NLP approaches may be beneficial since many fraudulent activities may occur under the guise of fake or falsified account information aimed to avoid detection from legitimate account dealings. The AML model may review all account or account holder information (business type, company transactions, account holder names, addresses) and determine a similarity score for different accounts or account holders. The similarity score may be crucial in identifying criminal activity that has moved accounts or shares characteristics that would support separation of legitimate and criminal activity.
The natural language processing applied to transaction messages may include text pre-processing (e.g., configuring a pre-processing pipeline, and processing and persisting text data), training a corpus language model for a count of n-grams, using a machine learning model to retrieve a time-series of count and to find important n-grams to predict a label, implementing metrics for important n-grams, and incorporating NLP metrics along with other features in a general classifier.

[0117] The AML model may use graph technology to take advantage of existing, extensive and emergent connections between attributes of interest, such as similarities in accounts, transfers among entities, and degrees of separation. These attributes of interest may be particularly useful as inputs to the machine learning classifier when determining the likelihood of illegal activity for any individual account or account holder. A variety of graph methods may be applied, such as: trusted PageRank, traversal, and clustering.

[0118] For example, the trusted PageRank method may take the premise that a “trusted” set of nodes can support validation or ranking of other unknown nodes. In search engines, trusted nodes may include government and education websites. Analysis and evaluation of the links from those sites may enable classification of nodes that are some number of hops from the trusted nodes. Alternatively, “untrusted” nodes can be used in the same manner, with the degree of closeness defining a highly risky node. These methods may be useful but may require augmentation to ensure that those nodes which are “gaming the system” are detected and rooted out. Coupled with the trusted and untrusted nodes, random walks among nodes may be evaluated as hubs. In websites, links may be traversed with a given probability of teleportation. The random walkers may eventually hit trusted and untrusted nodes.
This approach may enable analysis of the broad system, taking advantage of trusted nodes, but also avoiding problems of hackers who make their way into becoming a trusted node. In application to anti-money laundering, trusted PageRank can be applied in a similar manner, in which known “non-illicit” accounts are trusted and the known illicit accounts are untrusted. The graph can be traversed through transactions among accounts, connections among accounts, and similarities between accounts. Additionally, the links between accounts can be bi-directional and have a quantity (e.g., in the context of values of transactions).

PR(acct) = Σ_{v ∈ B_acct} PR(v) / L(v)

[0119] The PageRank value for a node acct may be dependent on the PageRank values for each page v contained in the set B_acct (the set containing all pages linking to node acct), divided by the number L(v) of links from node v.

[0120] As shown by the example in FIG. 16, a higher rank is given to C than E, despite E having more connections. However, C has a bidirectional link with B (a trusted node), which gives it greater relevance. E's network is much weaker, as none of its connected nodes have clear trusted links with B.” (Emphasis Added)

Thus, Juban utilizes analytical similarities, hops, and links between accounts/nodes. Therefore, despite the Amendments to Claim 1, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined graph and clustering system of Pati in view of Butvinik to add the ratio and threshold teachings of Guo and to add the link prediction teachings of Juban. The motivation to do so comes from Pati. As quoted below in Pati, it also teaches the “aggregation” of data for developing training datasets. It would greatly enhance the efficiency and reduce the dimensionality of the training dataset of the combined system to use the link prediction teachings of Juban.
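The PageRank recurrence Juban quotes can be sketched in a few lines. The account graph and node names below are illustrative assumptions, and a standard damping factor of 0.85 is added (it does not appear in the formula as quoted) so the iteration is guaranteed to converge:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each account to the list of accounts it links to."""
    nodes = set(links) | {v for targets in links.values() for v in targets}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {}
        for n in nodes:
            # B_n: every node v linking to n contributes rank(v) / L(v)
            inbound = sum(rank[v] / len(links[v]) for v in links if n in links[v])
            new_rank[n] = (1 - damping) / len(nodes) + damping * inbound
        rank = new_rank
    return rank

# Toy account graph; "B" plays the trusted-hub role from Juban's Fig. 16 discussion.
graph = {"A": ["B"], "B": ["C"], "C": ["B"], "E": ["A"], "D": ["E"]}
ranks = pagerank(graph)
```

In this toy graph the bidirectionally linked B and C end up ranked above E, mirroring the quoted observation that closeness to a trusted node outweighs a raw connection count.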
Therefore, the Rejection under §103 is maintained.

Response to Arguments

3. Applicant's arguments set forth in the Remarks section of the Amendment have been fully considered but they are not persuasive. With regard to the §101 rejection, Applicant argues as follows:

[Image of Applicant’s quoted argument omitted.]

To be clear, the Non-Final Rejection does not take the position that the Claim – taken as a whole and as an ordered combination – merely recites an abstract idea or is directed to an abstract idea. Rather, the Action is clear on its face that the Claim “recites” an abstract idea – namely, a method of organizing human activity (e.g., detecting money laundering) – and the “additional limitations” do not serve to integrate that abstract idea into a practical application. In fact, the limitations paraphrased in Applicant’s argument above are – in and of themselves – an illustration of the high level of generality recited in the Claim. Functions such as identifying, training, and further identifying are among the most common of computer functions. Accordingly, the Claim is a classic example of an “apply it” situation, as explained in the Non-Final Rejection in more detail. Greater specificity is required to integrate these steps into a practical application. Several suggestions are provided above. Perhaps an interview would be productive for this purpose. Applicant’s remaining arguments are likewise not persuasive. The Rejection must be maintained.

As to §103, Applicant’s arguments are moot in view of the new grounds of rejection. Accordingly, the Rejections are maintained.

Conclusion

4. Applicant should carefully consider the following in connection with this Office Action:

A. Finality

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office Action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

B. Search and Prior Art

The search conducted in connection with this Office Action, as well as any previous Actions, encompassed the inventive concepts as defined in the Applicant’s specification. That is, the search(es) included concepts and features which are defined by the pending claims but are also pertinent to significant although unclaimed subject matter. Accordingly, such search(es) were directed to the defined invention as well as the general state of the art, including references which are in the same field of endeavor as the present application as well as related fields (e.g., using clustering and other dimensionality reduction techniques, such as linked pair or link prediction analysis, to detect money laundering). Indeed, there is a plethora of prior art in these fields. Therefore, in addition to prior art references cited and applied in connection with this and any previous Office Actions, the following prior art is also made of record but not relied upon in the current rejection:

U.S. Patent Publication No. 2024/0330693 to Quamar et al. This reference relates to the concept of link prediction.

U.S. Patent Publication No. 2025/0131441 to Rule et al. This reference relates to the concept of anti-money laundering prediction systems.

U.S. Patent Publication No. 2017/0206596 to Zhang.
This reference relates to the concept of linked pair analysis.

U.S. Patent Publication No. 2022/0172211 to Muthuswamy et al. This reference relates to the concept of seed-based clustering.

C. Responding to this Office Action

In view of the foregoing explanation of the scope of searches conducted in connection with the examination of this application, in preparing any response to this Action, Applicant is encouraged to carefully review the entire disclosures of the above-cited, unapplied references, as well as any previously cited references. It is likely that one or more such references disclose or suggest features which Applicant may seek to claim. Moreover, for the same reasons, Applicant is encouraged to review the entire disclosures of the references applied in the foregoing rejections and not just the sections mentioned.

D. Interviews and Compact Prosecution

The Office strongly encourages interviews as an important aspect of compact prosecution. Statistics and studies have shown that prosecution can be greatly advanced by way of interviews. Indeed, in many instances, during the course of one or more interviews, the Examiner and Applicant may reach an agreement on eligible and allowable subject matter that is supported by the specification. Interviews are especially welcomed by this examiner at any stage of the prosecution process. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool (e.g., TEAMS). To facilitate the scheduling of an interview, the Examiner requests either a phone call at the number set forth below or the use of the AIR form as follows: USPTO Automated Interview Request, http://www.uspto.gov/interviewpractice. Other forms of interview requests filed in this application may result in a delay in scheduling the interview because of the time required to appear on the Examiner's docket. Thus, a phone call or the use of the AIR form is strongly encouraged.

E.
Communicating with the Office

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM BUNKER whose telephone number is (571) 272-0017. The examiner can normally be reached M - F, 8:30AM - 5:30PM, Pacific. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abhishek Vyas, can be reached at 571-270-1836. Information regarding the status of an application, whether published or unpublished, may be obtained from the “Patent Center” system. For more information about the Patent Center system, see https://patentcenter.uspto.gov/

/William (Bill) Bunker/
U.S. Patent Examiner, AU 3691
(571) 272-0017 - office
william.bunker@uspto.gov
December 8, 2025

Claim Rejections – 35 USC § 101

2. 35 USC § 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

A. Rejection Based on Abstract Idea

Claims 1 - 18 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Furthermore, this rejection is based on the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG).

B. Statutory Categories

Claim 1 is a system claim and it recites various hardware components such as a computer system, a memory, and a processor. This claim therefore falls into the category of machine/manufacture. Independent Claim 10 is a method claim and therefore falls into the category of a “process.”

C. The Claim Recites an Abstract Idea

Claim 1 is illustrative of the rejection of all claims.
Claim 1 recites the limitation:

“from a plurality of entity types associated with a financial institution, selecting a seed entity type and collecting a plurality of entities of the selected type associated with the financial institution; for each collected entity, considered as a seed entity, from the plurality of entities: identifying a first network of accounts associated with the seed entity, m transaction hops away from the seed entity, and looking at period t in history; if the first network of accounts includes at least one mule account, storing the network;”

This limitation, as drafted, is a process that, under its broadest reasonable interpretation, constitutes a method of organizing human activity, specifically, fundamental economic principles or practices. That is, analyzing this limitation in the context of the claim as a whole, it recites a process that falls within the grouping of abstract ideas comprising certain methods of organizing human activity. Fundamental economic principles or practices are examples of such methods. In this case, the fundamental economic principle or practice is the common practice of generating a graph or network of nodes and edges by which the potential for fraud – or in this case money laundering – can be more readily analyzed. This is an extremely common practice in machine learning. Furthermore, the mere nominal recitation of terms – such as “processor” or “computer readable medium” – does not remove the claim from the category of common or abstract methods of organizing human activity. Thus, Claim 1 recites a judicial exception, namely, an abstract idea.

D. The Claim Does Not Integrate the Abstract Idea into a Practical Application

Moreover, this judicial exception is not integrated into a practical application.
The possible “additional limitations” recited in the Claim that must be considered are as follows:

A system adapted to automatically identify suspected mule accounts, the system comprising: a processor and a non-transitory computer readable medium operably coupled thereto, the computer readable medium comprising a plurality of instructions stored in association therewith that are accessible for each collected entity, considered as a seed entity, from the plurality of entities: identifying a first network of accounts associated with the seed entity, m transaction hops away from the seed entity, and looking at period t in history; if the first network of accounts includes at least one mule account, storing the network; for each network that is stored: computing a similarity score between each pair of accounts in the first network of accounts; based on the similarity scores, clustering the accounts into n clusters; for each cluster: determining a ratio of known mule accounts in the cluster to a total number of accounts in the cluster; if the ratio exceeds a mule account rate threshold value, creating a label identifying the cluster as a mule account cluster; if the ratio does not exceed the mule account rate threshold value, creating the label identifying the cluster as a non-mule account cluster; storing the seed entity, the accounts, cluster ID, and the label, into a pre-training dataset; using the pre-training dataset to define a relation between each pair of accounts in the network; labeling each relation between account pairs as either part of a mule ring or not part of a mule ring; with the account pairs and the labels, training a link prediction model using supervised machine learning; and in real time: receiving a transaction in a fraud management system for a transaction entity of the plurality of entities associated with the financial institution; identifying a second network of accounts associated with the transaction entity, m transaction hops away from the transaction entity, and looking at period t in history; with the link prediction model, for each pair of accounts in the second network of accounts: computing a link prediction score, representative of a likelihood that the accounts in the pair of accounts are mule accounts; if the link prediction score exceeds a second threshold value, adding the accounts in the pair of accounts to a suspected mules list; displaying the suspected mules list to a user.

No additional computer components are mentioned in these limitations, and those quoted above are recited at a high level of generality. No other particular computer functions or computer component interactions within this system are recited. Analyzing financial transaction data can also be construed as a mental process and is extremely common. Analyzing these additional limitations individually, and taking the claim as a whole and as an ordered combination, it is clear that these additional limitations do not serve to integrate the abstract idea into a practical application. They do not recite a technological solution to a technological problem. They do not improve the functioning of the computer system itself. In fact, there are very few computerized system components or functions recited. Thus, these limitations fail to recite with specificity any technical function or any improvement to the functioning of the computer system itself. Therefore, the claim lacks the specificity required to transform the claim from one claiming only an outcome or a result – developing a training dataset for a machine learning (ML) model – to one claiming a specific way of achieving that outcome or result. Generating a graph or network of objects – such as entities, people, accounts, modes of communication, and the like – is one of the most common and basic functions performed by a computer. Calculating similarity scores and clustering the results are also common and basic. These concepts are abstract and basic.
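The cluster-labeling limitation quoted above (comparing the ratio of known mule accounts to a rate threshold) reduces to a few lines of ordinary logic, which illustrates the generality point. The names and the 0.3 threshold below are hypothetical, not drawn from the application:

```python
def label_clusters(clusters, known_mules, mule_rate_threshold=0.3):
    """clusters: dict of cluster_id -> list of account ids.
    known_mules: set of account ids previously identified as mules.
    Returns cluster_id -> label, per the ratio/threshold step recited in the claim."""
    labels = {}
    for cluster_id, accounts in clusters.items():
        ratio = sum(a in known_mules for a in accounts) / len(accounts)
        labels[cluster_id] = "mule" if ratio > mule_rate_threshold else "non-mule"
    return labels

clusters = {0: ["a1", "a2", "a3"], 1: ["a4", "a5"]}
labels = label_clusters(clusters, known_mules={"a1", "a2"})
# cluster 0: ratio 2/3 > 0.3, labeled "mule"; cluster 1: ratio 0, labeled "non-mule"
```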
Accordingly, the recitation of these generic components amounts to no more than mere instructions “to apply” the abstract idea exception using generic computer components. That is, the additional elements recited in the claim beyond the judicial exception(s) have been evaluated to determine whether those additional elements, considered individually and in combination, integrate the judicial exception(s) into a practical application. They do not.

E. Step 2B: The Claim Does Not Recite Significantly More than the Abstract Idea

This step involves the search for an “inventive concept.” However, it is clear from the case law and the MPEP that the considerations at issue are the same as those considered above with respect to the analysis of a practical application. See MPEP 2106.05(a) - (c) and (e). In other words, these analyses sharply overlap. Therefore, based on the above analysis, the identified additional limitations do not provide “significantly more” than the abstract idea. The claim is therefore ineligible under §101. The other independent claim is, likewise, ineligible for the same reasons, as it is virtually identical to Claim 1.

F. The Dependent Claims Do Not Recite Meaningful Additional Limitations

Similarly, Claim 2 recites the same abstract idea as Claim 1 by virtue of its dependency on Claim 1. Like Claim 1, this claim does not recite sufficient additional elements to integrate the abstract idea into a practical application. Claim 2 merely recites the abstract concept of limiting a clustering display to a certain number of transactions and a certain number of clusters. Claim 3 merely recites the abstract concept of a rate or ratio. Claim 4 merely recites the abstract concept of a threshold. Claim 5 merely recites the abstract concept of a seed for a graph or network. Claim 6 merely recites the abstract concept of a sending entity. Claim 7 merely recites the abstract concept of a certain number of hops.
Claim 8 merely recites the abstract concept of a period of time. Claim 9 merely recites the abstract concept of clustering criteria. Claims 10 - 18 are virtually identical to various of the aforementioned claims and are ineligible for the same reasons as set forth above.

None of these claims provide any additional meaningful limitations, non-generic computer components, or specific assignments of functionality among those components. At most, these claims recite only generic computer-related limitations which are recited at such a high level of generality as to be devoid of any meaningful limitations. These limitations do not recite improvements in the functioning of the computer or to any other technology or technical field. Therefore, these claims do not include additional elements that are sufficient to integrate the abstract idea into a practical application, nor do they amount to significantly more than the recited abstract idea because the additional elements, when considered both individually and as an ordered combination, constitute only a mere instruction to “apply” the abstract idea. Thus, Claims 1 - 18 constitute ineligible subject matter under 35 USC § 101 as being directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 - 18 are rejected under 35 U.S.C.
§103 as being unpatentable over U.S. Patent Publication No. 2021/0334822 to Pati et al. (hereinafter “Pati”) in view of U.S. Patent Publication No. 2021/0264318 to Butvinik (hereinafter “Butvinik”) and further in view of U.S. Patent Publication No. 2020/0394707 to Guo (hereinafter “Guo”). The Pati and Butvinik references are Applicant’s publications and should be well known to Applicant. They are directly on point with the claimed invention and in the exact same field of endeavor. Extensive citations to these references are not necessary to explain the Rejection as a whole.

The title of Pati is: “Systems and methods for detecting unauthorized or suspicious financial activity.” The Abstract reads as follows:

“In a method for detecting unauthorized or suspicious financial activity, a graph convolutional network for financial crime prevention, a separate node is created for each entity: each account, each person, each address (e.g. email address), etc. Separate attributes are provided to aggregate transactions in which the node acts as a sender; transactions in which the node acts as a receiver; transactions using a specific channel (e.g. ATM); and transactions of a specific type (e.g. online money transfer). In some embodiments, the attributes exclude data on individual transactions to reduce the amount of data and hence provide more effective computer utilization. The approach is suitable for many applications, including anti-money laundering. Other features are also provided, as well as systems for such detection.” (emphasis added)

Thus, Pati is directly on point with the methodology of the present claims in that it relates to reducing the amount of data that is generated for training an ML model and balancing the training dataset between normal or regular behavior and fraudulent behavior.
Broadest reasonable interpretation: While the term “mule” is not directly found in the specification of Pati, it is clear from Applicant’s specification (see, for example, [0003] – [0012]) that such a “mule” is merely an entity (i.e., a person) that moves illegally obtained money from one account to another to launder such illegal funds. It is well known by a person of ordinary skill in the art that it is persons who commit money laundering – whether they are aware of it or not.

Pati teaches various such entities or persons and their involvement in transactions in Figs. 2 – 3 as follows:

[Image: Pati, Figs. 2 – 3 (entities and their transactions), reproduced in the Office Action]

Furthermore, while the claimed invention is directed to a “network” of objects, it is clear that the similar terminology of a “graph” is synonymous or analogous to a person of ordinary skill in the art. Thus, a particularly salient teaching of Pati relating to graph techniques is as follows:

“[0010] The inventors have discovered that both relatively high speed and high reliability can often be obtained using Graph Convolutional Neural Networks (GCN, sometimes also called GCNN or CGN) if the financial input data are suitably prepared. In this technique, financial data are represented as a graph. In a conventional graph representation for GCN, some information is associated with graph nodes (vertices), while other information is associated with graph edges. Selecting the kind of information to associate with a node or an edge is an important decision with regard to GCN reliability and speed. According to some embodiments of the disclosure, nodes are used to represent entities, and edges represent transactions or other relationships between the entities. Further, even related information on an entity can be represented by different nodes.
For example, in some embodiments, an account entity is represented by one node; the account's owner is represented by a different node (rather than being an attribute of the account node); and the ownership relationship is represented by an edge between the two nodes. A family or business relationship between parties can also be represented by an edge. Thus, some embodiments use separate nodes to represent accounts, other nodes to represent parties, and optionally still other nodes to represent addresses, devices, etc. An address can be a complete street address, or a partial address (e.g. a country name and/or a city name and/or some other address portion). An address can also be an IP address, an email address, a layer-2 network address, or some other address recognized in computer networks. Such representation is believed to improve GCN reliability for many GCN types according to the present disclosure.” (emphasis added) “[0011] In some embodiments, information can be duplicated in different node types. For example, an account owner's name can be provided both in the owner's node and the account's node. In other embodiments, information is not duplicated: the account owner, the associated device, etc. are identified and described only in the respective nodes.” (Emphasis Added) Furthermore, Pati teaches that a transaction “type” can refer to various aspects of the suspect transaction. See, for example, the following: “[0022] In some embodiments, for each account 50, DB 40 stores data shown in FIG. 2. In particular, each account 50 is identified by some account ID (account number) 114. Further, account ownership data 120 specify the account owners (e.g. parties 60 or groups of parties), and the owners' addresses. Data 124 describe related accounts, e.g. accounts owned by the same owners. Data 130 describe the owners' relation to other entities, including family or other personal relationship, business relationships, or some other kind. 
For each transaction involving this account, transaction data 140 may describe the transaction as shown. In particular, data 144 specifies the transaction time. Opposite side data 148 specifies the opposite side, e.g. other account(s) 50 and/or parties 60. Source/Destination flag 150 specifies whether the account is the source or the destination in the transaction. Channel 160 indicates the technical infrastructure, e.g. Online banking, Phone banking, Check, etc. Type 170 indicates the type of activity, e.g. “login activity”, “information update”, “online monetary transfer”, “online banking”, etc. The Type 170 information may overlap with Channel 160 information. Some types of activities are performed only in specific channels. But a channel may allow multiple transaction types depending on the kind of channel.

[0023] “Other” data 178 may include other information, possibly dependent on the transaction type or channel. For example, for phone banking or online banking, other data 178 may include the device IDs (phone numbers, computer IDs, Network Interface Card IDs, etc.) involved in the transaction.

[0024] For each party 60 (FIG. 3), computer system 10 may store pertinent information such as: party name(s), address(es), phone numbers, and possibly other identifying information (184). Computer system 110 also may store the party type 186 (individual, corporation, or other type of group or organization).” (emphasis added)

Moreover, Pati illustrates the resulting graph structure in Fig. 4 as follows:

[Image: Pati, Fig. 4 (resulting graph structure), reproduced in the Office Action]

Therefore, with regard to Claim 1, Pati teaches:

1.
A system adapted to automatically identify suspected mule accounts, the system comprising: a processor and a non-transitory computer readable medium operably coupled thereto, the computer readable medium comprising a plurality of instructions stored in association therewith that are accessible to, and executable by, the processor, to perform operations which comprise: (See at least Fig. 1)

from a plurality of entity types associated with a financial institution, selecting a seed entity type and collecting a plurality of entities of the selected type associated with the financial institution;

for each collected entity, considered as a seed entity, from the plurality of entities: identifying a first network of accounts associated with the seed entity, m transaction hops away from the seed entity, and looking at period t in history; if the first network of accounts includes at least one mule account, storing the network; (See at least Fig. 4 reproduced above and accompanying description, including [0025] et seq. and Tables 1 and 2. As to a timeframe, see [0012]. As to a seed entity and transaction hops, please see [0174] – [0179].)

for each network that is stored: computing a similarity score between each pair of accounts in the first network of accounts; (See at least [0012] – [0013], wherein it is respectfully submitted that the “aggregation” teachings in Pati relative to “averages and/or medians and/or maxima and/or minima and/or some other aggregated values” are considered to constitute the recited term “similarity score.” The bottom-line effect of the claimed “similarity score” function is the same as Pati’s aggregation – to reduce the dimensionality of the data. See [0012].)
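The limitations mapped so far describe a concrete graph procedure: walk m transaction hops out from a seed entity, restricted to a historical window t, then score each pair of accounts in the resulting network. A minimal sketch of that procedure follows; the edge-list format, function names, and the choice of cosine similarity are illustrative assumptions, not drawn from the claims or from Pati.

```python
from collections import deque
from math import sqrt

def m_hop_network(edges, seed, m, t_start, t_end):
    """Collect the accounts within m transaction hops of `seed`,
    considering only transactions timestamped inside [t_start, t_end].
    `edges` is an iterable of (source, destination, timestamp) tuples."""
    adj = {}
    for src, dst, ts in edges:
        if t_start <= ts <= t_end:
            adj.setdefault(src, set()).add(dst)
            adj.setdefault(dst, set()).add(src)
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:                      # breadth-first search, depth-capped at m
        node, depth = frontier.popleft()
        if depth == m:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

def similarity(a, b):
    """Cosine similarity between two account feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```

In this sketch the similarity score plays the role the examiner attributes to Pati’s aggregation: collapsing per-transaction detail into one pairwise number.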
based on the similarity scores, clustering the accounts into n clusters; for each cluster: determining a ratio of known mule accounts in the cluster to a total number of accounts in the cluster; if the ratio exceeds a mule account rate threshold value; creating a label identifying the cluster as a mule account cluster; if the ratio does not exceed the mule account rate threshold value; creating the label identifying the cluster as a non-mule account cluster; (See at least [0055].)

storing the seed entity, the accounts, cluster ID, and the label, into a pre-training dataset; (See at least [0057] – [0059].)

using the pre-training dataset to define a relation between each pair of accounts in the network; (See at least [0072] et seq.)

labeling each relation between account pairs as either part of a mule ring or not part of a mule ring; (See at least [0201] et seq.)

with the account pairs and the labels, training a link prediction model using supervised machine learning; and in real time: (See at least [0045] – [0048].)

receiving a transaction in a fraud management system for a transaction entity of the plurality of entities associated with the financial institution; (See at least [0021].)

identifying a second network of accounts associated with the transaction entity, m transaction hops away from the transaction entity, and looking at period t in history; (See at least [0059].)

with the link prediction model, for each pair of accounts in the second network of accounts: computing a link prediction score, representative of a likelihood that the accounts in the pair of accounts are mule accounts; if the link prediction score exceeds a second threshold value, adding the accounts in the pair of accounts to a suspected mules list; and (See at least [0052] – [0059].)

displaying the suspected mules list to a user. (See at least [0021].)

Therefore, Pati appears to teach the basic limitations of Claim 1.
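The cluster-labeling and relation-labeling steps recited above — a mule ratio per cluster, a rate threshold, and mule-ring labels on account pairs — can be illustrated with a short sketch. The dictionary shapes, function names, and the 0.5 default threshold are assumptions for illustration only; the claim does not fix a particular data layout or threshold value.

```python
from itertools import combinations

def label_clusters(clusters, known_mules, rate_threshold=0.5):
    """Label each cluster by comparing its ratio of known mule accounts
    (to total accounts in the cluster) against the rate threshold."""
    labels = {}
    for cluster_id, accounts in clusters.items():
        ratio = sum(a in known_mules for a in accounts) / len(accounts)
        labels[cluster_id] = "mule" if ratio > rate_threshold else "non-mule"
    return labels

def pair_training_rows(clusters, labels):
    """Expand each cluster into labeled account pairs; pairs drawn from
    a mule-labeled cluster are marked as part of a mule ring."""
    rows = []
    for cluster_id, accounts in clusters.items():
        in_ring = labels[cluster_id] == "mule"
        for a, b in combinations(sorted(accounts), 2):
            rows.append((a, b, in_ring))
    return rows
```

The labeled pairs would then serve as the supervised training set for the link prediction model; at scoring time, account pairs in the second (real-time) network whose predicted score exceeds the second threshold would be added to the suspected-mules list.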
However, out of an abundance of caution, and subject to further consideration of the cited reference under the broadest reasonable interpretation of the relevant limitation, Butvinik is cited for its relevance to clustering techniques used to generate a superior training dataset. Butvinik is in the exact same field of endeavor as Pati and the claimed invention. The title of Butvinik is: “Computerized-system and method for generating a reduced size superior labeled training dataset for a high-accuracy machine learning classification model for extreme class imbalance of instances.” The Abstract of Butvinik reads as follows:

“A computerized-system and method for generating a reduced-size superior labeled training-dataset for a high-accuracy machine-learning-classification model for extreme class imbalance by: (a) retrieving minority and majority class instances to mark them as related to an initial dataset; (b) retrieving a sample of majority instances; (c) selecting an instance to operate a clustering classification model on it and the instances marked as related to the initial dataset to yield clusters; (d) operating a learner model to: (i) measure each instance in the yielded clusters according to a differentiability and an indicativeness estimators; (ii) mark measured instances as related to an intermediate training dataset according to the differentiability and the indicativeness estimators; (e) repeating until a preconfigured condition is met; (f) applying a variation estimator on all marked instances as related to an intermediate training dataset to select most distant instances; and (g) marking the instances as related to a superior training-dataset.” (emphasis added)

Furthermore, Butvinik teaches concepts that are considered to constitute the recited term “similarity score,” such as “differentiability estimators,” “indicativeness estimators,” and “variation estimator.” (See at least [0020] – [0021].)
Therefore, it would have been obvious to one of ordinary skill in the relevant art at the time of filing the claimed invention to have modified the graph or network teachings of Pati to add the clustering techniques of Butvinik. The motivation to do so comes from Pati. As quoted above, Pati teaches that related objects can be clustered. It would greatly enhance the efficiency and accuracy of the system of Pati to deploy the advanced clustering estimators of Butvinik.

Guo is cited for its teachings related to labeling a training dataset using ratios and thresholds. (See at least Guo: [0044] – [0050].) Therefore, it would have been obvious to one of ordinary skill in the relevant art at the time of filing the claimed invention to have modified the combined graph and clustering system of Pati in view of Butvinik to add the ratio and threshold teachings of Guo. The motivation to do so comes from Pati. As quoted above, Pati also teaches the “aggregation” of data for developing training datasets. It would greatly enhance the efficiency and reduce the dimensionality of the training dataset of the combined system to use the ratio and threshold teachings of Guo.

With regard to Claims 2 - 9, Pati in view of Butvinik teaches:

2. The system of claim 1, wherein n is at least 2 and no greater than 6, and wherein m is at least 2 and no greater than 6. (See at least [0012], since a person of ordinary skill in the art would readily understand that the number of hops or clusters – chosen to reduce the dimensionality of the data – is an arbitrary choice and therefore obvious to such a person of ordinary skill. Further, the criticality of the claimed numbers or percentages or thresholds has not been demonstrated.)

3. The system of claim 1, wherein the mule account rate threshold value is at least 50%.
(See at least [0012], since a person of ordinary skill in the art would readily understand that the number of hops or clusters – chosen to reduce the dimensionality of the data – is an arbitrary choice.)

4. The system of claim 1, wherein the threshold score for the link prediction machine learning model is at least 70%. (See at least [0012], since a person of ordinary skill in the art would readily understand that the number of hops or clusters – chosen to reduce the dimensionality of the data – is an arbitrary choice and therefore obvious to such a person of ordinary skill. Further, the criticality of the claimed numbers or percentages or thresholds has not been demonstrated.)

5. The system of claim 1, wherein the seed entity is a sending account, a receiving account, a device, an internet protocol (IP) address, a bank branch, a physical address, a sending email, a receiving email, a phone number, a sending person’s name, or a receiving person’s name. (See at least [0008] – [0013].)

6. The system of claim 1, wherein the transaction entity is a sending account, a receiving account, a device, an internet protocol (IP) address, or a bank branch. (See at least [0008] – [0013].)

7. The system of claim 1, wherein m is at least 2 and no greater than 4. (See at least [0012], since a person of ordinary skill in the art would readily understand that the number of hops or clusters – chosen to reduce the dimensionality of the data – is an arbitrary choice and therefore obvious to such a person of ordinary skill. Further, the criticality of the claimed numbers or percentages or thresholds has not been demonstrated.)

8. The system of claim 1, wherein t is at least 6 months and no greater than 12 months. (See at least [0012].)

9. The system of claim 1, wherein criteria for identifying the first network and criteria for identifying the second network are the same. (See at least [0008] – [0013].)
With regard to Claim 10, this claim is essentially identical to Claim 1 and is obvious for the same reasons as set forth above with respect to that claim.

With regard to Claim 11, this claim is essentially identical to Claim 2 and is obvious for the same reasons as set forth above with respect to that claim.

With regard to Claim 12, this claim is essentially identical to Claim 3 and is obvious for the same reasons as set forth above with respect to that claim.

With regard to Claim 13, this claim is essentially identical to Claim 4 and is obvious for the same reasons as set forth above with respect to that claim.

With regard to Claim 14, this claim is essentially identical to Claim 5 and is obvious for the same reasons as set forth above with respect to that claim.

With regard to Claim 15, this claim is essentially identical to Claim 6 and is obvious for the same reasons as set forth above with respect to that claim.

With regard to Claim 16, this claim is essentially identical to Claim 7 and is obvious for the same reasons as set forth above with respect to that claim.

With regard to Claim 17, this claim is essentially identical to Claim 8 and is obvious for the same reasons as set forth above with respect to that claim.

With regard to Claim 18, this claim is essentially identical to Claim 9 and is obvious for the same reasons as set forth above with respect to that claim.

Conclusion

5. Applicant should carefully consider the following in connection with this Office Action

Prosecution Timeline

Apr 08, 2024
Application Filed
Jul 07, 2025
Non-Final Rejection — §101, §103
Oct 08, 2025
Response Filed
Dec 08, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598178
BIOMETRIC DATA SUB-SAMPLING DURING DECENTRALIZED BIOMETRIC AUTHENTICATION
2y 5m to grant Granted Apr 07, 2026
Patent 12591893
Techniques For Expediting Processing Of Blockchain Transactions
2y 5m to grant Granted Mar 31, 2026
Patent 12572902
SYSTEMS AND METHODS FOR LEAST COST ACQUIRER ROUTING FOR PRICING MODELS
2y 5m to grant Granted Mar 10, 2026
Patent 12572903
BRIDGING NETWORK TRANSACTION PLATFORMS TO UNIFY CROSS-PLATFORM TRANSFERS
2y 5m to grant Granted Mar 10, 2026
Patent 12555147
INFORMATION PROCESSING METHOD AND STORAGE MEDIUM
2y 5m to grant Granted Feb 17, 2026
Based on the examiner's 5 most recent grants in similar technology.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+94.5%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 216 resolved cases by this examiner. Grant probability derived from career allow rate.
