Prosecution Insights
Last updated: April 19, 2026
Application No. 18/151,405

SYSTEMS AND METHODS FOR DETECTING ANOMALOUS ACTIVITY OVER A COMPUTER NETWORK

Non-Final OA (§101, §103)

Filed: Jan 06, 2023
Examiner: GREGG, MARY M
Art Unit: 3695
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mastercard International Incorporated
OA Round: 5 (Non-Final)

Grant Probability: 14% (At Risk)
OA Rounds: 5-6
To Grant: 5y 3m
With Interview: 28%

Examiner Intelligence

Career Allow Rate: 14% (89 granted / 629 resolved; -37.9% vs TC avg; grants only 14% of cases)
Interview Lift: +14.3% (moderate lift, resolved cases with vs. without interview)
Avg Prosecution: 5y 3m (typical timeline); 63 currently pending
Total Applications: 692 across all art units (career history)

Statute-Specific Performance

§101: 31.3% (-8.7% vs TC avg)
§103: 37.2% (-2.8% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 629 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The following is a Non-Final Office Action in response to communications received November 13, 2025. No claims have been canceled. Claims 1-5, 9-11, 13, 15-17, and 19 have been amended. No new claims have been added. Therefore, claims 1-20 are pending and addressed below.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission has been entered.

Priority

Application 18/151,405, filed 01/06/2023, is a continuation of Application 17/065,336, filed 10/07/2020.

Applicant Name/Assignee: MASTERCARD INTERNATIONAL INCORPORATED
Inventor(s): Thomson, Brett; Williams, Kyle; Senci, David

Response to Amendment/Arguments

Claim Rejections - 35 USC § 101

Applicant's arguments filed November 13, 2025 have been fully considered but are not persuasive. In the remarks, applicant points to the 2019 USPTO PEG guidance on patent-eligibility analysis. Applicant argues that the limitations recite a computer system comprising a processor for detecting anomalous activity over a computer network, and recites the limitations performed by the system processor.
Applicant argues that the previous Office action's mental-processes analysis does not take into consideration the amended limitations “input the …stream of …messages into a detection model …a machine learning model to apply at least one …learning algorithm trained to detect …anomalous data message velocity associated with data attack …by detecting …that a velocity of …messages being processed …having a range of first identifiers having a common value second identifier exceeds a threshold amount…response to detecting anomalous data message velocity of…messages…having the common value second identifier using the …learning algorithm, the at least one processor …to disrupt the anomalous data message velocity attack …by declining …any transactions.” Applicant contends these limitations cannot reasonably be performed as mental processes.

The examiner respectfully disagrees. The human mind, through observation, is capable of receiving message content, identifying identifiers, and, through analysis, detecting that a velocity of messages having a range of first identifiers with a common value second identifier exceeds a threshold amount. The human mind is capable of determining, based on that analysis, whether to decline transactions. The rejection is maintained.

In the remarks, applicant points to the August 4, 2025 USPTO memo on guidance and mental concepts. Applicant argues the human mind is not capable of inputting the stream of electronic messages into a model, disrupting the velocity attack by declining transactions, or appending a flag to electronic messages such that, if a message includes a first flag, enhanced authentication of the message is caused by requesting two-factor authentication, and, when a message includes a second flag, a risk score associated with the message is increased by a threshold amount. The claimed model is merely applied to automate the analysis of data.
The human mind is capable of receiving data through observation, analyzing the data, and making a decision based on the analysis to decline a transaction. With respect to the first and second “flag” appended, the flag as claimed is nothing more than an automated means of indicating that other processes are required. The human mind is capable of remembering that different analysis results require further action and of understanding conditions that require two-factor authentication. Two-factor authentication merely requires identity or processes to be verified using two different factors, and the human mind is capable of verifying identity using two different factors: for example, through observation, a factor of whether someone is male or female, with a second factor of whether someone is a child or an adult. The second flag merely informs the user to adjust a risk score threshold associated with a message. The human mind is capable of understanding and mentally adjusting a risk score; for example, a risk threshold originally set at 50 could be adjusted/increased to 70. The rejection is maintained.

In the remarks, applicant argues that under Step 2A, Prong 2, the claimed subject matter provides indications of patent eligibility that improve the functioning of computers, thereby integrating any judicial exception into a practical application. Applicant points to Example 47, claim 3 (anomaly detection), of the July 2024 guidance. Example 47 recites applying an ANN model to detect malicious anomalies in network traffic using a trained ANN, determining an anomaly associated with malicious network packets, detecting a source address associated with the malicious packets, dropping the malicious packets, and blocking future traffic from the source address. Example 47 found patent eligibility in improving the field of network intrusion detection by taking remedial actions to prevent network intrusions.
Applicant argues that the claimed limitations are analogous to Example 47, claim 3, because claim 1 of the current application improves the technical field of computer-network attack detection, pointing to paragraph 0004 of the specification, which explains the technical problems associated with fraud detection systems, and paragraph 0023, which explains that such attacks place a heavy network load on processing networks subject to attack. Additionally, paragraphs 0025 and 0031 of the specification disclose using learning algorithms to detect such attacks where, once attacks are detected, the learning algorithm causes transactions associated with common account BINs to be declined for a period of time. Applicant argues that, similar to Example 47, the claim limitations, in response to detecting anomalous (fraudulent) message velocity of messages having a common value second identifier, disrupt the message attack for a period of time by declining the transaction.

The examiner respectfully disagrees with the premise of applicant's argument. Under the claim interpretation, the term “attack” as claimed and referenced in the specification is not a network intrusion as set forth in Example 47. Network intrusions, as commonly understood, frequently entail the theft of valuable network resources and virtually always compromise network security and/or data security (e.g., corruption of data, financial loss from repairing damaged property, loss of data, theft of data, operational disruption, loss of reputation) because the computer/network operations themselves are affected by the malicious transmission. The specification of the current application discloses:

[0004] Payment card transaction processors, such as payment networks and issuing banks, may monitor payment card transactions for signs of fraudulent activity.
At least some known fraud detection systems monitor payment card transactions one payment card transaction at a time to determine whether the payment card transaction is potentially fraudulent. Such systems may not be able to detect certain types of widespread fraud attacks, such as the above-described common BIN fraud attacks. Moreover, these systems lack processes and infrastructure to effectively respond to these BIN attacks.

Please note that paragraph 0004 describes a problem in BIN fraud, which is a fundamental economic practice and therefore abstract, not a problem rooted in technology.

[0023] Additionally or alternatively, the detection models monitor the transaction streams for anomalously high PAN velocities (e.g., PAN velocities that exceed a pre-defined threshold level, such as one or two standard deviations above a standard velocity or a percentage higher than a standard velocity). Specifically, where a same PAN is used to attempt an anomalously high number of transactions, including transactions attempted with varying (e.g., sequential or random) expiration dates and/or security codes, a BIN attack (e.g., a card- or account-testing attack) may be occurring.

Please note that paragraph 0023 focuses on monitoring data streams in order to detect indications of fraud by analyzing high-PAN-velocity transactions with varying expiration dates and/or security codes. The specification focuses on fraud prevention, an abstract idea, and not on a problem rooted in technology or a process to improve or impact technological processes.

[0025] It is also contemplated that account status inquiries (ASIs) may also be monitored for BIN attack behavior, such as repeated ASIs for a same PAN with varying expiration dates and/or security codes, or anomalously high ASI traffic (velocity) for a particular BIN.

Please note that the specification focuses on the conceptual idea of fraudulent activity/behavior and is silent with respect to technology.
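The velocity test quoted from paragraph 0023 (PAN velocities exceeding a pre-defined threshold, such as one or two standard deviations above a standard velocity) can be sketched roughly as follows. This is a minimal illustration, not the application's implementation; the function names, the per-window velocity unit, and the two-standard-deviation default are assumptions:

```python
from statistics import mean, stdev

def anomalous_pans(current_velocities, baseline_velocities, num_stdevs=2.0):
    """Return PANs whose transaction velocity exceeds the baseline mean
    by more than num_stdevs standard deviations (hypothetical reading of
    spec para 0023; the cutoff and units are assumptions)."""
    mu = mean(baseline_velocities)
    sigma = stdev(baseline_velocities)
    threshold = mu + num_stdevs * sigma
    return {pan for pan, v in current_velocities.items() if v > threshold}

# Baseline: typical per-PAN transaction attempts per window (made-up numbers).
baseline = [3, 4, 2, 3, 5, 4, 3]
# Current window: one PAN attempting far more transactions than normal.
current = {"pan_a": 3, "pan_b": 95, "pan_c": 4}
print(anomalous_pans(current, baseline))  # {'pan_b'}
```

The percentage-above-standard variant mentioned in the same paragraph would only change how `threshold` is computed.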
[0030] The ADR computing device determines, for each of the transactions initiated or attempted during the time period associated with the BIN attack, a respective issuer response. The issuer response may be an authorization, indicating that the attempted transaction was successfully authorized, such as a response field populated with a "00" data element. The issuer response may otherwise be a decline, indicating that the attempted transaction was not authorized (e.g., due to an invalid PAN, expiration date, and/or security code). Each authorization indicates that the fraudster was successful in an attempted transaction, which in turn indicates that the PAN associated with the authorization may be compromised and vulnerable to future fraud attempts.

[0031] Accordingly, the ADR computing device extracts a PAN from each transaction record associated with an authorized transaction. These PANs are considered compromised as successfully "tested" by fraudsters. The ADR computing device generates a fraud attack alert that includes all of these compromised PANs and transmits the fraud attack alert to the issuer (or, in some cases, issuers) of the compromised PANs. In the example embodiment, the fraud attack alert includes instructions that cause the issuer to record or flag all of the PANs identified in the fraud attack alert as compromised or potentially compromised. Accordingly, any time a compromised/potentially compromised PAN is used to initiate a future or subsequent transaction, that transaction will undergo enhanced authentication before being authorized. Enhanced authentication may include, for example, two-factor authentication that requires an additional authentication data element be provided by a user that initiated the transaction, such as a one-time password, biometric sample, and the like.
This enhanced authentication requirement imposed on the compromised/potentially compromised PAN enables a true cardholder (or other user of the payment card) to continue using the same PAN while preventing fraudulent use thereof.

Please note that the ADR is applied to determine, for each transaction during a period of time, fraud attempts with BINs. The specification discloses the ADR extracting PAN data from each transaction record and then determining whether the PANs are compromised as successfully tested by fraudsters. The ADR generates a fraud attack alert and transmits the determined compromised BINs to the issuer with instructions for the issuer to flag the PANs identified as fraudulent. The specification focuses on the conceptual idea of fraudulent activity/behavior and is silent with respect to technology. The ADR is merely applied as a tool to identify and determine fraudulent PANs, which can be performed using any known means. The specification focuses on applying the ADR for fraud prevention (an abstract idea) and not on a problem rooted in technology or a process to improve or impact technological processes.

In Example 47, patent eligibility was not found in the use of the ANN as a tool to perform the analysis for identifying malicious data packets. Patent eligibility was found in the solution to the technical problem of determining the boundary between ordinary and anomalous data: a small variation may trigger identification of an anomaly in network security or medicine, while relatively larger deviations may be considered normal in less sensitive applications. Furthermore, malicious actors may attempt to make anomalies appear like ordinary activity. Example 47 provided a technical solution able to identify malicious packets and take remedial actions, including dropping suspicious packets and blocking traffic from suspicious source addresses, improving upon network security.
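The ADR flow quoted in paragraphs 0030-0031 (collect issuer responses during the attack window, treat "00" authorizations as compromised PANs, emit a fraud attack alert instructing the issuer to flag them) can be sketched as below. Only the "00" authorization code comes from the specification text; the record fields and alert structure are hypothetical:

```python
def compromised_pans(attack_window_records):
    """PANs successfully 'tested' by a fraudster: any record in the attack
    window whose issuer response field carries the authorization code '00'."""
    return sorted({r["pan"] for r in attack_window_records
                   if r["response_code"] == "00"})

def fraud_attack_alert(attack_window_records):
    """Build the alert transmitted to the issuer; the shape is an assumption."""
    return {
        "type": "fraud_attack_alert",
        "compromised_pans": compromised_pans(attack_window_records),
        "instruction": "flag listed PANs; require enhanced authentication "
                       "(e.g., two-factor) on subsequent transactions",
    }

records = [
    {"pan": "4111111111110001", "response_code": "05"},  # declined
    {"pan": "4111111111110002", "response_code": "00"},  # authorized -> compromised
    {"pan": "4111111111110003", "response_code": "00"},
]
print(fraud_attack_alert(records)["compromised_pans"])
# ['4111111111110002', '4111111111110003']
```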
This is not the case in the current application. The current application is directed toward identifying fraudulent PANs/BINs and then preventing any further transactions using the identified PANs/BINs, which does not provide a solution to a problem in the underlying technology but rather prevents fraud. The fraudulent BINs are not impacting the network through which the data is transmitted; instead, the fraudulent BINs/PANs create problems for the issuer paying on the fraudulent BINs/PANs. Accordingly, the claim limitations, when considered in light of the specification, are not directed toward an improvement to technology or a solution to a problem rooted in the underlying or any other technology. The rejection is maintained.

In the remarks, applicant argues that the August 4, 2025 USPTO memo requires that additional elements not be considered in a vacuum separate from the judicial exception, that the claim itself need not explicitly recite improvements described in the specification, and that patent ineligibility must be established by a preponderance of the evidence. Applicant argues the claimed limitations together include interdependencies between the operations and the judicial exception. Applicant's argument is not persuasive. The examiner notes that applicant does not identify what technology has been improved or point to where in the specification there is a discussion of improvement to the technology itself. The specification is silent with respect to a particular technical process to improve the capability of technology.
Rather, the specification discloses problems with detecting fraud events on payment card transactions targeted at accounts issued by specific issuers or within a certain account range; increased fraud attacks that include repeated transaction attempts within short periods of time; network usage due to undetected fraud attacks attempting to determine card verification numbers through trial and error; and the inability to detect and respond to account range fraud attacks in real time (spec ¶ 0039). The focus of the specification is to address fraud attacks on transaction accounts and the verification of card numbers, not technology. The rejection is maintained.

In the remarks, applicant argues that, similar to BASCOM, the current limitations provide significantly more than the alleged abstract idea, specifically the ordered combination of detecting and disrupting attacks on a computer network. Applicant's argument is not persuasive. In BASCOM, the ordered combination of generic computer operations provided a technical solution to a problem rooted in technology. This is not the case in the current application. The ordered combination of detecting and disrupting is not directed to solving a problem in technology but rather to performing fraud mitigation in a transaction. The "disrupting" step does not protect or provide a solution to issues in technology; it merely declines a transaction to prevent fraud. The rejection is maintained.

Claim Rejections - 35 USC § 103

Applicant's amendments are sufficient to overcome the § 103 rejection of claims 1-20. The § 103 rejection is withdrawn.

Claim Interpretation

The claim language includes "detect an attack on the computer network by detecting …that a velocity of …messages for a range of identifiers having a common value second identifier".
The specification discloses that the range of identifiers is account numbers sharing a common bank BIN subject to fraud (para 0040) and teaches that the ML algorithm is trained to identify high levels of transaction traffic associated with any BIN against a pre-defined threshold of standard velocities (see paras 0020-0021). The specification further discloses that the ML model is trained to recognize anomalously high velocities with high numbers of declines for invalid PANs (paras 0024-0026) and, based on this recognition, to decline all transactions with the BIN and/or at the merchant for a period. The specification also discloses that such transaction data is transmitted over a transaction network (FIG. 1; paras 0002, 0005, 0051), and the application of a fraud-analysis computer to monitor transaction streams to detect BIN attacks (para 0019).

Accordingly, in light of the specification, the examiner is interpreting the language "an attack on the computer network" as detecting fraudulent BIN attacks/transactions that have been carried out through or on a transaction network, where what is detected is not an attack on the underlying network and its technology; rather, the attack is directed toward transactions, the network merely receiving fraudulent BINs. The "attack on the network" is not on the network itself; rather, the data content being transmitted potentially contains fraudulent account data. The examiner is interpreting the language "disrupt" the attack as referring to disrupting a fraudulent transaction by declining the transaction process. The "disrupt" step/operation does not stop or prevent the network from allowing the fraudulent data to be transmitted on the network.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the instant application is directed to non-patentable subject matter. Specifically, the claims are directed toward at least one judicial exception without reciting additional elements that amount to significantly more than the judicial exception. The rationale for this determination is in accordance with USPTO guidelines, applies to all statutory categories, and is explained in detail below.

In reference to Claims 1-8:

STEP 1. Per Step 1 of the two-step analysis, the claims are determined to include a system, as in independent claim 1 and the dependent claims. Such systems fall under the statutory category of "machine." Therefore, the claims are directed to a statutory eligibility category.

STEP 2A, Prong 1. The claimed invention is directed to an abstract idea without significantly more. System claim 1 recites a functional process to: (1) receive a stream of messages; (2) input the messages; (3) detect a (BIN) attack on/through a network by detecting that the velocity of messages for a range of identifiers exceeds a velocity threshold; (4) decline/disrupt any transaction for a period of time; (5) identify a time when an anomalous velocity event is likely to be associated with the detected attack/fraud; (6) identify a time period associated with the anomalous activity beginning a preset time period before the anomalous velocity event occurred; (7) append an anomalous-activity first flag to messages initiated during the time period associated with the anomalous activity; (8) append a second flag; (9) determine whether a message includes the first or second flag; (10) if the first flag is included in a message, request two-factor authentication; (11) if the second flag is included in a message, increase a risk score.
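As the examiner characterizes steps (1)-(4), the detection groups messages by the common value second identifier (the BIN shared by a range of PANs), compares the group's velocity to a threshold, and the "disrupt" step declines matching transactions. A minimal sketch under those assumptions follows; the six-digit BIN prefix, the field names, and the threshold value are illustrative, not drawn from the claims:

```python
from collections import Counter

BIN_LENGTH = 6  # leading PAN digits taken as the common second identifier (assumption)

def detect_bin_attack(messages, velocity_threshold):
    """Return BINs whose aggregate message count in the current window
    (messages whose PANs share the same BIN prefix) exceeds the threshold."""
    per_bin = Counter(m["pan"][:BIN_LENGTH] for m in messages)
    return {b for b, count in per_bin.items() if count > velocity_threshold}

def disrupt(message, attacked_bins):
    """The 'disrupt' step as the examiner interprets it: decline any
    transaction whose PAN falls within an attacked BIN range."""
    return "DECLINE" if message["pan"][:BIN_LENGTH] in attacked_bins else "FORWARD"

# 50 messages hammering one BIN range, plus one unrelated message.
msgs = [{"pan": f"412345{i:010d}"} for i in range(50)] + [{"pan": "9999990000000001"}]
attacked = detect_bin_attack(msgs, velocity_threshold=10)
print(disrupt({"pan": "4123450000000042"}, attacked))  # DECLINE
print(disrupt({"pan": "9999990000000001"}, attacked))  # FORWARD
```

The claimed system would substitute a trained machine learning model for the fixed threshold; the grouping-by-BIN structure is the same either way.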
The claimed limitations, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of a generic processor to "receive," "apply," "decline/disrupt," "identify," "determine," "request," "append," and "increase score." That is, other than reciting the use of a processor and the additional element of applying a model for use in detecting velocity, with no positive recitation of a technical implementation of the detecting function, nothing in the claim elements precludes the steps from practically being performed in the mind. This includes "appending an activity flag to messages," which mimics the mental process of memory, where the human mind makes a note to remember anomalous activity. The steps can easily be performed in the human mind as mental processes because the steps of receiving data, detecting the velocity of messages for a range of first identifiers, identifying the earliest message, identifying a time period, and appending a flag to messages initiated during the time period mimic human thought processes of observation, evaluation, opinion, and memory, where the data interpretation is perceptible only in the human mind. See In re TLI Commc'ns LLC Patent Litig., 823 F.3d 607, 611 (Fed. Cir. 2016); FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1093-94 (Fed. Cir. 2016).

Furthermore, when considered as a whole, the claimed subject matter is directed toward receiving data and analyzing message velocity for a range of identifiers whose values exceed a threshold, used to identify a time period associated with anomalous activity, and appending an anomalous flag to messages initiated during that time period for risk mitigation. The specification discloses, in the background section, payment processing networks processing numerous payment card transactions, some of which are fraudulent (spec ¶¶ 0002-0004).
Fraudulent transaction attacks include bank BIN attacks, and the focus of the invention is to detect account range fraud attacks on payment card networks, i.e., to detect an occurrence of an account range fraud attack in which a set of PANs shares a common BIN (spec ¶ 0005, ¶ 0039). This process as a whole is directed toward risk mitigation. Such concepts fall in the abstract category of fundamental economic activity. These concepts, enumerated in Section I of the 2019 Revised Patent Subject Matter Eligibility Guidance published in the Federal Register (84 FR 50) on January 7, 2019, are directed toward the abstract categories of mental processes and methods of organizing human activity.

STEP 2A, Prong 2. The identified judicial exception is not integrated into a practical application because the claims fail to provide indications of patent-eligible subject matter that integrate the alleged abstract idea into a practical application. The additional elements recited in the claims beyond the abstract idea include a computing system comprising at least one processor, a computer network, and a machine learning model. The additional element "processor," applied to perform the operations "receive …messages over a computer network…" and "input …messages into …detection model…," is, according to MPEP 2106.05(d)(II) (see also MPEP 2106.05(g)), insignificant extra-solution activity. The courts have recognized the following computer functions as claimed in a merely generic manner (e.g., at a high level of generality), where technology is merely applied to perform the abstract idea or as insignificant extra-solution activity: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v.
Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014).

The claim limitations "receive" and "input" are recited at a high level of generality without details of technical implementation and thus are insignificant extra-solution activity. The additional element "detection model is a machine learning model," applied to "apply …learning algorithm trained to detect an anomalous data message velocity …," is a high-level function with an expected outcome. The additional element "machine learning algorithm," applied to perform the operation "detecting …velocity of …messages being processed …having a range of first identifiers having a common value second identifier exceeds threshold amount…," is a high-level function with an expected outcome. The additional element "processor," applied to perform the operations "disrupt …message velocity attack …by declining …any transaction associated with …messages having common value in second identifier"; "identify a time when …anomalous velocity event likely to be associated with the detected attack …occurred"; "identify a time period associated with the data attack …before the time when the first detected anomalous velocity occurred"; "append an …activity first flag…"; "append a second flag to messages…"; "determine if received message includes first or second flag…"; "cause requesting two-factor authentication" if a message includes the first flag; and "cause risk score associated with message to be increased" if a message includes the second flag, recites high-level functions with expected outcomes.
When considered individually, the additional element "processor" is merely applied to mitigate fraud by declining transactions in response to analyzing data where the results indicate fraudulent account identifiers, and by appending flags to messages to provide remediation processes of either requesting two-factor authentication or causing a risk score to increase, which are not processes directed toward a solution in technology. The "machine learning model" and/or "machine learning algorithm" are merely applied to analyze data in order to detect fraudulent identifiers in messages or the velocity of messages transmitted. The "computer network" is merely a field of use for data transmission and does not perform any other operations. Therefore, the claim limitations, when considered individually, fail to provide any indications of patent-eligible subject matter according to MPEP guidance (see MPEP 2106.05(a)-(c), (e)-(h)).

When considered as an ordered combination, the combination of limitations (1)-(3), the processes performed by the processor to "receive …messages," "input …messages" into the detection model, and "apply" the model to detect "anomalous data message velocity …having a range of first identifiers having common second value identifiers…," is not directed toward indications of patent eligibility, but rather toward analyzing received data in order to detect fraud in received messages. The learning algorithm/model is recited at a high level, amounting to no more than mere instructions on the analysis of data to detect fraud. The combination of limitations (1)-(3) and (4) is directed toward applying the processor to decline transactions based on the result of limitations (1)-(3).
The combination of limitations (1)-(4) and (5)-(11) is directed toward applying a processor to identify the time of detection of the anomalous velocity event and to append first and second flags prior to declining transactions based on associated time periods, where the flags cause additional risk mitigation by requesting two-factor authentication or by increasing a risk score. The combination of parts is not directed toward any of the indications of patent-eligible subject matter under Step 2A, Prong 2, but instead toward analyzing received messages in order to detect and mitigate fraud by declining transactions associated with a message identifier and, based on the flags appended, causing two-factor authentication or an increased risk score threshold to be applied in risk mitigation. The claim limitations, when considered as a whole, fail to provide any of the following indications of patent-eligible subject matter according to MPEP guidance (see MPEP 2106.05(a)-(c), (e)-(h)):

(i) an improvement to the functioning of a computer;
(ii) an improvement to another technology or technical field;
(iii) an application of the abstract idea with, or by use of, a particular machine;
(iv) a transformation or reduction of a particular article to a different state or thing; or
(v) other meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment.

The claim limitations as a whole, as an ordered combination, and as a combination of steps do not integrate the judicial exception into a practical application, as the claimed process fails to impose meaningful limits upon the abstract idea; the processes performed by the processor and learning model/algorithm are not directed toward any of the underlying technology. The claimed subject matter fails to provide additional elements, or a combination of elements, that go beyond applying technology as a tool to perform the identified abstract idea.
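The flag-driven remediation the examiner summarizes (a first flag triggers a two-factor authentication request; a second flag raises the message's risk score by a threshold amount) can be sketched as follows. The field names are hypothetical, and the default increment of 20 simply echoes the examiner's 50-to-70 example earlier in this action:

```python
def handle_flags(message, risk_increment=20):
    """Dispatch on appended flags: a first flag requests two-factor
    authentication; a second flag increases the risk score. The increment
    of 20 mirrors the examiner's 50 -> 70 example and is an assumption."""
    actions = []
    if message.get("first_flag"):
        actions.append("request_two_factor_authentication")
    if message.get("second_flag"):
        message["risk_score"] = message.get("risk_score", 0) + risk_increment
        actions.append("risk_score_increased")
    return actions

msg = {"second_flag": True, "risk_score": 50}
print(handle_flags(msg), msg["risk_score"])  # ['risk_score_increased'] 70
```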
The functions recited by the system processor in the claims recite the concept of a financial activity. The claim limitations and specification lack technical disclosure of what the technical problem was and how the claimed limitations provide a technical solution to a technical problem rather than a solution to a problem found in the abstract idea. Taking the claim elements separately or as a combination, the operation performed by the system processor at each step of the process is expressed purely in terms of results desired and is devoid of implementation details. Technology is not integral to the process, as the claimed subject matter is at so high a level that any generic programming could be applied and the functions could be performed by any known means. Furthermore, the claimed functions do not provide an operation that could be considered sufficient to provide a technological implementation or application of, or improvement to, this concept (i.e., integration into a practical application). The integration of elements does not improve upon technology or upon computer functionality or capability in how computers carry out one of their basic functions. The integration of elements does not provide a process that allows computers to perform functions that previously could not be performed. The integration of elements does not provide a process that applies a relationship in a new way of using an application. The limitations do not recite a specific-use machine or the transformation of an article to a different state or thing. The limitations do not provide other meaningful limits beyond generally linking the use of the abstract idea to a particular technological environment. The resource claimed to perform the steps is merely a "field of use" application of technology.
The instant application, therefore, still appears only to implement the abstract idea in a particular technological environment, applying generic computer functionality known in the related arts. The steps remain a combination made to perform a financial activity and do not provide any of the indications of patent eligibility set forth in the 2019 USPTO §101 guidance. The additional steps only add to those abstract ideas using generic functions, and the claims do not show improved ways of, for example, a particular technical function for performing the abstract idea that imposes meaningful limits upon it. Moreover, the Examiner was not able to identify any specific technological process that goes beyond merely confining the abstract idea to a particular technological environment and that, when considered in ordered combination with the other steps, could have transformed the nature of the abstract idea previously identified. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. This is because the claimed subject matter is directed toward identifying fraudulent messages and fails to provide additional elements (i.e., the processor, the learning algorithm, and the use of a processor) or a combination of elements that apply or use the judicial exception in a manner that imposes a meaningful limit on it. The functions recited in the claims recite the concept of analyzing received identifiers to determine fraudulent activity, which is a process directed toward a business practice. The analysis finds no indication in the claim language that the structure, or the manner in which the model operates, is changed in any way; nor is any such indication found elsewhere in the written description. The claim provides no technical details regarding how the "applying" operation is performed.
Instead, similar to the claims at issue in Intellectual Ventures I LLC v. Capital One Financial Corp., 850 F.3d 1332 (Fed. Cir. 2017), “the claim language . . . provides only a result-oriented solution with insufficient detail for how a computer accomplishes it. Our law demands more.” Intellectual Ventures, 850 F.3d at 1342 (citing Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1356 (Fed. Cir. 2016)). STEP 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, they fail to integrate the abstract idea into a practical application. The additional elements recited in the claims beyond the abstract idea include a computing system comprising at least one processor, a computer network, and a machine learning model. Specifically, the claims recite a “processor” to receive, apply, decline transactions, append a flag to a message, and cause two-factor authentication or cause a risk score threshold to increase; and a machine learning algorithm/model used to detect when the velocity of electronic messages for a range of first identifiers having a common-value second identifier exceeds a threshold. The machine learning application lacks technical disclosure or details of technical implementation. Nearly every computer system will include a “processor” and “model” capable of performing the basic computer functions of the recited receive, apply, identify, and append steps. Taking the claim elements separately, the function performed by the model is purely conventional. The processor functions of “receive”, “apply”, “identify”, and “append” recite conventional generic computer components employed in a customary manner. Using a model to detect the velocity of messages meeting a specific condition is among the most basic functions of a computer.
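The limitation paraphrased here, detecting when the velocity of messages whose first identifiers (e.g., PANs) share a common-value second identifier (e.g., a BIN) exceeds a threshold, reduces to a grouping-and-counting step. A minimal sketch follows; the six-digit prefix length, the function names, and the threshold are assumptions for illustration only and do not come from the claims.

```python
from collections import defaultdict

def velocity_by_bin(pans: list, prefix_len: int = 6) -> dict:
    """Count messages per common second identifier (the BIN, taken here
    as the leading digits of each first identifier/PAN)."""
    counts = defaultdict(int)
    for pan in pans:
        counts[pan[:prefix_len]] += 1
    return dict(counts)

def bins_over_threshold(pans: list, threshold: int) -> set:
    """Second identifiers whose message velocity exceeds the threshold."""
    return {b for b, n in velocity_by_bin(pans).items() if n > threshold}
```

The sketch is consistent with the rejection's characterization: the operations are ordinary grouping and counting, with no implementation detail recited in the claims beyond the result.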
When the claims are taken as a whole, as an ordered combination, the combination of steps does not add “significantly more” by virtue of considering the steps together. The model function comprises generic, routine, conventional computer activities that are performed only for their conventional uses. See Elec. Power Grp. v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016); see also In re Katz Interactive Call Processing Patent Litigation, 639 F.3d 1303, 1316 (Fed. Cir. 2011). The model is not used in some unconventional manner and does not produce some unexpected result. The “applying” of the model step does no more than require a generic computer to perform generic computer functions. As to the data operated upon, "even if a process of collecting and analyzing information is 'limited to particular content' or a particular 'source,' that limitation does not make the collection and analysis other than abstract." SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1168 (Fed. Cir. 2018). Considered as an ordered combination, the claimed steps add nothing that is not already present when the steps are considered separately. The sequence of data reception, analysis, and marking of messages is equally generic and conventional. See Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 715 (Fed. Cir. 2014) (sequence of receiving, selecting, offering for exchange, display, allowing access, and receiving payment recited as an abstraction); Inventor Holdings, LLC v. Bed Bath & Beyond, Inc., 876 F.3d 1372, 1378 (Fed. Cir. 2017) (sequence of data retrieval, analysis, modification, generation, display, and transmission); Two-Way Media Ltd. v. Comcast Cable Communications, LLC, 874 F.3d 1329, 1339 (Fed. Cir. 2017) (sequence of processing, routing, controlling, and monitoring). The ordering of the steps is therefore ordinary and conventional.
The analysis concludes that the claims do not provide an inventive concept because the additional elements recited in the claims do not provide significantly more than the recited judicial exception. According to MPEP 2106.05, the use of well-understood and routine processes to perform the abstract idea is not sufficient to transform the claim into patent-eligible subject matter. As evidence the Examiner provides the following from the specification: [0020] In particular, the fraud analysis computer system includes an attack detection and response (ADR) computing device configured to monitor the transaction streams using artificial intelligence and/or machine learning algorithms to detect a BIN attack. The artificial intelligence and/or machine learning algorithms may include one or more detection models trained to identify anomalously high levels of transaction traffic for a common account range or BIN. In particular, a standard or expected velocity associated with any BIN may be pre-defined, stored, and provided to the detection models. These standard velocities may be determined and pre-defined based upon analysis of a plurality of historical transactions (e.g., hundreds, thousands, tens of thousands, hundreds of thousands, etc., of historical transactions) initiated using PANs sharing a same BIN. [0041] The resulting technical effect achieved by this system is at least one of: (i) reducing network-based fraud events through early detection, in particular, realtime detection (and, therefore, real-time response to) account-range fraud attacks; (ii) reducing future fraud events by flagging compromised accounts/account numbers; (iii) applying artificial intelligence and/or machine learning algorithms to monitor a variety of velocities to accurately and robustly detect account range fraud attacks; and/or (iv) alerting affected parties to fraud attacks to facilitate increased fraud prevention. Thus, the system enables enhanced fraud detection on the payment card transaction network.
Once a pattern of fraudulent activity is detected and identified, further fraudulent payment card transaction attempts may be reduced or isolated from further processing on the payment card interchange network, which results in a reduced amount of fraudulent network traffic and reduced processing time devoted to fraudulent transactions, and thus a reduced burden on the network. [0043] As used herein, a "processor" may include any programmable system including systems using central processing units, microprocessors, microcontrollers, reduced instruction set circuits (RISC), application specific integrated circuits (ASICs), logic circuits, and any other circuit or processor capable of executing the functions described herein. The above examples are example only, and are thus not intended to limit in any way the definition and/or meaning of the term "processor." [0046] The systems and processes are not limited to the specific embodiments described herein. In addition, components of each system and each process can be practiced independent and separate from other components and processes described herein. Each component and process can also be used in combination with other assembly packages and processes. [0052] In the example embodiment, fraud analysis computing system 100 includes payment processing network 102, which itself includes a plurality of payment processors 104, as well as an attack detection and response (ADR) computing device 106 communicatively coupled to payment processing network 102 and to one or more databases 108. In some embodiments, as noted above, ADR computing device 106 is implemented as part of, or in association with, payment processing network 102. Payment processing network 102 may include any transaction processing network, scheme, or system suitable for processing online transactions, including payment card (e.g., credit card, debit card, prepaid card, etc.) transactions, such as the Mastercard® interchange network.
The Mastercard® interchange network is a set of proprietary communications standards promulgated by Mastercard International Incorporated® for the exchange of financial transaction data and the settlement of funds between financial institutions that are members of Mastercard International Incorporated®. (Mastercard is a registered trademark of Mastercard International Incorporated located in Purchase, New York). [0055] ADR computing device 106 is configured to monitor transaction streams (e.g., transaction messages processed over payment processing network 102, such as authorization request messages and/or account status inquiries) using artificial intelligence and/or machine learning algorithms to detect a BIN attack. The artificial intelligence and/or machine learning algorithms may include one or more detection models 112 trained to identify anomalously high levels of transaction traffic in a common account range or with a common BIN (e.g., a common BIN 56). In particular, a standard or expected velocity associated with any BIN may be pre-defined, stored (e.g., in database 108), and provided to detection models 112. These standard velocities may be determined and pre-defined based upon analysis of a plurality of historical transactions (e.g., hundreds, thousands, tens of thousands, hundreds of thousands, etc., of historical transactions) initiated using PANs sharing a same BIN. [0085] As used herein, "machine learning" refers to statistical techniques to give computer systems the ability to "learn" (e.g., progressively improve performance on a specific task) with data, without being explicitly programmed for that specific task. "Artificial intelligence" refers to computer-executed techniques that allow a computer to interpret external data, "learn" from that data, and apply that knowledge to a particular end. Artificial intelligence may include, for example, neural networks used for predictive modelling. US Pub No.
2019/0073647 A1 by Zoldi et al.: “it is well known that the plastic cards are targets for fraudsters who may obtain the card information illegally, often purchasing within BIN ranges (the BIN being the first 6 digits of the payment card and uniquely associated with the card issuer), and use the stolen card details for fraudulent purchases…Numerous algorithms and techniques have been utilized in the card transaction field aimed at detecting fraudulent payment card transactions. In general data mining algorithms are applied on historical transaction datasets, and artificial intelligence models are developed from the transactional patterns of legitimate and fraudulent transactions of the payment cards. One of the most prominent models … utilizes transaction profiling and neural network classification models for the majority of card issuers worldwide to detect fraudulent payment card transactions” (¶ 0002-0003). The prior art teaches “detection of cards purchased and in-play revolves around aggregate abnormality patterns associated with particular groups of cards at the BIN/ZIP level. What is needed are methods to include such information in risk assessment and detection to assist in the detection and measurement of payment card fraud risk” (¶ 0008); see also ¶ 0011, wherein the prior art teaches identifying whether a group of cards shows abnormalities in spending behaviors. US Patent No. 8,924,279 B2 by Liu et al. (2014 filing date), Col. 10: “The issuer may specify that the business rules optimizer service focus its analysis upon a particular time period, a program, debit products only (i.e., debit cards instead of credit cards), a particular product (i.e., Products--VISA classic gold), one or more Bank Identification numbers (BIN) for the issuer, a group of BINs (BID) of the issuer, a particular geographic location, merchants dealing only in certain commodities, etc. Also, transactions of a certain category of peer-issuers may be part of the issuer's desired filtering criteria.
The examination of peer-issuer transactions may help to better stop a type of fraud upon the issuer that has been tried upon one or more of its peer-issuers, as reflected in historical transaction data, thereby revealing patterns and trends of fraud.” See also US Pub No. 2014/0114840 A1 by Arnold et al., para 0017-0020; US Pub No. 2010/0057623 A1 by Kapur et al., para 0095-0096. The instant application, therefore, still appears to only implement the abstract ideas in the particular technological environment using what are generic components and functions in the related arts. The claim is not patent eligible. The remaining dependent claims—which impose additional limitations—also fail to claim patent-eligible subject matter because the limitations cannot be considered statutory. In reference to claims 2-8: these dependent claims have also been reviewed under the same analysis as independent claim 1. Dependent claim 2 is directed toward detecting that an identifier matches a flagged identifier and initiating an enhanced authentication procedure to transmit a message to the issuer (risk mitigation and transaction activity; sales activity). Dependent claim 3 is directed toward a flag indicating fraudulent activity (risk mitigation). Dependent claim 4 is directed toward an authorization message (sales activity). Dependent claim 5 is directed toward storing identifiers associated with flagged messages (risk mitigation). Dependent claim 6 is directed toward querying anomalous activity to retrieve messages initiated during a time period (insignificant extra-solution activity), extracting an identifier and identifying the respective issuer associated with the identifier (collecting and analyzing data for a sales activity), and generating an alert (risk mitigation and transaction activity). Dependent claim 7 is directed toward appending a flag to each extracted PAN in an issuer database (risk mitigation). Dependent claim 8 is directed toward issuers issuing new identifiers (transaction activity).
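The specification passages quoted above ([0020], [0055]) describe pre-defining a standard or expected velocity for each BIN from historical transactions and flagging anomalously high traffic against it. A minimal sketch of that baseline-and-threshold idea follows; the hourly bucketing, the function names, and the 3x multiplier are assumptions for illustration, not values disclosed in the application.

```python
from collections import Counter

def standard_velocity(historical_times: list, window: int = 3600) -> float:
    """Expected transactions per window for one BIN, averaged over the
    historical record (timestamps given in seconds)."""
    if not historical_times:
        return 0.0
    buckets = Counter(t // window for t in historical_times)
    span = max(buckets) - min(buckets) + 1  # count empty windows too
    return sum(buckets.values()) / span

def is_anomalous(observed: int, expected: float, factor: float = 3.0) -> bool:
    """Flag a window whose observed velocity far exceeds the pre-defined
    standard velocity for the BIN (multiplier is an assumption)."""
    return observed > factor * max(expected, 1.0)
```

Nothing in this sketch goes beyond the ordinary counting and comparison the rejection characterizes as conventional; it is included only to make the quoted velocity-baseline mechanism concrete.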
The dependent claim(s) have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 1. Where all claims are directed to the same abstract idea, “addressing each claim of the asserted patents [is] unnecessary.” Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat’l Ass’n, 776 F.3d 1343, 1348 (Fed. Cir. 2014). If applicant believes dependent claims 2-8 are directed toward patent-eligible subject matter, applicant is invited to point out the specific limitations in the claims that are directed toward patent-eligible subject matter. In reference to Claims 9-14: STEP 1. Per Step 1 of the two-step analysis, the claims are determined to include a method, as in independent Claim 9 and the dependent claims. Such methods fall under the statutory category of "process." Therefore, the claims are directed to a statutory eligibility category. STEP 2A Prong 1. The steps of method claim 9 correspond to the functions of system claim 1. Therefore, claim 9 has been analyzed and rejected as being directed toward an abstract idea within the categories of mental processes and methods of organizing human activity previously discussed with respect to claim 1. STEP 2A Prong 2: The steps of method claim 9 correspond to the functions of system claim 1. Therefore, claim 9 has been analyzed and rejected as failing to provide limitations that are indicative of integration into a practical application, as previously discussed with respect to claim 1. STEP 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, they fail to integrate the abstract idea into a practical application. The additional elements beyond the abstract idea include a processor and a machine learning algorithm/model, each of which is purely functional and generic.
Nearly every computer-implemented method will include a “processor” capable of performing the basic computer functions of the recited “receiving”, “applying algorithms”, “identifying”, “appending”, “causing a request for two-step authentication”, or “causing a risk score threshold to increase” steps for fraud mitigation. As a result, none of the computer elements recited for implementing the method of claim 9 offers a meaningful limitation beyond generally linking the use of the method to a particular technological environment, that is, implementation via computers. Method claim 9 corresponds to the functions of system claim 1. Therefore, claim 9 has been analyzed and rejected as failing to provide additional elements that amount to an inventive concept, i.e., significantly more than the recited judicial exception. Furthermore, as previously discussed with respect to claim 1, the limitations, when considered individually, as a combination of parts, or as a whole, fail to provide any indication that the elements recited are unconventional or otherwise more than what is well-understood, conventional, routine activity in the field. According to MPEP 2106.05, the use of well-understood and routine processes to perform the abstract idea is not sufficient to transform the claim into patent-eligible subject matter. As evidence the Examiner provides the following from the specification: [0020] In particular, the fraud analysis computer system includes an attack detection and response (ADR) computing device configured to monitor the transaction streams using artificial intelligence and/or machine learning algorithms to detect a BIN attack. The artificial intelligence and/or machine learning algorithms may include one or more detection models trained to identify anomalously high levels of transaction traffic for a common account range or BIN. In particular, a standard or expected velocity associated with any BIN may be pre-defined, stored, and provided to the detection models.
These standard velocities may be determined and pre-defined based upon analysis of a plurality of historical transactions (e.g., hundreds, thousands, tens of thousands, hundreds of thousands, etc., of historical transactions) initiated using PANs sharing a same BIN. [0041] The resulting technical effect achieved by this system is at least one of: (i) reducing network-based fraud events through early detection, in particular, realtime detection (and, therefore, real-time response to) account-range fraud attacks; (ii) reducing future fraud events by flagging compromised accounts/account numbers; (iii) applying artificial intelligence and/or machine learning algorithms to monitor a variety of velocities to accurately and robustly detect account range fraud attacks; and/or (iv) alerting affected parties to fraud attacks to facilitate increased fraud prevention. Thus, the system enables enhanced fraud detection on the payment card transaction network. Once a pattern of fraudulent activity is detected and identified, further fraudulent payment card transaction attempts may be reduced or isolated from further processing on the payment card interchange network, which results in a reduced amount of fraudulent network traffic and reduced processing time devoted to fraudulent transactions, and thus a reduced burden on the network. [0046] The systems and processes are not limited to the specific embodiments described herein. In addition, components of each system and each process can be practiced independent and separate from other components and processes described herein. Each component and process can also be used in combination with other assembly packages and processes. 
[0055] ADR computing device 106 is configured to monitor transaction streams (e.g., transaction messages processed over payment processing network 102, such as authorization request messages and/or account status inquiries) using artificial intelligence and/or machine learning algorithms to detect a BIN attack. The artificial intelligence and/or machine learning algorithms may include one or more detection models 112 trained to identify anomalously high levels of transaction traffic in a common account range or with a common BIN (e.g., a common BIN 56). In particular, a standard or expected velocity associated with any BIN may be pre-defined, stored (e.g., in database 108), and provided to detection models 112. These standard velocities may be determined and pre-defined based upon analysis of a plurality of historical transactions (e.g., hundreds, thousands, tens of thousands, hundreds of thousands, etc., of historical transactions) initiated using PANs sharing a same BIN. [0085] As used herein, "machine learning" refers to statistical techniques to give computer systems the ability to "learn" (e.g., progressively improve performance on a specific task) with data, without being explicitly programmed for that specific task. "Artificial intelligence" refers to computer-executed techniques that allow a computer to interpret external data, "learn" from that data, and apply that knowledge to a particular end. Artificial intelligence may include, for example, neural networks used for predictive modelling. US Pub No. 2021/0312451 A1 by Allbright et al., para 0017-0018, para 0037, para 0042-0043, para 0057, para 0062, para 0064; US Pub No. 2020/0342453 A1 by Braundmeier; US Pub No.
2019/0073647 A1 by Zoldi et al.: “it is well known that the plastic cards are targets for fraudsters who may obtain the card information illegally, often purchasing within BIN ranges (the BIN being the first 6 digits of the payment card and uniquely associated with the card issuer), and use the stolen card details for fraudulent purchases…Numerous algorithms and techniques have been utilized in the card transaction field aimed at detecting fraudulent payment card transactions. In general data mining algorithms are applied on historical transaction datasets, and artificial intelligence models are developed from the transactional patterns of legitimate and fraudulent transactions of the payment cards. One of the most prominent models … utilizes transaction profiling and neural network classification models for the majority of card issuers worldwide to detect fraudulent payment card transactions” (¶ 0002-0003). The prior art teaches “detection of cards purchased and in-play revolves around aggregate abnormality patterns associated with particular groups of cards at the BIN/ZIP level. What is needed are methods to include such information in risk assessment and detection to assist in the detection and measurement of payment card fraud risk” (¶ 0008); see also ¶ 0011, wherein the prior art teaches identifying whether a group of cards shows abnormalities in spending behaviors. US Patent No. 8,924,279 B2 by Liu et al., Col. 10: “The issuer may specify that the business rules optimizer service focus its analysis upon a particular time period, a program, debit products only (i.e., debit cards instead of credit cards), a particular product (i.e., Products--VISA classic gold), one or more Bank Identification numbers (BIN) for the issuer, a group of BINs (BID) of the issuer, a particular geographic location, merchants dealing only in certain commodities, etc. Also, transactions of a certain category of peer-issuers may be part of the issuer's desired filtering criteria.
The examination of peer-issuer transactions may help to better stop a type of fraud upon the issuer that has been tried upon one or more of its peer-issuers, as reflected in historical transaction data, thereby revealing patterns and trends of fraud.” See also US Pub No. 2014/0114840 A1 by Arnold et al., para 0017-0020; US Pub No. 2010/0057623 A1 by Kapur et al., para 0095-0096. The instant application, therefore, still appears to only implement the abstract ideas in the particular technological environment using what are generic components and functions in the related arts. The claim is not patent eligible. The remaining dependent claims—which impose additional limitations—also fail to claim patent-eligible subject matter because the limitations cannot be considered statutory. In reference to claims 10-14: these dependent claims have also been reviewed under the same analysis as independent claim 9. Dependent claim 10 is directed toward detecting that a first identifier matches a flagged identifier and initiating an enhanced authentication procedure (transaction process). Dependent claim 11 is directed toward a flag indicating fraudulent activity (risk mitigation). Dependent claim 12 is directed toward messages comprising authorization messages (sales activity). Dependent claim 13 is directed toward storing identifiers associated with flagged messages (insignificant extra-solution activity and sales activity). Dependent claim 14 is directed toward querying anomalous activity to retrieve messages initiated during a time period (insignificant extra-solution activity), extracting an identifier and identifying the respective issuer associated with the identifier (collecting and analyzing data for a sales activity), and generating an alert (risk mitigation and transaction activity). The dependent claim(s) have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 9.
Where all claims are directed to the same abstract idea, “addressing each claim of the asserted patents [is] unnecessary.” Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat’l Ass’n, 776 F.3d 1343, 1348 (Fed. Cir. 2014). If applicant believes dependent claims 10-14 are directed toward patent-eligible subject matter, applicant is invited to point out the specific limitations in the claims that are directed toward patent-eligible subject matter. In reference to Claims 15-20: STEP 1. Per Step 1 of the two-step analysis, the claims are determined to include a non-transitory computer readable storage medium, as in independent Claim 15 and the dependent claims. Such mediums fall under the statutory category of "manufacture." Therefore, the claims are directed to a statutory eligibility category. STEP 2A Prong 1. The instructions of manufacture claim 15 correspond to the functions of system claim 1. Therefore, claim 15 has been analyzed and rejected as being directed toward an abstract idea within the categories of mental processes and methods of organizing human activity previously discussed with respect to claim 1. STEP 2A Prong 2: The instructions of manufacture claim 15 correspond to the functions of system claim 1. Therefore, claim 15 has been analyzed and rejected as failing to provide limitations that are indicative of integration into a practical application, as previously discussed with respect to claim 1. STEP 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, they fail to integrate the abstract idea into a practical application. The additional elements beyond the abstract idea include a non-transitory computer-readable medium including instructions executed by a processor and a machine learning algorithm/model, each of which is purely functional and generic.
Nearly every non-transitory computer readable medium storing instructions will involve a “processor” capable of performing the basic computer instructions of the recited “receiving”, “applying algorithms”, “identifying”, “appending”, “causing a request for two-step authentication”, or “causing a risk score threshold to increase” steps for risk mitigation. As a result, none of the computer elements recited for implementing the instructions of claim 15 offers a meaningful limitation beyond generally linking the use of the method to a particular technological environment, that is, implementation via computers. The instructions of medium claim 15 correspond to the functions of system claim 1. Therefore, claim 15 has been analyzed and rejected as failing to provide additional elements that amount to an inventive concept, i.e., significantly more than the recited judicial exception. Furthermore, as previously discussed with respect to claim 1, the limitations, when considered individually, as a combination of parts, or as a whole, fail to provide any indication that the elements recited are unconventional or otherwise more than what is well-understood, conventional, routine activity in the field. According to MPEP 2106.05, the use of well-understood and routine processes to perform the abstract idea is not sufficient to transform the claim into patent-eligible subject matter.
As evidence the Examiner provides the following from the specification: [0041] The resulting technical effect achieved by this system is at least one of: (i) reducing network-based fraud events through early detection, in particular, realtime detection (and, therefore, real-time response to) account-range fraud attacks; (ii) reducing future fraud events by flagging compromised accounts/account numbers; (iii) applying artificial intelligence and/or machine learning algorithms to monitor a variety of velocities to accurately and robustly detect account range fraud attacks; and/or (iv) alerting affected parties to fraud attacks to facilitate increased fraud prevention. Thus, the system enables enhanced fraud detection on the payment card transaction network. Once a pattern of fraudulent activity is detected and identified, further fraudulent payment card transaction attempts may be reduced or isolated from further processing on the payment card interchange network, which results in a reduced amount of fraudulent network traffic and reduced processing time devoted to fraudulent transactions, and thus a reduced burden on the network. [0045] In one embodiment, a computer program is provided, and the program is embodied on a computer readable medium. In an example embodiment, the system is executed on a single computer system, without requiring a connection to a server computer. In a further embodiment, the system is being run in a Windows® environment (Windows is a registered trademark of Microsoft Corporation, Redmond, Washington). In yet another embodiment, the system is run on a mainframe environment and a UNIX® server environment (UNIX is a registered trademark of X/Open Company Limited located in Reading, Berkshire, United Kingdom). The application is flexible and designed to run in various different environments without compromising any major functionality. In some embodiments, the system includes multiple components distributed among a plurality of computing devices.
One or more components may be in the form of computer-executable instructions embodied in a computer-readable medium. [0055] ADR computing device 106 is configured to monitor transaction streams (e.g., transaction messages processed over payment processing network 102, such as authorization request messages and/or account status inquiries) using artificial intelligence and/or machine learning algorithms to detect a BIN attack. The artificial intelligence and/or machine learning algorithms may include one or more detection models 112 trained to identify anomalously high levels of transaction traffic in a common account range or with a common BIN (e.g., a common BIN 56). In particular, a standard or expected velocity associated with any BIN may be pre-defined, stored (e.g., in database 108), and provided to detection models 112. These standard velocities may be determined and pre-defined based upon analysis of a plurality of historical transactions (e.g., hundreds, thousands, tens of thousands, hundreds of thousands, etc., of historical transactions) initiated using PANs sharing a same BIN. [0076] … Memory area 404 is any device allowing information such as executable instructions and/or written works to be stored and retrieved. Memory area 404 may include one or more computer readable media. [0085] As used herein, "machine learning" refers to statistical techniques to give computer systems the ability to "learn" (e.g., progressively improve performance on a specific task) with data, without being explicitly programmed for that specific task. "Artificial intelligence" refers to computer-executed techniques that allow a computer to interpret external data, "learn" from that data, and apply that knowledge to a particular end. Artificial intelligence may include, for example, neural networks used for predictive modelling. 
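The velocity check described in the quoted ¶[0055] (pre-defined per-BIN baseline velocities derived from historical transactions, compared against observed transaction traffic) can be sketched as follows. The function names, the normalization, and the 5x anomaly multiplier are illustrative assumptions, not details from the specification, which instead contemplates trained detection models.

```python
# Minimal sketch of the per-BIN velocity comparison described in [0055].
# The BIN is the first 6 digits of the PAN. The normalization and the 5x
# multiplier are hypothetical stand-ins for the spec's trained models.
from collections import Counter


def baseline_velocities(historical_pans):
    """Derive an expected per-window transaction count for each BIN
    from a list of historical PANs (crude illustrative normalization)."""
    counts = Counter(pan[:6] for pan in historical_pans)
    windows = max(len(historical_pans) / 1000, 1)
    return {bin_: c / windows for bin_, c in counts.items()}


def detect_bin_attack(window_pans, baselines, multiplier=5.0):
    """Return BINs whose observed velocity in the current window exceeds
    `multiplier` times the pre-defined baseline (a possible BIN attack)."""
    observed = Counter(pan[:6] for pan in window_pans)
    return [
        bin_ for bin_, count in observed.items()
        if count > multiplier * baselines.get(bin_, 1.0)
    ]
```

The sketch mirrors the two-phase structure the specification describes: baselines are pre-computed from historical PANs sharing a BIN, then live traffic is compared against them.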
[0086] As will be appreciated based on the foregoing specification, the above-discussed embodiments of the disclosure may be implemented using computer programming or engineering techniques including computer software, firmware, hardware or any combination or subset thereof. Any such resulting program, having computer readable and/or computer-executable instructions, may be embodied or provided within one or more computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed embodiments of the disclosure. The computer readable media may be, for instance, a fixed (hard) drive, diskette, optical disk, magnetic tape, semiconductor memory such as read-only memory (ROM) or flash memory, etc., or any transmitting/receiving medium such as the Internet or other communication network or link. The article of manufacture containing the computer code may be made and/or used by executing the instructions directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network. [0087] As used herein, the term "non-transitory computer-readable media" is intended to be representative of any tangible computer-based device implemented in any method or technology for short-term and long-term storage of information, such as, computer-readable instructions, data structures, program modules and sub-modules, or other data in any device. Therefore, the methods described herein may be encoded as executable instructions embodied in a tangible, non-transitory, computer readable medium, including, without limitation, a storage device and/or a memory device. Such instructions, when executed by a processor, cause the processor to perform at least a portion of the methods described herein. 
Moreover, as used herein, the term "non-transitory computer-readable media" includes all tangible, computer-readable media, including, without limitation, non-transitory computer storage devices, including, without limitation, volatile and nonvolatile media, and removable and non-removable media such as a firmware, physical and virtual storage, CD-ROMs, DVDs, and any other digital source such as a network or the Internet, as well as yet to be developed digital means, with the sole exception being a transitory, propagating signal. US Pub No. 2021/0312451 A1 by Allbright et al.: para 0017-0018, para 0037, para 0042-0043, para 0057, para 0062, para 0064; US Pub No. 2020/0342453 A1 by Braundmeier. US Pub No. 2019/0073647 A1 by Zoldi et al.: “it is well known that the plastic cards are targets for fraudsters who may obtain the card information illegally, often purchasing within BIN ranges (the BIN being the first 6 digits of the payment card and uniquely associated with the card issuer), and use the stolen card details for fraudulent purchases…Numerous algorithms and techniques have been utilized in the card transaction field aimed at detecting fraudulent payment card transactions. In general data mining algorithms are applied on historical transaction datasets, and artificial intelligence models are developed from the transactional patterns of legitimate and fraudulent transactions of the payment cards. One of the most prominent models … utilizes transaction profiling and neural network classification models for the majority of card issuers worldwide to detect fraudulent payment card transactions” (¶ 0002-0003). The prior art teaches “detection of cards purchased and in-play revolves around aggregate abnormality patterns associated with particular groups of cards at the BIN/ZIP level. 
What is needed are methods to include such information in risk assessment and detection to assist in the detection and measurement of payment card fraud risk” (¶ 0008); para 0011, wherein the prior art teaches identifying whether a group of cards shows abnormalities in spending behaviors; US Patent No. 8,924,279 B2 by Liu et al., col. 10: “The issuer may specify that the business rules optimizer service focus its analysis upon a particular time period, a program, debit products only (i.e., debit cards instead of credit cards), a particular product (i.e., Products--VISA classic gold), one or more Bank Identification numbers (BIN) for the issuer, a group of BINs (BID) of the issuer, a particular geographic location, merchants dealing only in certain commodities, etc. Also, transactions of a certain category of peer-issuers may be part of the issuer's desired filtering criteria. The examination of peer-issuer transactions may help to better stop a type of fraud upon the issuer that has been tried upon one or more of its peer-issuers, as reflected in historical transaction data, thereby revealing patterns and trends of fraud.”; US Pub No. 2014/0114840 A1 by Arnold et al.: see para 0017-0020; US Pub No. 2010/0057623 A1 by Kapur et al.: para 0095-0096.

The instant application, therefore, still appears only to implement the abstract ideas in the particular technological environments using what are generic components and functions in the related arts. The claim is not patent eligible. The remaining dependent claims, which impose additional limitations, also fail to claim patent-eligible subject matter because the limitations cannot be considered statutory. In reference to claims 16-20, these dependent claims have also been reviewed with the same analysis as independent claim 15. Dependent claim 16 is directed toward detecting that an identifier matches a flagged identifier and initiating an authentication procedure (sales activity and risk mitigation). 
Dependent claim 17 is directed toward a flag indicating fraudulent activity (risk mitigation). Dependent claim 18 is directed toward messages comprising authorization messages (sales activity). Dependent claim 19 is directed toward storing identifiers associated with flagged messages (insignificant extra-solution activity and sales activity). Dependent claim 20 is directed toward querying anomalous activity to retrieve messages initiated during a time period (insignificant extra-solution activity), extracting an identifier and identifying the respective issuer associated with the identifier (collecting and analyzing data for a sales activity), and generating an alert (risk mitigation and transaction activity). The dependent claim(s) have been examined individually and in combination with the preceding claims; however, they do not cure the deficiencies of claim 15. Where all claims are directed to the same abstract idea, “addressing each claim of the asserted patents [is] unnecessary.” Content Extraction & Transmission LLC v. Wells Fargo Bank, Nat'l Ass'n, 776 F.3d 1343, 1348 (Fed. Cir. 2014). If applicant believes the dependent claims 16-20 are directed towards patent-eligible subject matter, they are invited to point out the specific limitations in the claims that are directed towards patent-eligible subject matter.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: EP 3414670 B1 by Ramos et al.; US Patent No. 10,504,076 B2 by Lorberg et al.; US Patent No. 10,375,078 B2 by Burke et al. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARY M GREGG, whose telephone number is (571) 270-5050. The examiner can normally be reached M-F, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christine Behncke, can be reached at 571-272-8103. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MARY M GREGG/ Examiner, Art Unit 3695

Prosecution Timeline

Jan 06, 2023
Application Filed
May 22, 2024
Non-Final Rejection — §101, §103
Aug 28, 2024
Response Filed
Oct 28, 2024
Final Rejection — §101, §103
Jan 28, 2025
Request for Continued Examination
Jan 29, 2025
Response after Non-Final Action
Mar 08, 2025
Non-Final Rejection — §101, §103
Jun 13, 2025
Response Filed
Aug 08, 2025
Final Rejection — §101, §103
Nov 13, 2025
Request for Continued Examination
Nov 22, 2025
Response after Non-Final Action
Mar 14, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12450653
FIRM TRADE PROCESSING SYSTEM AND METHOD
2y 5m to grant Granted Oct 21, 2025
Patent 12443991
MINIMIZATION OF THE CONSUMPTION OF DATA PROCESSING RESOURCES IN AN ELECTRONIC TRANSACTION PROCESSING SYSTEM VIA SELECTIVE PREMATURE SETTLEMENT OF PRODUCTS TRANSACTED THEREBY BASED ON A SERIES OF RELATED PRODUCTS
2y 5m to grant Granted Oct 14, 2025
Patent 12217312
System and Method for Indicating Whether a Vehicle Crash Has Occurred
2y 5m to grant Granted Feb 04, 2025
Patent 11900469
Point-of-Service Tool for Entering Claim Information
2y 5m to grant Granted Feb 13, 2024
Patent 11861715
System and Method for Indicating Whether a Vehicle Crash Has Occurred
2y 5m to grant Granted Jan 02, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
14%
Grant Probability
28%
With Interview (+14.3%)
5y 3m
Median Time to Grant
High
PTA Risk
Based on 629 resolved cases by this examiner. Grant probability derived from career allow rate.
