Prosecution Insights
Last updated: April 19, 2026
Application No. 18/168,602

METHODS AND SYSTEMS FOR PREDICTING FRAUDULENT TRANSACTIONS BASED ON ACQUIRER-LEVEL CHARACTERISTICS MODELING

Non-Final OA (§101, §103, §112)
Filed: Feb 14, 2023
Examiner: LEE, CLAY C
Art Unit: 3699
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mastercard International Incorporated
OA Round: 3 (Non-Final)
Grant Probability: 54% (Moderate)
OA Rounds: 3-4
To Grant: 4y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% (117 granted / 216 resolved; +2.2% vs TC avg)
Interview Lift: +57.1% (strong), based on resolved cases with interview
Avg Prosecution: 4y 1m (typical timeline); 60 currently pending
Total Applications: 276 (career history, across all art units)
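As a quick arithmetic check on the examiner figures above (a back-of-the-envelope sketch; the report's exact rounding and lift definition are assumptions, not stated in the source):

```python
# Sanity-check the examiner statistics shown above.
# Assumption: the career allowance rate is simply granted / resolved.
granted, resolved = 117, 216
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~54.2%, displayed as 54%

# "+2.2% vs TC avg" implies a Tech Center average of roughly:
tc_avg = allow_rate - 0.022
print(f"Implied TC average: {tc_avg:.1%}")  # ~52.0%
```

The interview-lift figure (+57.1%) is reported without its base rates, so it is left out of the check rather than guessed at.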

Statute-Specific Performance

§101: 32.7% (-7.3% vs TC avg)
§103: 45.9% (+5.9% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 216 resolved cases
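The per-statute deltas above imply a common Tech Center baseline; a small sketch to recover it (assuming each delta is the examiner's rate minus the TC average, which the report does not state explicitly):

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and its reported delta: TC average = rate - delta.
stats = {
    "101": (32.7, -7.3),   # examiner rate %, delta vs TC avg
    "103": (45.9, +5.9),
    "102": (8.2, -31.8),
    "112": (10.5, -29.5),
}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(tc_avg)  # every statute's implied TC average comes out to 40.0 here
```

That all four statutes back out to the same 40.0% is consistent with the dashboard plotting a single Tech Center average line.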

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed August 14, 2025 has been entered. Claims 1, 4, 7, 9-10, 13, 16, and 18 remain pending in the application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 4, 7, 9-10, 13, 16, and 18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Under Step 1 of the Section 101 analysis, Claims 1, 4, 7, and 9 are drawn to a method, which is within the four statutory categories (i.e., a process), and Claims 10, 13, 16, and 18 are drawn to a system, which is within the four statutory categories (i.e., a machine). Since the claims fall within statutory categories, it must be determined whether the claims are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea). Based on consideration of all of the relevant factors with respect to each claim as a whole, claims 1, 4, 7, 9-10, 13, 16, and 18 are determined to be directed to an abstract idea. The rationale for this determination is explained below.

Regarding Claims 1 and 10: Claims 1 and 10 are drawn to an abstract idea without significantly more. 
The claims recite “accessing, by a server system, historical transaction data of payment transactions associated with an acquirer server from a transaction database; determining, by the server system, acquirer features associated with the acquirer server and transaction features associated with an individual payment transaction based on the historical transaction data, the acquirer features including one or more of the following: (i) an average or maximum count of payment transactions processed per day, (ii) a maximum historical transaction amount, (iii) a ratio or count of merchants previously identified as fraudulent, and (iv) a registration-date timestamp of the acquirer server, and the transaction features including one or more of the following: a transaction amount, a transaction timestamp, a merchant identifier, geo-location data, and a payment instrument type; generating, by the server system, a combined feature vector defined as an ordered, fixed-length numeric vector obtained by aggregating the acquirer features and the transaction features; obtaining, by the server system via an embedding layer comprising a rectified-linear-unit (ReLU) layer, the embedding layer being implemented as a fully-connected linear transformation immediately followed by element-wise ReLU activation, a latent representation of the individual payment transaction by transforming the combined feature vector through the embedding layer; training, by the server system, using the latent representation as input, a fraud classifier configured to classify whether the individual payment transaction is fraudulent and an acquirer classifier configured to classify whether the acquirer server is fraudulent, based on the latent representation and a multi-component event-aware loss function, the training comprising: computing, by the server system, the multi-component event-aware loss function based on execution outputs of both the fraud classifier and the acquirer classifier for the individual 
payment transaction, the multi-component event-aware loss function comprising: a recency-based cross-entropy loss component that applies, for each payment transaction used as a training instance, a weight (w_i) inversely proportional to a time interval (Δt_i) between that payment transaction and a reference point, so that more recent transactions receive greater weight, a net-benefit loss component configured to optimize an overall net benefit of fraud detection based on true positives, false positives, and false negatives, a predicted event rate (PER) optimization loss component expressed as (PER - TER)^2, where PER is a ratio of predicted fraud instances to total predictions in a minibatch and TER is a corresponding true-event rate, and an acquirer classification loss component calculated as a cross-entropy term between ground-truth acquirer labels and outputs of the acquirer classifier; and updating, by the server system, network parameters of the fraud classifier, the acquirer classifier, and the embedding layer in a single joint update based on the multi-component event-aware loss function; and deploying, by the server system, the transaction monitoring model to generate a fraud prediction for subsequent payment transactions based on the updated network parameters.”

Under Step 2A, Prong One, the limitations, as underlined above, are processes that, under their broadest reasonable interpretation, cover Certain Methods of Organizing Human Activity such as fundamental economic principles or practices (including hedging, insurance, mitigating risk). For example, but for the “server system”, “acquirer server”, “database”, “timestamp”, “rectified-linear-unit (ReLU) layer”, and “network” language, the underlined limitations in the context of this claim encompass the human activity. The series of steps amounts to typical risk mitigation, because historical transaction data of payment transactions is processed to predict fraudulent transactions. 
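For readers mapping the claim language to an implementation, the multi-component event-aware loss recited in the claim above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the applicant's actual model: the claim does not fix the net-benefit weighting, the recency normalization, or the component weighting, so every name and constant below is hypothetical.

```python
import numpy as np

def relu_embedding(x, W, b):
    """Embedding layer as claimed: a fully-connected linear transform
    immediately followed by element-wise ReLU activation."""
    return np.maximum(0.0, x @ W + b)

def event_aware_loss(p_fraud, y_fraud, p_acq, y_acq, dt,
                     tp, fp, fn, benefit_tp=1.0, cost_fp=0.1, cost_fn=1.0):
    """Unweighted sum of the four claimed loss components (hypothetical)."""
    eps = 1e-9
    # (1) Recency-based cross-entropy: weight w_i inversely proportional
    # to the time interval dt_i, so more recent transactions weigh more.
    w = 1.0 / (dt + eps)
    ce = -(y_fraud * np.log(p_fraud + eps)
           + (1 - y_fraud) * np.log(1 - p_fraud + eps))
    recency_loss = float(np.mean(w * ce))
    # (2) Net-benefit component: negate the net benefit so that
    # minimizing the loss maximizes the benefit of fraud detection.
    nb_loss = -(benefit_tp * tp - cost_fp * fp - cost_fn * fn)
    # (3) PER optimization: (PER - TER)^2 over the minibatch.
    per = float(np.mean(p_fraud > 0.5))   # predicted event rate
    ter = float(np.mean(y_fraud))         # true event rate
    per_loss = (per - ter) ** 2
    # (4) Acquirer classification: cross-entropy against ground-truth
    # acquirer labels.
    acq_loss = float(np.mean(-(y_acq * np.log(p_acq + eps)
                               + (1 - y_acq) * np.log(1 - p_acq + eps))))
    return recency_loss + nb_loss + per_loss + acq_loss
```

In the claimed arrangement, the embedding layer, the fraud classifier, and the acquirer classifier would all be updated in a single joint backward pass over this combined scalar loss.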
Under Step 2A, Prong Two, this judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements – “A computer-implemented method of training a transaction monitoring model for predicting fraud in payment transactions, the method, comprising:”, “A server system for training a transaction monitoring model for predicting fraud in payment transactions, the server system comprising: at least one processor; and a memory storing computer-executable instructions that when executed by the at least one processor, cause the at least one processor to perform operations comprising:”, “server system”, “acquirer server”, “database”, “timestamp”, “rectified-linear-unit (ReLU) layer”, and “network”. The additional elements are recited at a high level of generality (i.e., performing generic functions of an interaction) such that they amount to no more than mere instructions to apply the exception using a generic computer component, merely implementing an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. 
Additionally, regarding the specification and claims: there is no improvement in the functioning of a computer or to any other technology or technical field; there is no application of the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; there is no implementation of the judicial exception with, or use of the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim; there is no transformation or reduction of a particular article to a different state or thing; and there is no application of the judicial exception in some other meaningful way beyond generally linking its use to a particular technological environment, such that the claim as a whole would be more than a drafting effort designed to monopolize the exception. Accordingly, these additional elements, individually or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible. 
Regarding Claims 4, 7, 9, 13, 16, and 18: Dependent claims 4, 7, 9, 13, 16, and 18 include additional limitations, for example, “acquirer server” (Claims 4 and 13); “acquirer server” (Claims 7 and 16); and “server system” and “payment server” (Claims 9 and 18), but none of these limitations are deemed significantly more than the abstract idea because, as stated above, they require no more than generic computer structures or signals to be executed, and do not recite any improvements to the functioning of a computer or to any other technology or technical field. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and their collective functions merely provide conventional computer implementation or implement the judicial exception on a generic computer. Therefore, whether taken individually or as an ordered combination, claims 4, 7, 9, 13, 16, and 18 are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 
112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 4, 7, 9-10, 13, 16, and 18 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claims 1 and 10 recite the limitations “expressed as (PER - TER)^2, where PER is a ratio of predicted fraud instances to total predictions in a minibatch and TER is a corresponding true-event rate” and “calculated as a cross-entropy term between ground-truth acquirer labels and outputs of the acquirer classifier”, which are not described in the specification. The specification does not disclose the (PER - TER)^2 expression, a minibatch, or ground-truth acquirer labels. Claims 4, 7, 9, 13, 16, and 18 are rejected due to their dependency from claim 1 or 10. Appropriate correction/clarification is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. 
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 4, 7, 9-10, 13, 16, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Adjaoute (US 20210248614 A1) in view of Chaturvedi (US 20210406896 A1), and in further view of Leptiev (US 20230385844 A1). 
Regarding Claims 1 and 10, Adjaoute teaches A computer-implemented method of training a transaction monitoring model for predicting fraud in payment transactions, the method, comprising (Adjaoute: Abstract; Paragraph(s) 0026): A server system for training a transaction monitoring model for predicting fraud in payment transactions, the server system comprising: at least one processor; and a memory storing computer-executable instructions that when executed by the at least one processor, cause the at least one processor to perform the operations comprising (Adjaoute: Abstract; Paragraph(s) 0084-0085, 0089, 0026-0027, 0103-0104): accessing, by a server system, historical transaction data of payment transactions associated with an … server from a transaction database (Adjaoute: Abstract; Paragraph(s) 0027, 0053, 0057, 0061 teach(es) The trainable general payment fraud models are trained with supervised and unsupervised data to produce a trained payment fraud model reflecting, for example, accountholder and historical transaction data); determining, by the server system, … features associated with the … server and transaction features associated with an individual payment transaction based on the historical transaction data (Adjaoute: Paragraph(s) 0089, 0093, 0098 teach(es) a process for cross-channel financial fraud protection comprises training a variety of real-time, risk-scoring fraud models with training data selected for each from a common transaction history to specialize each member in the monitoring of a selected channel), the … features including one or more of the following: (i) an average or maximum count of payment transactions processed per day, (ii) a maximum historical transaction amount, (iii) a ratio or count of merchants previously identified as fraudulent, and (iv) a registration-date timestamp of the acquirer server (Adjaoute: Paragraph(s) 0075, 0093 teach(es) a particular account number x shows a pattern of purchases at the local Home Depot and 
Costco that average $100-$300, then an instantaneous incoming real-time transaction record 502 that reports another $200 purchase at the local Costco will raise no alarms. But a sudden, unique, inexplicable purchase for $1250 at a New York Jeweler will and should throw more than one exception; Each profile for each smart-agent comprises knowledge extracted field-by-field, such as merchant category code (MCC), time, amount for an mcc over a period of time, recursive profiling, zip codes, type of merchant, monthly aggregation, activity during the week, weekend, holidays, Card not present (CNP) versus card present (CP), domestic versus cross-border, etc.), and the transaction features including one or more of the following: a transaction amount, a transaction timestamp, a merchant identifier, geo-location data, and a payment instrument type (Adjaoute: Paragraph(s) 0093, 0066 teach(es) merchant category code (MCC), time, amount for an mcc over a period of time, recursive profiling, zip codes, type of merchant, monthly aggregation, activity during the week, weekend, holidays, Card not present (CNP) versus card present (CP), domestic versus cross-border, etc.); generating, by the server system, a combined feature … by aggregating the … features and the transaction features (Adjaoute: Paragraph(s) 0098, 0102, 0057-0058 teach(es) Neural Networks only learn from past transactions and cannot detect any new fraud schemes (that arise daily) if the neural network is not re-trained with this type of fraud; Incremental learning technologies are embedded in the machine algorithms and smart-agent technology to continually re-train from any false positives and negatives that occur along the way; Model trainer can be fed a very complete, comprehensive transaction history that can include both supervised and unsupervised data; The resulting filtered training data will produce a trained model that will be highly specific and sensitive to fraud in the filtered category); obtaining, by 
the server system via an embedding layer …, a latent representation of the individual payment transaction by transforming the combined feature … through the embedding layer (Adjaoute: Paragraph(s) 0027, 0067 teach(es) The trainable general payment fraud models are trained with supervised and unsupervised data to produce a trained payment fraud model reflecting, for example, accountholder and historical transaction data); training, by the server system, using the latent representation as input, a fraud classifier configured to classify whether the individual payment transaction is fraudulent and an … classifier configured to classify whether the … server is fraudulent, based on the latent representation and a multi-component event-aware loss … (Adjaoute: Paragraph(s) 0098, 0102, 0011, 0027, 0086 teach(es) Neural Networks only learn from past transactions and cannot detect any new fraud schemes (that arise daily) if the neural network is not re-trained with this type of fraud; Incremental learning technologies are embedded in the machine algorithms and smart-agent technology to continually re-train from any false positives and negatives that occur along the way; an artificial intelligence fraud management system of the present invention comprises a real-time analytics process for analyzing the behavior of a user from the transaction events they generate over a network. 
An initial population of smart agent profiles is stored in a computer file system and more smart agent profiles are added as required as transaction data is input), the training comprising: computing, by the server system, the multi-component event-aware loss … based on execution outputs of both the fraud classifier and the … classifier for the individual payment transaction (Adjaoute: Paragraph(s) 0026, 0092 teach(es) building trainable general payment fraud models that integrate several, but otherwise blank artificial intelligence classifiers, e.g., neural networks, case based reasoning, decision trees, genetic algorithms, fuzzy logic, and rules and constraints), the multi-component event-aware loss … comprising: a recency-based … loss component that applies, for each payment transaction used as a training instance, a weight (w_i) inversely proportional to a time interval (Δt_i) between that payment transaction and a reference point, so that more recent transactions receive greater weight (Adjaoute: Paragraph(s) 0081, 0078, 0098, 0107, 0068, 0102 teach(es) A search process accepts a search key from register and reports any matches in the 15-minute window with an account activity velocity counter. Too much very recent activity can hint there is a fraudster at work, or it may be normal behavior; Neural network technology also has limits; it uses historical data to create a matrix of weights for future data classification. The Neural network will use as input (first layer) the historical transactions and the classification (for fraud or not as an output). 
Neural Networks only learn from past transactions and cannot detect any new fraud schemes (that arise daily) if the neural network is not re-trained with this type of fraud; What was purchased and how long ago a transaction for a particular accountholder occurred, and when their other recent transactions occurred, can provide valuable insights into whether the transactions the accountholder is presently engaging in are normal and in character, or deviating), a net-benefit loss component configured to optimize an overall net benefit of fraud detection based on true positives, false positives, and false negatives, …, and an … classification loss component calculated as … term between ground-truth acquirer labels and outputs of the acquirer classifier (Adjaoute: Paragraph(s) 0098, 0102, 0106, 0003-0004 teach(es) Neural Networks only learn from past transactions and cannot detect any new fraud schemes (that arise daily) if the neural network is not re-trained with this type of fraud; Incremental learning technologies are embedded in the machine algorithms and smart-agent technology to continually re-train from any false positives and negatives that occur along the way; The meta-data block comprises the oldest one's time field, and a record length field. Transaction events are timestamped, recorded, and indexed by a specified atomic interval, e.g., ten minute intervals are typical, which is six hundred seconds); and updating, by the server system, network parameters of the fraud classifier, the … classifier, and the embedding layer in a single joint update based on the multi-component event-aware loss … (Adjaoute: Paragraph(s) 0098, 0102, 0090 teach(es) Data mining logic incrementally changes the decision trees by creating a new link or updating the existing links and weights. Neural networks update the weight matrix, and case-based reasoning logic updates generic cases or creates new ones. 
Smart-agents update their profiles by adjusting the normal/abnormal thresholds, or by creating exceptions; During real-time use, the real-time, long-term, and recursive profiles for each accountholder in each and all of the real-time, risk-scoring fraud models are maintained and updated with newly arriving data. If, during real-time use, a compromise, takeover, or suspicious activity of the accountholder's account in any one channel is detected, then the real-time, long-term, and recursive profiles for each accountholder in each and all of the other real-time, risk-scoring fraud models are updated to further include an elevated risk flag); and deploying, by the server system, the transaction monitoring model to generate a fraud prediction for subsequent payment transactions based on the updated network parameters (Adjaoute: Paragraph(s) 0027, 0064 teach(es) This trained payment fraud model can then be sold as a computer program library or a software-as-a-service applied payment fraud model). However, Adjaoute does not explicitly teach acquirer, combined feature vector defined as an ordered, fixed-length numeric vector obtained, an embedding layer comprising a rectified-linear-unit (ReLU) layer, the embedding layer being implemented as a fully-connected linear transformation immediately followed by element-wise ReLU activation, and a predicted event rate (PER) optimization loss component expressed as (PER - TER)^2, where PER is a ratio of predicted fraud instances to total predictions in a minibatch and TER is a corresponding true-event rate. 
Chaturvedi from same or similar field of endeavor teaches acquirer (Chaturvedi: Paragraph(s) 0016, 0041, 0069, 0012-0013 teach(es) The electronic transaction system also includes an acquirer host device, an issuer host device and a payment network, communicably coupled to the service provider server via the network; electronic payment providers can increase the success rate of authenticating transactions and improve the transaction success rate for merchants that are integrated with the billing product, including different types of transactions, such as a recurrent transaction. If the service provider passes certain risk flags to an issuer, the risk flag can indicate to the issuer that a transaction is a recurring payment. Issuers can reduce their amount of risk checks and even if a payment instrument (such as a credit card) is expired, the issuer can pull the charge on a newly-issued payment card), combined feature vector defined as an ordered, fixed-length numeric vector obtained (Chaturvedi: Paragraph(s) 0085-0086, 0016, 0068-0071, 0021, 0066, 0109 teach(es) The transaction periodicity forecast module can receive transaction details, evaluate the feature vector of those transactions, and mark that data as positive use cases with supervised learning; The electronic transaction system also includes an acquirer host device, an issuer host device and a payment network, communicably coupled to the service provider server via the network. 
In some implementations, the service provider server may be communicably coupled directly to each of the acquirer host device and/or the issuer host device; the transaction periodicity forecast module may train, using at least one processor of the service provider server, the machine learning-trained classifier with a training dataset), an embedding layer comprising a rectified-linear-unit (ReLU) layer, the embedding layer being implemented as a fully-connected linear transformation immediately followed by element-wise ReLU activation (Chaturvedi: Paragraph(s) 0099 teach(es) an activation function (e.g., sigmoid, rectified linear unit (ReLU)) may be applied to the machine learning-trained classifier during the training phase so that the output values are bounded between 0 and 1), a predicted event rate (PER) optimization loss component expressed as (PER - TER)^2, where PER is a ratio of predicted fraud instances to total predictions in a minibatch and TER is a corresponding true-event rate (Chaturvedi: Paragraph(s) 0072-0074, 0014, 0107, 0096, 0098 teach(es) The machine learning-trained classifier, in one implementation, may be adapted to analyze one or more device features of the communication device and generate a prediction result that indicates a likelihood that the transaction corresponds to a particular type of periodicity; The structure of the machine learning-trained classifier may include a neural network with a particular pattern of layers or number of neurons per layer that are used to provide scoring information, such as a transaction periodicity prediction; the training data set may include annotated or labeled data of particular flagged transactions and/or may be reviewed after processed and classified by the machine learning technique for false positives and/or correctly identified and flagged as a certain transaction periodicity; The automated script can attempt to emulate this processing based on the failure rates (referred to as the probability of 
the transaction attempting a retry for authentication, and after the first retry there is a probability of it failing and then attempting a second retry); the transaction evaluation engine may receive a prediction value of what is the probability that this tag is the correct value or not, and then the transaction evaluation engine may evaluate the error rate between the presented value and the actual value). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Adjaoute to incorporate the teachings of Chaturvedi for acquirer, combined feature vector, an embedding layer comprising a rectified-linear-unit (ReLU) layer, the embedding layer being implemented as a fully-connected linear transformation immediately followed by element-wise ReLU activation, and a predicted event rate (PER) optimization loss component expressed as (PER - TER)^2, where PER is a ratio of predicted fraud instances to total predictions in a minibatch and TER is a corresponding true-event rate. There is motivation to combine Chaturvedi into Adjaoute because Chaturvedi’s teachings of acquirer, feature vector, and PER optimization would facilitate handling fraudulent situations or transactions (Chaturvedi: Paragraph(s) 0016, 0107). However, the combination of Adjaoute and Chaturvedi does not explicitly teach a multi-component event-aware loss function, and cross-entropy loss component. 
Leptiev from same or similar field of endeavor teaches a multi-component event-aware loss function (Leptiev: Paragraph(s) 0126-0129), and cross-entropy loss component (Leptiev: Paragraph(s) 0126-0129, 0132 teach(es) in embodiments where the dispute-evaluator machine-learning model is a neural network, the provisional credit determination system can utilize a cross-entropy loss function, an L1 loss function, or a mean squared error loss function as the loss function; the provisional credit determination system performs the model fitting by modifying internal parameters (e.g., weights) of the dispute-evaluator machine-learning model to reduce the measure of loss for the loss function). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Adjaoute and Chaturvedi to incorporate the teachings of Leptiev for a multi-component event-aware loss function, and cross-entropy loss component. There is motivation to combine Leptiev into the combination of Adjaoute and Chaturvedi because Leptiev’s teachings of loss function and cross-entropy loss component would facilitate minimizing a loss from the loss function (Leptiev: Paragraph(s) 0127, 0130). Regarding Claims 4 and 13, the combination of Adjaoute, Chaturvedi, and Leptiev teaches all the limitations of claims 1 and 10 and acquirer above; however the combination does not explicitly teach wherein the acquirer classification loss component represents a value calculated based on the acquirer features associated with the acquirer server. 
Chaturvedi further teaches wherein the acquirer classification loss component represents a value calculated based on the acquirer features associated with the acquirer server (Chaturvedi: Paragraph(s) 0041, 0016 further teach(es) The acquirer host device includes an acquirer application and a network interface component, and is communicably coupled to the transaction processing server and/or the merchant server through the network interface component over the network. The acquirer host device may be a server operated by an acquiring bank. An acquiring bank is a financial institution that accepts payments on behalf of merchants. For example, a merchant may establish an account at an acquiring bank to receive payments made via various payment cards through the acquirer application). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Adjaoute, Chaturvedi, and Leptiev to incorporate the teachings of Chaturvedi for wherein the acquirer classification loss component represents a value calculated based, at least in part, on the acquirer features associated with the acquirer server. There is motivation to combine Chaturvedi into the combination of Adjaoute, Chaturvedi, and Leptiev because Chaturvedi’s teachings of acquirer would facilitate handling fraudulent situations or transactions (Chaturvedi: Paragraph(s) 0016). 
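For orientation, the two loss components whose form this action actually spells out, the cross-entropy term and the predicted-event-rate penalty (PER - TER)^2, can be sketched in plain Python. This is an illustrative reading of the claim language as quoted by the examiner, not code from the application; the 0.5 decision threshold used to count "predicted fraud instances" and the `per_weight` scaling factor are assumptions, and the claimed recency and acquirer-classification components, whose form the action does not recite, are omitted.

```python
import math

def event_aware_loss(probs, labels, per_weight=1.0):
    """Sketch of a cross-entropy term plus a (PER - TER)^2 penalty
    computed over one minibatch of fraud predictions."""
    n = len(probs)
    # Binary cross-entropy component over the minibatch
    ce = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
              for p, y in zip(probs, labels)) / n
    # PER: ratio of predicted fraud instances to total predictions
    # (a 0.5 decision threshold is assumed here)
    per = sum(1 for p in probs if p >= 0.5) / n
    # TER: the corresponding true-event rate in the minibatch
    ter = sum(labels) / n
    return ce + per_weight * (per - ter) ** 2
```

When the predicted fraud rate matches the true rate, the penalty vanishes and only cross-entropy remains; a model that over-flags or under-flags relative to the batch's true rate pays the squared-difference penalty, which matches the behavior the applicant attributes to the PER term.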
Regarding Claims 7 and 16, the combination of Adjaoute, Chaturvedi, and Leptiev teaches all the limitations of claims 1 and 10 and acquirer above; and Adjaoute further teaches wherein the historical transaction data comprises information of both fraudulent and non-fraudulent payment transactions performed at the … server (Adjaoute: Paragraph(s) 0087, 0089 teach(es) A suspicious or outright fraudulent transaction detected by a first selected applied fraud channel model for a particular customer in one channel is cause for a risk adjustment for that same customer in all the other applied fraud models for the other channels).

Regarding Claims 9 and 18, the combination of Adjaoute, Chaturvedi, and Leptiev teaches all the limitations of claims 1 and 10 above; and Adjaoute further teaches wherein the server system is a payment server (Adjaoute: Paragraph(s) 0084-0085 teach(es) A real-time cross-channel monitoring payment network server can be constructed by running several of these selected applied fraud models in parallel).

Response to Arguments

Applicant's arguments filed August 14, 2025 have been fully considered but they are not persuasive.

Regarding applicant’s argument under Claim Rejections - 35 USC § 101 that “the joint update mechanism eliminates redundant forward passes and back-propagation steps, thereby reducing inference latency, while the recency and PER terms dynamically bias the gradient so that the model adapts to fresh fraud patterns without over flagging legitimate transactions,” examiner respectfully argues that such technical details and contexts are not recited in the claims, enough to provide any improvements in the functioning of the computer and other technology or technical fields. The embedding layer including a ReLU layer, two distinct classifiers, and their technical context are recited without technical details.
Therefore, the features such as “reducing inference latency” and “adapting to fresh fraud patterns without over flagging” can be interpreted as intended use. In addition, elements including ReLU-based embedding layer feeding a shared latent space, dual classifiers that simultaneously consume that latent space, and four term event-aware loss whose gradients update all upstream layers in one pass are not sufficient to amount to significantly more than the judicial exception. It is recommended for the applicant to amend the claims further with technical details and contexts of, for example, ReLU layer, two distinct classifiers, latent space, gradients with respect to the loss function, etc.

Regarding applicant’s argument under Claim Rejections - 35 USC § 103 that “Adjaoute fails to show or even hint at the particular claim limitations reciting the embedding-layer transformation of a unified acquirer-plus-transaction feature vector,” examiner respectfully argues that the claims do not recite any sufficient technical details of the latent representation, combined feature vector, unified/shared latent space, etc. enough to overcome the cited references, as stated above with respect to the 103 rejections. It is also noted that updating the network parameters of the fraud classifier, the acquirer classifier, and the embedding layer “in a single joint update” may be obvious, especially when the claims do not recite sufficient technical details and contexts of the above elements. It is recommended for the applicant to amend the claims further with more technical details and contexts of elements and related functions such as a single feature vector, latent representation, unified latent space, multi-loss design, transforming of the combined feature vector, execution outputs of the classifiers, etc.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Jin (US 20150379517 A1) teaches System And Method For Enhanced Detection Of Fraudulent Electronic Transactions.
Branco (US 20210248448 A1) teaches Interleaved Sequence Recurrent Neural Networks For Fraud Detection, including fully connected layer and cross-entropy.
Ameisen (US 20240095741 A1) teaches Systems And Methods For Block Rate Matching When Updating A Machine Learning Model Based Fraud Detection System, including fraudulent transactions, fraud detection, analyzing attributes associated with a transaction, historical, parameter, loss, recent, machine learning, and training.
Babu (US 20230186308 A1) teaches Utilizing A Fraud Prediction Machine-Learning Model To Intelligently Generate Fraud Predictions For Network Transactions, including history, event, loss, train, machine learning, bank system to identify one or more of transaction data, network account data, and device data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLAY LEE whose telephone number is (571)272-3309. The examiner can normally be reached Monday-Friday 8-5pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neha Patel can be reached on (571)270-1492. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CLAY C LEE/
Primary Examiner, Art Unit 3699
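For readers mapping the claim language in the action above, the architecture elements the examiner repeatedly references (a combined acquirer-plus-transaction feature vector, an embedding layer implemented as a fully-connected linear transformation immediately followed by element-wise ReLU activation, and two distinct classifiers consuming the shared latent representation) can be sketched in plain Python. Every weight, bias, and dimension below is a hypothetical illustration, not a value from the application; only the forward pass is shown, with the claimed single joint update noted in comments.

```python
def relu(v):
    # Element-wise ReLU activation applied to the linear output
    return [max(0.0, x) for x in v]

def linear(W, b, x):
    # Fully-connected linear transformation: W @ x + b
    return [sum(w * xi for w, xi in zip(row, x)) + bj
            for row, bj in zip(W, b)]

# Combined acquirer-plus-transaction feature vector (hypothetical values)
features = [0.5, -1.2, 3.0]

# Embedding layer: linear transform immediately followed by ReLU,
# producing the shared latent representation
W_embed = [[0.2, -0.1, 0.4],
           [0.0,  0.3, -0.2]]
b_embed = [0.1, -0.05]
latent = relu(linear(W_embed, b_embed, features))

# Two distinct classifiers consume the same latent representation.
# In training, gradients from the multi-component loss would flow back
# through both heads and the embedding layer in a single joint update.
fraud_score = linear([[1.0, -0.5]], [0.0], latent)[0]
acquirer_score = linear([[-0.3, 0.8]], [0.2], latent)[0]
```

The point of contention in the §103 response is visible here: the shared `latent` list is the "unified latent space," and the rejection turns on how much structural detail of this pipeline the claims themselves recite.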

Prosecution Timeline

Feb 14, 2023
Application Filed
Dec 17, 2024
Non-Final Rejection — §101, §103, §112
Mar 18, 2025
Response Filed
May 21, 2025
Final Rejection — §101, §103, §112
Jul 16, 2025
Response after Non-Final Action
Aug 14, 2025
Request for Continued Examination
Aug 19, 2025
Response after Non-Final Action
Jan 30, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597019
Post-Provisioning Authentication Protocols
2y 5m to grant Granted Apr 07, 2026
Patent 12591639
RESOURCE BASED LICENSING
2y 5m to grant Granted Mar 31, 2026
Patent 12572907
UNIVERSAL PAYMENT CHANNEL
2y 5m to grant Granted Mar 10, 2026
Patent 12561654
SYSTEMS AND METHODS FOR EXECUTING REAL-TIME ELECTRONIC TRANSACTIONS USING A ROUTING DECISION MODEL
2y 5m to grant Granted Feb 24, 2026
Patent 12561712
LOYALTY POINT DISTRIBUTIONS USING A DECENTRALIZED LOYALTY ID
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
54%
Grant Probability
99%
With Interview (+57.1%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 216 resolved cases by this examiner. Grant probability derived from career allow rate.
