Prosecution Insights
Last updated: April 19, 2026
Application No. 17/697,724

SYSTEM AND METHOD FOR PREDICTING TRANSACTIONAL BEHAVIOR IN A NETWORK

Final Rejection: §101, §103, §112
Filed: Mar 17, 2022
Examiner: SMITH, KEVIN LEE
Art Unit: 2122
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Mastercard International Incorporated
OA Round: 2 (Final)
Grant Probability: 37% (At Risk)
OA Rounds: 3-4
To Grant: 4y 8m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 37% (49 granted / 134 resolved; -18.4% vs TC avg). Grants only 37% of cases.
Interview Lift: +18.0% (a strong lift, measured across resolved cases with interview)
Avg Prosecution: 4y 8m (typical timeline); 45 applications currently pending
Total Applications: 179 (career history, across all art units)
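The headline numbers in the cards above are simple arithmetic over the examiner's career counts. A minimal Python sketch reproducing them (figures taken from this page; variable names are illustrative, not from any analytics API):

```python
# Reproduce the examiner metrics above from the raw career counts.
granted, resolved, pending, total = 49, 134, 45, 179

# Sanity check: resolved plus currently pending accounts for every application.
assert resolved + pending == total

allow_rate = granted / resolved           # career allow rate
baseline, with_interview = 0.37, 0.55     # grant probability without / with interview
lift = with_interview - baseline          # interview lift

print(f"allow rate: {allow_rate:.1%}")    # 36.6%, displayed rounded to 37%
print(f"interview lift: +{lift:.0%}")     # +18%
```

Note that 49/134 is 36.6%, which the dashboard rounds to the displayed 37%, and the +18-point interview lift is exactly the gap between the 55% with-interview and 37% baseline grant probabilities.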

Statute-Specific Performance

§101: 30.7% (-9.3% vs TC avg)
§103: 36.4% (-3.6% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 17.3% (-22.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 134 resolved cases
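The per-statute deltas let one back out the Tech Center baseline (the black line). A quick arithmetic check, assuming the displayed numbers, shows every statute implies the same ~40.0% TC average:

```python
# Back out the implied Tech Center average from each statute's allow rate
# and its displayed delta vs. the TC average (rate - delta = TC baseline).
stats = {
    "§101": (30.7, -9.3),
    "§103": (36.4, -3.6),
    "§102": (10.1, -29.9),
    "§112": (17.3, -22.7),
}
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)
assert all(v == 40.0 for v in implied.values())  # consistent 40.0% baseline
```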

Office Action

§101 §103 §112
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Applicant’s submission filed 04 December 2025 [hereinafter Response] has been entered, wherein: Claims 1-5, 7-13, and 15-20 have been amended. Claims 1-20 are pending. Claims 1-20 are rejected.

Claim Objections

3. The objection to claims 8 and 16 because of informalities is WITHDRAWN in view of the Applicant’s amendments to the claims.

Claim Rejections - 35 U.S.C. § 112

4. The rejection of claims 1-20 under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention is WITHDRAWN in view of the Applicant’s amendments to the claims.

Claim Rejections - 35 U.S.C. § 101

5. 35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

6. Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites a system, which is a machine, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitation of “[(b)] predict information for the account based on the density function of the LSTM-RNN.” The limitation of “[(b)] predict” can practically be performed in the human mind, including, for example, by observations, evaluations, judgments, and opinions, and accordingly is a mental process (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Thus, claim 1 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include a “memory configured to store instructions of a predictor model” and a “transaction manager comprising a processor configured to execute the instructions,” which are recited at a high level of generality and are generic computer components used to implement the abstract idea (MPEP § 2106.05(f)) that do not serve to integrate the abstract idea into a practical application. The claim also recites a “predictor model” and an “LSTM-RNN,” which are likewise recited at a high level of generality and thus are generic computer components used to implement the abstract idea (MPEP § 2106.05(f)) that do not serve to integrate the abstract idea into a practical application. The claim recites the limitation of “[(a)] train a long short-term memory recurrent neural network (LSTM RNN) to learn transaction patterns of accounts corresponding to different merchant category codes (MCCs) and transaction amounts by feeding at each time step, transactions associated with merchants and cardholders, the merchants and contexts of the merchants and the cardholders to the LSTM RNN,” where the additional element of “[(a)] train” is an activity of using the generic computer components (the LSTM-RNN) to implement the abstract idea and does not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)).
The claim also recites more details or specifics of the additional element of “[(a)] train”: “[(a.1)] wherein the transaction patterns correspond to different merchant category codes (MCCs) and transaction amount bins,” and “[(a.2)] the contexts are vectors representing encoded parameters including parameter a corresponding to a category class, parameter b corresponding to a class of progress percentage, parameter c corresponding to a class of proximity, and category d corresponding to a class of historical redemption,” which accordingly are merely more specific to the additional element. The claim also recites “[(c)] automatically transmit a notification of a feature to an owner of the account based on the changing of the state of the account,” which is post-processing, insignificant extra-solution activity of result transmission (MPEP § 2106.05(g)) that does not serve to integrate the abstract idea into a practical application. The claim also recites more details or specifics of the additional element of “[(c)] automatically transmit a notification”: “[(c.1)] wherein the information includes a time of a future transaction linked to a correlated marker of the payment network,” which accordingly is merely more specific to the additional element. Therefore, claim 1 is directed to the abstract idea.

Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim beyond the identified judicial exception include a “memory configured to store instructions of a predictor model” and a “transaction manager comprising a processor configured to execute the instructions,” which are recited at a high level of generality and are generic computer components used to implement the abstract idea (MPEP § 2106.05(f)) that do not amount to significantly more than the abstract idea.
The claim also recites a “predictor model” and an “LSTM-RNN,” which are likewise recited at a high level of generality and thus are generic computer components used to implement the abstract idea (MPEP § 2106.05(f)) that do not amount to significantly more than the abstract idea. The claim recites the limitation of “[(a)] train a long short-term memory recurrent neural network (LSTM RNN) to learn transaction patterns of accounts corresponding to different merchant category codes (MCCs) and transaction amounts by feeding at each time step, transactions associated with merchants and cardholders, the merchants and contexts of the merchants and the cardholders to the LSTM RNN,” where the additional element of “[(a)] train” is an activity of using the generic computer components (the LSTM-RNN) to implement the abstract idea and does not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). The claim also recites more details or specifics of the additional element of “[(a)] train”: “[(a.1)] wherein the transaction patterns correspond to different merchant category codes (MCCs) and transaction amount bins,” and “[(a.2)] the contexts are vectors representing encoded parameters including parameter a corresponding to a category class, parameter b corresponding to a class of progress percentage, parameter c corresponding to a class of proximity, and category d corresponding to a class of historical redemption,” which accordingly are merely more specific to the additional element. The claim also recites “[(c)] automatically transmit a notification of a feature to an owner of the account based on the changing of the state of the account,” which is a well-understood, routine, and conventional activity of receiving or transmitting data over a network (MPEP § 2106.05(d) sub II.i) that does not amount to significantly more than the abstract idea.
The claim also recites more details or specifics of the additional element of “[(c)] automatically transmit a notification”: “[(c.1)] wherein the information includes a time of a future transaction linked to a correlated marker of the payment network,” which accordingly is merely more specific to the additional element. Thus, claim 1 is subject-matter ineligible.

Claim 9 recites a method, which is a process, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitation of “[(b)] predicting information for a user account based on a density function of the LSTM-RNN.” The activity of “[(b)] predicting” can practically be performed in the human mind, including, for example, by observations, evaluations, judgments, and opinions, and accordingly is a mental process (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Thus, claim 9 recites an abstract idea.

Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include an “LSTM-RNN,” which is recited at a high level of generality and thus is a generic computer component used to implement the abstract idea (MPEP § 2106.05(f)) that does not serve to integrate the abstract idea into a practical application.
The claim recites “[(a)] training a long short-term memory recurrent neural network (LSTM RNN) to learn transaction patterns of accounts corresponding to different merchant category codes (MCCs) and transaction amounts by feeding at each time step, transactions associated with merchants and cardholders, the merchants and contexts of the merchants and the cardholders to the LSTM RNN,” where the additional element of “[(a)] training” is an activity of using the generic computer components (the LSTM-RNN) to implement the abstract idea and does not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). The claim recites more details or specifics of the additional element of “[(a)] training”: “[(a.1)] wherein the transaction patterns correspond to different merchant category codes (MCCs) and transaction amount bins,” and “[(a.2)] the contexts are vectors representing encoded parameters including parameter a corresponding to a category class, parameter b corresponding to a class of progress percentage, parameter c corresponding to a class of proximity, and category d corresponding to a class of historical redemption,” which accordingly are merely more specific to the additional element. The claim also recites the limitation of “[(c)] automatically transmitting a notification of a feature to an owner of the account based on the changing of the state of the account,” which is post-processing, insignificant extra-solution activity of result transmission (MPEP § 2106.05(g)) that does not serve to integrate the abstract idea into a practical application. The claim also recites more details or specifics of the additional element of “[(c)] automatically transmitting a notification”: “[(c.1)] wherein the information includes a time of a future transaction linked to a correlated marker of the payment network,” which accordingly is merely more specific to the additional element. Therefore, claim 9 is directed to the abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim beyond the identified judicial exception include an “LSTM-RNN,” which is recited at a high level of generality and thus is a generic computer component used to implement the abstract idea (MPEP § 2106.05(f)) that does not amount to significantly more than the abstract idea. The claim recites “[(a)] training a long short-term memory recurrent neural network (LSTM RNN) to learn transaction patterns of accounts corresponding to different merchant category codes (MCCs) and transaction amounts by feeding at each time step, transactions associated with merchants and cardholders, the merchants and contexts of the merchants and the cardholders to the LSTM RNN,” where the additional element of “[(a)] training” is an activity of using the generic computer components (the LSTM-RNN) to implement the abstract idea and does not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). The claim recites more details or specifics of the additional element of “[(a)] training”: “[(a.1)] wherein the transaction patterns correspond to different merchant category codes (MCCs) and transaction amount bins,” and “[(a.2)] the contexts are vectors representing encoded parameters including parameter a corresponding to a category class, parameter b corresponding to a class of progress percentage, parameter c corresponding to a class of proximity, and category d corresponding to a class of historical redemption,” which accordingly are merely more specific to the additional element.
The claim also recites the limitation of “[(c)] automatically transmitting a notification of a feature to an owner of the account based on the changing of the state of the account,” which is a well-understood, routine, and conventional activity of receiving or transmitting data over a network (MPEP § 2106.05(d) sub II.i) that does not amount to significantly more than the abstract idea. The claim also recites more details or specifics of the additional element of “[(c)] automatically transmitting a notification”: “[(c.1)] wherein the information includes a time of a future transaction linked to a correlated marker of the payment network,” which accordingly is merely more specific to the additional element. Thus, claim 9 is subject-matter ineligible.

Claim 17 recites a non-transitory, computer-readable media, which is a product, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitation of “[(b)] predicting information for the account based on the density function.” The limitation of “[(b)] predicting” can practically be performed in the human mind, including, for example, by observations, evaluations, judgments, and opinions, and accordingly is a mental process (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Thus, claim 17 recites an abstract idea.

Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include the “non-transitory, computer-readable media having computer-readable instructions stored thereon, the computer-readable instructions being capable of being read by a transaction manager configured to execute instructions stored on a memory,” which are recited at a high level of generality and are generic computer components used to implement the abstract idea.
(MPEP § 2106.05(f)); these do not serve to integrate the abstract idea into a practical application. The claim also recites an “LSTM-RNN,” which is recited at a high level of generality and thus is a generic computer component used to implement the abstract idea (MPEP § 2106.05(f)) that does not serve to integrate the abstract idea into a practical application. The claim recites “[(a)] training a long short-term memory recurrent neural network (LSTM RNN) to learn transaction patterns of accounts corresponding to different merchant category codes (MCCs) and transaction amounts by feeding at each time step, transactions associated with merchants and cardholders, the merchants and contexts of the merchants and the cardholders to the LSTM RNN,” where the additional element of “[(a)] training” is an activity of using the generic computer components (the LSTM-RNN) to implement the abstract idea and does not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). The claim recites more details or specifics of the additional element of “[(a)] training”: “[(a.1)] wherein the transaction patterns correspond to different merchant category codes (MCCs) and transaction amount bins,” and “[(a.2)] the contexts are vectors representing encoded parameters including parameter a corresponding to a category class, parameter b corresponding to a class of progress percentage, parameter c corresponding to a class of proximity, and category d corresponding to a class of historical redemption,” which accordingly are merely more specific to the additional element. The claim recites the limitation of “[(c)] automatically transmitting a notification of a feature to an owner of the account based on the changing of the state of the account,” which is post-processing, insignificant extra-solution activity of result transmission (MPEP § 2106.05(g)) that does not serve to integrate the abstract idea into a practical application.
The claim also recites more details or specifics of the additional element of “[(c)] automatically transmitting a notification”: “[(c.1)] wherein the information includes a time of a future transaction linked to a correlated marker of the payment network,” which accordingly is merely more specific to the additional element. Therefore, claim 17 is directed to the abstract idea.

Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements recited in the claim beyond the identified judicial exception include the “non-transitory, computer-readable media having computer-readable instructions stored thereon, the computer-readable instructions being capable of being read by a transaction manager configured to execute instructions stored on a memory,” which are recited at a high level of generality and are generic computer components used to implement the abstract idea (MPEP § 2106.05(f)) that do not amount to significantly more than the abstract idea. The claim also recites an “LSTM-RNN,” which is recited at a high level of generality and thus is a generic computer component used to implement the abstract idea (MPEP § 2106.05(f)) that does not amount to significantly more than the abstract idea. The claim recites “[(a)] training a long short-term memory recurrent neural network (LSTM RNN) to learn transaction patterns of accounts corresponding to different merchant category codes (MCCs) and transaction amounts by feeding at each time step, transactions associated with merchants and cardholders, the merchants and contexts of the merchants and the cardholders to the LSTM RNN,” where the additional element of “[(a)] training” is an activity of using the generic computer components (the LSTM-RNN) to implement the abstract idea and does not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)).
The claim recites more details or specifics of the additional element of “[(a)] training”: “[(a.1)] wherein the transaction patterns correspond to different merchant category codes (MCCs) and transaction amount bins,” and “[(a.2)] the contexts are vectors representing encoded parameters including parameter a corresponding to a category class, parameter b corresponding to a class of progress percentage, parameter c corresponding to a class of proximity, and category d corresponding to a class of historical redemption,” which accordingly are merely more specific to the additional element. The claim recites the limitation of “[(c)] automatically transmitting a notification of a feature to an owner of the account based on the changing of the state of the account,” which is a well-understood, routine, and conventional activity of receiving or transmitting data over a network (MPEP § 2106.05(d) sub II.i) that does not amount to significantly more than the abstract idea. The claim also recites more details or specifics of the additional element of “[(c)] automatically transmitting a notification”: “[(c.1)] wherein the information includes a time of a future transaction linked to a correlated marker of the payment network,” which accordingly is merely more specific to the additional element. Thus, claim 17 is subject-matter ineligible.

Claim 2 depends from claim 1. Claim 10 depends from claim 9. Claim 18 depends from claim 17. The claims recite more details or specifics of the additional element of “[(a)] training” (claims 2, 10, and 18: “[(a.3)] learning a distribution of an expenditure pattern relating the account over time”), and accordingly are merely more specific to the abstract idea. Thus, claims 2, 10, and 18 are subject-matter ineligible.

Claim 3 depends directly or indirectly from claim 1. Claim 11 depends directly or indirectly from claim 9. Claim 19 depends directly or indirectly from claim 17.
The claims recite more details or specifics of the additional element of “[(a)] training” (claims 3, 11, and 19: “[(c.3)] providing information related to a cardholder of the account transacting at time t; [(c.4)] providing information related to a merchant being transacted at by the cardholder at time t; and [(c.5)] providing a context of the cardholder and the merchant at time t”), and accordingly are merely more specific to the abstract idea. Thus, claims 3, 11, and 19 are subject-matter ineligible.

Claim 4 depends directly or indirectly from claim 1. Claim 12 depends directly or indirectly from claim 9. Claim 20 depends directly or indirectly from claim 17. The claims recite more details or specifics of the abstract idea of “[(b)] predicting” (claims 4, 12, and 20: “[(b.1)] predicting transactions behavior of the account during a period”), and accordingly are merely more specific to the abstract idea. Thus, claims 4, 12, and 20 are subject-matter ineligible.

Claim 5 depends directly or indirectly from claim 1. Claim 13 depends directly or indirectly from claim 9. The claims recite more details or specifics of the abstract idea of “[(b)] predicting” (claims 5 and 13: “[(b.1)] linking the account to a vector attribute of the future transaction for the correlated marker”), and accordingly are merely more specific to the abstract idea. Therefore, claims 5 and 13 are subject-matter ineligible.

Claim 6 depends directly or indirectly from claim 1. Claim 14 depends directly or indirectly from claim 9. The claims recite more details or specifics of the abstract idea of “[(b)] predicting” (claims 6 and 14: “[(b.2)] wherein the feature includes at least one of a discount, offer, conditional reward, or incentive of a merchant corresponding to the correlated marker”), and accordingly are merely more specific to the abstract idea. Therefore, claims 6 and 14 are subject-matter ineligible.

Claims 7 and 8 depend directly or indirectly from claim 1.
Claims 15 and 16 depend directly or indirectly from claim 9. The claims recite more details or specifics of the abstract idea of “[(b)] predicting” (claims 7 and 15: “wherein the correlated marker corresponds to a merchant category code (MCC) of the payment network;” claims 8 and 16: “the offer is an existing or future offer provided by a merchant in the MCC corresponding to the correlated marker”), and accordingly are merely more specific to the abstract idea. Therefore, claims 7, 8, 15, and 16 are subject-matter ineligible.

Claim Rejections – 35 U.S.C. § 103

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

9. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

10. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Published Application 2019/0278378 to Yan et al. [hereinafter Yan] in view of US Published Application 2022/0166782 to Zoldi et al. [hereinafter Zoldi].

Regarding claims 1, 9, and 17, Yan teaches [a] system for managing a payment network (Yan, Abstract, teaches “a deep learning attribution system”) of claim 1, [a] method for managing a payment network (Yan ¶ 0006 teaches “methods for generating and utilizing a touchpoint attribution attention neural network to identify significant touchpoints and/or measure performance of touchpoints in digital content campaigns”) of claim 9, and [a] non-transitory, computer-readable media . . . capable of instructing the transaction manager (Yan ¶ 0006 teaches “non-transitory computer-readable media . . .
for generating and utilizing a touchpoint attribution attention neural network to identify significant touchpoints and/or measure performance of touchpoints in digital content campaigns”) of claim 17, comprising: a memory configured to store instructions of a predictor model; and a transaction manager comprising a processor (Yan ¶ 0208 teaches “one or more instructions stored on a computer-readable storage medium and executable by processors of one or more computing devices, such as a client device or server device [(that is, a memory configured to store instructions of a predictor model; and a transaction manager comprising a processor)]”) configured to execute the instructions to: train a long short-term memory recurrent neural network (LSTM RNN) to learn transaction patterns of accounts (Yan ¶ 0059 teaches “the deep learning attribution system 104 monitors interaction data [(that is, learn transaction patterns of accounts)] that includes, but is not limited to, data requests (e.g., URL requests, link clicks), time data (e.g., a time stamp for clicking a link, a time duration for a web browser accessing a webpage, a time stamp for closing an application), path tracking data (e.g., data representing webpages a user visits during a given session), demographic data (e.g., an indicated age, sex, or socioeconomic status of a user), geographic data (e.g., a physical address, IP address, GPS data), and transaction data (e.g., order history, email receipts)”) by feeding transactions associated with merchants and cardholders, the merchants, the cardholders, and contexts of the merchants and the cardholders to the LSTM RNN (Yan, Fig.
4A teaches touchpoint attribution attention neural network 410 having an LSTM-RNN being trained by touchpoint data [Examiner annotations in dashed-line text boxes; image omitted]. Yan ¶ 0097 teaches “[u]pon being encoded (e.g., using one-hot encoding), the touchpoint encoding layer 402 outputs encoded touchpoint vectors 404, shown as x1, x2, . . . xT in FIG. 4A, which is a sequential time series of the training touchpoint sequence. In one or more embodiments, the encoded touchpoint vectors 404 for a training touchpoint sequence is represented as xt, t ∈ [0, T]; xt ∈ R^vtp, where vtp is the total number of all possible touchpoints types and T is the length of the training touchpoint sequence in the touchpoint path P, which varies for each training touchpoint sequence [(that is, train a long short-term memory recurrent neural network (LSTM RNN) to learn transaction patterns of accounts by feeding transactions associated with merchants and cardholders, the merchants, the cardholders, and contexts of the merchants and the cardholders to the LSTM RNN)]”), wherein the transaction patterns correspond to different merchant category codes (MCCs) (Yan ¶ 0076 teaches “the deep learning attribution system 104 obtains touchpoint data from a database that maintains touchpoint information related to an entity and/or product, where each touchpoint includes a touchpoint identifier, a user identifier, and an interaction time (e.g., timestamp)”) and transaction amount bins (Yan ¶ 0080 teaches “[u]sing the touchpoint sequence and the conversion information, the deep learning attribution system 104 can generate one or more touchpoint paths. As described above, a touchpoint path includes touchpoint sequence for a user combined with a conversion indication (e.g., a conversion or non-conversion). To illustrate, FIG.
3 shows various touchpoint paths that include touchpoints (e.g., “DI” or display impression, “DC” or display click, “ES” or email sent, “EO” or email opened, “EC” or email clicked, “FT” or free trial sign-up, and “PS” or paid search) as well as conversion indicators (e.g., “C” or conversion and “NC” or non-conversion)”), the contexts are vectors representing encoded parameters (Yan ¶ 0097 teaches “encoded touchpoint vectors 404, shown as x1, x2, . . . xT in FIG. 4A”) including parameter a corresponding to a category class (Yan ¶ 0035 teaches “the term product hereafter refers to both products and services and includes subscriptions, bundles, and on-demand/one-time purchasable products [(that is, parameter a corresponding to a category class)]”), parameter b corresponding to a class of progress percentage (Yan ¶ 0080 teaches “‘DI’ or display impression, ‘DC’ or display click” and “‘FT’ or free trial sign-up, and ‘PS’ or paid search [(that is, parameter b corresponding to a class of progress percentage)]”), parameter c corresponding to a class of proximity (Yan ¶ 0080 teaches “‘ES’ or email sent, ‘EO’ or email opened, ‘EC’ or email clicked [(that is, parameter c corresponding to a class of proximity)]”), and category d corresponding to a class of historical redemption (Yan ¶ 0080 teaches “conversion indicators (e.g., ‘C’ or conversion and ‘NC’ or non-conversion) [(that is, category d corresponding to a class of historical redemption)]”); predict information for a user account (Yan, Fig.
5B, teaches a deep learning attribution system 104 employing the trained touchpoint attribution attention neural network to determine conversion predictions for target touchpoint sequences [Examiner annotations in dashed-line text boxes; image omitted]. Yan ¶ 0164 teaches “the deep learning attribution system 104 identifies the highest conversion probability (and the potential touchpoint corresponding to the highest conversion probability) as the conversion prediction 514. Accordingly, in these embodiments, the conversion prediction 514 identifies which of the potential touchpoints 510 that, if next served to the target user, will most likely result in a conversion [(that is, “most likely” is predict information for a user account)]”) based on a density function of the LSTM RNN (Yan, Fig. 6B, teaches a graphical user interface 602b that includes touchpoint attribution density distributions [Examiner annotations in dashed-line text boxes; image omitted]. Yan ¶ 0175 teaches “the area under the curve (i.e., AUC) of the density function [(that is, based on a density function of the LSTM RNN)] represents the probability of getting specific attribution values between the displayed range”); automatically transmit a notification of a feature to an owner of the user account based on the information (Yan ¶ 0166 teaches “if the deep learning attribution system 104 recommends a display impression [(that is, a feature to an owner of the user account based on the information)], the conversion prediction 514 can recommend one or more digital content media channels (e.g., browser, in-app, push notification) to utilize to best trigger the recommended touchpoint [(that is, automatically transmit a notification of a feature to an owner of the user account based on the information)]”), wherein the information includes a time of a future transaction linked to a correlated marker of the payment network
(Yan ¶ 0021 teaches “the deep learning attribution system can generate and utilize a touchpoint attribution attention neural network to efficiently and accurately generate accurate touchpoint attributions for a digital content campaign as well as generate conversion predictions for future touchpoints in digital content campaigns [(that is, “future touchpoints” are the information includes a time of a future transaction)]”; Yan ¶ 0038 teaches “the term conversion includes the act of a user committing to a product offered by an entity, selecting (e.g., clicking) a digital link within digital content, or navigating to a particular website. Specifically, the term conversion includes the user converting from a non-paying customer into a paying customer (e.g., by purchasing a product or license) [(that is, “paying customer” pertains to the payment network)]”; Yan ¶ 0076 teaches “the deep learning attribution system 104 obtains touchpoint data from a database that maintains touchpoint information related to an entity and/or product, where each touchpoint includes a touchpoint identifier, a user identifier, and an interaction time (e.g., timestamp) [(that is, a “user identifier” is a correlated marker of the payment network)]”). Though Yan teaches each touchpoint includes a touchpoint identifier, a user identifier, and an interaction time (e.g., timestamp), Yan, however, does not explicitly teach “merchant category codes (MCCs).” But Zoldi teaches “merchant category codes,” where “monitor moving averages of adversarial transactions by various indicator variables related to the transaction over various periods of time. The indicator variables include merchant category code (MCC) of the transaction, merchant code, country of transaction, etc. [(that is, a merchant category code (MCC))], country, time of day, etc.” (Zoldi ¶ 0069). Yan and Zoldi are from the same or similar field of endeavor. 
Yan teaches using a touchpoint attribution attention neural network to generate conversion predictions for target touchpoint sequences and to provide targeted digital content over specific digital media channels to client devices of individual users. Zoldi teaches various indicator variables relevant to the business problem such as merchant category code (MCC). Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Yan pertaining to entity touchpoint identifiers with the merchant category code (MCC) of Zoldi. The motivation to do so is because, “[b]y attempting various combinations of input parameters at scale, fraudsters are able to probe for combinations of inputs and model phase spaces that maximize the reward feature, while minimizing changes in the outcome of the model to evade detection. Improved AI systems are needed that are resistant to this kind of probing and manipulation.” (Zoldi ¶ 0005). Regarding claims 2, 10, and 18, the combination of Yan and Zoldi teach all of the limitations of claims 1, 9, and 17, respectively, as described above in detail. Yan teaches - wherein the processor is configured to execute the instructions to additionally learn the transaction pattern by: learning a distribution of an expenditure pattern relating the account over time (Yan ¶ 0098 teaches “Using the encoded touchpoint vectors 404, the deep learning attribution system 104 can continue to train the touchpoint attribution attention neural network 400 a. In particular, in various embodiments, the deep learning attribution system 104 performs the act 404 of providing the encoded touchpoint vectors as input to the embedding layer 406. 
In general, the embedding layer 406 quantifies and categorizes hidden contextual similarities between touchpoint types based on the touchpoint's distribution given a large sample of training touchpoint paths 434, which overcomes the issue of touchpoint representation sparsity”); [Examiner notes that the plain meaning of an “expenditure pattern” is the description of how money is spent, revealing regularities and tendencies in spending habits across different categories of goods and services, and accordingly, the broadest reasonable interpretation of the term “expenditure pattern” covers the training touchpoint paths, which is not inconsistent with the Applicant’s disclosure (MPEP § 2111)]). Regarding claims 3, 11, and 19, the combination of Yan and Zoldi teach all of the limitations of claims 1, 9, and 17, respectively, as described above in detail. Yan teaches - wherein the processor is configured to execute the instructions to additionally train the LSTM RNN by: providing information related to a cardholder of the account transacting at time t (Yan ¶ 0167 teaches “the deep learning attribution system 104 employs the trained time-decay parameter to identify a time or window of time that optimizes the likelihood of conversion for a potential touchpoint [(that is, providing information related to a cardholder of the account transacting at time t)]”); providing information related to a merchant being transacted at by the cardholder at time t (Yan ¶ 0164 teaches “the conversion prediction 514 identifies which of the potential touchpoints 510 that, if next served [(that is, at time t)] to the target user, will most likely result in a conversion [(that is, “potential touchpoints” is providing information related to a merchant being transacted at by the cardholder at time t)]”); and providing a context of the cardholder and the merchant at time t (Yan ¶ 0076 teaches “the deep learning attribution system 104 can filter touchpoints to a given time window (e.g., touchpoints 
within the past week, month, or year) [(that is, “touchpoints to a given time window” is providing a context of the cardholder and the merchant at time t)]”). Regarding claims 4, 12, and 20, the combination of Yan and Zoldi teach all of the limitations of claims 1, 9, and 17, respectively, as described above in detail. Yan teaches - wherein the processor is configured to execute the instructions to additionally predict the information by: predicting transactions behavior of the user account during a period (Yan ¶¶ 0120-21 teaches “the deep learning attribution system 104 applies the following formula shown in Equation 10 to determine the conversion prediction 426 (i.e., p) [(that is, predicting transactions behavior of the user account)]. [Equation 10 image: media_image4.png] . . . [W]hen determining touchpoint attributions to predict conversion, the probability for users to have a conversion is often greater for users with at least some exposure to an entity (e.g., touchpoints are present for the user) than for users for which there is no exposure (e.g., no touchpoint observations) [(that is, “exposure” is during a period)]”). Regarding claims 5 and 13, the combination of Yan and Zoldi teach all of the limitations of claims 1 and 9, respectively, as described above in detail. Yan teaches - wherein the processor is configured to execute the instructions to additionally change a state of the user account by: linking the user account to a vector attribute of the future transaction for the correlated marker (Yan ¶ 0071 teaches that “the touchpoint attributions 210 include a weight, coefficient, number, or other values indicating how influential each touchpoint in the touchpoint sequence 204 was in leading to the reported conversion. In many embodiments, the sum of touchpoint attributions 210 adds to one. 
For example, the deep learning attribution system 104 determines that the first touchpoint 202 a (i.e., display impression) has an attribution value of 15%, the second touchpoint 202 b (i.e., email) has an attribution value of 35%, and the free trial sign-up has an attribution scale of 50% [(that is, the “touchpoint attribute sum” is a vector attribute)]. In alternative embodiments, the sum of touchpoint attributions 210 does not add to one or is above one”; Yan ¶ 0076 teaches a correlated marker where “each touchpoint includes a touchpoint identifier, a user identifier, and an interaction time (e.g., timestamp) [(that is, linking the user account to a vector attribute of the future transaction for the correlated marker )]”). Regarding claims 6 and 14, the combination of Yan and Zoldi teach all of the limitations of claims 5 and 13, respectively, as described above in detail. Yan teaches – wherein the feature includes at least one of a discount, offer, conditional reward, or incentive of a merchant corresponding to the correlated marker (Yan ¶ 0036 teaches “a touchpoint sequence is further limited to touchpoints between the user and the entity with respect to a given product offered by the entity [(that is, the feature includes at least one of a . . . offer . . . of a merchant corresponding to the correlated marker)]”; as noted above, Yan ¶ 0076 teaches a correlated marker where “each touchpoint includes a touchpoint identifier, a user identifier, and an interaction time (e.g., timestamp) [(that is, linking the user account to a vector attribute of the future transaction for the correlated marker )]”). Regarding claims 7 and 15, the combination of Yan and Zoldi teach all of the limitations of claims 1 and 9, respectively, as described above in detail. 
Zoldi teaches - wherein the correlated marker corresponds to an MCC of the payment network (Zoldi ¶ 0069 teaches “monitor moving averages of adversarial transactions by various indicator variables related to the transaction over various periods of time. The indicator variables include merchant category code (MCC) of the transaction, merchant code, country of transaction, etc.”; Zoldi ¶ 0080 teaches “to look at how the score is affected by these indicators. This information can be utilized to understand where the attack is occurring in terms of the indicators such as MCC, merchant [(that is, “MCC” with “merchant” is the correlated marker corresponding to a merchant category code (MCC))], country, time of day, etc.”). Regarding claims 8 and 16, the combination of Yan and Zoldi teach all of the limitations of claims 1 and 9, respectively, as described above in detail. Yan teaches – wherein the feature includes an existing offer or a future offer provided by a merchant . . . (Yan ¶ 0036 teaches “a touchpoint sequence is further limited to touchpoints between the user and the entity with respect to a given product offered by the entity [(that is, the feature includes an existing offer . . . provided by a merchant)]”). Zoldi teaches - [wherein the offer] . . . . . . associated with the MCCs corresponding to the correlated marker (Zoldi ¶ 0069 teaches “monitor moving averages of adversarial transactions by various indicator variables related to the transaction over various periods of time. The indicator variables include merchant category code (MCC) of the transaction, merchant code, country of transaction, etc.”; Zoldi ¶ 0080 teaches “to look at how the score is affected by these indicators. This information can be utilized to understand where the attack is occurring in terms of the indicators such as MCC, merchant [(that is, “MCC” with “merchant” is the MCC corresponding to the correlated marker)], country, time of day, etc.”). Response to Arguments 11. 
Examiner has fully considered the Applicant’s arguments, and responds below accordingly. Section 101 12. “With respect to Prong One, Applicant respectfully submits that representative claim 1 is not drawn towards a judicial exception. . . . Representative claim 1 recites: A system for managing a payment network, comprising: * * * [(a)] train a long short-term memory recurrent neural network (LSTM RNN) to learn transaction patterns of accounts by feeding transactions associated with merchants and cardholders, the merchants, the cardholders, and contexts of the merchants and the cardholders to the LSTM RNN, [(a.1)] wherein the transaction patterns correspond to different merchant category codes (MCCs) and transaction amount bins, [(a.2)] the contexts are vectors representing encoded parameters including parameter a corresponding to a category class, parameter b corresponding to a class of progress percentage, parameter c corresponding to a class of proximity, and category d corresponding to a class of historical redemption; * * * Claim 1 does not recite a "observations, evaluations, judgments, and opinions." (Response at pp. 9-10). Examiner’s Response: Examiner respectfully disagrees because the rejection, under Step 2A Prong Two of the SME, that identifies the judicial exception by referring to what is recited (i.e., set forth or described) in the claim and explain why it is considered an exception. For example, if the claim is directed to an abstract idea, the rejection should identify the abstract idea as it is recited (i.e., set forth or described) in the claim and explain why it is an abstract idea. (MPEP § 2106.07(a)). 
For example, exemplar claim 1 recites the limitation to “[(b)] predict information for the account based on the density function of the LSTM-RNN.” The limitation of “[(c)] predict” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are mental processes, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2); see also claims 9 & 17). Thus, the claims recite an abstract idea under the Office SME guidance. 13. Under Step 2A Prong Two, “[e]ven if the Office maintains that claim 1 is drawn towards a judicial exception, Applicant submits that claim 1 should be found to be patent eligible at least at Prong Two by reciting additional elements to integrate the exception into a practical application. Claim 1 is amended to emphasize features relating to training a LSTM RNN, which specifically is a machine learning model (a specific computer technology) as noted above in the emphasized features. Moreover, Applicant notes that in the recent Memorandum issued by the USPTO on August 4, 2025, §101 rejections should only be made when claims are more likely than not to be ineligible. . . . Similarly, the claims recite a combination of features in addition to (beyond) the alleged judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception.” (Response at p. 11). Also, Applicant submits that “the claims do not pre-empt training an accurate model, except in conjunction with all the other recited features. See, Diamond v. Diehr, 450 U.S. 175, 187 (finding that the claimed "process admittedly employs a well-known mathematical equation, but they do not seek to pre-empt the use of that equation. Rather, they seek only to foreclose from others the use of that equation in conjunction with all of the other steps in their claimed process."). 
Thus, the claims are not directed to an abstract idea.” (Response at p. 11). Examiner’s Response: Examiner respectfully disagrees because under Step 2A Prong Two, the rejection identifies any additional elements (specifically point to claim features/limitations/steps) recited in the claim beyond the identified judicial exception; and evaluate the integration of the judicial exception into a practical application by explaining that the claim as a whole, looking at the additional elements individually and in combination, does not integrate the judicial exception into a practical application using the considerations set forth in MPEP §§ 2106.04(d), 2106.05(a)- (c) and (e)- (h). (MPEP § 2106.07(a)). Under Step 2A Prong Two, “integration” may be based on the improvements in the functioning of a computer or an improvement to any other technology or technical field. (MPEP § 2106.04(d)(1)). The evaluation requires, [i]n sum, that (1) the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. Next, (2) if the specification sets forth such an improvement, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. Under Desjardins, the Appeals Review Panel determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of “catastrophic forgetting” encountered in continual learning systems. (Advance notice of change to the MPEP in light of Ex Parte Desjardins at p. 
2 (05 December 2025) [hereinafter Advance Notice] (emphasis added by Examiner)). The Applicant’s disclosure does recite, for example, that “[t]he predictions may be used as a basis of increasing efficiency and productivity of the network, as well as allow the network to better serve its participating entities in transacting business,” (Specification ¶ 0029), and “the ability to target incentives to relevant account holders inures both to the benefit of the account holders and merchants, and also to the credit card company through an increase in loyalty of its card users.” (Specification ¶ 0039). However, the Specification does not provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement in the functioning of a computer or an improvement to any other technology or technical field as set out by the example of the Advance Notice. Instead, the claims are directed to generic computer components (e.g., processor, memory, LSTM-RNN) that are used to implement the abstract idea, (MPEP § 2106.05(f)), and also additional elements of post-processing communications relating to the abstract idea of “predicting.” Accordingly, the claims are subject-matter ineligible, as set out above in detail. Section 103 14. “Applicant traverses this rejection and respectfully asserts that the applied references, alone or in combination, fail to satisfy a prima facie case of obviousness because all of the claimed limitations are not disclosed, taught or suggested by the references or rendered obvious by market forces present at the time the claimed invention was made. 
Representative claim 1 recites: A system for managing a payment network, comprising: * * * [(a)] train a long short-term memory recurrent neural network (LSTM RNN) to learn transaction patterns of accounts by feeding transactions associated with merchants and cardholders, the merchants, the cardholders, and contexts of the merchants and the cardholders to the LSTM RNN, [(a.1)] wherein the transaction patterns correspond to different merchant category codes (MCCs) and transaction amount bins, [(a.2)] the contexts are vectors representing encoded parameters including parameter a corresponding to a category class, parameter b corresponding to a class of progress percentage, parameter c corresponding to a class of proximity, and category d corresponding to a class of historical redemption; * * * [(claim 1, lines 5-12 (emphasis added by Applicant))]. . . .[T]he above feature is not disclosed or suggested. Accordingly, Applicant requests that the Examiner withdraw the instant rejection.” (Response at pp. 11-13). Examiner’s Response: Examiner agrees that the Applicant’s amendments overcome the teachings of Pati. Accordingly, Examiner cites the prior art of Yan as teaching these features, as set out above in detail. Conclusion 15. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 16. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: (US Published Application 20210248448 to Branco et al.) teaches handling interleaved sequences using RNNs includes receiving data of a first transaction, retrieving a first state (e.g., a default or a saved RNN state for an entity associated with the first transaction), and determining a new second state and a prediction result using the first state and an input data based on the first transaction. (Shikov et al., “Forecasting Purchase Categories by Transactional Data: A Comparative Study of Classification Methods,” (2019)) teaches Forecasting purchase behavior of bank clients allows for development of new recommendation and personalization strategies and results in better Quality-of-Service and customer experience. In this study, we consider the problem of predicting purchase categories of a client for the next time period by the historical transactional data. We study the predictability of expenses for different Merchant Category Codes (MCCs) and compare the efficiency of different classes of machine learning models including boosting algorithms, long-short term memory networks and convolutional network. 17. Any inquiry concerning this communication or earlier communications from the Examiner should be directed to KEVIN L. SMITH whose telephone number is (571) 272-5964. 
Normally, the Examiner is available on Monday-Thursday 0730-1730. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, KAKALI CHAKI can be reached on 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /K.L.S./ Examiner, Art Unit 2122 /KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122
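The limitations the action maps to Yan and Zoldi center on feeding an LSTM RNN transaction records keyed by merchant category code (MCC), a transaction-amount bin, and a four-part context vector (parameters a through d). The application's actual encoding is not in this record, so the sketch below is only illustrative: the bin boundaries, MCC indexing, and feature layout are all assumptions, not the claimed implementation.

```python
# Hypothetical encoding of one transaction into the kind of discrete
# feature vector the claims describe (MCC index, amount bin, context
# parameters a-d). All names and boundaries here are invented for
# illustration; the application's real encoding is not public.
from bisect import bisect_right

AMOUNT_BIN_EDGES = [10.0, 50.0, 200.0, 1000.0]  # assumed bin boundaries

def amount_bin(amount: float) -> int:
    """Map a transaction amount to a discrete bin index (0..4)."""
    return bisect_right(AMOUNT_BIN_EDGES, amount)

def encode_transaction(mcc: str, amount: float, context: dict) -> list:
    """Build the flat feature vector one LSTM time step would consume."""
    return [
        int(mcc) % 1000,     # toy MCC vocabulary index (MCCs are numeric)
        amount_bin(amount),  # transaction amount bin
        context["a"],        # category class
        context["b"],        # progress-percentage class
        context["c"],        # proximity class
        context["d"],        # historical-redemption class
    ]

# Example: a restaurant transaction (MCC 5812) for $37.50.
vec = encode_transaction("5812", 37.50, {"a": 2, "b": 1, "c": 0, "d": 3})
```

A sequence of such vectors, one per transaction, is what a recurrent model of the kind recited in claim 1 would consume; the discretization step is where the MCC and amount-bin structure the rejection discusses would enter the input.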

Prosecution Timeline

Mar 17, 2022
Application Filed
Aug 23, 2025
Non-Final Rejection — §101, §103, §112
Nov 25, 2025
Applicant Interview (Telephonic)
Nov 25, 2025
Examiner Interview Summary
Dec 04, 2025
Response Filed
Mar 27, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591815
METHOD AND SYSTEM FOR UPDATING MACHINE LEARNING BASED CLASSIFIERS FOR RECONFIGURABLE SENSORS
2y 5m to grant · Granted Mar 31, 2026
Patent 12585917
REINFORCEMENT LEARNING USING ADVANTAGE ESTIMATES
2y 5m to grant · Granted Mar 24, 2026
Patent 12547759
PRIVACY PRESERVING MACHINE LEARNING MODEL TRAINING
2y 5m to grant · Granted Feb 10, 2026
Patent 12530613
SYSTEMS AND METHODS FOR PERFORMING QUANTUM EVOLUTION IN QUANTUM COMPUTATION
2y 5m to grant · Granted Jan 20, 2026
Patent 12518214
DISTRIBUTED MACHINE LEARNING SYSTEMS INCLUDING GENERATION OF SYNTHETIC DATA
2y 5m to grant · Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
37%
Grant Probability
55%
With Interview (+18.0%)
4y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
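The headline projections above are consistent with simple arithmetic on the examiner's stated career counts (49 grants out of 134 resolved, plus an 18-point interview lift). The vendor's actual model is not disclosed, so the snippet below only reconstructs the displayed numbers under that assumption.

```python
# Reconstructing the dashboard's projection figures from the stated
# counts. This is illustrative arithmetic only; the real methodology
# behind the dashboard is not disclosed.
granted, resolved = 49, 134
grant_probability = granted / resolved   # career allow rate, ~0.366
interview_lift = 0.18                    # stated lift, in probability points
with_interview = grant_probability + interview_lift

print(f"{grant_probability:.0%}")  # 37%
print(f"{with_interview:.0%}")     # 55%
```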
