DETAILED ACTION
Acknowledgements
This action is in response to Applicant’s filing on Mar. 3, 2026, and is made Non-Final. This action is being examined by James H. Miller, who is in the eastern time zone (EST), and who can be reached by email at James.Miller1@uspto.gov or by telephone at (469) 295-9082.
Interviews
Examiner interviews are available by telephone or, preferably, by video conferencing using the USPTO’s web-based collaboration platform. Applicants are strongly encouraged to schedule via the USPTO Automated Interview Request (AIR) portal at http://www.uspto.gov/interviewpractice. Interviews conducted solely for the purpose of “sounding out” the examiner, including by local counsel acting only as a conduit for another practitioner, are not permitted under MPEP § 713.03. The Office is strictly enforcing established interview practice, and applicants should ensure that every interview request is directed toward advancing prosecution on the merits in compliance with MPEP §§ 713 and 713.03.
For after-final interview requests, supervisory approval is required before an interview may be granted. Each AIR should specifically explain how the after-final interview request will advance prosecution—for example, by identifying targeted arguments responsive to the rejection of record, alleged defects in the examiner’s analysis, proposed claim amendments, or another concrete basis for discussion. See MPEP § 713. If the AIR form’s character limits prevent inclusion of all pertinent details, Applicants may send a contemporaneous email to the examiner at James.Miller1@uspto.gov.
The examiner is generally available Monday through Friday, 10:00 a.m. to 4:00 p.m. EST.
For any GRANTED Interview Request, Applicant can expect an email within 24 hours confirming an interview slot from the dates/times proposed and providing collaboration tool access instructions. For any DENIED Interview Request, the record will include a communication explaining the reason for the denial.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on Mar. 3, 2026, has been entered.
Claim Status
The status of claims is as follows:
Claims 1, 4–17, 19, 20, 24, and 25 are now pending and examined with Claims 1, 12, and 17 in independent form.
Claims 1, 12, and 17 are presently amended.
Claims 22 and 23 are cancelled.
Claim 25 is new.
Response to Amendment
Applicant's Amendment has been reviewed against Applicant’s Specification filed Jun. 30, 2023 [“Applicant’s Specification”], and accepted for examination.
Response to Arguments
35 U.S.C. § 101 Argument
Applicant argues that the Office improperly characterizes the claims as reciting the organizing human activity exception because the claims recite specific machine learning (ML) training methodologies that cannot be practically performed in the human mind. Applicant’s Reply at 11–12. Specifically, Applicant points to the amended “train” and “retrain” limitations as a specific ML training methodology using labeled positive and negative examples, which is not a mental process. Id. at 12. Applicant further argues that the “Receive, in real-time during the transaction …” limitation describes a technological integration that cannot be practically performed in the human mind. Id.
Examiner respectfully disagrees.
Examiner agrees that the amended “store” and “retrain” limitations add specificity over the pre-amendment language. Nevertheless, the claims as a whole remain directed to the abstract idea of organizing human activity: specifically, managing spending permissions between a primary and a secondary account holder, which is a fundamental economic practice that humans have performed for centuries (e.g., giving a child an allowance and authorizing exceptions). The fact that machine learning models and computer processors are used to implement the concept does not remove the abstract character of the core idea. Examiner maintains that the claims recite, at their core, a method of organizing human activity, with the machine learning steps serving as tools for carrying out that activity, consistent with Alice and Recentive Analytics.
The specification teaches that the claimed advance is not a solution to a problem specifically arising in the realm of computers but rather addresses the problem of denying transactions that exceed current spending limits. Spec. ¶ 4. The specification does not disparage prior-art processors, computer hardware, or GUIs; rather, it disparages exceeding spending limits. Spec. ¶ 4. Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1337 (Fed. Cir. 2016) (a specification's disparagement is an indicium of whether the claims are directed to an abstract idea or a technological improvement at step one).
A claim whose entire scope can be performed mentally, as here, cannot be said to improve computer technology. MPEP 2106.05(a) (citation omitted). The amended limitations do not remove the claims from the mental process exception as explained infra.
The computer equipment is exemplary (generic) and known in the prior art. Final Act. 15–20 (citing Spec., ¶¶ 17, 25, 26, 27, 30, 31, 32, 33, 34, 37, 40, 41, 44, 55, 59, 85, Fig. 2). The known and generic computer equipment, here, performs functions that are programmed by software. Spec. ¶ 33. This is a computer doing what it is designed to do—performing directions it is given to follow.
Even if the specification teaches a technological improvement in a substantive manner, the claims do not reflect the improvement in technology. Rather, the claims recite the training and retraining of the ML model using labeled data (which Applicant avers is the technical improvement). The training and retraining describe the well-known practice of supervised machine learning1, and “patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.” Recentive Analytics, Inc. v. Fox Corp., 134 F.4th 1205, 1216 (Fed. Cir. 2025).
Applicant argues in the alternative that even if the claims recite an exception, any exception is integrated into a practical application because the claims reflect an improvement in how machine learning models themselves operate, analogizing to Desjardins and Ex Parte Carmody. Applicant’s Reply at 13–17. Like the eligible claims in Desjardins (which specified how a model “adjust[s] parameters to optimize performance … while protecting performance on [a prior] task”), the present claims specify how the models are retrained “by storing labeled training data (the manual override paired with transaction data) and using it to adjust predictive weights.” Id. at 14. Applicant further analogizes to USPTO Example 47 (Claim 3), which was found eligible because it recited “dropping one or more malicious network packets in real time,” and to the Specification’s teaching at ¶ 17 that “examples of the present disclosure improve the speed with which computers can determine recurring transactions and spending limitations and allows spending limitation overrides to be conducted in near real time, unlike current methods which only use lagging indicators and/or require manual input from a primary account holder.” Id. at 15.
Examiner respectfully disagrees.
Examiner is not persuaded any exception is integrated into a practical application that constitutes an improvement to the machine learning model itself. Unlike Desjardins, where the PTAB found that the specification disclosed, and the claims recited, a specific improvement in the ML model itself (i.e., to avoid catastrophic forgetting, a technical problem with machine learning) and technical improvements to “artificial intelligence (AI) systems to ‘us[e] less of their storage capacity’ and enable ‘reduced system complexity’”, here, the present claims merely apply a standard supervised machine learning retraining loop (storing override decisions as labeled training data and retraining to adjust predictive weights) to spending-limit authorization (the abstract idea). See NPL Leskovec, Chapter 12. This is the type of “apply [abstract idea] with a machine” claim cautioned against in Alice. The Specification’s statement in ¶ 17 that the system “improve[s] the speed with which computers can determine recurring transactions and spending limitations” is a result-based recitation of a benefit, not a description of how the machine learning model itself is structurally improved. Additionally, the amended claims do not recite the specific technical means by which this speed improvement is achieved so as to integrate the exception into a practical application. Examiner also notes that, as here, “relying on a computer to perform routine tasks more quickly or more accurately is insufficient to render a claim patent eligible.” OIP Technologies, Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015) (citing Alice Corp. Pty. Ltd. v. CLS Bank Intern., 134 S.Ct. 2347, 2359 (2014)); MPEP § 2106.05(f)(2).
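For context, the standard supervised retraining loop characterized above can be sketched generically. The following Python sketch is illustrative only: the function names, the perceptron-style weight update, and the feature encoding are assumptions made for demonstration, not Applicant’s disclosed method.

```python
# Generic supervised retraining loop (illustrative only; names, the
# perceptron-style update, and the feature encoding are hypothetical).
# Each stored override decision becomes a labeled training example;
# "retraining" adjusts the model's predictive weights.

def predict(weights, features):
    # Linear score thresholded at zero: 1 = authorize, 0 = reject.
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def retrain(weights, labeled_data, lr=0.1, epochs=10):
    # Perceptron-style updates nudge weights toward each stored label.
    for _ in range(epochs):
        for features, label in labeled_data:
            error = label - predict(weights, features)
            weights = [w + lr * error * x
                       for w, x in zip(weights, features)]
    return weights

# Stored (transaction features, manual-override label) pairs:
# feature 0 is a bias term, feature 1 a normalized overage amount.
labeled_data = [
    ([1.0, 0.2], 1),  # small overage; user manually overrode
    ([1.0, 0.9], 0),  # large overage; user did not override
]
weights = retrain([0.0, 0.0], labeled_data)
```

On this toy data, the retrained weights authorize the small-overage pattern and reject the large-overage pattern, mirroring at a generic level the store-then-retrain behavior the claims recite.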
Applicant argues that even under Step 2B, the claims recite an inventive concept because the combination of (1) retraining on labeled data with both positive and negative examples, (2) a specific retraining mechanism that stores labeled data and adjusts predictive weights, and (3) real-time integration with merchant systems constitutes “more than well-understood, routine, conventional activity,” citing Berkheimer. Applicant’s Reply at 17–19. Applicant contends that Amended Claims 12 and 17 now recite “store, in a database,” and “retrain the first predictive model and the second predictive model,” and that these specific training and retraining features were not WURC at the time of filing.
Examiner respectfully disagrees.
Examiner agrees the WURC inquiry is a question of fact. However, when viewed as a whole and in light of the specification, the amended claims still recite the well-understood, routine, and conventional use of supervised machine-learning models in a payment-transaction context. The limitations amount to: collecting historical card-transaction data and associated user outcomes (override/no-override), forming labeled training data, training and retraining predictive models on those labels, and deploying those models to make real-time authorization decisions based on incoming transaction features, steps that are characteristic of standard supervised-learning pipelines and conventional card-authorization scoring systems already evidenced in the record. See NPL Leskovec, Chapter 12. While the claims add domain-specific labels (spending-limit overrides) and describe storing new override decisions and retraining models, these are presented at a high level and do not recite any non-conventional model architecture, training algorithm, or parameter-update technique that changes how the computer itself functions, as opposed to automating a financial authorization policy. Applicant’s Specification describes the ML models as “a unique computer technology that involves training the models to complete tasks, such as labeling, categorizing, and identifying recurring transactions.” Spec. ¶ 17. Thus, Applicant’s Specification confirms that training and retraining techniques themselves are a recognized category of existing technology rather than a novel technical contribution. The combination of these elements also does not amount to significantly more because each element is applied in its ordinary capacity to the abstract financial authorization concept, and their combination does not produce a result that is different from the sum of the abstract idea plus conventional ML tools.
To move the analysis toward non-WURC activity in a subsequent amendment, Applicant should consider (i) more clearly tying the claims to a specific, articulated technical problem in model operation (for example, how the system avoids degradation of recurring-transaction detection when learning new override behavior, or how it reduces particular computational or latency constraints of traditional payment systems), and (ii) positively reciting in the claims the concrete model-update behavior or architectural features that implement that solution, rather than describing only the business-level effect of “learning user overrides” and “automatically authorizing similar future transactions.”
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4–17, 19, 20, 24, and 25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Analysis
Step 1: Claims 1, 4–17, 19, 20, 24, and 25 are directed to a statutory category. Claims 1, 4–11, 24, and 25 recite “a system” and are therefore directed to the statutory category of “a machine.” Claims 12–16 also recite “a system” and are therefore directed to the statutory category of “a machine.” Claims 17, 19, and 20 also recite “a system” and are therefore directed to the statutory category of “a machine.”
Representative Claim
Claim 1 is representative [“Rep. Claim 1”] of the subject matter under examination and is recited, in part, below with emphasis added by Examiner to identify limitations: normal font indicates the abstract idea exception, and bold indicates additional elements. Each limitation is identified by a letter for later use as a shorthand notation in referencing/describing each limitation. Portions of the claim use italics to identify intended use limitations2 and underlining, as needed, to further describe the abstract idea exception:
[A] 1. A system comprising: one or more processors; and memory in communication with the one or more processors and storing instructions that, when executed by the one or more processors, are configured to cause the system to:
[B] train one or more machine learning models using historical transaction data associated with past transactions and associated indications of whether a first user did or did not manually execute a spending limitation override for each past transaction;
[C] implement one or more predictive model systems based on first and second data respectively associated with the first user and a second user of an account, wherein the one or more predictive model systems comprise the one or more machine learning models configured to dynamically authorize spending limitations;
[D] receive a first user input from the first user via a graphical user interface, the first user input corresponding to a spending limitation associated with the second user;
[E] identify a transaction associated with the second user that exceeds the spending limitation;
[F] receive, in real-time during the transaction and prior to the completion of the transaction, transaction data from a merchant system, the transaction data comprising a transaction amount and a merchant identifier;
[G] automatically override the spending limitation when the transaction exceeds the spending limitation by less than the threshold;
[H] when the transaction exceeds the spending limitation by the threshold or greater than the threshold: automatically reject the spending limitation override;
[I] modify the graphical user interface to comprise a notification associated with the rejected spending limitation override, the notification providing an option for the first user to manually approve the spending limitation override;
[J] receive a second user input from the first user via the graphical user interface, the second user input comprising a manual override instruction; and
[K] store, in a database, the transaction data and the received manual override instruction as labeled training data; and
[L] retrain the one or more machine learning models using the labeled training data to adjust predictive weights, wherein the retrained one or more machine learning models automatically authorize subsequent transactions having transaction data similar to the stored transaction data without requiring manual override from the first user.
Claims are directed to an abstract idea exception.
Step 2A, Prong One: Rep. Claim 1 recites “when the transaction exceeds the spending limitation by less than a threshold” “automatically override the spending limitation,”3 (Limitation G); and “when the transaction exceeds the spending limitation by the threshold or greater than the threshold” “automatically reject the spending limitation override”2 (Limitation H), which recites commercial or legal interactions under the organizing human activity exception because overriding or rejecting a spending limitation is a pre-sale or sale activity, MPEP § 2106.04(a)(2)(II)(B), or a fundamental economic principle/practice also under organizing human activity because said Limitations “describe concepts relating to the economy and commerce,” such as “mitigating risks” of overspending on, for example, a credit card. MPEP § 2106.04(a)(2)(II)(A). Limitations B–F, and I–L are the required steps to automatically override or automatically reject the spending limitation and thus, recite the same exception.
Alternatively4, Limitations B–L, as drafted, recite the abstract idea exception of mental processes that, under the broadest reasonable interpretation, cover performance in the human mind or with pen and paper, but for the recitation of the generic computer components indicated in bold. MPEP § 2106.04(a)(2)(III).
Claims recite a mental process when they contain limitations that can practically be performed in the human mind, including for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include:
• a claim to "collecting information, analyzing it, and displaying certain results of the collection and analysis," where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016);
. . .
• a claim to collecting and comparing known information (claim 1), which are steps that can be practically performed in the human mind, Classen Immunotherapies, Inc. v. Biogen IDEC, 659 F.3d 1057, 1067, 100 USPQ2d 1492, 1500 (Fed. Cir. 2011).
MPEP § 2106.04(a)(2)(III)(A). For example, but for the generic computer components claim language, here, Limitations B–L recite collecting information (Limitations D, F, J, K), analyzing it (Limitations B, C, E, G, H, L), and displaying certain results of the collection and analysis (Limitation I), where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind. For example, Limitations B and C are mental processes that are practically performed in the human mind or with pen and paper because they require mere “observation, evaluation, judgment, and/or opinion” to “train one or more machine learning models using historical transaction data associated with past transactions and associated indications of whether a first user did or did not manually execute a spending limitation override for each past transaction” (Limitation B) and “implement [use of] one or more predictive model systems based on first and second data respectively … wherein the one or more predictive model systems comprise the one or more machine learning models to dynamically authorize spending limitations” (Limitation C). Applicant’s Specification teaches that training of the machine learning model may be performed in any way, and training is recited in only one location. Spec. ¶ 17. The Specification, ¶¶ 16, 33, 41, describes what data is used (primary/secondary account data, spending behavior, repayment schedule, etc.) and the use of a machine learning model. Claim 1 does not require any particular data volume; it recites only “historical transaction data … and associated indications,” which could be a handful of transactions. There is no big-data or scale requirement. “Machine learning model” is claimed very broadly. Under BRI, a simple linear model (e.g., a least-squares fit line on two features) is a type of machine learning model. Training such a model on a small dataset could be done manually.
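To illustrate the point, a least-squares line of this kind can be fit with arithmetic simple enough for pen and paper. The figures below are hypothetical and used only for demonstration:

```python
# Closed-form least-squares fit of y = a + b*x on a handful of
# hypothetical data points (pen-and-paper arithmetic: sums, differences,
# and one division).
xs = [10.0, 20.0, 30.0, 40.0]   # e.g., transaction amounts
ys = [0.0, 0.0, 1.0, 1.0]       # e.g., 1 = limit was manually overridden

n = len(xs)
mean_x = sum(xs) / n            # 25.0
mean_y = sum(ys) / n            # 0.5
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))   # 20 / 500 = 0.04
a = mean_y - b * mean_x                      # -0.5
```

Every intermediate quantity is a small sum or product, consistent with the observation that training such a model on a small dataset could be done manually.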
Thus, training is described at a conceptual level, not with a particular algorithm, loss function, or architecture. Additionally, NPL Leskovec is prior art and additional evidence of a human’s ability to train, implement, and use a machine learning model without the aid of a computer. NPL Leskovec, p. 464–65 (Example 12.1) (cited herein on PTO-892). Limitation E is a mental process that is practically performed in the human mind or with pen and paper because it requires mere “observation, evaluation, judgment, and/or opinion” to identify a transaction that exceeds a spending limitation. Collecting and comparing known information (i.e., the transaction amount and the spending limitation) are steps that can be practically performed in the human mind under Classen. Limitations G and H are mental processes because determining whether the transaction exceeds the spending limitation by less or more than a threshold is a decision merely requiring the comparison of known information (i.e., the transaction amount, the spending limitation, and the threshold) under Classen. Similarly, the subsequent actions taken to “automatically reject” or “automatically override” the spending limitation based on whether the transaction exceeds the spending limitation by less or more than a threshold (i.e., the comparison) are mental processes that are practically performed in the human mind because they require mere “observation, evaluation, judgment, and/or opinion” to decide whether to override or reject the spending limitation override. Last, Limitation L is a mental process that is practically performed in the human mind or with pen and paper because it requires mere “observation, evaluation, judgment, and/or opinion” to “retrain the one or more machine learning models …” for the same reasoning as Limitation B.
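The comparison attributed to Limitations G and H can likewise be expressed as a single arithmetic test; the names and values below are hypothetical illustrations, not Applicant’s implementation:

```python
# Illustrative reduction of Limitations G/H to a comparison of known
# quantities (hypothetical names and values).
def override_decision(transaction_amount, spending_limit, threshold):
    # Assumes the transaction already exceeds the limit (cf. Limitation E).
    overage = transaction_amount - spending_limit
    if overage < threshold:
        return "automatically override"   # Limitation G
    return "automatically reject"         # Limitation H (>= threshold)
```

The decision requires only subtraction and a comparison of three known quantities.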
“The use of a physical aid (e.g., pencil and paper or a slide rule) to help perform a mental step (e.g., a mathematical calculation) does not negate the mental nature of the limitation, but simply accounts for variations in memory capacity from one person to another” or a multi-step mental process. MPEP § 2106.04(a)(2)(III)(B).
If a claim limitation under BRI, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract idea exception. MPEP § 2106.04(a)(2)(III). Accordingly, the pending claims recite the combination of these abstract idea exceptions.
Step 2A, Prong Two: Rep. Claim 1 does not contain additional elements that integrate the abstract idea exception into a practical application because the additional elements are mere instructions to apply the abstract idea exception. MPEP § 2106.05(f). The additional elements are: a system comprising one or more processors, and memory storing instructions; train one or more machine learning models; one or more predictive model systems comprising the one or more machine learning models; a graphical user interface; a merchant system; a database; and retrain the one or more machine learning models.
Regarding the system comprising one or more processors, and memory storing instructions; train one or more machine learning models; one or more predictive model systems comprising the one or more machine learning models; a graphical user interface; a merchant system; a database; and retrain the one or more machine learning models, Applicant’s Specification does not otherwise describe them or describes them using exemplary language as a general-purpose computer, as a part of a general-purpose computer, or as any known and exemplary (generic) computer component known in the prior art. Thus, Applicant takes the position that such hardware/software is so well known to those of ordinary skill in the art that no explanation is needed under 35 U.S.C. § 112(a). Lindemann Maschinenfabrik GMBH v. Am. Hoist & Derrick Co., 730 F.2d 1452, 1463 (Fed. Cir. 1984) (citing In re Meyers, 410 F.2d 420, 424 (CCPA 1969) (“[T]he specification need not disclose what is well known in the art”)). E.g., Spec. ¶ 25 (“Network 150 may be of any suitable type”); ¶ 27 (“A peripheral interface, for example, may include the hardware, firmware and/or software that enable(s) communication with various peripheral devices”); ¶ 32 (“the transaction management system 110 may include the memory 230 that includes instructions to enable the processor 210 to execute one or more applications, such as server applications, network communication processes, and any other type of application or software known to be available on computer systems.”); ¶ 34 (“The memory 230 may also include any combination of one or more databases controlled by memory controller devices (e.g., server(s), etc.) or software”); ¶ 37 (“the transaction management system 110 may include any number of hardware and/or software applications that are executed to facilitate any of the operations”); ¶ 44 (“the secondary user is free to transact with any merchant with transactions that are not otherwise limited by the spending limitation”); ¶ 44 (“the spending limitation may be a combination of any of the above-described spending limitations”); ¶ 55 (“These computer-executable program instructions may be loaded onto a general-purpose computer”); ¶ 85 (any known “mobile” or “portable computing” device); ¶¶ 26, 30, 31, Fig. 2 (known and generic (exemplary) processor, memory, and instructions); ¶ 29 (any “mobile network interface … known in the art.”); ¶ 31 (“The processor 210 may be one or more known processing devices”); ¶ 59 (“well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description”); ¶ 17 (“Machine learning models are a unique computer technology that involves training the models to complete tasks, such as labeling, categorizing, and identifying recurring transactions.”). The generic processor, here, appears to perform calculations (functions) that are programmed by software. Spec. ¶ 33, Rep. Claim 1. This is a computer doing what it is designed to do—performing directions it is given to follow.
Regarding the one or more predictive model systems comprising one or more machine learning models, train one or more machine learning models, and retrain the one or more machine learning models, Applicant’s Specification discloses the model may be built with “software known to be available on computer systems.” Spec., ¶ 32. The computer system may be a “general purpose computer.” Id. ¶ 55. Applicant’s Specification discloses the model may be a “machine learning model” and implemented “by feeding the primary user account data and the secondary user account data into a [second] machine learning model to produce a primary user predictive model and a secondary user predictive model.” Id. ¶ 41. Applicant’s Specification and the claims do not describe how the one or more predictive model systems or one or more machine learning models are otherwise developed because these models are trained, retrained, or developed outside the scope of the claimed invention. The models here, like any model, receive input and produce an output. Id. ¶ 41. There is no description of how the “second” ML model is developed (described in Spec., ¶ 41), and its development happens outside the scope of the claims. The same logic applies to the retraining. On this record, one or more predictive model systems comprising one or more machine learning models could be nearly anything, including mental processes. The training of the machine learning models is not described, and “training” is recited in one location. Spec. ¶ 17. NPL Leskovec is prior art and additional evidence of a human’s ability to train, retrain, implement, and use a machine learning model without the aid of a computer or by hand. NPL Leskovec, p. 464–65 (Example 12.1) (cited herein on PTO-892). Accordingly, Examiner finds the one or more predictive model systems comprising one or more machine learning models, train one or more machine learning models, and retrain the one or more machine learning models are generic.
Examiner further finds that Applicant’s own disclosure is sufficient by itself to readily conclude that the additional elements do not individually limit the abstract idea exception and do not amount to a practical application. See also Recentive Analytics, Inc. v. Fox Corp., 2025 U.S.P.Q.2d 628 (Fed. Cir. 2025) (“hold[ing] only that patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101”).
Regarding the graphical user interface, Applicant’s Specification does not otherwise describe it or describes it using exemplary language as part of a general-purpose computer. Spec., ¶¶ 40, 55. Thus, Examiner concludes Applicant intended merely a generic and known graphical user interface, such as a web browser. The claims recite the functions of the interface receiving a first user input (Limitation D), receiving a second user input (Limitation J), and displaying a notification having an option for the first user to manually approve the spending limitation override (Limitation I). These functions merely invoke computers or other machinery in their ordinary capacity to receive, store, display, or transmit data. MPEP § 2106.05(f)(2). The displaying step fails to transform the claims into patent-eligible subject matter, as this is part of the field of use and technical environment in which the abstract idea is being implemented and does not result in an improvement to additional elements, a practical application, or an inventive concept. MPEP § 2106.05(h) (citing Electric Power Group). Further, requiring the use of software to tailor information and provide it to the user on a generic computer also does not provide a practical application or inventive concept. MPEP § 2106.05(f)(2) (citing Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1370-71, 115 USPQ2d 1636, 1642 (Fed. Cir. 2015)).
Limitation A describes the processor executing instructions stored in memory to perform the steps of the claimed invention. This takes a generic piece of hardware and describes the functions of receiving, storing, and sending data (instructions) between the processor and memory, which merely invokes computers or other machinery in their ordinary capacity to receive, store, or transmit data. MPEP § 2106.05(f).
Limitations B–L describe the processor, memory, and instructions, performing the steps of the claimed invention, which represents the abstract idea exception itself. Performing the steps of the abstract idea exception itself simply adds a general-purpose computer after the fact to an abstract idea exception, MPEP § 2106.05(f)(2), or generically recites an effect of the judicial exception. MPEP § 2106.05(f)(3).
Therefore, the claim as a whole, looking at the additional elements individually and in combination, recites no more than mere instructions to apply the exception using generic computer components and does not provide a practical application. MPEP § 2106.05(f). The additional elements do not integrate the abstract idea exception into a practical application because they do not impose any meaningful limits on the abstract idea exception. Accordingly, Rep. Claim 1 is directed to an abstract idea.
Rep. Claim 1 is not substantially different than Independent Claims 12 and 17 and includes all the limitations of Rep. Claim 1. Independent Claims 12 and 17 contain no additional elements. The Claim 12 difference of reciting a first and second predictive model (instead of one or more predictive model systems as recited in Rep. Claim 1) does not distinguish the claims for the same reasons as articulated for the one or more predictive model systems in Rep. Claim 1. What models are claimed and how they are developed are outside the scope of the claims. Therefore, Independent Claims 12 and 17 are also directed to the same abstract idea.
The claims do not provide an inventive concept.
Step 2B: Rep. Claim 1 fails Step 2B because the claim as a whole, looking at the additional elements individually and in combination, is not sufficient to amount to significantly more than the recited judicial exception. As discussed with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer and/or generic computer components. MPEP § 2106.05(f). The same analysis applies here in Step 2B. Mere instructions to apply an exception using a generic computer and/or generic computer components cannot provide an inventive concept. MPEP § 2106.05(I).
The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception.
The pending claims’ combination of additional elements is not inventive. First, the claims are directed to an abstract idea. Second, each additional element represents a currently available generic computer technology, used in the way in which it is commonly used (individually generic). Last, Applicant’s Specification discloses that the combination of additional elements is not inventive. Spec., ¶ 53 (steps/functions may be performed in any order); Spec., ¶¶ 17, 25, 26, 27, 30, 31, 32, 33, 34, 37, 40, 41, 44, 55, 59, 85, Fig. 2 (known and generic (exemplary) computer equipment as explained and cited supra).
Thus, Examiner finds the additional elements of Rep. Claim 1 are elements that have been recognized as well-understood, routine, and conventional (“WURC”) activity in the particular field of this invention based on Applicant’s own disclosure.5 Spec., ¶¶ 25, 26, 27, 30, 31, 32, 33, 34, 37, 40, 41, 44, 55, 59, 85, Fig. 2; MPEP § 2106.05(d). Specifically, Applicant’s Specification discloses the recited additional elements (i.e., the system comprising one or more processors, and memory storing instructions; train one or more machine learning models; one or more predictive model systems comprising the one or more machine learning models; a graphical user interface; a merchant system; a database; and retrain the one or more machine learning models) are limited to generic computer components. These elements do no more than “apply” the recited abstract idea(s) on a known computer (e.g., processor) and computer-related components (e.g., memory storing instructions).
Additionally, generic supervised machine learning steps (forming labeled training data from historical observations, training ML models, adjusting predictive weights, storing labeled data, retraining based on new labels) are well known and taught in NPL Leskovec.
Applying such predictive models to financial transaction data to pre-authorize or score card transactions is conventional in the art, as evidenced by U.S. Pat. No. 6,430,539 (Aug. 6, 2002) (“Predictive Modeling of Consumer Financial Behavior”), which describes building predictive models from historical account/transaction data to predict future spending behavior, using features and labels derived from past transactions, and U.S. Pat. Pub. No. 2010/0280927 (Nov. 4, 2010) (“Pre-authorization of a Transaction Using Predictive Modeling”), which describes using predictive scoring models on transaction data to authorize payment card transactions in real time or near real time.
Using AI/ML in payment transaction processing, including receiving transaction data from merchant systems and generating authorization decisions, is conventional in the art, as evidenced by U.S. Pat. Pub. No. 2018/0183737 (“Processing Payment Transactions Using Artificial Intelligence Messaging Services”).
The Examiner also finds the functions of training, implementing, receiving, identifying, receiving, overriding (transmitting), rejecting (not transmitting), modifying, receiving, storing, retraining, and processing (e.g., performing mathematical operations on) data, described in Limitations A–L, are all normal functions of a generic computer.
There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the additional elements in combination adds nothing that is not already present when looking at the elements individually. Their collective functions merely provide conventional computer implementation of the abstract idea at a high level of generality. Thus, Rep. Claim 1 does not provide an inventive concept.
Rep. Claim 1 is not substantially different than Independent Claims 12 and 17 and includes all the limitations of Rep. Claim 1. Independent Claims 12 and 17 contain no additional elements. Therefore, Independent Claims 12 and 17 also do not recite an inventive concept.
Dependent Claims Not Significantly More
The dependent claims have been given the full two-part analysis including analyzing the additional limitations both individually and in combination. The dependent claim(s) when analyzed both individually and in combination are also held to be patent ineligible under 35 U.S.C. § 101. Dependent claims are dependent on Independent Claims and include all the limitations of the Independent Claims. Therefore, all dependent claims recite the same Abstract Idea. Dependent claims do not contain additional elements that integrate the abstract idea exception into a practical application or recite an inventive concept because the additional elements: (1) are mere instructions to apply the abstract idea exception; and/or (2) further limit the abstract idea exception of the Independent Claims. The abstract idea itself cannot provide the inventive concept or practical application. MPEP §§ 2106.05(I), 2106.04(d)(III).
Dependent Claims 4–11, 13–16, 19, 20, and 25 all recite “wherein” clauses or limitations that further limit the abstract idea of the Independent Claims and contain no additional elements not otherwise analyzed with Rep. Claim 1. New Claim 25 does not recite an additional element because it does not change how the ML model operates or is trained. The decision logic is not altered. Thus, this feature is mere extra-solution or post-solution activity. Once an abstract “approve/deny” decision is made, sending a signal to a merchant system is a conventional way to communicate a result in the payment system arts, not a non-conventional, computer-specific improvement. The timing of the transmission is in furtherance of the abstract idea and forms part of the same abstract idea. An inventive concept or practical application cannot be furnished by an abstract idea exception itself. MPEP §§ 2106.05(I), 2106.04(d)(III).
Dependent Claim 24 recites additional limitations that form part of the same abstract idea exception as recited in Independent Claims and contain no additional elements. An inventive concept or practical application cannot be furnished by an abstract idea exception itself. MPEP §§ 2106.05(I), 2106.04(d)(III).
Conclusion
Claims 1, 4–17, 19, 20, 24, and 25 are therefore drawn to ineligible subject matter as they are directed to an abstract idea without significantly more. The analysis above applies to all statutory categories of invention. As such, the presentment of Rep. Claim 1 otherwise styled as another statutory category is subject to the same analysis.
Examiner Statement of Prior Art—No Prior Art Rejections
Based on the prior art search results, the prior art of record fails to anticipate or render obvious the claimed subject matter of the instant application. While some individual features of Claims 1, 4–17, 19, 20, 24, and 25 may be shown in the prior art of record, no known reference, alone or in combination, teaches or suggests the invention of Claims 1, 4–17, 19, 20, 24, and 25. The prior art references most closely resembling Applicant’s claimed invention are:
Marx et al. (U.S. Pat. Pub. No. 2014/0379576) is pertinent because it discloses that a primary user of a payment account may designate one or more secondary users who may access and utilize the primary user’s payment account. When a secondary user is making a transaction using the payment account, the primary user may be notified in real time.
Zhang et al. (U.S. Pat. Pub. No. 2020/0314101) is pertinent because it discloses machine learning models trained on historical authorization requests and predictive model systems that automatically authorize/decline transactions based on scores versus thresholds.
Unnerstall et al. (U.S. Pat. Pub. No. 2018/0121913) is pertinent because it discloses location sensing and proximity checks using geofencing and whether a transaction satisfies or violates a geofence rule.
Daetz (U.S. Pat. Pub. No. 2019/0385166) is pertinent because it discloses systems and methods to enable a user to set spending limits associated with a payment account, a current balance on the account, and a time period. In the event of a transaction amount above the spending limit, the system may automatically approve or deny if the amount is “below a certain amount (e.g., $10, $100, etc.).” ¶ 70.
Sahni et al. (U.S. Pat. No. 10,445,739) is pertinent because it discloses systems and methods for sharing financial accounts via a mobile wallet system where the mobile wallet system allows for a master wallet associated with a primary account holder to provide limited access to an account of the primary account holder to secondary users. The primary account holder can limit a secondary user's level of access to the funds in the account by establishing spending rules and limits for each secondary user, which limits restrict the secondary users' abilities to spend funds in the account.
FOR: (Canadian Pat. App. No. CA 2790529 A1) is pertinent because it discloses a system and method for managing transaction restrictions for an account in a family of accounts through a network.
NPL: System and Method for Credit Constraints (2004) is pertinent because it discloses a method and system to provide the ability to constrain the usage of an individual's credit card or similar payment device, such as in the parent-child relationship.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES H MILLER whose telephone number is (469) 295-9082. The examiner can normally be reached Monday through Friday, 10:00 a.m. to 4:00 p.m. (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bennett M Sigmond can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES H MILLER/ Primary Examiner, Art Unit 3694
1 Leskovec, Jure, Anand Rajaraman, and Jeffrey David Ullman. "Mining of Massive Datasets." (2019) (“NPL Leskovec”)
2 Statements of intended use fail to limit the scope of the claim under BRI. MPEP § 2103(I)(C).
3 Examiner modified the as-claimed limitation by placing the condition first.
4 “It should be noted that these groupings are not mutually exclusive, i.e., some claims recite limitations that fall within more than one grouping or sub-grouping. … Accordingly, examiners should identify at least one abstract idea grouping, but preferably identify all groupings to the extent possible, if a claim limitation(s) is determined to fall within multiple groupings and proceed with the analysis in Step 2A Prong Two.” MPEP § 2106.04(a).
5 See Changes in Examination Procedure Pertaining to Subject Matter Eligibility, Recent Subject Matter Eligibility Decision (Berkheimer v. HP, Inc.), 3-4, https://www.uspto.gov/sites/default/files/documents/memo-berkheimer-20180419.PDF (April, 18, 2018) (That additional elements are well-understood, routine, or conventional may be supported by various forms of evidence, including "[a] citation to an express statement in the specification or to a statement made by an applicant during prosecution that demonstrates the well-understood, routine, conventional nature of the additional element(s).").