DETAILED ACTION
This Office Action is in response to Applicant's request for continued examination filed 2/10/2026. The amendments to claims 1, 11 and 20 have been entered. Claims 1-20 are pending and have been examined. The rejection and response to arguments are stated below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims fall within at least one of the four categories of patent eligible subject matter because claim 1 is directed to a system, claim 11 is directed to a method, and claim 20 is directed to a non-transitory computer-readable medium (Step 1: yes).
Under Step 2A, prong 1, representative claim 1 recites a series of steps for computing a risk assessment score based on a time series of transactions, i.e. fraud detection for mitigating risk, which is a fundamental economic practice and thus grouped under "Certain Methods of Organizing Human Activity". The claim as a whole, and the limitations in combination, recite this abstract idea. Specifically, the limitations of representative claim 1, reproduced below and considered apart from the additional elements, recite the abstract idea as follows.
1. (Currently Amended) A system, comprising:
a non-transitory memory having instructions stored thereon; and
at least one processor operatively coupled to the non-transitory memory, wherein the instructions, when executed, cause the at least one processor to:
receive, from a database, labelled training data, wherein the labelled training data comprises a plurality of transactions, and wherein each of the plurality of transactions is labelled as fraudulent or non-fraudulent; and
train a plurality of machine learning models based on the labelled training data and generate corresponding risk scores, wherein each of the plurality of machine learning models have corresponding hyperparameters comprising a threshold that each respective machine learning model uses to detect whether a transaction is fraudulent or not;
generate impact scores based on a comparison of the risk scores generated by the plurality of machine learning models and their corresponding thresholds;
receive, from a computing device and over a network, a risk assessment request regarding a user device associated with a transaction;
receive, from the database, a time series of transactions associated with the user device;
generate sequence data based on the time series of transactions associated with the user device;
select at least one machine learning model from the plurality of machine learning models based on the impact scores;
compute, using the at least one machine learning model, risk score data of the user device based on the sequence data, wherein the risk score data comprises a time series of risk scores each corresponding to a respective transaction in the time series of transactions and a trend status of the user device, the trend status computed based on data points of the time series of risk scores; and
transmit, in response to the risk assessment request and over the network, the risk score data of the user device to the computing device to process the transaction in accordance with the risk score data.
The claimed limitations, identified above, recite a process that, under its broadest reasonable interpretation, covers performance of a fundamental economic practice but for the recitation of generic computer components. That is, other than the mere nominal recitation of at least one processor operatively coupled to the non-transitory memory, a computing device, a network, a database, a user device, and at least one machine learning model selected from a plurality of machine learning models, i.e. an iterative mathematical algorithm for computing values as programmed to do (see paragraph [0040]), in claim 1 and similarly in claims 11 and 20, there is nothing in the claim elements which takes the steps out of the methods of organizing human activity abstract idea grouping. Furthermore, the computing falls under mathematical concepts, as the machine learning is a programmed algorithm leveraged for producing a value. Also, the specification discloses equations that are used for computing scores; see at least [0109] and [0112]. Thus, claim 1 recites an abstract idea, as do claims 11 and 20.
Under Step 2A, prong 2, this judicial exception is not integrated into a practical application. In particular, the claim only recites using generic, commercially available, off-the-shelf computing devices, i.e. processors suitably programmed for computing and communicating over a generic network, to perform the abstract idea steps. The computer components are recited at a high level of generality (i.e., as generic processors with memory suitably programmed communicating information over a generic network; see at least FIG. 1 and FIG. 2 and at least paragraphs [0026], [0030-0037], [0040] and [0045-0052] of the specification) such that the claim amounts to no more than adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform the abstract idea; see MPEP 2106.05(f). Accordingly, the additional elements claimed do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea, as are claims 11 and 20.
Under Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using generic computer processors with memory suitably programmed communicating over a generic network to perform the limitation steps amount to no more than adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform the abstract idea; see MPEP 2106.05(f). Mere instructions to apply an exception using generic computer components interacting in a conventional manner cannot provide an inventive concept. Claims 1, 11 and 20 are not patent eligible.
For instance, in the process of claim 1, the limitation steps, claimed at a high level of generality, are considered mere instructions to apply an exception, akin to a commonplace business method or mathematical algorithm applied on a general purpose computer; see Alice Corp. Pty. Ltd. v. CLS Bank Int'l, Gottschalk v. Benson, and Versata Dev. Group, Inc. v. SAP Am., Inc.; see MPEP 2106.05(f)(2).
Applicant has leveraged generic computing elements to perform the abstract idea of computing a risk assessment score based on a time series of transactions, i.e. fraud detection for mitigating risk, without significantly more.
Dependent claims 2-10 and 12-19, when analyzed as a whole and as an ordered combination, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea, as detailed below. The additional recited limitations in the dependent claims only refine the abstract idea.
For instance, claims 2 and 12 further refine the abstract idea by defining each risk score among the time series of scores as a probability of a transaction being fraudulent. These are values, i.e. numbers, that have been calculated or determined. Claims 3 and 13 further refine the abstract idea by merely describing the risk score data as comprising coefficients that make up a mathematical curve of points; the trend is determined based on the calculated curve, which can be performed through mental observation. Claims 4 and 14 merely define types of data found in the transaction data to be applied in performing the abstract idea. Claims 5 and 15 recite steps, at a very high level of generality, for categorizing obtained data from various users to train the at least one machine learning model. There are no technical implementation details as to how the training occurs other than what is known in the field. The generic devices and machine learning model are additional elements and do not provide a practical application or significantly more than the abstract idea. Claims 6, 7, 8, 16 and 17 recite using mathematically related algorithmic steps to process and generate data, which refines the abstract idea. Claims 9, 10, 18 and 19 further refine the abstract idea by describing how machine learning models are optimized and selected to be applied to the abstract idea.
Clearly, the additional recited limitations in the dependent claims only refine the abstract idea further. Further refinement of an abstract idea does not convert an abstract idea into something concrete.
The claims merely amount to the application or instructions to apply the abstract idea (i.e. a series of steps for computing a risk assessment score based on a time series of transactions, i.e. fraud detection for mitigating risk) on one or more computers, and are considered to amount to nothing more than requiring a generic computer system (e.g. processors suitably programmed and communicating over a network) to merely carry out the abstract idea itself. As such, the claims, when considered as a whole, are nothing more than the instruction to implement the abstract idea (i.e. a series of steps for computing a risk assessment score based on a time series of transactions, i.e. fraud detection for mitigating risk) in a particular, albeit well-understood, routine and conventional technological environment.
Accordingly, the Examiner concludes that there are no meaningful limitations in the claims that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself or integrate the judicial exception into a practical application.
Response to Arguments
Applicant’s arguments with respect to the 35 U.S.C. 101 rejection of claims 1-20 filed in the Remarks of 2/10/2026 have been fully considered but they are not persuasive.
On page 16 of the remarks Applicant argues ‘“Here, among other features, Applicant's independent claim 1 recites "labelled training data compris[ing] a plurality of transactions, [where] each of the plurality of transactions is labelled as fraudulent or non-fraudulent," and "train[ing] a plurality of machine learning models based on the labelled training data and generat[ing] corresponding risk scores, wherein each of the plurality of machine learning models have corresponding hyperparameters comprising a threshold that each respective machine learning model uses to detect whether a transaction is fraudulent or not." As such, claim 1 includes the training of specific machine learning models with hyperparameters that include a threshold that each respective machine learning model uses to detect whether a transaction is fraudulent or not. Thus, in accordance with the Desjardins Appeals Review Panel Decision, which the USPTO designated precedential on November 4, 2025, claim 1 recites subject matter that improves computer functionality, and thus is not directed to an abstract idea.”’ Examiner respectfully disagrees.
Applicant's limitations of training machine learning models using specific data, hyperparameters and thresholds to produce an outcome, e.g. a value, do not improve the machine learning model itself. The training steps are claimed at a very high level of generality; this is how any model is trained to produce an iterative algorithmic outcome used to improve the abstract idea. This is different from the Desjardins Appeals Review Panel Decision, in which "…the claims were directed to training a machine learning model on multiple tasks, while preserving prior tasks performed, which properly integrated an otherwise abstract idea into a practical application;…" and which credited '"…the claims for improving the functioning of the machine learning model itself, citing reduced storage requirements, lowered system complexity, and the prevention of "catastrophic forgetting."'" Applicant, by contrast, merely claims training a plurality of machine learning models at a very high level of generality, from which a machine learning model is selected based on calculated scores compared to known thresholds, and claims that the outcome of the computing produces a score used to determine possible fraud. Applicant's claims are similar to Example 47, claim 2.
On page 17 of the Remarks Applicant argues ‘“Here, rather than "merely claiming the idea of a "fundamental economic process," the claims recite how the machine learning models are trained, and how the trained machine learning models are used to generate the risk score data. Indeed, rather than merely reciting elements that are "insignificant extra-solution activity" recited "at a high levels of generality" and that merely perform a fundamental economic process, the claims, at least as amended, provide several technical benefits.
For instance, the claimed subject matter includes the training of machine learning models that include, as a hyperparameter, a threshold that each respective machine learning model uses to detect whether a transaction is fraudulent or not. From the trained machine learning models, one is selected based on generates impact scores. The selected trained machine learning model is then used to compute risk score data that can be used for allowing, or disallowing a transaction in real-time. See Specification, paras. [0094], [0115].2 As such, the claimed subject matter can allow for the real-time detection of fraudulent transactions, thereby preventing those transactions from fulfilling, which can reduce time and costs associated with tracking down what would otherwise be allowed fraudulent transactions. See, e.g., id., paras. [0003], [0026]-[0028].”’ On page 18, Applicant argues ‘“Similarly, the claims here recite the features of "labelled training data compris[ing] a plurality of transactions, [where] each of the plurality of transactions is labelled as fraudulent or non-fraudulent," and "train[ing] a plurality of machine learning models based on the labelled training data and generat[ing] corresponding risk scores, wherein each of the plurality of machine learning models have corresponding hyperparameters comprising a threshold that each respective machine learning model uses to detect whether a transaction is fraudulent or not," thereby reciting an improvement in the training of machine learning models to generate risk score data, and thus are similarly patent eligible.”’ Examiner respectfully disagrees.
Labelling training data based on all sequence data, and generating labels to indicate a time series of transactions, is part of the abstract idea. Categorizing and organizing data/information is an abstract idea in and of itself. Determining and ranking models based on impact scores, i.e. values, to select the optimal model is also part of the abstract idea. These steps do not provide any technical implementation details; a human being could carry out these determining, ranking and selecting steps to improve the abstract idea. Furthermore, there is no way of measuring any decrease of cost or time through implementing the instant process steps; no concrete values support this assertion. The "real-time" argument for solving the abstract idea of detecting fraudulent transactions relies on leveraging generic computer processors with memory programmed to carry out the abstract idea steps.
Under Prong Two of Step 2A, and considering the October 2019 Update: Subject Matter Eligibility: "As explained in the 2019 PEG, the evaluation of Prong Two requires the use of the considerations (e.g. improving technology, effecting a particular treatment or prophylaxis, implementing with a particular machine, etc.) identified by the Supreme Court and the Federal Circuit, to ensure that the claim as a whole 'integrates [the] judicial exception into a practical application [that] will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception.' These considerations are set forth in the 2019 PEG, MPEP 2106.05(a) through (c), and MPEP 2106.05(e) through (h). Note, a specific way of achieving a result is not a stand-alone consideration in Step 2A Prong Two." (emphasis added). The question is: do the claim limitations in combination impose a meaningful limit on the judicial exception? The answer is no in the instant case because the claim limitations, as a whole and in combination, do not integrate the abstract idea of computing a risk assessment score based on a time series of transactions, i.e. fraud detection for mitigating risk, into a practical application. The invention relies on generic, commercially available, off-the-shelf processors to apply the steps of the abstract idea to solve a business problem relating to fraud detection. There is nothing in the claims which recites improvements to the functioning of the computing elements, the technology leveraged to carry out the judicial exception, or the machine learning training models; no application or use to effect a particular treatment or prophylaxis; no use of a particular machine; no transformation or reduction of a particular article to a different state or thing; and no application of the judicial exception in some meaningful way beyond generally linking the judicial exception to a particular technological environment, such that the claim is no more than a drafting effort designed to monopolize the exception. Any improvements realized by Applicant's invention are solely in the abstract idea itself, which does not make the claimed limitations patent eligible under 35 U.S.C. 101. An improved abstract idea is still abstract, as pointed out in SAP America v. Investpic at *2-3: "We may assume that the techniques claimed are 'groundbreaking, innovative, or even brilliant,' but that is not enough for eligibility. Association for Molecular Pathology v. Myriad Genetics, Inc., 569 U.S. 576, 591 (2013); accord buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1352 (Fed. Cir. 2014). Nor is it enough for subject-matter eligibility that claimed techniques be novel and nonobvious in light of prior art, passing muster under 35 U.S.C. §§ 102 and 103. See Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 89-90 (2012); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016) ('[A] claim for a new abstract idea is still an abstract idea.')." As such, Examiner maintains that the instant invention does not integrate the abstract idea into a practical application.
On page 19 of the Remarks, Applicant argues ‘“Here, at least as amended, the claims recite significantly more than any "fundamental economic practice." Indeed, the claims recite "specific limitation[s] other than what is well-understood, routine, conventional activity in the field," and "add[] unconventional steps that confine the claim to a particular useful application," as the prior art fails to teach or suggest the claimed subject matter. See M.P.E.P. § 2106.05; see also Final Office Action (lack of 35 U.S.C. § 102 and 35 U.S.C. § 103 rejections). Moreover, the claimed subject matter is confined to a "particular useful application" that provides for the training of machine learning models that include, as a hyperparameter, a threshold that each respective machine learning model uses to detect whether a transaction is fraudulent or not, as well as the generation of risk score data that can be used to detect, in real-time, fraudulent transactions, thereby preventing those transactions from fulfilling, among other benefits, as described above.”’ Examiner respectfully disagrees.
Examiner would like to point out that the criteria for rejections under 35 U.S.C. 102/103 are different from the criteria for rejections under 35 U.S.C. 101. The ground of rejection under 35 U.S.C. 101 is based on the interpretation of abstract ideas provided by the Alice decision. It is enough to recognize that there is no meaningful distinction between the concept of risk hedging in Bilski, the concept of intermediated settlement in Alice, and the concept of fraud detection through data analysis at issue here. All of these fall squarely within the realm of abstract ideas as interpreted in the Alice decision. Also, as was pointed out in Ultramercial, the addition of merely novel or non-routine components to the claimed idea does not necessarily turn an abstraction into something concrete (see Ultramercial, Inc. v. Hulu, LLC, _ F.3d _, 2014 WL 5904902 (Fed. Cir. Nov. 14, 2014)). The rejections under 35 U.S.C. 102/103, on the other hand, are based on prior art; the presence of novel or non-obvious components (with respect to the prior art of record) in a claim makes the claim allowable over the prior art, but does not bear on eligibility under 35 U.S.C. 101.
The abstract idea steps may improve the abstract idea of mitigating risk through fraud detection, but an improvement in an abstract idea is not eligible under 35 U.S.C. 101. The steps of the claims, taken individually or as an ordered combination, have already been identified in the rejection above as corresponding to abstract ideas. The additional elements in the claims are at least one processor operatively coupled to the non-transitory memory, a computing device, a network, a database, a user device, and at least one machine learning model selected from a plurality of machine learning models, i.e. an iterative mathematical algorithm for computing values as programmed to do (see paragraph [0040]), to execute the claimed steps. Applicant's specification at paragraph [0031] discloses "…each of the fraud risk computing device 102 and the processing device(s) 120 can be a computer, a workstation, a laptop, a server such as a cloud-based server, or any other suitable device." Paragraph [0032] discloses "…each of the multiple user computing devices 110, 112, 114 can be a cellular phone, a smart phone, a tablet, a personal assistant device, a voice assistant device, a digital assistant, a laptop, a computer, a laser-based code scanner, or any other suitable device." Paragraph [0040] discloses "…the fraud risk computing device 102 may execute one or more models (e.g., programs or algorithms), such as a machine learning model, deep learning model, statistical model, etc., to generate risk score data for a user device."; see also paragraphs [0046-0050]. As such, the claims at issue do not require any non-conventional computer, network, or other components, or even a non-conventional and non-generic arrangement of known, conventional pieces, but merely call for performance of the claimed functions on a set of generic computer components. The elements of the instant process, when taken alone, each execute in a manner conventionally expected of these elements. The elements of the instant underlying process, when taken in combination, together do not offer substantially more than the sum of the functions of the elements when each is taken alone.
For these reasons and those stated in the rejection above, rejection of claims 1-20 under 35 U.S.C. 101 is maintained by the Examiner.
Conclusion
The prior art made of record and not relied upon, considered pertinent to Applicant's disclosure, is listed on the enclosed PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER J BRIDGES whose telephone number is (571)270-5451. The examiner can normally be reached 7:00am-3:30pm M-F EDT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mike Anderson can be reached at 571-270-0508. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER BRIDGES/Primary Examiner, Art Unit 3693