Prosecution Insights
Last updated: April 19, 2026
Application No. 18/739,845

DATA CLASSIFICATION FOR FRAUD DETECTION

Final Rejection (§101, §103)

Filed: Jun 11, 2024
Examiner: JUNG, HENRY H
Art Unit: 3695
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Wells Fargo Bank N A
OA Round: 2 (Final)

Grant Probability: 24% (At Risk)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 55%

Examiner Intelligence

Career Allow Rate: 24% (grants only 24% of cases; 25 granted / 104 resolved; -28.0% vs TC avg)
Interview Lift: +31.1% (strong lift, with vs. without interview, among resolved cases with an interview)
Typical Timeline: 3y 6m avg prosecution; 30 currently pending
Career History: 134 total applications across all art units

Statute-Specific Performance

§101: 37.2% (-2.8% vs TC avg)
§103: 37.4% (-2.6% vs TC avg)
§102: 5.7% (-34.3% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 104 resolved cases.
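As an editorial sanity check on the figures above (the variable names and rounding below are mine, not the dashboard's), the "vs TC avg" deltas and the interview lift compose by simple arithmetic:

```python
# Reconstructing the implied Tech Center averages from the per-statute deltas,
# and the with-interview grant probability from the base rate plus lift.
# (Field names and rounding are editorial assumptions for illustration.)
examiner_rate = {"101": 37.2, "103": 37.4, "102": 5.7, "112": 10.9}
delta_vs_tc = {"101": -2.8, "103": -2.6, "102": -34.3, "112": -29.1}

# examiner rate minus delta recovers the baseline each delta was taken against
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}

career_allow = 24.0        # career allow rate, percent
interview_lift = 31.1      # stated interview lift, percentage points
with_interview = round(career_allow + interview_lift)   # dashboard shows 55%
```

All four statutes imply the same 40.0% baseline, which suggests the dashboard compares against a single Tech Center average estimate; likewise, 25/104 ≈ 24% matches the career allow rate, and 24% + 31.1% rounds to the 55% with-interview figure.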

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Application

Claims 1-6 and 11-16 have been examined in this application. The filing date of the above-identified application is 11 June 2024. No priority has been claimed in the Application Data Sheet; the examination is therefore undertaken with the effective filing date as the priority date. No additional information disclosure statement (IDS) has been filed to date.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6 and 11-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims are directed to an abstract idea: a Mental Process, Mathematical Concepts, and/or Certain Methods of Organizing Human Activity. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional computer elements, which are recited at a high level of generality, provide conventional computer functions that do not add meaningful limits to practicing the abstract idea.
As per Claims 1 and 11, the claims recite “a method for classifying financial data as fraudulent, comprising: generating a fraud detection model, the fraud detection model including a first number of layers; receiving a financial data set associated with an organization; … selecting optimal attributes of the financial data set using an optimization algorithm to extract optimal features required to classify the financial data set; dynamically changing, based on a volume of data of the financial data set, the first number of layers of the fraud detection model to a second number of layers of the fraud detection model while training the fraud detection model with the financial data set and the optimal features, using a feedback loop to enhance efficiency of the fraud detection model, wherein the feedback loop is configured to iteratively compare outputs to inputs comprising known good or bad statements to determine when classification by the fraud detection model provides a desired level of accuracy as measured by a training loss value and a validation loss value, with the training loss value providing a difference between model predictions and actual data labels, and the validation loss value that measures performance of the fraud detection model on validation data not used for training, and wherein the fraud detection model comprises a multi-layer neural network architecture with multiple convolutional layers for feature extraction and pattern recognition, at least one pooling layer for dimensionality reduction and feature aggregation, and at least one average pooling layer for spatial averaging and feature summarization; and classifying the financial data set to indicate fraud by executing the fraud detection model in the second number of layers using the optimal features.”

Considering these limitations without the additional elements (e.g., computer, processor, etc.), under their broadest reasonable interpretation (BRI) the claims recite a Mental Process.
The method recited above is a process of receiving data, selecting data using an algorithm, using a model to analyze data, updating the model using a feedback loop and layers, and classifying data using the model. All of these steps can be practically performed in the human mind, or by a human using a pen and paper. See MPEP 2106.04(III)(A): “In contrast, claims do recite a mental process when they contain limitations that can practically be performed in the human mind, including for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include:

• a claim to "collecting information, analyzing it, and displaying certain results of the collection and analysis," where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016);
• claims to "comparing BRCA sequences and determining the existence of alterations," where the claims cover any way of comparing BRCA sequences such that the comparison steps can practically be performed in the human mind, University of Utah Research Foundation v. Ambry Genetics, 774 F.3d 755, 763, 113 USPQ2d 1241, 1246 (Fed. Cir. 2014);
• a claim to collecting and comparing known information (claim 1), which are steps that can be practically performed in the human mind, Classen Immunotherapies, Inc. v. Biogen IDEC, 659 F.3d 1057, 1067, 100 USPQ2d 1492, 1500 (Fed. Cir. 2011); and
• a claim to identifying head shape and applying hair designs, which is a process that can be practically performed in the human mind, In re Brown, 645 Fed. App'x 1014, 1016-17 (Fed. Cir. 2016) (non-precedential).”

Although the claims may recite using a computer to receive and store data, performing a mental process on a generic computer still recites a mental process.
See MPEP 2106.04(III)(C): “Claims can recite a mental process even if they are claimed as being performed on a computer. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures "can be carried out in existing computers long in use, no new machinery being necessary." 409 U.S. at 67, 175 USPQ at 675. See also Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699 (concluding that concept of "anonymous loan shopping" recited in a computer system claim is an abstract idea because it could be "performed by humans without a computer").” Therefore, the claims recite an abstract idea, a mental process.

Additionally, the claims recite Mathematical Concepts: the method involves calculations using an algorithm (e.g., an equation) and classifies data using a model. The claim limitations associated with performing data analysis using an algorithm and a model, and with updating the model, may be interpreted as mathematical calculations under BRI. Therefore, the claims recite an abstract idea, mathematical concepts.

Additionally, the limitations recited above, under BRI, recite Certain Methods of Organizing Human Activity, specifically fundamental economic principles or practices and/or commercial or legal interactions. The method is a process of receiving, analyzing, and classifying financial data, where the goal is to reduce fraud (e.g., to mitigate risk) associated with the financial data, which is a fundamental economic principle or practice. The method may involve various interactions to receive and store information from the organization, including the financial data set, which is a commercial or legal interaction. Therefore, the claims recite an abstract idea, certain methods of organizing human activity.
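Purely as an editorial illustration of the feedback-loop limitation quoted in the claims (iterate, measure a training loss against labeled good/bad examples and a validation loss against held-out data, and stop at a desired accuracy), the loop can be sketched as follows. The toy linear scorer, function names, and learning rate are assumptions; the application's actual model is a convolutional neural network, which this sketch does not reproduce.

```python
# Editorial sketch only: a toy stand-in for the claimed feedback loop.
# The claim's model is a CNN; a linear scorer keeps the loop visible.
def train_with_feedback(weights, train_set, val_set, lr=0.1, tol=0.05, max_iter=1000):
    def predict(w, x):
        # Stand-in for "executing the fraud detection model"
        return sum(wi * xi for wi, xi in zip(w, x))

    def loss(w, data):
        # Mean squared difference between model predictions and data labels
        return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

    train_loss = val_loss = float("inf")
    for _ in range(max_iter):
        train_loss = loss(weights, train_set)   # "training loss value"
        val_loss = loss(weights, val_set)       # "validation loss value" on held-out data
        if train_loss < tol and val_loss < tol:
            break                               # desired level of accuracy reached
        # Feedback step: nudge weights against the training error (plain gradient descent)
        grad = [0.0] * len(weights)
        for x, y in train_set:
            err = predict(weights, x) - y
            for i, xi in enumerate(x):
                grad[i] += 2.0 * err * xi / len(train_set)
        weights = [w - lr * g for w, g in zip(weights, grad)]
    return weights, train_loss, val_loss
```

On a trivial dataset where the label equals the single feature, the loop stops once both losses fall below `tol`, mirroring the claim's "desired level of accuracy" criterion.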
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of a “computer system”, “processor”, and “non-transitory computer-readable storage media” to perform the method recited above, instructing that the abstract idea be performed “by” these generic computer components. These generic computer components are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer system, as disclosed by the Specification: [0017] “Each of the devices may be implemented as one or more computing devices with at least one processor and memory. Example computing devices include a mobile computer, a desktop computer, a server computer, or other computing device or devices such as a server farm or cloud computing used to generate or receive data”; and [0043] “Computer-readable data storage media include volatile and non-volatile, removable, and non-removable media implemented in any method or technology for storage of information such as computer-readable software instructions, data structures, program modules, or other data”. These additional elements are generic, off-the-shelf components available to the public; they do not require any specialized hardware or equipment to perform the claimed method, but are merely applied to perform their basic functionalities, such as receiving data, selecting data, determining data, and classifying data. Mere instructions to implement the abstract idea on a computer, or merely using a computer as a tool to perform the abstract idea (i.e., mere “apply it”), is not indicative of integration into a practical application; see MPEP 2106.05(f).
Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, when analyzed as a whole, considering the additional elements individually and/or as an ordered combination, the additional element of using a computer-based system is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer system. The claims lack sufficient technical detail showing how these limitations provide technological steps, or how the method is particularly implemented on a computer to improve the computer system or any of its underlying hardware or components (e.g., how it is performed on the computer, how it could improve the computer itself, how it could cause the computer to function in a specific way beyond its generic functionality, and/or how it could improve any of the underlying technology); the claims merely apply the generic computer system to perform its generic functionalities, such as receiving and analyzing data. Merely using the generic computer system as a tool to perform the abstract idea (i.e., mere “apply it”) is not indicative of an inventive concept (i.e., “significantly more”). As disclosed above, the judicial exception is not applied with or used by a particular machine. As held in Parker v. Flook, 437 U.S. 584, 590, 198 USPQ 193, 199 (1978) and Bancorp Services v. Sun Life, 687 F.3d 1266, 1276, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012), “the routine use of a computer to perform calculations cannot turn an otherwise ineligible mathematical formula or law of nature into patentable subject matter.” The claims are not patent eligible.

Regarding the dependent claims, they are still directed to an abstract idea without significantly more.

Claims 2 and 12 recite “wherein the financial data set includes financial statements, balance sheets, and income/profit/loss statements associated with the organization.” The claims provide further details regarding the data, which is still part of the abstract idea, and mere “apply it” is not indicative of integration into a practical application.

Claims 3 and 13 recite “wherein the optimal attributes are variables associated with the financial data set.” The claims provide further details regarding the data, which is still part of the abstract idea, and mere “apply it” is not indicative of integration into a practical application.
Claims 4 and 14 recite “wherein the optimal features are predictors associated with classification of the financial data set.” The claims provide further details regarding the data, which is still part of the abstract idea, and mere “apply it” is not indicative of integration into a practical application.

Claims 5 and 15 recite “comprising further instructions which, when executed by the one or more processors, causes the computer system to classify the financial data set into good, manipulated, and bad buckets.” The claims provide further steps of classifying data, which is still part of the abstract idea, and mere “apply it” is not indicative of integration into a practical application.

Claims 6 and 16 recite “wherein the optimization algorithm is a Particle Swarm Optimization algorithm.” The claims provide further details regarding the algorithm, which is still part of the abstract idea, and mere “apply it” is not indicative of integration into a practical application.

These additional limitations of each claim fail to remedy the deficiencies of their parent claims because they merely further limit the rules used to conduct the previously recited abstract idea, and they are therefore rejected under at least the same rationale as applied to their parent claims above. Claims 2-6 and 12-16, when analyzed as a whole, considering the additional elements individually and/or as an ordered combination, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims integrate the abstract idea into a practical application, and they do not amount to significantly more than the judicial exception. As with the independent claims, each claim recites using a generic computer component to perform the abstract idea as discussed above. Merely using the generic computer system as a tool to perform the abstract idea (“apply it”) is not indicative of an inventive concept (“significantly more”).
Therefore, the Prong Two and Step 2B analyses are similar to those above, and these claims are not eligible. Accordingly, Claims 1-6 and 11-16 are not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-4, 11, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over BEN-OR et al. (US 20190354993 A1) in view of Singh et al.
(US 20230370408 A1), and in view of Yin et al. (US 20200242382 A1). As per Claims 1 and 11, BEN-OR discloses a computer system for classifying financial data as fraudulent, comprising: one or more processors; and non-transitory computer-readable storage media encoding instructions which, when executed by the one or more processors (See Figure 6, as disclosed [0087] “Reference is made to FIG. 6, showing a high-level block diagram of an exemplary computing device according to some embodiments of the present invention. Computing device 600 may include a controller 605 that may be, for example, a central processing unit processor (CPU), a graphics processing unit (GPU), a chip or any suitable computing or computational device, an operating system 615, a memory 620, executable code 625, storage or storage device 630, input devices 635 and output devices 640”), causes the computer system to: … receive a financial data set associated with an organization (See Figure 1 – operation 102, as disclosed [0050] “In operation 102 input data describing an event of interest may be obtained. The input data may be provided at a specific format e.g., data structure” wherein [0031] “An event of interest, also referred to herein as a case, may be, for example, a real-world (physical) event which may be represented by data, such as a person conducting a financial transaction. 
The data may be any type of data that may be described by a set of entities”); automatically select optimal attributes of the financial data set using an optimization algorithm to extract optimal features required to classify the financial data set (See Figure 1 – operation 104, as disclosed [0051] “In operation 104 the input data may be transformed or changed into a property graph representing a model of a network … The process of transforming the input data into a property graph may be referred to herein as mapping”); dynamically change, based on a volume of data of the financial data set, the first number of layers of the fraud detection model to a second number of layers of the fraud detection model while training the fraud detection model with the financial data set and the optimal features … (See Figure 5A – block 530, as disclosed [0086] “In operation 530 an ML classifier, or other type of an ML model may be trained, e.g., to detect one or more types of fraudulent events using real-world datasets and case-based datasets, as indicated by blocks 510 and 520, respectively” or see also [0043] “The generated case-based datasets may be used as inputs for training ML classifiers, together with real-world datasets related to normal behavior, thus providing balanced datasets. Training an ML model with balanced datasets typically results in a classifier that is sensitive to the event of interest. 
Thus, the generated case-based datasets may be used by financial institutions, governmental agencies, the police, etc., to train ML classifiers for detecting fraudulent events”); and classify the financial data set to indicate fraud by executing the fraud detection model in the second number of layers using the optimal features (See Figure 5A – block 540, as disclosed [0086] “In operation 540 the ML model or classifier may be used for analyzing real world data for detecting the same types of fraudulent events the ML classifier was trained for” or see also [0048] “A risk score or rating may be provided to describe for example the likelihood or severity of fraud occurring with an entity or sub-network. The risk score may be produced or based, at least in part, on expert models or predictive models known in the art. These models may use different algorithms to predict or classify events based on historical data or analysis” which is disclosed in Figure 1 – operation 106).

Although BEN-OR teaches training the fraud detection model, it does not appear to explicitly disclose utilizing the claimed feedback loop.
However, Singh discloses: dynamically change, based on a volume of data of the financial data set, the first number of layers of the fraud detection model to a second number of layers of the fraud detection model while training the fraud detection model with the financial data set and the optimal features, using a feedback loop to enhance efficiency of the fraud detection model, wherein the feedback loop is configured to iteratively compare outputs to inputs comprising known good or bad statements to determine when classification by the fraud detection model provides a desired level of accuracy as measured by a training loss value and a validation loss value, with the training loss value providing a difference between model predictions and actual data labels, and the validation loss value that measures performance of the fraud detection model on validation data not used for training … ([0119] “In some cases, the messaging system 106 can utilize the determined loss value to learn parameters of the machine learning model (e.g., via backpropagation). For example, the messaging system 106 can iteratively utilize a machine learning model with a training data set to determine predicted content-graphical element recommendations and compare the predictions to ground truth selections to determine a loss value. Moreover, the messaging system 106 can utilize the iterative loss value to iteratively learn parameters (e.g., as a feedback loop) of the machine learning model until a desired level of accuracy is achieved (e.g., satisfying an error threshold by being less than or equal to the error threshold). In one or more embodiments, the messaging system 106 utilizes various loss functions (or loss values). 
For example, the messaging system 106 can utilize loss functions (or loss values), such as, but not limited to mean square error, quadratic loss, L2 loss, squared error loss, and/or absolute error loss”).

It would have been obvious to one of ordinary skill in the art at the time of the invention to utilize a feedback loop to train the machine learning model as in Singh in the system executing the method of BEN-OR, with the motivation, as taught by Singh at [0036]-[0038], of improving accuracy, efficiency, and flexibility over that of BEN-OR.

Although BEN-OR teaches training the fraud detection model, it does not appear to explicitly disclose utilizing the claimed multi-layer neural network architecture. However, Yin discloses: generate a fraud detection model, the fraud detection model including a first number of layers ([0033] “As shown in FIG. 1, the convolutional neural network 100 may include a plurality of convolutional layers 101-1˜101-N and a second fully-connected layer 102”); … dynamically change, based on a volume of data of the financial data set, the first number of layers of the fraud detection model to a second number of layers of the fraud detection model while training the fraud detection model with the financial data set and the optimal features, … wherein the fraud detection model comprises a multi-layer neural network architecture with multiple convolutional layers for feature extraction and pattern recognition, at least one pooling layer for dimensionality reduction and feature aggregation, and at least one average pooling layer for spatial averaging and feature summarization ([0008] “According to a first aspect of the embodiments of this disclosure, there is provided a deep learning model used for driving behavior recognition, including: a convolutional neural network configured to extract features of a plurality of temporally consecutive input images … a recursive neural network configured to perform temporal and spatial fusion on the
features extracted by the convolutional neural network; a first fully-connected layer configured to perform dimension reduction on an output result of the recursive neural network, and output a plurality of groups of class features corresponding to the plurality of input images; and a probability layer configured to determine and output probabilities of classes of driving behaviors of a user driving the vehicle according to the plurality of groups of class features outputted by the first fully-connected layer”).

It would have been obvious to one of ordinary skill in the art at the time of the invention to utilize the multi-layer neural network architecture to train the machine learning model as in Yin in the system executing the method of BEN-OR, with the motivation, as taught by Yin at [0003]-[0006], of improving the accuracy of the models over that of BEN-OR.

As per claims 3 and 13, BEN-OR teaches the computer system of claim 1, and the method of claim 11, wherein the optimal attributes are variables associated with the financial data set ([0051] “In operation 104 the input data may be transformed or changed into a property graph representing a model of a network … The process of transforming the input data into a property graph may be referred to herein as mapping”).

As per claims 4 and 14, BEN-OR teaches the computer system of claim 3, and the method of claim 13, wherein the optimal features are predictors associated with classification of the financial data set ([0061] “In operation 106 a score or rating of the first network may be calculated … In some embodiments the score may be a risk score” wherein [0048] “The risk score may be produced or based, at least in part, on expert models or predictive models known in the art. These models may use different algorithms to predict or classify events based on historical data or analysis”).

Claims 2 and 12 are rejected under 35 U.S.C.
103 as being unpatentable over BEN-OR, in view of Singh, in view of Yin, and in view of WANG (CN 116993489 A).

As per claims 2 and 12, BEN-OR may not explicitly disclose, but WANG teaches the computer system of claim 1, and the method of claim 11, wherein the financial data set includes financial statements, balance sheets, and income/profit/loss statements associated with the organization ([Page 6 Lines 13-15] “obtaining the financial data of the listed company, the financial data of the listed company specifically comprises: financial data of balance sheet, profit sheet and cash flow sheet in the financial statement database”). It would have been obvious to one of ordinary skill in the art at the time of the invention to utilize a financial data set including such data as in WANG in the system executing the method of BEN-OR, with the motivation, as taught by WANG at Page 6, of improving the accuracy of the financial fraud identification model over that of BEN-OR.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over BEN-OR, in view of Singh, in view of Yin, and in view of Stevens et al. (US 20210157858 A1).

As per claims 5 and 15, BEN-OR may not explicitly disclose, but Stevens teaches the computer system of claim 1, and the method of claim 11, comprising further instructions which, when executed by the one or more processors, causes the computer system to classify the financial data set into good, manipulated, and bad buckets ([1226] “(ii) classifying the first data snippet as positive, negative, or unknown in accordance with the comparison of the first data snippet with the second training set”).
It would have been obvious to one of ordinary skill in the art at the time of the invention to classify the data set into three different buckets as in Stevens in the system executing the method of BEN-OR, with the motivation, as taught by Stevens, of improving techniques for generating models that produce results more quickly and more accurately than conventional methods, over that of BEN-OR.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over BEN-OR, in view of Singh, in view of Yin, and in view of Patten, Jr. et al. (US 20210241279 A1).

As per claims 6 and 16, BEN-OR may not explicitly disclose, but Patten, Jr. teaches the computer system of claim 1, and the method of claim 11, wherein the optimization algorithm is a Particle Swarm Optimization algorithm ([0045] “At step 312, a selected set of analytical models are used to perform data analysis on the clustered data … Analytical models are selected from a common set of machine learning models, including but not limited to: particle-swarm optimization”). It would have been obvious to one of ordinary skill in the art at the time of the invention to utilize a particle-swarm optimization model as in Patten, Jr. in the system executing the method of BEN-OR, with the motivation, as taught by Patten, Jr., of improving the data analysis for fraud detection over that of BEN-OR.

Response to Arguments

Applicant's arguments, see pages 6 to 8, filed 15 September 2025, with respect to the 35 U.S.C. 101 rejection have been fully considered but they are not persuasive. As discussed above under the 35 U.S.C. 101 rejection, claim limitations adding more detail regarding the layers, the pooling, and the use of a feedback loop to train and use the model are still part of the abstract idea, and the additional elements of a generic computer system are merely applied to implement the abstract idea by performing its basic functionalities, which is not indicative of integration into a practical application.
The claims lack sufficient technical detail showing how these limitations provide technological steps, or how the method is particularly implemented on a computer to improve the computer system or any of its underlying hardware or components (e.g., how it is performed on the computer, how it could improve the computer itself, how it could cause the computer to function in a specific way beyond its generic functionality, and/or how it could improve any of the underlying technology); the claims merely apply the generic computer system to perform its generic functionalities, such as receiving and analyzing data. Mere “apply it” is not “significantly more”. Therefore, the 35 U.S.C. 101 rejection is maintained.

Applicant’s arguments, see pages 8 to 10, with respect to the 35 U.S.C. 103 rejection have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Stanisljevic et al. (US 20240354845 A1) discloses [0025] “The term “classifier model” refers to a statistical model that is configured to describe parameters, hyper-parameters, and/or stored operations of a model to process a set of context-based testing scenario data to classify user credit profile data.
In some embodiments, the classifier model is a trained machine learning model … Alternatively, the classifier model may be a rules-based model configured to follow a defined set of rules and/or operations to classify customer financial data as fraudulent or non-fraudulent”; and Li (US 20200126086 A1) discloses [Abstract] “By a computing platform, a classification sample set is obtained from a user operation record, where the classification sample set includes calibration samples, where each calibration sample includes a user operation sequence and a time sequence. For each calibration sample and at a convolution layer of a fraudulent transaction detection model … the fraudulent transaction detection model is trained using the classification result. A fraudulent transaction is detected using the trained fraudulent transaction detection model”. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY H JUNG whose telephone number is (571)270-5018. The examiner can normally be reached Mon - Fri 9:30 - 5:30. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christine M Tran (Behncke) can be reached at (571) 272-8103. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HENRY H JUNG/ Examiner, Art Unit 3695 /CHRISTINE M Tran/ Supervisory Patent Examiner, Art Unit 3695
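Paragraph [0045] of Patten, Jr., quoted in the §103 rejection above, names particle-swarm optimization among the selectable analytical models. As background only, here is a minimal PSO sketch; it is not the cited or claimed implementation, and the swarm size, iteration count, and inertia/acceleration coefficients are illustrative values, not parameters from any reference:

```python
import random

random.seed(0)  # make this sketch deterministic

def pso(f, lo, hi, n_particles=20, iters=100, w=0.729, c1=1.494, c2=1.494):
    """Minimal particle-swarm optimization: minimize f over [lo, hi] in 1-D."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                 # each particle's best-known position
    gbest = min(pbest, key=f)      # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # velocity is pulled toward the particle's own best and the swarm's best
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=f)
    return gbest

best = pso(lambda x: x * x, -10.0, 10.0)  # minimum of x^2 is at x = 0
```

After 100 iterations the swarm's best position lands very close to the true minimum at zero; real fraud-detection uses would optimize model parameters over many dimensions rather than a toy 1-D objective.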

Prosecution Timeline

Jun 11, 2024: Application Filed
Jun 25, 2025: Non-Final Rejection (§101, §103)
Aug 24, 2025: Interview Requested
Sep 11, 2025: Examiner Interview Summary
Sep 11, 2025: Applicant Interview (Telephonic)
Sep 15, 2025: Response Filed
Feb 13, 2026: Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602686
DETAILING SECURE SERVICE PROVIDER TRANSACTIONS
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12400234
MICROTRANSACTION DETECTION AND AUTHORIZATION SYSTEMS AND METHODS
Granted Aug 26, 2025 (2y 5m to grant)
Patent 12346971
INFORMATION SHARING PORTAL ASSOCIATED WITH MULTI-VENDOR RISK RELATIONSHIPS
Granted Jul 01, 2025 (2y 5m to grant)
Patent 12307529
SENSOR DATA INTEGRATION AND ANALYSIS
Granted May 20, 2025 (2y 5m to grant)
Patent 12293368
SYSTEMS AND METHODS FOR AUTHENTICATING ONLINE USERS AND PROVIDING GRAPHIC VISUALIZATIONS OF AN AUTHENTICATION PROCESS
Granted May 06, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 24%
With Interview: 55% (+31.1%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 104 resolved cases by this examiner. Grant probability derived from career allow rate.
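The panel's figures are internally consistent under one reading, which is our assumption rather than something the tool states: the with-interview probability is the career allow rate (25 granted of 104 resolved) plus the +31.1% interview lift. A quick arithmetic check:

```python
# Assumption (not stated by the tool): with-interview probability =
# career allow rate + interview lift.
career_allow_rate = 25 / 104   # 25 granted of 104 resolved cases
interview_lift = 0.311         # +31.1% lift observed with interviews

grant_probability = round(career_allow_rate * 100)                  # 24 (%)
with_interview = round((career_allow_rate + interview_lift) * 100)  # 55 (%)
```

Both rounded values match the 24% and 55% shown above, supporting (but not proving) this reading of how the projections are derived.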
