Prosecution Insights
Last updated: April 19, 2026
Application No. 17/978,578

INTERACTIVE SWARMING

Non-Final OA: §101, §102
Filed: Nov 01, 2022
Examiner: TRAN, TAN H
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Raise Marketplace LLC
OA Round: 1 (Non-Final)

Grant Probability: 60% (Moderate)
Projected OA Rounds: 1-2
Projected Time to Grant: 3y 6m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 60% (grants 60% of resolved cases; 184 granted / 307 resolved; +4.9% vs TC avg)
Interview Lift: +31.8% (strong; allowance rate for resolved cases with an interview vs. without)
Avg Prosecution: 3y 6m typical timeline; 60 applications currently pending
Total Applications: 367 (career history, across all art units)

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)

TC-average baselines are estimates • Based on career data from 307 resolved cases

Office Action

Grounds of rejection: §101, §102
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the original filing on 11/01/2022. Claims 1-7 are pending and have been considered below.

Election/Restrictions

3. Claims 8-20 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to nonelected Group II. Election was made without traverse in the reply filed on 09/17/2025.

Information Disclosure Statement

4. The information disclosure statement (IDS) submitted on 11/01/2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

5. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: the claims are directed to the statutory category of a method.
Step 2A Prong 1, Claim 1 recites, in part, evaluating a transaction to identify potential fraud related to a first aspect of the transaction using a first member to generate a first cooperative prediction related to the first aspect of the transaction, wherein the first cooperative prediction is based, at least in part, on a first affinity value, and wherein the first affinity value represents a weight afforded by the first member to predictions generated by one or more other members with respect to the transaction; evaluating the transaction to identify potential fraud related to a second aspect of the transaction using a second member to arrive at a second cooperative prediction related to the transaction, wherein the second cooperative prediction is based, at least in part, on a second affinity value, wherein the second affinity value represents a weight afforded by the second member to predictions generated by one or more other members with respect to the transaction; and generating a prediction related to the transaction based on both the first cooperative prediction and the second cooperative prediction, wherein the prediction includes a disposition decision indicating how the transaction is to be processed. These are steps of observation, evaluation, judgment, and decision-making. It is noted that these steps can be performed in the human mind or with pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. 
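For orientation only, the claim 1 flow recited above can be sketched in Python. This is a minimal illustration under stated assumptions: the member estimates, the linear affinity blend, and the 0.5 approval threshold are hypothetical choices, not the applicant's disclosed implementation (the claim does not fix a particular combination rule).

```python
# Hedged sketch of the Claim 1 flow: two swarm members each evaluate one
# aspect of a transaction, blend their raw fraud estimate with a peer's
# using an affinity value (the weight the member affords the peer's
# predictions), and the combined swarm prediction carries a disposition
# decision indicating how the transaction is to be processed.

def cooperative_prediction(own_estimate: float, peer_estimate: float,
                           affinity: float) -> float:
    """Blend a member's own fraud estimate with a peer's, weighted by affinity.

    affinity = 0.0 ignores the peer entirely; 1.0 defers to it fully.
    (One plausible reading of the claim language only.)
    """
    return (1.0 - affinity) * own_estimate + affinity * peer_estimate


def swarm_prediction(first_pred: float, second_pred: float,
                     approve_below: float = 0.5) -> dict:
    """Combine the two cooperative predictions and attach a disposition."""
    score = (first_pred + second_pred) / 2.0
    return {"score": score,
            "disposition": "approve" if score < approve_below else "review"}


# Illustrative numbers: member 1 scores the payment-instrument aspect,
# member 2 the shipping-address aspect (both hypothetical).
p1 = cooperative_prediction(own_estimate=0.20, peer_estimate=0.80, affinity=0.25)
p2 = cooperative_prediction(own_estimate=0.80, peer_estimate=0.20, affinity=0.50)
result = swarm_prediction(p1, p2)  # score 0.425 -> "approve"
```

Note that the affinity values here weight peer predictions per member, which is the structural feature the examiner maps onto Harris's pheromone trails.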
In addition, the limitations “wherein the first cooperative prediction is based, at least in part, on a first affinity value, and wherein the first affinity value represents a weight afforded by the first member to predictions generated by one or more other members with respect to the transaction, wherein the second cooperative prediction is based, at least in part, on a second affinity value, wherein the second affinity value represents a weight afforded by the second member to predictions generated by one or more other members with respect to the transaction; and generating a prediction related to the transaction based on both the first cooperative prediction and the second cooperative prediction” are directed to the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A Prong 2, this judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of “a first swarm member”, “a second swarm member”, “one or more other swarm members”, and “a swarm prediction”. The computer components in the claim are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Please see MPEP § 2106.04(a)(2).III.C. The additional elements in the claims are merely used as a tool to implement the abstract idea.

Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, either alone or in combination.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “a first swarm member”, “a second swarm member”, “one or more other swarm members”, and “a swarm prediction” to perform the steps of the claim amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Please see MPEP § 2106.05(b) and (g). The claim is not patent eligible.

Claim 2 adds the further limitations “obtaining an actual outcome of the transaction; comparing the actual outcome of the transaction to the swarm prediction to generate a prediction accuracy; and adjusting the first affinity value of the first swarm member based on the prediction accuracy” to the abstract idea (Mental Processes and/or Mathematical Concepts) as rejected above. However, the claim does not recite any additional elements that would amount to a practical application or significantly more than an abstract idea.

Claim 3 adds the further limitations “determining that after successive adjustments to the first affinity value, cooperative predictions generated by the first swarm member differ, by more than a threshold amount, from expert predictions made by the first swarm member, wherein the expert predictions are generated independent of affinity values; and in response to the determining, retraining the first swarm member” to the abstract idea (Mental Processes and/or Mathematical Concepts) as rejected above. However, the claim does not recite any additional elements that would amount to a practical application or significantly more than an abstract idea.

Claim 4 adds the further limitation of “incrementally updating an accuracy of the first swarm member”. However, the claim does not recite any additional elements that would amount to a practical application or significantly more than an abstract idea (adding insignificant extra-solution activity to the judicial exception).

Claim 5 adds the further limitations “obtaining historical data associated with historical transactions, the historical data including an actual historical outcome associated with a first historical transaction, and a result of an evaluation of the first historical transaction by the second swarm member; generating a plurality of first cooperative predictions related to the first historical transaction using the first swarm member, wherein the plurality of first cooperative predictions are generated using different test affinity values of the first swarm member to the one or more other swarm members; comparing the plurality of first cooperative predictions to the actual historical outcome; and setting the first affinity value of the first swarm member based on comparisons of the plurality of first cooperative predictions to the actual historical outcome” to the abstract idea (Mental Processes and/or Mathematical Concepts) as rejected above. However, the claim does not recite any additional elements that would amount to a practical application or significantly more than an abstract idea.

Claim 6 adds the further limitation of “using different first affinity values for different transaction contexts”. However, the claim does not recite any additional elements that would amount to a practical application or significantly more than an abstract idea (adding insignificant extra-solution activity to the judicial exception).

Claim 7 adds the further limitation of “the first affinity value of the first swarm member is selected based on a context of the transaction”. However, the claim does not recite any additional elements that would amount to a practical application or significantly more than an abstract idea (adding insignificant extra-solution activity to the judicial exception).

Claim Rejections - 35 USC § 102

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

7. Claims 1-7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Harris et al. (U.S. Patent Application Pub. No. US 20180330258 A1).

Claim 1: Harris teaches a method comprising: evaluating a transaction to identify potential fraud related to a first aspect of the transaction (i.e. At step 410, an autonomous learning algorithm may first receive updated data of the information space. The information space may include a graph of nodes having a defined topology, where most of the nodes correspond to different inputs and some of the nodes correspond to outputs. An input may be any known or pre-defined characteristic of the information space, and an output may be any predicted outcome that depends on one or more inputs. Examples of an input may include transaction data, user profile data, text in a conversation, etc. Examples of outputs may include a targeted recommendation, a probability of fraud, a response to initiate a purchase, etc; para. [0062]) using a first swarm member (i.e.
The term “solver” may refer to a computational component that searches for a solution. For example, one or more solvers may be used to calculate a solution to an optimization problem. Solvers may additionally be referred to as “agents.” A plurality of agents that work together to solve a given problem, such as in the case of ant colony optimization, may be referred to as a “colony.”; para. [0026]) to generate a first cooperative prediction related to the first aspect of the transaction (i.e. a colony of agents may be initialized for the purpose of determining good paths for detecting fraud, such that a path that is more indicative of fraudulent behavior (i.e. signal) versus non-fraudulent behavior (i.e. noise) may be considered a low cost path and may be reinforced at a later iteration. The cost of a given solution may be characterized as the desirability or attractiveness of a given move or state transition; para. [0102, 0103]), wherein the first cooperative prediction is based, at least in part, on a first affinity value, and wherein the first affinity value (i.e. the probability, pxy k, of moving from state x to state y may depend on the attractiveness of the move ηxy, which may be based on the move's effect on the cost function, and the trail level τxy of the move, which may be an indication of a move's desirability based on previous iterations as communicated by the gradient (e.g., error) of each agent, which are referred to as local pheromones. The agents may move from node to node along the edges of the graph of the information space, and may be influenced by the information provided by the pheromones previously recorded. The amount of pheromone recorded ΔT may depend on the quality of the trial solution found. On each subsequent iteration, the agents utilize the pheromone information to converge towards a region of the information space in which a solution that meets the required cost and/or achieves the target goal exists; para. 
[0113, 0114]) represents a weight afforded by the first swarm member to predictions generated by one or more other swarm members with respect to the transaction (i.e. the agents searching for good paths within the information space may be updated depending on if the current solution state has met a predetermined cost requirement. If the agents have found a set of paths that do not meet the predetermined cost requirement, then each of the agents may update information regarding the region that was searched so that the agents may move closer to less costly paths in the next iteration. In one embodiment, each of the agents may update a local pheromone for a given trail or path within the graph according to: (equation) where τxy can be the amount of pheromone deposited for a state transition xy, ρ can be the pheromone evaporation coefficient, and Δτxy k can be the amount of pheromone deposited by the kth agent; para. [0120, 0121]); evaluating the transaction to identify potential fraud related to a second aspect of the transaction (i.e. At step 410, an autonomous learning algorithm may first receive updated data of the information space. The information space may include a graph of nodes having a defined topology, where most of the nodes correspond to different inputs and some of the nodes correspond to outputs. An input may be any known or pre-defined characteristic of the information space, and an output may be any predicted outcome that depends on one or more inputs. Examples of an input may include transaction data, user profile data, text in a conversation, etc. Examples of outputs may include a targeted recommendation, a probability of fraud, a response to initiate a purchase, etc; para. [0062]) using a second swarm member (i.e. The term “solver” may refer to a computational component that searches for a solution. For example, one or more solvers may be used to calculate a solution to an optimization problem. 
Solvers may additionally be referred to as “agents.” A plurality of agents that work together to solve a given problem, such as in the case of ant colony optimization, may be referred to as a “colony.”; para. [0026]) to arrive at a second cooperative prediction related to the transaction (i.e. a colony of agents may be initialized for the purpose of determining good paths for detecting fraud, such that a path that is more indicative of fraudulent behavior (i.e. signal) versus non-fraudulent behavior (i.e. noise) may be considered a low cost path and may be reinforced at a later iteration. The cost of a given solution may be characterized as the desirability or attractiveness of a given move or state transition; para. [0102, 0103]), wherein the second cooperative prediction is based, at least in part, on a second affinity value, wherein the second affinity value (i.e. the probability, pxy k, of moving from state x to state y may depend on the attractiveness of the move ηxy, which may be based on the move's effect on the cost function, and the trail level τxy of the move, which may be an indication of a move's desirability based on previous iterations as communicated by the gradient (e.g., error) of each agent, which are referred to as local pheromones. The agents may move from node to node along the edges of the graph of the information space, and may be influenced by the information provided by the pheromones previously recorded. The amount of pheromone recorded ΔT may depend on the quality of the trial solution found. On each subsequent iteration, the agents utilize the pheromone information to converge towards a region of the information space in which a solution that meets the required cost and/or achieves the target goal exists; para. [0113, 0114]) represents a weight afforded by the second swarm member to predictions generated by one or more other swarm members with respect to the transaction (i.e. 
the agents searching for good paths within the information space may be updated depending on if the current solution state has met a predetermined cost requirement. If the agents have found a set of paths that do not meet the predetermined cost requirement, then each of the agents may update information regarding the region that was searched so that the agents may move closer to less costly paths in the next iteration. In one embodiment, each of the agents may update a local pheromone for a given trail or path within the graph according to: (equation) where τxy can be the amount of pheromone deposited for a state transition xy, ρ can be the pheromone evaporation coefficient, and Δτxy k can be the amount of pheromone deposited by the kth agent; para. [0120, 0121]); and generating a swarm prediction related to the transaction based on both the first cooperative prediction and the second cooperative prediction (i.e. The variables corresponding to selected features in an updated feature list may be organized into index tables, in which one or more features are linked or indexed to their corresponding predicted outcome. This may provide a flattened graph that allows the AI to perform simple key value look-ups when learning and predicting outcomes; para. [0145, 0146]), wherein the swarm prediction includes a disposition decision indicating how the transaction is to be processed (i.e. At step 410, an autonomous learning algorithm may first receive updated data of the information space. The information space may include a graph of nodes having a defined topology, where most of the nodes correspond to different inputs and some of the nodes correspond to outputs. An input may be any known or pre-defined characteristic of the information space, and an output may be any predicted outcome that depends on one or more inputs. Examples of an input may include transaction data, user profile data, text in a conversation, etc. 
Examples of outputs may include a targeted recommendation, a probability of fraud, a response to initiate a purchase, etc; para. [0062, 0171]).

Claim 2: Harris teaches the method of claim 1. Harris further teaches comprising: training the first swarm member to generate the first cooperative prediction (i.e. The term “epoch” may refer to a period of time, e.g., in training a machine learning model. During training of learners in a learning algorithm, each epoch may pass after a defined set of steps have been completed. For example, in ant colony optimization, each epoch may pass after all computational agents have found solutions and have calculated the cost of their solutions. In an iterative algorithm, an epoch may include an iteration or multiple iterations of updating a model. An epoch may sometimes be referred to as a “cycle.”; para. [0027, 0178, 0179]), wherein the training includes: obtaining an actual outcome of the transaction (i.e. Step 1110 comprises receiving updated data of an information space that includes a graph of nodes having a defined topology, e.g., as described in section III. The updated data may include historical data of requests to the artificial intelligence model and output results associated with the requests, wherein different categories of input data corresponds to different input nodes of the graph. The output results can include a measure of whether the output results were successful, e.g., that a use followed a recommendation, provided feedback that a particular response was helpful, whose chat response indicated the response was suitable, etc.; para. [0185, 0124]); comparing the actual outcome of the transaction to the swarm prediction to generate a prediction accuracy (i.e. The error structure of the paths relative to the target goal may be evaluated to determine if predictive features have successfully been found.
The error structure may be expressed as a distance function or a global gradient that may communicate how close a given state is to the target goal. In one embodiment, the error, λ, may be defined to be: where, A may be a set of features (paths) that have been found by the agents, and B may be a set of features that achieve the target goal. For example, the agents may determine a set of paths, A, that connect specific types of users to recommendations that they may have a high probability of acting upon, and may compare the paths to actual recommendations accepted by the users, B, in the information space to determine the error, λ, of the determined set of paths; para. [0123, 0192]); and adjusting the first affinity value of the first swarm member based on the prediction accuracy (i.e. the probability, pxy k, of moving from state x to state y may depend on the attractiveness of the move ηxy, which may be based on the move's effect on the cost function, and the trail level τxy of the move, which may be an indication of a move's desirability based on previous iterations as communicated by the gradient (e.g., error) of each agent, which are referred to as local pheromones. The agents may move from node to node along the edges of the graph of the information space, and may be influenced by the information provided by the pheromones previously recorded. The amount of pheromone recorded ΔT may depend on the quality of the trial solution found. On each subsequent iteration, the agents utilize the pheromone information to converge towards a region of the information space in which a solution that meets the required cost and/or achieves the target goal exists; para. [0113, 0114]).

Claim 3: Harris teaches the method of claim 2. Harris further teaches comprising: determining that after successive adjustments to the first affinity value (i.e.
the agents searching for good paths within the information space may be updated depending on if the current solution state has met a predetermined cost requirement. If the agents have found a set of paths that do not meet the predetermined cost requirement, then each of the agents may update information regarding the region that was searched so that the agents may move closer to less costly paths in the next iteration. In one embodiment, each of the agents may update a local pheromone for a given trail or path within the graph according to: (equation) where τxy can be the amount of pheromone deposited for a state transition xy, ρ can be the pheromone evaporation coefficient, and Δτxy k can be the amount of pheromone deposited by the kth agent; para. [0105, 0120, 0121]), cooperative predictions generated by the first swarm member differ, by more than a threshold amount, from expert predictions made by the first swarm member, wherein the expert predictions are generated independent of affinity values (i.e. The error structure of the paths relative to the target goal may be evaluated to determine if predictive features have successfully been found. The error structure may be expressed as a distance function or a global gradient that may communicate how close a given state is to the target goal. In one embodiment, the error, λ, may be defined to be: where, A may be a set of features (paths) that have been found by the agents, and B may be a set of features that achieve the target goal. For example, the agents may determine a set of paths, A, that connect specific types of users to recommendations that they may have a high probability of acting upon, and may compare the paths to actual recommendations accepted by the users, B, in the information space to determine the error, λ, of the determined set of paths. 
If the error of the detected paths at a solution state is within a predetermined margin relative to the target goal, then the final state may be arrived at, and the solution paths that the agents have found may be identified as candidate features for the AI model; para. [0123, 0185]); and in response to the determining, retraining the first swarm member (i.e. If at step 650 it is determined that the cost requirement has not been met, then step 650 a may be performed. At step 650 a, each of the agents may update local path information to indicate the calculated cost of a proposed solution within its searched region of the graph. The agents may be initialized again at a next iteration to search for a less costly solution. Steps 620 to steps 650 a may be continuously cycled until the cost requirement is finally met; para. [0105]).

Claim 4: Harris teaches the method of claim 3. Harris further teaches wherein retraining the first swarm member includes: incrementally updating an accuracy of the first swarm member (i.e. After training data is obtained, a learning process can be used to train the model. Learning module 120 is shown receiving existing records 110 and providing model 130 after training has been performed. As data samples include outputs known to correspond to specific inputs, a model can learn the type of inputs that correspond to which outputs, e.g., which images are of dogs. The training can determine errors between a predicted output of the model and the known or inferred output. These errors can be used to optimize parameters (e.g., weights) of the model so as to reduce the errors, thereby increasing accuracy; para. [0035, 0040, 0123, 0188, 0192]).

Claim 5: Harris teaches the method of claim 2.
Harris further teaches wherein training the first swarm member to generate the first cooperative prediction includes: obtaining historical data associated with historical transactions, the historical data including an actual historical outcome associated with a first historical transaction (i.e. Besides the graph of nodes, the information space may also include data for historical requests. Examples of historical requests may include financial transaction requests, requests for a recommendation, or request messages for a chat bot. The requests may provide known output values (e.g., fraudulent or non-fraudulent) or results, such as user feedback. During each update to the artificial intelligence model, the updated data may include the historical data of requests to the artificial intelligence model and output results associated with the requests; para. [0062, 0063]), and a result of an evaluation of the first historical transaction by the second swarm member (i.e. At step S1007, the data regarding the proposed solutions found by each of the agents may be gathered and evaluated to see if the stop criteria has been met. The stop criteria may include criteria relating to the error of the aggregate solution proposed by the colony of agents in relation to the target goal; para. [0181]); generating a plurality of first cooperative predictions related to the first historical transaction using the first swarm member, wherein the plurality of first cooperative predictions are generated using different test affinity values of the first swarm member to the one or more other swarm members (i.e. a set of agents k may move from a state x to a state y, and each agent k may compute a set of feasible movements Ak(x), where the probability of choosing a specific next movement is based on past information from previous iterations. 
For an agent k, the probability, pxy k, of moving from state x to state y may depend on the attractiveness of the move ηxy, which may be based on the move's effect on the cost function, and the trail level τxy of the move, which may be an indication of a move's desirability based on previous iterations as communicated by the gradient (e.g., error) of each agent, which are referred to as local pheromones. The agents may move from node to node along the edges of the graph of the information space, and may be influenced by the information provided by the pheromones previously recorded. The amount of pheromone recorded ΔT may depend on the quality of the trial solution found. On each subsequent iteration, the agents utilize the pheromone information to converge towards a region of the information space in which a solution that meets the required cost and/or achieves the target goal exists; para. [0113, 0114]); comparing the plurality of first cooperative predictions to the actual historical outcome (i.e. The error structure of the paths relative to the target goal may be evaluated to determine if predictive features have successfully been found. The error structure may be expressed as a distance function or a global gradient that may communicate how close a given state is to the target goal. In one embodiment, the error, λ, may be defined to be: where, A may be a set of features (paths) that have been found by the agents, and B may be a set of features that achieve the target goal. For example, the agents may determine a set of paths, A, that connect specific types of users to recommendations that they may have a high probability of acting upon, and may compare the paths to actual recommendations accepted by the users, B, in the information space to determine the error, λ, of the determined set of paths. 
If the error of the detected paths at a solution state is within a predetermined margin relative to the target goal, then the final state may be arrived at, and the solution paths that the agents have found may be identified as candidate features for the AI model; para. [0123, 0185]); and setting the first affinity value of the first swarm member based on comparisons of the plurality of first cooperative predictions to the actual historical outcome (i.e. the agents searching for good paths within the information space may be updated depending on if the current solution state has met a predetermined cost requirement. If the agents have found a set of paths that do not meet the predetermined cost requirement, then each of the agents may update information regarding the region that was searched so that the agents may move closer to less costly paths in the next iteration. In one embodiment, each of the agents may update a local pheromone for a given trail or path within the graph according to: (equation) where τxy can be the amount of pheromone deposited for a state transition xy, ρ can be the pheromone evaporation coefficient, and Δτxy k can be the amount of pheromone deposited by the kth agent; para. [0120, 0121]).

Claim 6: Harris teaches the method of claim 2. Harris further teaches wherein training the first swarm member to generate the first cooperative prediction includes: using different first affinity values for different transaction contexts (i.e. The updated feature list set may comprise a list of input nodes, and/or may comprise a collection of input nodes as paths or clusters that may be expressed as a string of aggregated input nodes. The input nodes may be individual information elements that can be contained in a request such as a type of transaction (e.g. cardholder not present). The features may be used by the AI model to make predictions, in which a given entity or request may be linked to one or more feature in the feature set.
For example, a financial transaction may be conducted at a specific terminal, which may be expressed in the graph as an input node having the individual characteristics of ‘location=94113,’ ‘Merchant=merchant X,’ ‘transaction type=cardholder not present,’ ‘terminal ID=12535.’ The characteristics may be concatenated together and used to define or label the node, as well as determine its position in the graph. An AI model may use the graph of the information space comprising the input node of the terminal to detect instances of fraud. The updated feature list may comprise one or more features that may be connected or linked to the input node of the terminal (e.g., ‘cardholder not present,’ ‘94113MerchantX,’ etc.), which may allow the AI model, based on its training, to predict whether the terminal for the financial transaction is highly correlated to fraud or not; para. [0114, 0117, 0142]).

Claim 7: Harris teaches the method of claim 1. Harris further teaches wherein: the first affinity value of the first swarm member is selected based on a context of the transaction (i.e., the updated feature list set may comprise a list of input nodes, and/or may comprise a collection of input nodes as paths or clusters that may be expressed as a string of aggregated input nodes. The input nodes may be individual information elements that can be contained in a request, such as a type of transaction (e.g., cardholder not present). The features may be used by the AI model to make predictions, in which a given entity or request may be linked to one or more features in the feature set.
For example, a financial transaction may be conducted at a specific terminal, which may be expressed in the graph as an input node having the individual characteristics of ‘location=94113,’ ‘Merchant=merchant X,’ ‘transaction type=cardholder not present,’ ‘terminal ID=12535.’ The characteristics may be concatenated together and used to define or label the node, as well as determine its position in the graph. An AI model may use the graph of the information space comprising the input node of the terminal to detect instances of fraud. The updated feature list may comprise one or more features that may be connected or linked to the input node of the terminal (e.g., ‘cardholder not present,’ ‘94113MerchantX,’ etc.), which may allow the AI model, based on its training, to predict whether the terminal for the financial transaction is highly correlated to fraud or not; para. [0114, 0117, 0142]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. In Chu et al. (Pub. No. US 20200175518 A1), an apparatus (100) for real-time detection of fraudulent digital transactions is disclosed. The apparatus comprises: a transceiver module arranged to receive information data of a digital transaction; and a model generator module (102) arranged to dynamically generate a predictive model for fraud detection based collectively on historical information data relating to identified fraudulent transactions and the received information data.

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN, whose telephone number is (303) 297-4266. The examiner can normally be reached Monday through Thursday, 8:00 am to 5:00 pm MT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached at 571-272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAN H TRAN/
Primary Examiner, Art Unit 2141
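The claim 6 and claim 7 mappings above both turn on Harris's node-labeling scheme: a transaction's characteristics (e.g., ‘location=94113,’ ‘Merchant=merchant X’) are concatenated into a single label that defines the input node. A minimal sketch of that labeling follows; the delimiter and key sorting are assumptions, since the quoted passage does not specify an exact key format:

```python
def node_label(characteristics):
    # Concatenate characteristic key=value pairs into one deterministic node
    # label, in the spirit of the '94113MerchantX'-style example quoted above.
    # Sorting keys (an assumption) makes the label order-independent.
    return "|".join(f"{k}={v}" for k, v in sorted(characteristics.items()))

txn = {
    "location": "94113",
    "Merchant": "merchant X",
    "transaction type": "cardholder not present",
    "terminal ID": "12535",
}
label = node_label(txn)  # one stable identifier for the terminal's input node
```

A stable label lets repeated transactions at the same terminal resolve to the same graph node, so pheromone from earlier iterations accumulates on the features linked to it.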

Prosecution Timeline

Nov 01, 2022
Application Filed
Oct 20, 2025
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594668
BRAIN-LIKE DECISION-MAKING AND MOTION CONTROL SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12579420
Analog Hardware Realization of Trained Neural Networks
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579421
Analog Hardware Realization of Trained Neural Networks
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572850
METHOD FOR IMPLEMENTING MODEL UPDATE AND DEVICE THEREOF
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572326
DIGITAL ASSISTANT FOR MOVING AND COPYING GRAPHICAL ELEMENTS
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
92%
With Interview (+31.8%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 307 resolved cases by this examiner. Grant probability derived from career allow rate.
