Prosecution Insights
Last updated: April 18, 2026
Application No. 18/425,153

DETERMINING MORE ACCURATE LABELS FOR NODES IN A GRAPH

Final Rejection: §101, §103
Filed: Jan 29, 2024
Examiner: LEE, CLAY C
Art Unit: 3699
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Mastercard Technologies Canada ULC
OA Round: 2 (Final)
Grant Probability: 54% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 4y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% (117 granted / 216 resolved; +2.2% vs TC avg)
Interview Lift: +57.1% (allowance rate among resolved cases with an interview vs without)
Avg Prosecution: 4y 1m
Currently Pending: 60
Total Applications: 276 (across all art units)
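The interview-lift figure above is a ratio of allowance rates. A minimal sketch of the arithmetic: the aggregate figures (117/216 granted, +57.1% lift) come from the panel, but the with/without-interview split below is invented to make the lift come out at +57.1%, since the real per-group counts are not shown.

```python
# Sketch of the arithmetic behind the Examiner Intelligence panel.
def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved cases that were granted."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative gain in allowance rate when an interview was held."""
    return (rate_with - rate_without) / rate_without

career = allow_rate(117, 216)                  # 54.2% career allow rate
lift = interview_lift(allow_rate(33, 50),      # hypothetical: with interview
                      allow_rate(42, 100))     # hypothetical: without interview
print(f"career: {career:.0%}, lift: {lift:+.1%}")  # career: 54%, lift: +57.1%
```

The lift is relative (a rate gain divided by the baseline rate), which is why it can exceed 57% even though no allowance rate does.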

Statute-Specific Performance

§101: 32.7% (-7.3% vs TC avg)
§103: 45.9% (+5.9% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 216 resolved cases.
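The statute-specific deltas are simple differences against a Tech Center baseline. Notably, all four stated deltas are consistent with the same 40.0% baseline, so a short sketch can reproduce the panel; the `TC_AVG` value is inferred from the deltas, not a figure given on the page.

```python
# Examiner's statute-specific allowance rates from the panel above.
examiner = {"101": 0.327, "103": 0.459, "102": 0.082, "112": 0.105}

# Assumption: one shared Tech Center baseline, back-solved from the
# stated deltas (-7.3, +5.9, -31.8, -29.5 points all imply 40.0%).
TC_AVG = 0.400

for statute, rate in sorted(examiner.items()):
    print(f"§{statute}: {rate:.1%} ({rate - TC_AVG:+.1%} vs TC avg)")
```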

Office Action (§101, §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 7/25/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

The amendment filed September 29, 2025 has been entered. Claims 1-2, 5-12, 15-17, and 19-20 remain pending in the application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 5-12, 15-17, and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Under Step 1 of the Section 101 analysis, Claims 1-11 are drawn to a system, which is within the four statutory categories (i.e., a machine); Claims 12-16 are drawn to a method, which is within the four statutory categories (i.e., a process); and Claims 17-20 are drawn to a non-transitory computer-readable medium, which is within the four statutory categories (i.e., a manufacture). Since the claims are directed toward statutory categories, it must be determined whether the claims are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea). Based on consideration of all of the relevant factors with respect to the claim as a whole, claims 1-20 are determined to be directed to an abstract idea. 
The rationale for this determination is explained below: Regarding Claims 1, 12, and 17: Claims 1, 12, and 17 are drawn to an abstract idea without significantly more. The claims recite “receive a graph including a plurality of nodes linked by one or more connections, wherein each node of the plurality of nodes represents a card with a chargeback claim or represents a merchant, each node of the plurality of nodes is unlabeled or associated with a plurality of potential labels, and a connection between a first node and a second node represents a transaction made between a merchant and a user using a card with a chargeback claim; augment the graph by creating one or more new connections in the graph; for each node of the plurality of nodes included in the augmented graph: using a first machine learning model, determine a first vector associated with the node based on the augmented graph, wherein each value included in the first vector is associated with a label and represents a likelihood that the label is a more accurate label, when the node represents a card with a chargeback claim, each label associated with a value included in the first vector is third party fraud, first party fraud, or technical error, and when the node represents a merchant, each label associated with a value included in the first vector is a merchant category code; using a second machine learning model, determine a second vector associated with the node based on the augmented graph, wherein each value included in the second vector is associated with a label and represents a likelihood that the label is the more accurate label, when the node represents a card with a chargeback claim, each label associated with a value included in the second vector is third party fraud, first party fraud, or technical error, and when the node represents a merchant, each label associated with a value included in the second vector is a merchant category code; and determine the more accurate label for the node 
based on the first vector and the second vector; and send, to a server, a determination of whether to allow or deny a transaction based on one or more accurately labeled nodes, wherein the server is configured to perform or deny the transaction based on the determination.”

Under Step 2A, Prong One, the limitations, as underlined above, are processes that, under their broadest reasonable interpretation, cover Certain Methods of Organizing Human Activity, such as commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations). For example, but for the “machine learning model” and “server” language, the underlined limitations in the context of this claim encompass human activity. The series of steps belongs to typical sales activities or behaviors, because a graph with nodes and connections is processed to determine whether to allow or deny a transaction.

Under Step 2A, Prong Two, this judicial exception is not integrated into a practical application. In particular, the claim only recites the following additional elements: “A system for determining more accurate labels for nodes in a graph, the system comprising: an electronic computing device, the electronic computing device including: an electronic processor, the electronic processor configured to:”, “A method for determining more accurate labels for nodes in a graph, the method comprising:”, “A non-transitory computer-readable medium comprising executable instructions that, when executed by an electronic processor, cause the electronic processor to perform a set of functions comprising:”, “machine learning model”, and “server”. The additional elements are recited at a high level of generality (i.e., performing generic functions of an interaction) such that they amount to no more than mere instructions to apply the exception using a generic computer component, merely implementing an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Additionally, regarding the specification and claims: there is no improvement in the functioning of a computer or to any other technology or technical field; there is no applying or using the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; there is no implementing the judicial exception with, or using it in conjunction with, a particular machine or manufacture that is integral to the claim; there is no effecting a transformation or reduction of a particular article to a different state or thing; and there is no applying or using the judicial exception in some other meaningful way beyond generally linking its use to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Accordingly, these additional elements, individually or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements in the process amount to no more than mere instructions to apply the exception using generic computer components. 
Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

Regarding Claims 2, 5-11, 15-16, and 19-20: Dependent claims 6, 9, 16, and 20 only further elaborate the abstract idea and do not recite additional elements. Dependent claims 2, 5, 7-8, 10-11, 15, and 19 include additional limitations, for example, “machine learning model” (Claims 2, 5, 7-8, 10-11, 15, and 19), but none of these limitations are deemed significantly more than the abstract idea because, as stated above, they require no more than generic computer structures or signals to be executed, and do not recite any improvements to the functioning of a computer, or improvements to any other technology or technical field. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and their collective functions merely provide conventional computer implementation or implement the judicial exception on a generic computer. Therefore, whether taken individually or as an ordered combination, claims 2, 5-11, 15-16, and 19-20 are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. 
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9, and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Filliben (US 20190377819 A1) in view of Wang (WO 2024107587 A1).

Regarding Claims 1, 12, and 17, Filliben teaches A system for determining more accurate labels for nodes in a graph, the system comprising: an electronic computing device, the electronic computing device including: an electronic processor, the electronic processor configured to (Filliben: Abstract; Paragraph(s) 0004, 0014-0016, 0021): A method for determining more accurate labels for nodes in a graph, the method comprising (Filliben: Paragraph(s) 0004, 0014-0016): A non-transitory computer-readable medium comprising executable instructions that, when executed by an electronic processor, cause the electronic processor to perform a set of functions comprising (Filliben: Paragraph(s) 0021, 0027): receive a graph including a plurality of nodes linked by one or more connections, wherein each node of the plurality of nodes represents a card with a … claim or represents a merchant, each node of the plurality of nodes is unlabeled or associated with a plurality of potential labels, and a connection between a first node and a second node represents a transaction made between a merchant and a user using a card with a … claim; augment the graph by creating one or more new connections in the graph (Filliben: Paragraph(s) 0014-0016, 0004, 0078, 0071, 0115-0116 teach(es) a machine learning engine including a machine learning model trained using historical transaction data to identify a known pattern; a graph module configured to store and update a graph with incoming transaction data, the graph including nodes and edges, where each node corresponds to an entity type, and where each edge represents a relationship between two nodes; labeling the incoming transaction data by integrating a first node corresponding to the incoming transaction data into 
the graph and by inserting an edge linking the first node with an existing node in the graph, where the first node is an entity type based on the incoming transaction data; in a transaction involving a merchant and a credit card, a machine learning model associated with the merchant may output a low fraud score, whereas a second machine learning model associated with the credit card may output a high fraud score; entities may be represented as nodes in a graph); for each node of the plurality of nodes included in the augmented graph: using a first machine learning model, determine a first vector associated with the node based on the augmented graph, wherein each value included in the first vector is associated with a label and represents a likelihood that the label is a more accurate label, when the node represents a card with a … claim, each label associated with a value included in the first vector is third party fraud, first party fraud, or technical error, and when the node represents a merchant, each label associated with a value included in the first vector is a merchant category code; using a second machine learning model, determine a second vector associated with the node based on the augmented graph, wherein each value included in the second vector is associated with a label and represents a likelihood that the label is the more accurate label, when the node represents a card with a … claim, each label associated with a value included in the second vector is third party fraud, first party fraud, or technical error, and when the node represents a merchant, each label associated with a value included in the second vector is a merchant category code (Filliben: Paragraph(s) 0014-0015, 0008-0009, 0051, 0074, 0116, 0147, 0126, 0139, 0077-0078 teach(es) labeling the incoming transaction data by integrating a first node corresponding to the incoming transaction data into the graph and by inserting an edge linking the first node with an existing node in the graph, where the first node is an entity type based on the incoming transaction data; The system further including a feature vector, where the machine learning engine is configured to minimize a loss function based on an objective function using the feature vector; an ensemble may include a bucket of machine learning models and use a model selection algorithm to select the best model for a particular entity type; The spread of risk to different entities may be based on the likelihood that each entity is related to the fraudulent transaction; The label engine may also assign a value to the classification attribute of a node. In one example, the feature vector is an n-dimensional vector of numerical features that represent an entity in a graph … another feature associated with the transaction data or the user; in a transaction involving a merchant and a credit card, a machine learning model associated with the merchant may output a low fraud score, whereas a second machine learning model associated with the credit card may output a high fraud score); and determine the more accurate label for the node based on the first vector and the second vector (Filliben: Paragraph(s) 0051, 0074 teach(es) In reinforcement learning, a machine learning algorithm is rewarded for correct labels, allowing it to iteratively observe conditions until rewards are consistently earned); and send, to a server, a determination of whether to allow or deny a transaction based on one or more more accurately labeled nodes, wherein the server is configured to perform or deny the transaction based on the determination (Filliben: Paragraph(s) 0086, 0133 teach(es) A hot file restriction may, for example, cause one or more devices to deny a transaction involving a particular entity associated with the restriction; In the case of a completed transaction, the restrict engine may analyze the transaction data as compared to the existing graph structure to determine whether any ex post facto alerts or flags may be 
desirable. A hot file restrict engine may, for example, cause a device, which corresponds to a node in the graph, to deny a transaction involving a particular entity (e.g., modify permissions) associated with the restriction).

However, Filliben does not explicitly teach a chargeback claim. Wang from same or similar field of endeavor teaches a card with a chargeback claim (Wang: Paragraph(s) 0047, 0049, 0098 teach(es) the issuer finds an abnormality based on a chargeback request of a requestor. One of the factors to determine to approve or decline the chargeback request of the requestor is a value of the fraud risk score). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Filliben to incorporate the teachings of Wang for a chargeback claim. There is motivation to combine Wang into Filliben because Wang’s teachings of a chargeback claim would facilitate transaction authorization, clearing, refund, chargeback, and fraud (Wang: Paragraph(s) 0047, 0049, 0098).

Regarding Claim 2, the combination of Filliben and Wang teaches all the limitations of claim 1 above; and Filliben further teaches wherein the electronic processor is configured to train a third machine learning model with training data, wherein the training data includes the labeled nodes (Filliben: Paragraph(s) 0010, 0074 teach(es) training a machine learning model of a machine learning sub-engine of the ensemble using a corpus, where the corpus includes a training data and a test data; classifying a plurality of nodes in the graph based on the machine learning model, by setting a classification attribute of a first node and a second node of the plurality of nodes to one of a plurality of classifications). 
Regarding Claim 5, the combination of Filliben and Wang teaches all the limitations of claim 1 above; and Filliben further teaches wherein the electronic processor is configured to augment the graph by creating one or more new connections by: for each node of the plurality of nodes: using a fourth machine learning model, predicting a new connection based on a structure of the graph and features associated with the plurality of nodes, wherein the predicted new connection is associated with a likelihood; and when the predicted new connection is associated with a likelihood above a predetermined threshold, create the new connection in the graph (Filliben: Paragraph(s) 0016, 0020, 0074 teach(es) labeling the incoming transaction data by integrating a first node corresponding to the incoming transaction data into the graph and by inserting an edge linking the first node with an existing node in the graph, where the first node is an entity type based on the incoming transaction data; detecting, by the machine learning engine, an emerging pattern between a first node and second node in the graph based on the (i) and (ii); inserting an edge between the first node and the second node in the graph in response to the detecting of the emerging pattern). 
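Claim 5's augmentation step (score each candidate connection with a model, then create those whose predicted likelihood clears a predetermined threshold) can be sketched in a few lines. This is an illustration of the claim language only; the scoring function below is a fixed lookup standing in for the claimed "fourth machine learning model", and the node names are invented, not taken from the application.

```python
from itertools import combinations

def augment_graph(nodes, edges, score_edge, threshold=0.9):
    """Add every candidate connection whose predicted likelihood
    exceeds the predetermined threshold."""
    augmented = set(edges)
    for a, b in combinations(nodes, 2):
        if (a, b) in augmented or (b, a) in augmented:
            continue  # connection already exists in the graph
        if score_edge(a, b) > threshold:
            augmented.add((a, b))
    return augmented

# Stand-in for the link-prediction model: a fixed likelihood lookup
# over candidate card-merchant pairs (hypothetical values).
likelihoods = {("card1", "merchant2"): 0.95, ("card2", "merchant1"): 0.40}
score = lambda a, b: likelihoods.get((a, b), likelihoods.get((b, a), 0.0))

graph = augment_graph(["card1", "card2", "merchant1", "merchant2"],
                      [("card1", "merchant1")], score)
# Only the 0.95 candidate clears the 0.9 threshold and is added.
```

Claim 6's cap on the number of created connections would amount to keeping only the top-k candidates by likelihood before applying the same threshold.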
Regarding Claim 6, the combination of Filliben and Wang teaches all the limitations of claim 1 above; and Filliben further teaches wherein the electronic processor is configured to augment the graph by creating one or more new connections by: when there are more than a predetermined number of predicted new connections with a likelihood above a predetermined threshold predicted for a node, creating, in the graph, the predetermined number of new connections, wherein the created new connections are predicted new connections that are associated with higher likelihoods (Filliben: Paragraph(s) 0147, 0020, 0016 teach(es) if a router has a high likelihood of being compromised, a laptop connecting to the Internet through the router may have a respective likelihood of being compromised; detecting, by the machine learning engine, an emerging pattern between a first node and second node in the graph based on the (i) and (ii); inserting an edge between the first node and the second node in the graph in response to the detecting of the emerging pattern).

Regarding Claims 7, 15, and 19, the combination of Filliben and Wang teaches all the limitations of claims 1, 12, and 17 above; and Filliben further teaches wherein the first machine learning model and the second machine learning model are initialized with different weights (Filliben: Paragraph(s) 0014-0016 teach(es) spreading heat from the existing node to the first node, where the heat corresponds to a classification attribute, and where an amount of the heat spread is based on a weight assigned to the edge connecting the first node with the existing node). 
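Claim 1's closing step, determining "the more accurate label for the node based on the first vector and the second vector", is elaborated in dependent claims 9, 16, and 20 as averaging the two vectors and taking the label with the greatest averaged value. A minimal sketch of that arithmetic follows; the label set mirrors the card-node labels recited in claim 1, but the two probability vectors are invented for illustration and this is not the applicant's implementation.

```python
# The three card-node labels recited in claim 1.
LABELS = ["third party fraud", "first party fraud", "technical error"]

def more_accurate_label(first_vec, second_vec, labels=LABELS):
    """Average the two per-label likelihood vectors, then return
    the label whose averaged likelihood is greatest."""
    avg = [(a + b) / 2 for a, b in zip(first_vec, second_vec)]
    return labels[max(range(len(avg)), key=avg.__getitem__)]

# Hypothetical outputs of the first and second models for one node:
label = more_accurate_label([0.6, 0.3, 0.1], [0.2, 0.6, 0.2])
# averaged vector is [0.40, 0.45, 0.15] -> "first party fraud"
```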
Regarding Claims 9, 16, and 20, the combination of Filliben and Wang teaches all the limitations of claims 1, 12, and 17 above; and Filliben further teaches wherein the electronic processor is configured to determine the more accurate label for the node based on the first vector and the second vector by: averaging the first vector and the second vector to determine an average vector; and determining a label associated with a greatest value included in the average vector to be the more accurate label for the node (Filliben: Paragraph(s) 0074, 0008, 0015 teach(es) The output of each model in the ensemble may be averaged together (e.g., when the output is a continuous variable). In other examples, an ensemble may incorporate Bayesian model combinations. In yet other examples, an ensemble may include a bucket of machine learning models and use a model selection algorithm to select the best model for a particular entity type).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Filliben in view of Wang as applied to claim 5 above, and further in view of Lewis (US 20200184268 A1).

Regarding Claim 8, the combination of Filliben and Wang teaches all the limitations of claim 5 above; and Filliben teaches wherein the electronic processor is further configured to train the fourth machine learning model by: determining an … loss value for the fourth machine learning model; and based on the … loss value, adjusting the fourth machine learning model (Filliben: Paragraph(s) 0015, 0057, 0074 teach(es) The system further including a feature vector, where the machine learning engine is configured to minimize a loss function based on an objective function using the feature vector; The output of each model in the ensemble may be averaged together (e.g., when the output is a continuous variable); an ensemble may include a bucket of machine learning models and use a model selection algorithm to select the best model for a particular entity type). 
However, the combination of Filliben and Wang does not explicitly teach an augmentation loss value. Lewis from same or similar field of endeavor teaches an augmentation loss value (Lewis: Paragraph(s) 0058, 0062 teach(es) FIGS. 17A-B illustrate alterations of the distance-based loss function that attempt to incorporate additional information provided for suspect labels in an augmented training dataset). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Filliben and Wang to incorporate the teachings of Lewis for an augmentation loss value. There is motivation to combine Lewis into the combination of Filliben and Wang because Lewis’s teachings of augmentation loss value would facilitate training the machine learning models (Lewis: Paragraph(s) 0058, 0062).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Filliben in view of Wang as applied to claim 1 above, and further in view of Zheng (US 20250068941 A1). 
Regarding Claim 10, the combination of Filliben and Wang teaches all the limitations of claim 9 above; and Filliben further teaches wherein the electronic processor is configured to train the first machine learning model and the second machine learning model by: determining a … loss value based on the first vector, the second vector, and a pseudo target vector; determining a … loss value based on the average vector and a label corruption matrix; and updating the first machine learning model and the second machine learning model based on the … loss value and the … loss value (Filliben: Paragraph(s) 0015, 0057, 0074, 0104, 0135 teach(es) The system further including a feature vector, where the machine learning engine is configured to minimize a loss function based on an objective function using the feature vector; The output of each model in the ensemble may be averaged together (e.g., when the output is a continuous variable); an ensemble may include a bucket of machine learning models and use a model selection algorithm to select the best model for a particular entity type; the personal computer may be compromised, such that transactions originating from the personal computer should be considered extremely suspect, if not per se fraudulent; Deep learning can also be executed using neural processing units (NPUs) that are optimized for artificial intelligence (AI) in hardware to handle dot product math and matrix operations using lower precision numbers). However, the combination of Filliben and Wang does not explicitly teach classifier loss value and reconstruction loss value. Zheng from same or similar field of endeavor teaches classifier loss value and reconstruction loss value (Zheng: Paragraph(s) 0119 teach(es) if the graph reconstruction is a binary classification task, using the binary classification cross-entropy loss as the graph reconstruction loss is supported). 
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Filliben and Wang to incorporate the teachings of Zheng for classifier loss value and reconstruction loss value. There is motivation to combine Zheng into the combination of Filliben and Wang because Zheng’s teachings of classifier loss value and reconstruction loss value would facilitate training the machine learning models (Zheng: Paragraph(s) 0119).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Filliben in view of Wang as applied to claim 9 above, and further in view of Lewis and Zheng.

Regarding Claim 11, the combination of Filliben and Wang teaches all the limitations of claim 9 above; and Filliben further teaches wherein the electronic processor is configured to train the first machine learning model, the second machine learning model, and a fourth machine learning model by: determining an … loss value for the fourth machine learning model; determining a … loss value based on the first vector, the second vector, and a pseudo target vector; determining a … loss value based on the average vector and a label corruption matrix; determining a total loss value based on the … loss value, the … loss value, and the … loss value; and updating the first machine learning model, the second machine learning model, and the fourth machine learning model based on the total loss value (Filliben: Paragraph(s) 0015, 0057, 0074, 0104, 0135, as stated above with respect to claim 10). However, the combination of Filliben and Wang does not explicitly teach augmentation loss value. Lewis from same or similar field of endeavor teaches augmentation loss value (Lewis: Paragraph(s) 0058, 0062 teach(es) FIGS. 
17A-B illustrate alterations of the distance-based loss function that attempt to incorporate additional information provided for suspect labels in an augmented training dataset). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Filliben and Wang to incorporate the teachings of Lewis for augmentation loss value. There is motivation to combine Lewis into the combination of Filliben and Wang because Lewis’s teachings of augmentation loss value would facilitate training the machine learning models (Lewis: Paragraph(s) 0058, 0062). However, the combination of Filliben, Wang, and Lewis does not explicitly teach classifier loss value and reconstruction loss value. Zheng from same or similar field of endeavor teaches classifier loss value and reconstruction loss value (Zheng: Paragraph(s) 0119 teach(es) if the graph reconstruction is a binary classification task, using the binary classification cross-entropy loss as the graph reconstruction loss is supported). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of the combination of Filliben, Wang, and Lewis to incorporate the teachings of Zheng for classifier loss value and reconstruction loss value. There is motivation to combine Zheng into the combination of Filliben, Wang, and Lewis because Zheng’s teachings of classifier loss value and reconstruction loss value would facilitate training the machine learning models (Zheng: Paragraph(s) 0119).

Response to Arguments

Applicant's arguments filed September 29, 2025 have been fully considered but they are not persuasive. 
Regarding applicant’s argument under Claim Rejections - 35 USC § 101 that “Applicant has amended the independent claims to define claims used in the terms and link the last element of the independent claims to the preceding claim elements,” examiner respectfully argues that the amended claims do not overcome the rejections, because the newly added limitations do not provide any additional elements and do not provide any improvement to the functioning of the computer or to any other technology or technical field.

Regarding applicant’s argument under Claim Rejections - 35 USC § 103 that “Nothing in the cited portions of Filliben teach a vector "wherein each value included in the... vector is associated with a label and represents a likelihood that the label is a more accurate label, when the node represents a card with a chargeback claim, each label associated with a value included in the... vector is third party fraud, first party fraud, or technical error, and when the node represents a merchant, each label associated with a value included in the... vector is a merchant category code," as recited by claim 1,” examiner respectfully argues that Filliben teaches that the label engine may also assign a value to the classification attribute of a node. In one example, the feature vector is an n-dimensional vector of numerical features that represent an entity in a graph … another feature associated with the transaction data or the user; in a transaction involving a merchant and a credit card, a machine learning model associated with the merchant may output a low fraud score, whereas a second machine learning model associated with the credit card may output a high fraud score (Filliben: Paragraph(s) 0014-0015, 0008-0009, 0051, 0074, 0116, 0147, 0126, 0139, 0077-0078), and Wang teaches that one of the factors to determine to approve or decline the chargeback request of the requestor is a value of the fraud risk score (Wang: Paragraph(s) 0047, 0049, 0098). 
Therefore, the combination of Filliben and Wang teaches all the claimed features, as stated above with respect to the § 103 rejections.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bitaab (US 20250173738 A1) teaches an Automated Domain Crawler And Checkout Simulator For Proactive And Real-Time Scam Website Detection, including chargeback. Caldera (US 20190122149 A1) teaches an Enhanced System And Method For Identity Evaluation Using A Global Score Value, including graph, vector, and chargeback.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CLAY LEE, whose telephone number is (571) 272-3309. The examiner can normally be reached Monday-Friday, 8 am-5 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neha Patel, can be reached at (571) 270-1492.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CLAY C LEE/
Primary Examiner, Art Unit 3699

Prosecution Timeline

Jan 29, 2024: Application Filed
Jun 10, 2025: Non-Final Rejection (§101, §103)
Sep 09, 2025: Examiner Interview Summary
Sep 09, 2025: Applicant Interview (Telephonic)
Sep 29, 2025: Response Filed
Jan 14, 2026: Final Rejection (§101, §103)
Mar 11, 2026: Interview Requested
Mar 19, 2026: Applicant Interview (Telephonic)
Mar 19, 2026: Examiner Interview Summary
Mar 26, 2026: Request for Continued Examination
Apr 07, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597019: Post-Provisioning Authentication Protocols (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591639: Resource Based Licensing (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572907: Universal Payment Channel (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561654: Systems and Methods for Executing Real-Time Electronic Transactions Using a Routing Decision Model (granted Feb 24, 2026; 2y 5m to grant)
Patent 12561712: Loyalty Point Distributions Using a Decentralized Loyalty ID (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 99% (+57.1%)
Median Time to Grant: 4y 1m
PTA Risk: Moderate
Based on 216 resolved cases by this examiner. Grant probability derived from career allow rate.
