Prosecution Insights
Last updated: April 19, 2026
Application No. 18/809,149

COMPUTER-IMPLEMENTED METHODS, SYSTEMS COMPRISING COMPUTER-READABLE MEDIA, AND ELECTRONIC DEVICES FOR PROVIDING FINANCIAL NETWORK LARGE LANGUAGE MODEL DYNAMIC OPEN BANKING SERVICES

Non-Final OA §101
Filed: Aug 19, 2024
Examiner: MILLER, JAMES H
Art Unit: 3694
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Mastercard International Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 40% (Moderate)
OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 40% (grants 40% of resolved cases; 78 granted / 193 resolved; -11.6% vs TC avg)
Interview Lift: +36.6% (strong lift among resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline)
Currently Pending: 35
Total Applications: 228 (career history, across all art units)

Statute-Specific Performance

§101: 35.7% (-4.3% vs TC avg)
§103: 33.7% (-6.3% vs TC avg)
§102: 5.2% (-34.8% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)

Comparisons are against Tech Center average estimates • Based on career data from 193 resolved cases

Office Action

§101
DETAILED ACTION

Acknowledgements

This action is in response to Applicant’s filing on Aug. 19, 2024, and is made Non-Final. This action is being examined by James H. Miller, who is in the eastern time zone (EST), and who can be reached by email at James.Miller1@uspto.gov or by telephone at (469) 295-9082.

Interviews

Examiner interviews are available by telephone or, preferably, by video conferencing using the USPTO’s web-based collaboration platform. Applicants are strongly encouraged to schedule via the USPTO Automated Interview Request (AIR) portal at http://www.uspto.gov/interviewpractice. Interviews conducted solely for the purpose of “sounding out” the examiner, including by local counsel acting only as a conduit for another practitioner, are not permitted under MPEP § 713.03. The Office is strictly enforcing established interview practice, and applicants should ensure that every interview request is directed toward advancing prosecution on the merits in compliance with MPEP §§ 713 and 713.03.

For after-final interview requests, supervisory approval is required before an interview may be granted. Each AIR should specifically explain how the after-final interview request will advance prosecution—for example, by identifying targeted arguments responsive to the rejection of record, alleged defects in the examiner’s analysis, proposed claim amendments, or another concrete basis for discussion. See MPEP § 713. If the AIR form’s character limits prevent inclusion of all pertinent details, Applicants may send a contemporaneous email to the examiner at James.Miller1@uspto.gov. The examiner is generally available Monday through Friday, 10:00 a.m. to 4:00 p.m. EST.

For any GRANTED interview request, Applicant can expect an email within 24 hours confirming an interview slot from the dates/times proposed and providing collaboration tool access instructions. For any DENIED interview request, the record will include a communication explaining the reason for the denial.
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDSs) (x5) submitted on Oct. 3, 2025; Oct. 30, 2025; Nov. 13, 2025; and Dec. 4, 2025 (x2), were filed before the mailing of a first office action on the merits and are therefore in compliance with the provisions of 37 CFR 1.97(b)(3). Accordingly, the IDSs have been considered.

Claim Status

The status of the claims is as follows: Claims 1–20 are pending and examined, with Claims 1 and 11 in independent form. This is a first action on the merits.

Claim Interpretation

Under the broadest reasonable interpretation, the following claim terms are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. MPEP § 2111.

“predefined training action” is a stored training configuration (recipe) created in advance that defines how to curate training data and retrain the model. Spec. ¶¶ 6, 7, 60–63, 66–68, Claims 6, 16. It does not require a particular algorithm or new data structure beyond being a stored configuration.

“predefined prompt [template] modification” is a stored configuration (recipe) created in advance that defines how a prompt (template) is to be altered. Spec. ¶¶ 6, 7, 51–53, 60–63, 66–68, Claims 7, 17. It does not require a particular algorithm or new data structure beyond being a stored configuration.

“training metadata” is data (e.g., fields, tags, numeric ranges, labels) stored with a predefined training action so that the later-derived training value can be compared to it and a “best match” action can be selected. Spec. ¶¶ 60–63, 66–68, Claims 1, 11. It is an abstract information label/criterion used in later matching steps, implemented with generic data storage and comparison, and not a new data structure beyond being stored or a new hardware configuration.
“prompt metadata” is stored performance/descriptive data associated with each prompt (template)-change recipe and used later when comparing current prompt values to decide which predefined modification to apply. Spec. ¶¶ 60–63, 66–68, Claims 1, 11. It is an abstract information label/criterion used in later matching steps, implemented with generic data storage and comparison, and not a new data structure beyond being stored or a new hardware configuration.

“the training metadata being configured for matching against one or more values for a predefined training performance characteristic of an LLM” is nothing more than a description of generic information that is later used for comparison in the “matching” step. It does not impose any structural limits on the memory or on the metadata itself. It merely states the intended use of that data, i.e., that the subsequent logic will compare “training values” to this metadata in the “match the training value … using the training metadata” step. MPEP § 2103(I)(C). Thus, it is informational labeling used in a later match operation, implemented with routine data storage and comparison on a generic computer, and not a meaningful technological limitation. For example, under BRI, a single piece of training metadata (data about other data) is “configured for matching” in the later step, like ordering meals by numbers at a fast food restaurant.

“the prompt metadata being configured for matching against one or more values for a predefined prompt performance characteristic of the LLM” is nothing more than a description of generic information that is later used for comparison in the “matching” step. It does not impose any structural limits on the memory or on the metadata itself. It merely states the intended use of that data, i.e., that the subsequent logic will compare “prompt values” to this metadata in the “match the prompt value … using the prompt metadata” step. MPEP § 2103(I)(C).
Thus, this element is another instance of storing comparison criteria (information content) to be used in a later, routine matching step implemented with routine data storage and comparison on a generic computer, and not a meaningful technological limitation. For example, under BRI, a single piece of prompt metadata (data about other data) is “configured for matching” in the later step, like ordering meals by numbers at a fast food restaurant.

“based on the predefined training action, curate a training data set and retrain the LLM on the training data set to generate a retrained LLM for value optimized transaction data prompts” means that, after a particular predefined training action has been selected, use whatever that training action specifies to: (1) pick the training examples to create a training data set; and (2) perform a training procedure on the LLM with that training data set, producing an updated (retrained) version of the LLM. The intended future use of the retrained model “for value optimized transaction data prompts” is intended use. MPEP § 2103(I)(C). This limitation is purely functional because it does not claim a particular new training algorithm or model structure.

“based on the predefined prompt modification, generate a second prompt, the second prompt including second open banking data and seeking one or both of the value optimized transaction data and second value optimized transaction data” means that, after a particular predefined prompt (template) modification recipe has been selected, use the recipe to construct a new (second) prompt, where the new prompt: (1) incorporates some additional or different open banking data, and (2) is worded/structured to request value optimized transaction data. This entire element is functional (“use the selected prompt-modification recipe to construct a new financial optimization prompt using more/different open banking data”), with the “seeking” language being intended use about what the prompt is asking for.
MPEP § 2103(I)(C). This limitation is purely functional because it does not claim a particular new training algorithm or model structure.

“generate a second output based on the second prompt to the retrained LLM” means sending the second prompt into the retrained LLM and recording whatever output the model returns as the “second output.” It does not require a particular algorithm or new data structure.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1–20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Analysis

Step 1: Claims 1–20 are directed to a statutory category. Claims 1–10 recite a “non-transitory machine-readable storage medium” and are therefore directed to the statutory category of an “article of manufacture.” Claims 11–20 recite a “computer-implemented method” and are therefore directed to the statutory category of a “process.”

Representative Claim

Claim 1 is representative [“Rep. Claim 1”] of the subject matter under examination and recites, in part, with emphasis added by Examiner to identify limitations: normal font indicates the abstract idea exception, and bold limitations indicate additional elements. Each limitation is identified by a letter for later use as a shorthand notation in referencing/describing each limitation. Portions of the claim use italics to identify intended use limitations1 and underline, as needed, in further describing the abstract idea exception: [A] 1.
Non-transitory computer-readable storage media having computer-executable instructions stored thereon for providing dynamic large language model (LLM) open banking services, wherein when executed by at least one processor the computer-executable instructions cause the at least one processor to: [B] generate a predefined training action and a predefined prompt modification for value optimized transaction data prompts; [mental] [C] store the predefined training action in association with training metadata, the training metadata being configured for matching against one or more values for a predefined training performance characteristic of an LLM; [D] store the predefined prompt modification in association with prompt metadata, the prompt metadata being configured for matching against one or more values for a predefined prompt performance characteristic of the LLM; [E] generate an output based on a first prompt to the LLM, the first prompt including open banking data and seeking value optimized transaction data, and the output including a response from the LLM relating to the value optimized transaction data; [F] evaluate the output against the predefined training performance characteristic to generate a training value; [G] evaluate the output against the predefined prompt performance characteristic to generate a prompt value; [H] match the training value to the predefined training action using the training metadata; [I] match the prompt value to the predefined prompt modification using the prompt metadata; [J] based on the predefined training action, curate a training data set and retrain the LLM on the training data set to generate a retrained LLM for value optimized transaction data prompts; [K] based on the predefined prompt modification, generate a second prompt, the second prompt including second open banking data and seeking one or both of the value optimized transaction data and second value optimized transaction data; and [L] generate a second output based on 
the second prompt to the retrained LLM.

Claims are directed to an abstract idea exception.

Step 2A, Prong One: Rep. Claim 1 recites: “generating” and “storing” “predefined training actions” and “predefined prompt modifications” with associated “metadata” (Limitations B, C, D); generating LLM outputs from prompts containing “open banking data” (Limitation E); evaluating those outputs against predefined “performance characteristics” to generate a “training value” and “prompt value” (Limitations F, G); matching the generated “training value” and “prompt value” to the stored “predefined training action” and “predefined prompt modification” using the associated “metadata” (Limitations H, I); curating training data and retraining the LLM to generate a retrained LLM (Limitation J); generating new (second) prompts containing “open banking data” (Limitation K); and generating, using the retrained LLM, new (second) output based on the new (second) prompts (Limitation L) to “seek” “optimized transaction data” (Limitation K). This recites a fundamental economic principle/practice or commercial or legal interactions under the organizing human activity exception because “seeking optimized [financial] transaction data” describes concepts relating to the economy and commerce, such as “hedging, insurance, and mitigating risks,” MPEP § 2106.04(a)(2)(II)(A), or “sales activities or behaviors, and business relations” between two people. MPEP § 2106.04(a)(2)(II)(B).

Alternatively2, Limitations B–L, as drafted, recite the abstract idea exception of mental processes that, under the broadest reasonable interpretation, cover performance in the human mind or with pen and paper, but for the recitation of the generic computer components indicated in bold. MPEP § 2106.04(a)(2)(III). Claims recite a mental process when they contain limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions.
Examples of claims that recite mental processes include:

• a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353–54, 119 USPQ2d 1739, 1741–42 (Fed. Cir. 2016); . . .

• a claim to collecting and comparing known information (claim 1), which are steps that can be practically performed in the human mind, Classen Immunotherapies, Inc. v. Biogen IDEC, 659 F.3d 1057, 1067, 100 USPQ2d 1492, 1500 (Fed. Cir. 2011). MPEP § 2106.04(a)(2)(III)(A).

For example, but for the generic computer components claim language, here, Limitations B–L recite collecting information (Limitations C, D) and analyzing it (Limitations B, E, F, G, H, I, J, K), where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind. For example, Limitation B is a mental process that is practically performed in the human mind or with pen and paper because it requires mere “observation, evaluation, judgment, and/or opinion” to “generate [select/receive] a predefined training action and a predefined prompt modification.” Limitation B covers any solution with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, which is so broad as to encompass mental processes.
Likewise, Limitation E is a mental process that is practically performed in the human mind or with pen and paper because it requires mere “observation, evaluation, judgment, and/or opinion” to “generate an output based on a first prompt to the LLM [transmit first prompt to LLM], the first prompt including open banking data and seeking value optimized transaction data, and the output including a response from the LLM.” As interpreted under BRI, a human could act as the LLM, could read a prompt that includes open banking data, apply their own reasoning to the data and that question, and write out a textual response that recommends one or more transaction parameters optimizing the requested value. The claims do not recite a minimum scale or volume of data and thus encompass embodiments where the input consists of only a small amount of data that a human could score manually. Limitation E covers any solution with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, which is so broad as to encompass mental processes (purely functional).

Limitations F and G are mental processes that are practically performed in the human mind or with pen and paper because they require mere “observation, evaluation, judgment, and/or opinion” to “evaluate the output.” Limitations F and G cover any solution with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, which is so broad as to encompass mental processes (purely functional).

Limitations H and I are mental processes that are practically performed in the human mind or with pen and paper because collecting and comparing known information (i.e., “training value to the predefined training action using the training metadata” and “the prompt value to the predefined prompt modification using the prompt metadata”) are steps that can be practically performed in the human mind under Classen.
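For illustration of why the matching of Limitations H and I is characterized as routine comparison, the step reduces to checking a score against stored ranges. The following is a minimal sketch only; all names, fields, and threshold values are hypothetical and do not come from the Specification or the claims.

```python
# Hypothetical sketch of "match the [training/prompt] value to the predefined
# action using the metadata" (Limitations H, I): a score is compared against
# stored numeric ranges, and the first matching stored action is selected.
from dataclasses import dataclass

@dataclass
class PredefinedAction:
    name: str
    metadata_range: tuple  # (low, high) range the score is compared against

def match_action(value: float, actions: list) -> PredefinedAction:
    """Return the first stored action whose metadata range contains the value."""
    for action in actions:
        low, high = action.metadata_range
        if low <= value <= high:
            return action
    return actions[-1]  # fall back to a default action

actions = [
    PredefinedAction("retrain_on_high_esg_examples", (0.0, 0.5)),
    PredefinedAction("no_retraining_needed", (0.5, 1.0)),
]
print(match_action(0.3, actions).name)  # a low score selects the retraining recipe
```

The comparison involves only generic data storage and a linear scan, which is the sense in which the OA calls it "routine data storage and comparison on a generic computer."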
Limitation J is a mental process that is practically performed in the human mind or with pen and paper because it requires mere “observation, evaluation, judgment, and/or opinion” to, “based on the predefined training action, curate a training data set and retrain the LLM on the training data set to generate a retrained LLM.” Under BRI, the claim does not require large-scale data or any particular training algorithm. It merely requires picking data according to a predefined recipe and updating the model based on that data to improve answers to “value optimized transaction data” prompts. A human with pen and paper and a small data set of examples could:

(1) Treat the “predefined training action” as a written recipe: e.g., “If ESG performance is low for this kind of transaction, collect 2 past transactions with high ESG scores, their features, and the recommended actions”;

(2) Curate the training set by hand: read the transaction records, select those that match the recipe’s conditions, and summarize them in a notebook table (columns for merchant, amount, category, ESG impact, chosen parameters, etc.);

(3) “Retrain the LLM” mentally: study those curated examples, infer patterns (e.g., “avoid merchants of type X; increase spend in category Y”), and update one’s rule-of-thumb playbook for answering future “value optimized transaction” questions;

(4) The result is a “retrained” human decision model for this class of prompts: when asked similar questions later, the human uses the updated rules to generate new recommendations that better match the value/objective.
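The four-step pen-and-paper procedure above can be expressed as a short sketch. This is an illustration only, assuming hypothetical fields ("esg_score", "category") and a hypothetical threshold; it is not the Applicant's method.

```python
# Hypothetical sketch of the pen-and-paper procedure: (1) treat the predefined
# training action as a written recipe, (2) curate matching past examples,
# (3) infer a simple rule from them, (4) use the updated rule for later answers.
def curate(transactions, recipe):
    """Step 2: select past transactions that satisfy the recipe's condition."""
    return [t for t in transactions if t["esg_score"] >= recipe["min_esg"]]

def infer_rules(examples):
    """Step 3: 'retrain' by extracting a rule of thumb from the curated set."""
    preferred = {t["category"] for t in examples}
    return lambda category: category in preferred  # step 4: updated playbook

recipe = {"min_esg": 0.7}  # step 1: the stored recipe
transactions = [
    {"category": "transit", "esg_score": 0.9},
    {"category": "fuel", "esg_score": 0.2},
]
recommend = infer_rules(curate(transactions, recipe))
print(recommend("transit"), recommend("fuel"))
```

Nothing in the sketch requires more than selection and comparison over a handful of records, which is the scale the OA argues the claim encompasses under BRI.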
As further evidenced by Leskovec et al., “Mining of Massive Datasets” (2019) [“NPL Leskovec”], training and retraining a model consist of selecting appropriate examples and updating decision rules or parameters to better fit an objective, which is the same high-level process recited in “curate a training data set and retrain the LLM,” and which can likewise be performed by a human on a small set of financial transactions using pen and paper. NPL Leskovec, pp. 472–75 (“Training a Perceptron with Zero Threshold”), 476–77 (“The Winnowing Algorithm”), 519–20 (“Gradient Descent”), 567–568 (“Training a Neural Net” and “Backpropagation”) (cited PTO-892).

Limitations K and L are mental processes that are practically performed in the human mind or with pen and paper because they require mere “observation, evaluation, judgment, and/or opinion” to, “based on the predefined prompt modification, generate a second prompt, the second prompt including second open banking data” (Limitation K) and “generate a second output based on the second prompt to the retrained LLM” (Limitation L), for the same reasoning as articulated for Limitation E.

“The use of a physical aid (e.g., pencil and paper or a slide rule) to help perform a mental step (e.g., a mathematical calculation) does not negate the mental nature of the limitation, but simply accounts for variations in memory capacity from one person to another” or a multi-step mental process. MPEP § 2106.04(a)(2)(III)(B). If a claim limitation, under BRI, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of the abstract idea exception. MPEP § 2106.04(a)(2)(III). Accordingly, the pending claims recite the combination of these abstract idea exceptions.

Step 2A, Prong Two: Rep.
Claim 1 does not contain additional elements that integrate the abstract idea exception into a practical application because the additional elements are mere instructions to apply the abstract idea exception. MPEP § 2106.05(f). The additional elements are limited to the computer components and indicated in bold, supra. The additional elements are: Non-transitory computer-readable storage media having computer-executable instructions stored thereon; at least one processor; LLM; and retrained LLM. Regarding the Non-transitory computer-readable storage media having computer-executable instructions stored thereon; at least one processor; LLM; and retrained LLM, Applicant’s Specification does not otherwise describe them or describes them using exemplary language as a general-purpose computer, as a part of a general-purpose computer, or as any known and exemplary (generic) computer component known in the prior art. Thus, Applicant takes the position that such hardware/software is so well known to those of ordinary skill in the art that no explanation is needed under 35 U.S.C. § 112(a). Lindemann Maschinenfabrik GMBH v. Am. Hoist & Derrick Co., 730 F.2d 1452, 1463 (Fed. Cir. 1984) (citing In re Meyers, 410 F.2d 420, 424 (CCPA 1969) (“[T]he specification need not disclose what is well known in the art”). E.g., the Specification explains that the processor is generic electronic hardware processors configured by hardware/software/firmware to perform the claimed operations. Spec. ¶ 84. The computer-implemented methods are performed by client devices, servers, and a service device using processors, transceivers, hardware, software, firmware, and computer-readable media storing executable programs. Id. ¶¶ 86, 131. A processing element may be special-purpose or general-purpose, including programmable logic embodied in a general-purpose processor, with operations carried out by machines performing “processing,” “computing,” “determining,” and similar functions. Id. 
¶¶ 198, 199, 201, 203. The LLM 504 and any retrained LLM for value optimized transaction data prompts are each implemented as software executing on such processing elements and computer-readable media, without any non-generic hardware structure beyond these known general-purpose components. “[T]he LLM is initially a licensed model such as those made available under the trademarks GPT-4® or CHATGPT® … as of the date of the initial filing of the present disclosure.” Id. ¶ 64. Similarly, with respect to the “communication network” and “communication elements,” the Specification describes only generic, well-known networking components and technologies. Spec. ¶¶ 35, 36.

Limitation A describes the processor executing instructions stored in media to perform the steps of the claimed invention. This takes generic hardware and describes the functions of receiving, storing, and sending data (instructions) between the processor and memory device, which merely invokes computers or other machinery in their ordinary capacity to receive, store, or transmit data. MPEP § 2106.05(f)(2). Limitations B–L describe the processor, memory device, and instructions performing the steps of the claimed invention, which represents the abstract idea exception itself. Performing the steps of the abstract idea exception itself simply adds a general-purpose computer after the fact to an abstract idea exception, MPEP § 2106.05(f)(2), or generically recites an effect of the judicial exception. MPEP § 2106.05(f)(3). Therefore, the claim as a whole, looking at the additional elements individually and in combination, is no more than mere instructions to apply the exception using generic computer components and is not a practical application. MPEP § 2106.05(f). The additional elements do not integrate the abstract idea exception into a practical application because they do not impose any meaningful limits on the abstract idea exception. Accordingly, Rep. Claim 1 is directed to an abstract idea. Rep.
Claim 1 is not substantially different from Independent Claim 11, which recites the same limitations. Independent Claim 11 contains no additional elements. Therefore, Independent Claim 11 is also directed to the same abstract idea.

The claims do not provide an inventive concept.

Step 2B: Rep. Claim 1 fails Step 2B because the claim as a whole, looking at the additional elements individually and in combination, is not sufficient to amount to significantly more than the recited judicial exception. As discussed with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer and/or generic computer components. MPEP § 2106.05(f). The same analysis applies here in Step 2B. Mere instructions to apply an exception using a generic computer and/or generic computer components cannot provide an inventive concept. MPEP § 2106.05(I). The additional elements, taken individually and in combination, do not result in the claim, as a whole, amounting to significantly more than the identified judicial exception.

The pending claims’ combination of additional elements is not inventive. First, the claims are directed to an abstract idea. Second, each additional element represents currently available generic computer technology, used in the way in which it is commonly used (individually generic). Last, Applicant’s Specification discloses that the combination of additional elements is not inventive. Spec. ¶¶ 85, 130, 196 (steps/functions may be performed in any order); ¶¶ 35, 36, 64, 84, 86, 131, 198, 199, 201, 203 (known and generic (exemplary) computer equipment, as explained and cited supra). Thus, Examiner finds the additional elements of Rep. Claim 1 are elements that have been recognized as well-understood, routine, and conventional (“WURC”) activity in the particular field of this invention based on Applicant’s own disclosure3. Spec.
¶¶ 35, 36, 64, 84, 85, 86, 130, 131, 196, 198, 199, 201, 203; MPEP § 2106.05(d). Specifically, Applicant’s Specification discloses that the recited additional elements (i.e., non-transitory computer-readable storage media having computer-executable instructions stored thereon; at least one processor; LLM; and retrained LLM) are limited to generic computer components. These elements do no more than “apply” the recited abstract idea(s) on a known computer (e.g., processor) and computer-related components (e.g., media).

NPL Leskovec is additional evidence that training/retraining consists of choosing a training set and adjusting model parameters/decision rules to better satisfy an objective (e.g., perceptron/Winnow updates, gradient descent). NPL Leskovec, pp. 472–77. The claim’s “curate a training data set and retrain the LLM … based on a predefined training action” is just the same high-level process (select examples and update model weights to improve performance on an objective) that NPL Leskovec teaches as routine ML practice. Using “performance characteristics,” an “objective function,” and scores (“training value,” “prompt value”) to drive such retraining is likewise standard ML (loss/objective training). Given NPL Leskovec’s examples of choosing learning rate, stopping criteria, and alternative algorithms (perceptron versus Winnow, nearest neighbor) based on performance on training data, storing “training metadata” and “prompt metadata” and using performance scores to select a “predefined training action” or “predefined prompt modification” is just a routine design choice: a lookup table mapping from scores to which training recipe or prompt template to use. Rule-based selection of a learning strategy or hyperparameters based on prior performance is common in ML pipelines and WURC. The Examiner also finds the functions described in Limitations A–L are all normal functions of a generic computer.
There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the additional elements in combination adds nothing that is not already present when looking at the elements individually. Their collective functions merely provide conventional computer implementation and standard ML techniques to implement the abstract idea at a high level of generality. Thus, Rep. Claim 1 does not provide an inventive concept.

Rep. Claim 1 is not substantially different from Independent Claim 11, which recites the same limitations. Independent Claim 11 contains no additional elements. Therefore, Independent Claim 11 also does not recite an inventive concept.

Dependent Claims Not Significantly More

The dependent claims have been given the full two-part analysis, including analyzing the additional limitations both individually and in combination. The dependent claims, when analyzed both individually and in combination, are also held to be patent ineligible under 35 U.S.C. § 101. Dependent claims depend from the Independent Claims and include all the limitations of the Independent Claims. Therefore, all dependent claims recite the same abstract idea. Dependent claims do not contain additional elements that integrate the abstract idea exception into a practical application or recite an inventive concept because the additional elements: (1) are mere instructions to apply the abstract idea exception; and/or (2) further limit the abstract idea exception of the Independent Claims. The abstract idea itself cannot provide the inventive concept or practical application. MPEP §§ 2106.05(I), 2106.04(d)(III).
Dependent Claims 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, and 20 all recite “wherein” clauses or limitations that further limit the abstract idea of the Independent Claims and contain no additional elements.

Regarding Claims 2 and 12, these claims only alter the logic of how performance scores are used, and the matching recites a mental process under Classen. Claims 3 and 13 merely add more scoring dimensions to the same abstract evaluation/selection process; adding more score fields is a conventional data-modeling choice and does not recite any new technical mechanism. The computer still only stores and compares numbers in a routine way. Claims 4 and 14 further narrow the business content (what is being recommended and which financial data is used), not how the computer operates. Claims 5 and 15 represent goals as objective functions, update external business objectives, and update the objective function, which are routine business optimization functions. These claims merely frame the abstract idea (refining recommendations to meet changing business goals) in mathematical language with no improvement in technology. Claims 6 and 16 merely further limit the predefined training action found abstract in Claims 1 and 11. Claims 7 and 17 further limit the predefined prompt (template) modification found abstract in Claims 1 and 11. Claims 8, 9, 18, and 19 add a generic pre-/post-processing privacy filter, which is another abstract process. Claims 10 and 20 describe using known heuristics or generic algorithm techniques to learn correlations and drive which predefined recipe to pick next. This is the exact routine discussed in NPL Leskovec (see citations supra). These claims are purely functional without any specific unconventional detail.

Conclusion

Claims 1–20 are therefore drawn to ineligible subject matter as they are directed to an abstract idea without significantly more. The analysis above applies to all statutory categories of invention.
As such, the presentment of Rep. Claim 1 otherwise styled as another statutory category is subject to the same analysis.

Examiner Statement of Prior Art—No Prior Art Rejections

Based on the prior art search results and detailed element-by-element mapping, the prior art of record fails to anticipate or render obvious the claimed subject matter of the instant application. While some individual features of Claims 1–20 may be shown in the prior art of record—such as conventional fine-tuning of a pretrained language model or generic prompt optimization—no known reference, alone or in any reasonable combination, teaches or suggests the claimed architecture comprising: (1) a “predefined training action” and “predefined prompt modification”; (2) respective “training metadata” and “prompt metadata” “configured for matching” against separate “training” and “prompt performance characteristics” of an LLM; (3) generation of a distinct “training value” and “prompt value” from each LLM output; and (4) metadata-driven matching of those values to the predefined training action and predefined prompt modification to “curate a training data set and retrain the LLM” and effect “generation of a second prompt” and “output” as recited in the Independent Claims.

The prior art most closely resembling the applicant’s claimed invention generally falls into three buckets: (i) LLM fine-tuning patents; (ii) automated prompt-optimization patents; and (iii) generic LLM adaptation/fine-tuning patents. The closest prior art is:

Chouta et al. (U.S. Pat. Pub. No. 2024/0220489) (Pub. Date: Jul. 4, 2024) is pertinent because it discloses retrieving a generic ML model, fine-tuning it on prior natural language requests and associated enterprise queries, and then using the fine-tuned model to transform follow-on natural language requests into analytic queries with improved task performance. However, Chouta does not disclose the claimed control loop architecture of (1)–(4), supra, recited in the Independent Claims.

Liu et al. (U.S.
Pat. Pub. No. 2023/0289616) is pertinent because it discloses training a machine learning model on a plurality of devices in parallel. The method performs model profiling execution before a model normal execution, allocates tensors of the model into a plurality of chunks based on profiling results from the model profiling execution, and performs the model normal execution on the plurality of devices in parallel to train or fine-tune the model. However, Liu does not disclose the claimed control loop architecture of (1)–(4), supra, recited in the Independent Claims.

Agrawal et al. (U.S. Pat. Pub. No. 2025/0335777) (Filed: Apr. 30, 2024) is pertinent because it discloses selecting a pre-trained LLM and applying supervised fine-tuning (e.g., on bilingual transcripts) so that “the fine-tuned LLM can be used for certain generative tasks with improved performance compared to the pre-trained LLM prior to fine tuning.” However, Agrawal does not disclose the claimed control loop architecture of (1)–(4), supra, recited in the Independent Claims.

FOR: CN Pat. Pub. No. 116629235 B is pertinent because it discloses obtaining a “pre-trained large-scale language model” and fine-tuning it using “a preset task instruction template to obtain an input text and an output text for fine-tuning.” A loss function is calculated based on the output of the model, and parameters are updated using the loss function until the model converges. However, FOR does not disclose the claimed control loop architecture of (1)–(4), supra, recited in the Independent Claims.

NPL: Pryzant et al., “Automatic Prompt Optimization with ‘Gradient Descent’ and Beam Search” (2023) is pertinent because it discloses automatic prompt optimization for LLMs by iteratively generating prompt variants, scoring them on task performance, and revising prompts using “natural language gradients,” beam search, or bandit-style selection. However, Pryzant does not disclose the claimed control loop architecture of (1)–(4), supra, recited in the Independent Claims.
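For illustration only, the control-loop architecture (1)–(4) that the examiner distinguishes over the art can be sketched in Python. Every identifier, scoring heuristic, and threshold below is a hypothetical assumption chosen for exposition; none of it is drawn from the application's claims or from any cited reference.

```python
# Hypothetical sketch of a control loop: derive distinct "training" and
# "prompt" values from an LLM output, then match them (via metadata
# thresholds) to a predefined training action and prompt modification.
# All names and heuristics here are illustrative assumptions.
from dataclasses import dataclass


def score_training(llm_output: str) -> float:
    # Placeholder "training value": fraction of a 100-character budget used.
    return min(1.0, len(llm_output) / 100)


def score_prompt(llm_output: str) -> float:
    # Placeholder "prompt value": crude vocabulary-diversity proxy.
    words = llm_output.split()
    return len(set(words)) / len(words) if words else 0.0


@dataclass
class TrainingAction:
    # Stands in for a "predefined training action" with "training metadata".
    name: str
    max_training_value: float

    def matches(self, training_value: float) -> bool:
        return training_value <= self.max_training_value


@dataclass
class PromptModification:
    # Stands in for a "predefined prompt modification" with "prompt metadata".
    template: str
    max_prompt_value: float

    def matches(self, prompt_value: float) -> bool:
        return prompt_value <= self.max_prompt_value


def control_loop(first_prompt, llm_output, actions, modifications, retrain):
    # (3) distinct values derived from the same LLM output
    training_value = score_training(llm_output)
    prompt_value = score_prompt(llm_output)
    # (4) metadata-driven matching to the predefined items
    action = next((a for a in actions if a.matches(training_value)), None)
    mod = next((m for m in modifications if m.matches(prompt_value)), None)
    if action is not None:
        retrain(action)  # e.g., curate a training data set and retrain
    # produce the "second prompt" when a modification matched
    return mod.template.format(prompt=first_prompt) if mod else first_prompt
```

A short, weak output would trip both thresholds in this toy setup, triggering a retraining callback and a rewritten second prompt; a strong output would pass through unchanged. The point of the sketch is only the dual-valued, metadata-matched loop, not any particular scoring scheme.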
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES H. MILLER, whose telephone number is (469) 295-9082. The examiner can normally be reached M–F, 10 AM to 4 PM (EST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bennett M. Sigmond, can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAMES H MILLER/ Primary Examiner, Art Unit 3694

1 Statements of intended use fail to limit the scope of the claim under BRI. MPEP § 2103(I)(C).

2 “It should be noted that these groupings are not mutually exclusive, i.e., some claims recite limitations that fall within more than one grouping or sub-grouping.
… Accordingly, examiners should identify at least one abstract idea grouping, but preferably identify all groupings to the extent possible, if a claim limitation(s) is determined to fall within multiple groupings, and proceed with the analysis in Step 2A Prong Two.” MPEP § 2106.04(a).

3 See Changes in Examination Procedure Pertaining to Subject Matter Eligibility, Recent Subject Matter Eligibility Decision (Berkheimer v. HP, Inc.), 3–4, https://www.uspto.gov/sites/default/files/documents/memo-berkheimer-20180419.PDF (Apr. 19, 2018) (noting that a conclusion that additional elements are well-understood, routine, or conventional may be supported by various forms of evidence, including “[a] citation to an express statement in the specification or to a statement made by an applicant during prosecution that demonstrates the well-understood, routine, conventional nature of the additional element(s).”).

Prosecution Timeline

Aug 19, 2024
Application Filed
Mar 27, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602690
SYSTEMS AND METHODS FOR TRANSACTION AUTHORIZATION
2y 5m to grant Granted Apr 14, 2026
Patent 12591931
METHODS, APPARATUS, AND SYSTEMS TO FACILITATE TRADES USING DISPLAYED FINANCIAL CURVES
2y 5m to grant Granted Mar 31, 2026
Patent 12561745
Artificial Intelligence Systems and Methods for Efficient Use of Assets
2y 5m to grant Granted Feb 24, 2026
Patent 12547992
CRYPTOGRAPHIC CURRENCY EXCHANGE
2y 5m to grant Granted Feb 10, 2026
Patent 12518279
SYSTEMS AND METHODS FOR PROVIDING MULTI-FACTOR AUTHENTICATION FOR VEHICLE TRANSACTIONS
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
40%
Grant Probability
77%
With Interview (+36.6%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 193 resolved cases by this examiner. Grant probability derived from career allow rate.
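The projection figures above are internally consistent: the 40% grant probability matches the examiner's career allow rate (78 granted of 193 resolved), and the 77% with-interview figure is that rate plus the stated +36.6% interview lift. A minimal sketch of that arithmetic follows; it is an inference from the displayed values, not the product's documented methodology.

```python
# Reconstructing how the displayed projections relate, using only the
# numbers shown on this page (an inference, not documented methodology).
granted, resolved = 78, 193
career_allow_rate = granted / resolved               # 0.404..., shown as 40%
interview_lift = 0.366                               # shown as +36.6%
with_interview = career_allow_rate + interview_lift  # 0.770..., shown as 77%
```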
