Prosecution Insights
Last updated: April 19, 2026
Application No. 18/755,573

ITERATIVE PROMPT GENERATION LOOP

Non-Final Office Action: §102, §103, §112
Filed: Jun 26, 2024
Examiner: WITHEY, THEODORE JOHN
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)

Grant Probability: 44% (Moderate)
OA Rounds: 1-2
To Grant: 2y 11m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 44% (grants 44% of resolved cases; 10 granted / 23 resolved; -18.5% vs TC avg)
Interview Lift: +46.9% (strong; resolved cases with an interview vs. without)
Typical Timeline: 2y 11m avg prosecution; 39 applications currently pending
Career History: 62 total applications across all art units
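The headline figures above can be sanity-checked from the raw counts shown on the card. A small sketch; note the assumptions: the card's 44% appears to round 10/23 ≈ 43.5%, and the "-18.5% vs TC avg" delta is read here as an absolute percentage-point gap.

```python
# Raw career counts shown above (10 granted of 23 resolved).
granted = 10
resolved = 23

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 43.5%; the card displays 44%

# Reading "-18.5% vs TC avg" as a percentage-point gap implies a
# Tech Center average of roughly:
tc_avg = allow_rate + 18.5
print(f"Implied TC average: {tc_avg:.1f}%")  # 62.0%
```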

Statute-Specific Performance

§101: 22.0% (-18.0% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 17.1% (-22.9% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 23 resolved cases.
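Assuming each "vs TC avg" delta is an absolute percentage-point difference, the Tech Center average estimate can be recovered from each statute row; notably, every row implies the same 40.0% baseline, consistent with a single average line applied across statutes.

```python
# Statute-specific rates and their deltas vs the Tech Center average,
# as listed above (percentage points).
rows = {
    "101": (22.0, -18.0),
    "103": (48.6, +8.6),
    "102": (17.1, -22.9),
    "112": (12.0, -28.0),
}

for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta  # rate = tc_avg + delta
    print(f"\u00a7{statute}: implied TC average = {tc_avg:.1f}%")
# Every row yields 40.0%, so the baseline appears to be a single
# 40% Tech Center average estimate.
```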

Office Action

Grounds of rejection: §102, §103, §112
DETAILED ACTION

This office action is a First Action on the Merits (FAOM) for the claim set submitted on 06/26/2024. Claims 1-20 are pending and have been considered. The examiner would like to note that claims 1-20 have been deemed to contain eligible subject matter under 35 U.S.C. 101 due to the inclusion of a claimed improvement in the independent claims, namely, iteratively generating candidate prompts at a machine learning model based on evaluation scores of a plurality of candidate prompts at each iteration, removing the need for time-consuming trial-and-error prompt engineering strategies which require user development and are not translatable to other domains/models (see [0002], [0016] of instant app). See the components of the “in each of a plurality of iterations of a prompt generation loop:” sections of the independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement(s) submitted on 08/21/2025 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) is/are being considered by the examiner.

Specification

The disclosure is objected to because of the following informalities: paragraphs [0049], [0078], and [0087] disclose “one or more few-show examples of the machine learning model task” (emphasis added to underlined portion). The examiner believes this is meant to refer to “few-shot” examples, as later disclosed in [0078]: “…programmatically generating a few-shot example…”. Appropriate correction is required.

Claim Objections

Claims 7 and 16 are objected to because of the following informalities: lines 6 and 5 of the claims, respectively, read “one or more few-show examples of the machine learning model task” (emphasis added to underlined portion). As with the specification, the examiner believes these terms are meant to be “few-shot”. This amendment will be adopted for further analysis of the claims. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 20 recites the limitation “the final prompt” in line 14. There is insufficient antecedent basis for this limitation in the claim. For further analysis of the claim, “the final prompt” will be amended to “a final prompt”, which will be interpreted by the examiner to be any prompt resulting from an operation performed on a prompt. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 3, 10-11, 13, and 19-20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Jia et al. (US-20250371356-A1), hereinafter Jia.

Regarding claim 1, Jia discloses: a computing system (Abstract, Methods, systems, and computer-readable storage media) comprising: one or more processing devices ([Fig. 6, Processor 610]) configured to: receive prompt generation instructions that specify an initial prompt ([0017] providing an initial version of a prompt template, the prompt template including dynamic input and first static input, generating a prompt using the initial version of the prompt template) and a prompt evaluation criterion ([0017] receiving, from a large language model (LLM), an output that is responsive to the prompt, providing an evaluation at least partially based on the output, [The instructions would be similar to “use an LLM to generate prompt evaluations”]); in each of a plurality of iterations of a prompt generation loop: generate a plurality of candidate prompts at least in part at a machine learning model ([0064] the LLM is used to generate an updated version of the prompt template, [0171] In some examples, the prompt template is an initial version for an initial iteration of optimization. In some examples, the prompt template is an updated version for a next iteration of optimization. A batch of prompts is generated (504)), wherein the candidate prompts are generated based at least in part on a current-iteration prompt that is initialized as the initial prompt in a first iteration of the plurality of iterations ([0083] After ten (10) iterations of batch-based optimization, an updated version of the prompt template can be provided, [Performing batch-generation of prompts ten times indicates the first optimization to be applied to an initial version in view of [0171] cited above, wherein ten iterations indicates a plurality]); as specified by the prompt evaluation criterion, compute respective evaluation scores associated with the candidate prompts ([0092] each evaluation in the batch of evaluations 306 is provided from the LLM system 220 in response to respective evaluation prompts provided by the evaluation module 206. For example, the evaluation module 206 can generate an evaluation prompt for each output in the batch of outputs 304 and prompts the LLM of the LLM system 220 using the evaluation prompt); and, based at least in part on the evaluation scores, replace the current-iteration prompt ([Fig. 4, 412 “Update prompt template”], [0169] If the score does not exceed the threshold score, the prompt template is updated (412)); and, output a final prompt generated in a final iteration of the plurality of iterations ([0169] If the score does exceed the threshold score, the prompt template is stored for production use (414), [storing prompts based on a threshold being met (indicating a final iteration in a plurality of iterations) requires an outputting of the final prompt from the evaluation module to a storage unit]).

Regarding claim 3, Jia discloses: the computing system of claim 1. Jia further discloses: wherein the one or more processing devices are configured to compute the evaluation scores at least in part at an evaluation machine learning model ([0168] the evaluation module 206 can make an API call to the LLM system 220, the call including the evaluation prompt, where the LLM system 220 returns the evaluation 232).

Regarding claim 10, Jia discloses: the computing system of claim 1. Jia further discloses: wherein: the prompt generation instructions indicate a mutable portion of the initial prompt and an immutable portion of the initial prompt ([0020] prompt templates include static input and dynamic input, [Mutable and immutable track to dynamic and static respectively]); and in the prompt generation loop, the one or more processing devices are configured to modify the mutable portion of the initial prompt while leaving the immutable portion unchanged ([0020] Here, the static input is the same for each prompt and each invocation of the LLM (each time the LLM is prompted), and the dynamic input includes data dictated by user interaction for each invocation of the LLM).
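The prompt generation loop recited in claims 1, 3, and 10, and credited by the examiner for §101 eligibility, can be sketched as follows. This is an illustrative reconstruction, not code from the application or from Jia: `generate_candidates` and `score` stand in for the machine learning model calls and the evaluation criterion, and the stop rule (a fixed iteration count with an optional score threshold, loosely analogous to Jia's [0169]) is an assumption.

```python
import random

def generate_candidates(current_prompt: str, n: int = 4) -> list[str]:
    """Stand-in for the ML model that proposes candidate prompts
    based on the current-iteration prompt (here: trivial mutations)."""
    return [f"{current_prompt} [variant {i}]" for i in range(n)]

def score(prompt: str) -> float:
    """Stand-in for the prompt evaluation criterion, e.g. an
    evaluation machine learning model scoring each candidate (claim 3)."""
    return random.random()

def prompt_generation_loop(initial_prompt: str,
                           iterations: int = 10,
                           threshold: float = 0.95) -> str:
    # The current-iteration prompt is initialized as the initial prompt.
    current = initial_prompt
    for _ in range(iterations):
        candidates = generate_candidates(current)
        scores = [score(c) for c in candidates]
        # Based at least in part on the evaluation scores, replace
        # the current-iteration prompt with the best candidate.
        best_score, best_candidate = max(zip(scores, candidates))
        current = best_candidate
        # Optional early stop once the score threshold is met,
        # analogous to Jia storing the template for production use
        # when the score exceeds the threshold ([0169]).
        if best_score >= threshold:
            break
    # Output the final prompt generated in the final iteration.
    return current

final_prompt = prompt_generation_loop("Summarize the document.")
print(final_prompt)
```

Claim 10's mutable/immutable split would further constrain `generate_candidates` to vary only a designated portion of the prompt while leaving the rest unchanged.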
Regarding claim 11, Jia discloses: a method for use with a computing system (Abstract, Methods, systems, and computer-readable storage media), the method comprising: receiving prompt generation instructions that specify an initial prompt ([0017] providing an initial version of a prompt template, the prompt template including dynamic input and first static input, generating a prompt using the initial version of the prompt template) and a prompt evaluation criterion ([0017] receiving, from a large language model (LLM), an output that is responsive to the prompt, providing an evaluation at least partially based on the output, [The instructions would be similar to “use an LLM to generate prompt evaluations”]); in each of a plurality of iterations of a prompt generation loop: generating a plurality of candidate prompts at least in part at a machine learning model ([0064] the LLM is used to generate an updated version of the prompt template, [0171] In some examples, the prompt template is an initial version for an initial iteration of optimization. In some examples, the prompt template is an updated version for a next iteration of optimization. A batch of prompts is generated (504)), wherein the candidate prompts are generated based at least in part on a current-iteration prompt that is initialized as the initial prompt in a first iteration of the plurality of iterations ([0083] After ten (10) iterations of batch-based optimization, an updated version of the prompt template can be provided, [Performing batch-generation of prompts ten times indicates the first optimization to be applied to an initial version in view of [0171] cited above, wherein ten iterations indicates a plurality]); as specified by the prompt evaluation criterion, computing respective evaluation scores associated with the candidate prompts ([0092] each evaluation in the batch of evaluations 306 is provided from the LLM system 220 in response to respective evaluation prompts provided by the evaluation module 206. For example, the evaluation module 206 can generate an evaluation prompt for each output in the batch of outputs 304 and prompts the LLM of the LLM system 220 using the evaluation prompt); and, based at least in part on the evaluation scores, replacing the current-iteration prompt ([Fig. 4, 412 “Update prompt template”], [0169] If the score does not exceed the threshold score, the prompt template is updated (412)); and, outputting a final prompt generated in a final iteration of the plurality of iterations ([0169] If the score does exceed the threshold score, the prompt template is stored for production use (414), [storing prompts based on a threshold being met (indicating a final iteration in a plurality of iterations) requires an outputting of the final prompt from the evaluation module to a storage unit]).

Regarding claim 13, Jia discloses: the method of claim 11. Jia further discloses: computing the evaluation scores at least in part at an evaluation machine learning model ([0168] the evaluation module 206 can make an API call to the LLM system 220, the call including the evaluation prompt, where the LLM system 220 returns the evaluation 232).

Regarding claim 19, Jia discloses: the method of claim 11. Jia further discloses: wherein: the prompt generation instructions indicate a mutable portion of the initial prompt and an immutable portion of the initial prompt ([0020] prompt templates include static input and dynamic input, [Mutable and immutable track to dynamic and static respectively]); and the method further comprises, in the prompt generation loop, modifying the mutable portion of the initial prompt while leaving the immutable portion unchanged ([0020] Here, the static input is the same for each prompt and each invocation of the LLM (each time the LLM is prompted), and the dynamic input includes data dictated by user interaction for each invocation of the LLM).

Regarding claim 20, Jia discloses: a computing system (Abstract, Methods, systems, and computer-readable storage media) comprising: one or more processing devices ([Fig.
6, Processor 610]) configured to: via a graphical user interface (GUI) ([0178] the input/output device 640 includes a display unit for displaying graphical user interfaces), receive prompt generation instructions that specify an initial prompt ([0017] providing an initial version of a prompt template, the prompt template including dynamic input and first static input, generating a prompt using the initial version of the prompt template) and a prompt evaluation criterion ([0017] receiving, from a large language model (LLM), an output that is responsive to the prompt, providing an evaluation at least partially based on the output, [The instructions would be similar to “use an LLM to generate prompt evaluations”]); in each of a plurality of iterations of a prompt generation loop: generate a plurality of candidate prompts at least in part at a machine learning model ([0064] the LLM is used to generate an updated version of the prompt template, [0171] In some examples, the prompt template is an initial version for an initial iteration of optimization. In some examples, the prompt template is an updated version for a next iteration of optimization. A batch of prompts is generated (504)), wherein the candidate prompts are generated based at least in part on a current-iteration prompt that is initialized as the initial prompt in a first iteration of the plurality of iterations ([0083] After ten (10) iterations of batch-based optimization, an updated version of the prompt template can be provided, [Performing batch-generation of prompts ten times indicates the first optimization to be applied to an initial version in view of [0171] cited above, wherein ten iterations indicates a plurality]); as specified by the prompt evaluation criterion, compute respective evaluation scores associated with the candidate prompts ([0092] each evaluation in the batch of evaluations 306 is provided from the LLM system 220 in response to respective evaluation prompts provided by the evaluation module 206. For example, the evaluation module 206 can generate an evaluation prompt for each output in the batch of outputs 304 and prompts the LLM of the LLM system 220 using the evaluation prompt); and, based at least in part on the evaluation scores, replace the current-iteration prompt ([Fig. 4, 412 “Update prompt template”], [0169] If the score does not exceed the threshold score, the prompt template is updated (412)); compute a compiled prompt that includes the final prompt and further includes prompt input data received via the GUI ([Fig. 5, Update Prompt Template 510], [In view of the dynamic input of Jia ([0020]) which changes for each invocation of the LLM based on data dictated by user interaction, indicating updating a “final” prompt to be further including prompt input received via the GUI for the dynamic input, wherein the updating prompt is necessarily “compiled” to generate validation prompts from (Step 512)]); at the machine learning model, process the compiled prompt to generate a compiled prompt response ([Fig. 5, Receive validation outputs from LLM 514], [A validation output tracks to a form of prompt response, i.e. is the prompt valid?]); and output the compiled prompt response to the GUI ([0177] display graphical information for a user interface on the input/output device 640, [0169] If the score does exceed the threshold score, the prompt template is stored for production use (414), [storing prompts based on a threshold being met (indicating a final iteration in a plurality of iterations) requires an outputting of the final prompt from the evaluation module to a storage unit, wherein the prompt could be output to the GUI without a change in functionality to Jia as the output prompt to be stored could be “stored” on the user interface]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2, 4-5, 9, 12, 14, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jia in view of Vandeputte et al. (US-20250307289-A1), hereinafter Vandeputte.

Regarding claim 2, Jia discloses: the computing system of claim 1.
Jia does not disclose: wherein the one or more processing devices are further configured to: store the final prompt as a prompt fragment in a prompt library that includes a plurality of other prompt fragments; compute a compiled prompt that includes the final prompt and one or more of the other prompt fragments; at the machine learning model, process the compiled prompt to generate a compiled prompt response; and output the compiled prompt response. Vandeputte discloses: wherein the one or more processing devices are further configured to: store the final prompt as a prompt fragment in a prompt library that includes a plurality of other prompt fragments ([Fig. 2, Reference Prompt 220 containing segments 222-226 to be adjusted], [0062] provided with a reference prompt 220 as an input, [0064] one or more augmented prompts 230, 240 are obtained by adjusting or mutating one or more input segments with respect to the reference prompt 220, [Obtaining a reference prompt comprising a plurality of segments 222-226 to be augmented indicates the reference prompt to necessarily be stored before being retrieved for augmentation (in any iteration other than a first), wherein each segment of the prompt corresponds to a prompt fragment, indicating a plurality of prompt fragments to form a prompt library within the reference prompt. The reference prompt is a final prompt at the time of augmentation for any iteration beyond a first, i.e. a reference prompt with respect to a previous iteration output, final prompt, to be improved upon. Vandeputte discloses a reference prompt may be any input sequence ([0017]) indicating the final prompt as generated in Jia could be applied as the reference prompt of Vandeputte]); compute a compiled prompt that includes the final prompt and one or more of the other prompt fragments ([Fig. 2, Augmented Prompt 230], [0065] Adjusting the reference prompt 220 may thus comprise selecting an input segment from the set of possible input segments and subsequently adding it at any position within the ordered reference prompt 220. Adding a sampled input segment can be achieved by appending the selected input segment, e.g. 231, to the reference prompt 220 as illustrated by augmented prompt 230, [Segment 232 represents a prompt fragment not previously part of the reference, i.e. final, prompt]); at the machine learning model, process the compiled prompt to generate a compiled prompt response ([Fig. 5, Generative AI Model 503], [0089] This augmented prompt may then be provided to the generative AI model 503 as an input, which generates an output sequence in response); and output the compiled prompt response ([Fig. 5, Output 513], [0092] the generated output sequence is provided as the final output 513).

Jia and Vandeputte are considered analogous art within prompt augmentation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Vandeputte, because of the novel way to supplement prompts with high-impact information (as determined through quantitative metrics of augmentation effectiveness), resulting in fewer required iterations of a generative AI model to arrive at a final output, reducing required computing resources.

Regarding claim 4, Jia discloses: the computing system of claim 1.
Jia does not disclose: wherein, during each of the iterations of the prompt generation loop, the one or more processing devices are further configured to: insert one or more test input portions into each of the candidate prompts to obtain a plurality of test prompts; and at the machine learning model, process the test prompts to compute a plurality of test outputs; and compute the evaluation scores based at least in part on the test outputs. Vandeputte discloses: wherein, during each of the iterations of the prompt generation loop, the one or more processing devices are further configured to: insert one or more test input portions into each of the candidate prompts to obtain a plurality of test prompts ([Fig. 2, Augmented Prompt 230], [0065] Adjusting the reference prompt 220 may thus comprise selecting an input segment from the set of possible input segments and subsequently adding it at any position within the ordered reference prompt 220. Adding a sampled input segment can be achieved by appending the selected input segment, e.g. 231, to the reference prompt 220 as illustrated by augmented prompt 230, [In view of the plurality of generated augmented prompts 230, 240, 250 indicating a plurality of test prompts]); and at the machine learning model, process the test prompts to compute a plurality of test outputs ([Fig. 5, Generative AI Model 503], [0089] This augmented prompt may then be provided to the generative AI model 503 as an input, which generates an output sequence in response, [In view of the plurality of test prompts 230, 240, 250 indicating a plurality of test outputs]); and compute the evaluation scores based at least in part on the test outputs ([Fig. 5, Prompt Importance Analysis 504], [0090] Module 504 may further be configured to determine if, and how many augmented prompts need to be obtained from respective reference prompts, as well as for how many target output sequences the prompt importance scores are to be determined. At least one prompt input sequence and target output sequence are obtained when determining the scores. Multiple target output sequences may be selected during the same analysis to smoothen the prompt importance scores. The obtained prompt importance scores allow evaluating the effectiveness of the input segments in the prompt 501, 502).

Jia and Vandeputte are considered analogous art within prompt augmentation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Vandeputte, because of the novel way to supplement prompts with high-impact information (as determined through quantitative metrics of augmentation effectiveness), resulting in fewer required iterations of a generative AI model to arrive at a final output, reducing required computing resources.

Regarding claim 5, Jia in view of Vandeputte discloses: the computing system of claim 4. Vandeputte further discloses: wherein the one or more processing devices are configured to: generate a respective plurality of the test prompts for each of the candidate prompts ([Fig. 2, Augmented Prompt 230], [0065] Adjusting the reference prompt 220 may thus comprise selecting an input segment from the set of possible input segments and subsequently adding it at any position within the ordered reference prompt 220. Adding a sampled input segment can be achieved by appending the selected input segment, e.g. 231, to the reference prompt 220 as illustrated by augmented prompt 230, [In view of the plurality of generated augmented prompts 230, 240, 250 indicating a plurality of test prompts]).
Jia further discloses: repeat the prompt generation loop until, for at least one of the candidate prompts, each of the test prompts generated from that candidate prompt exceeds a predefined evaluation score threshold ([0074] if an evaluation metric (e.g., groundness score, conciseness score, coherence scores, custom score) meets a respective threshold score, it can be determined that the prompt template need not be updated (e.g., the prompt template is considered optimized). If an evaluation metric (e.g., groundness score, conciseness score, coherence scores, custom score) does not meet a respective threshold score, it can be determined that the prompt template is to be updated (e.g., the prompt template is considered non-optimized), [Wherein a non-optimized prompt corresponds to a test prompt and/or candidate prompts of Vandeputte depending on the stage of updating (the original candidate prompt is a first test prompt, further updates are additional test prompts corresponding to the original candidate/test). Further, updating a prompt template is indicative of a prompt generation representing the updated template]).

Regarding claim 9, Jia discloses: the computing system of claim 1. Jia does not disclose: wherein: the initial prompt is structured as a plurality of prompt chunks; and in the prompt generation loop, the one or more processing devices are configured to generate the candidate prompts as candidate orderings of the prompt chunks. Vandeputte discloses: wherein: the initial prompt is structured as a plurality of prompt chunks ([Fig. 2, Reference Prompt 220 comprising prompt “chunks” 222, 224, 226], [In a first iteration, the reference prompt will be an initial prompt which has not had any augmentations performed]); and in the prompt generation loop, the one or more processing devices are configured to generate the candidate prompts as candidate orderings of the prompt chunks ([Fig. 2, Augmented Prompt 230], [0065] Adjusting the reference prompt 220 may thus comprise selecting an input segment from the set of possible input segments and subsequently adding it at any position within the ordered reference prompt 220. Adding a sampled input segment can be achieved by appending the selected input segment, e.g. 231, to the reference prompt 220 as illustrated by augmented prompt 230, [In view of the plurality of generated augmented prompts 230, 240, 250 indicating a plurality of candidate prompts with candidate orderings based on the total amount of segments]).

Jia and Vandeputte are considered analogous art within prompt augmentation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Vandeputte, because of the novel way to supplement prompts with high-impact information (as determined through quantitative metrics of augmentation effectiveness), resulting in fewer required iterations of a generative AI model to arrive at a final output, reducing required computing resources.

Regarding claim 12, Jia discloses: the method of claim 11. Jia does not disclose: storing the final prompt as a prompt fragment in a prompt library that includes a plurality of other prompt fragments; computing a compiled prompt that includes the final prompt and one or more of the other prompt fragments; at the machine learning model, processing the compiled prompt to generate a compiled prompt response; and outputting the compiled prompt response. Vandeputte discloses: storing the final prompt as a prompt fragment in a prompt library that includes a plurality of other prompt fragments ([Fig.
2, Reference Prompt 220 containing segments 222-226 to be adjusted], [0062] provided with a reference prompt 220 as an input, [0064] one or more augmented prompts 230, 240 are obtained by adjusting or mutating one or more input segments with respect to the reference prompt 220, [Obtaining a reference prompt comprising a plurality of segments 222-226 to be augmented indicates the reference prompt to necessarily be stored before being retrieved for augmentation (in any iteration other than a first), wherein each segment of the prompt corresponds to a prompt fragment, indicating a plurality of prompt fragments to form a prompt library within the reference prompt. The reference prompt is a final prompt at the time of augmentation for any iteration beyond a first, i.e. a reference prompt with respect to a previous iteration output, final prompt, to be improved upon. Vandeputte discloses a reference prompt may be any input sequence ([0017]) indicating the final prompt as generated in Jia could be applied as the reference prompt of Vandeputte]); computing a compiled prompt that includes the final prompt and one or more of the other prompt fragments ([Fig. 2, Augmented Prompt 230], [0065] Adjusting the reference prompt 220 may thus comprise selecting an input segment from the set of possible input segments and subsequently adding it at any position within the ordered reference prompt 220. Adding a sampled input segment can be achieved by appending the selected input segment, e.g. 231, to the reference prompt 220 as illustrated by augmented prompt 230, [Segment 232 represents a prompt fragment not previously part of the reference, i.e. final, prompt]); at the machine learning model, processing the compiled prompt to generate a compiled prompt response ([Fig. 5, Generative AI Model 503], [0089] This augmented prompt may then be provided to the generative AI model 503 as an input, which generates an output sequence in response); and outputting the compiled prompt response ([Fig. 5, Output 513], [0092] the generated output sequence is provided as the final output 513).

Jia and Vandeputte are considered analogous art within prompt augmentation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Vandeputte, because of the novel way to supplement prompts with high-impact information (as determined through quantitative metrics of augmentation effectiveness), resulting in fewer required iterations of a generative AI model to arrive at a final output, reducing required computing resources.

Regarding claim 14, Jia discloses: the method of claim 11. Jia does not disclose: during each of the iterations of the prompt generation loop: inserting one or more test input portions into each of the candidate prompts to obtain a plurality of test prompts; and at the machine learning model, processing the test prompts to compute a plurality of test outputs; and computing the evaluation scores based at least in part on the test outputs. Vandeputte discloses: during each of the iterations of the prompt generation loop: inserting one or more test input portions into each of the candidate prompts to obtain a plurality of test prompts ([Fig. 2, Augmented Prompt 230], [0065] Adjusting the reference prompt 220 may thus comprise selecting an input segment from the set of possible input segments and subsequently adding it at any position within the ordered reference prompt 220. Adding a sampled input segment can be achieved by appending the selected input segment, e.g.
231, to the reference prompt 220 as illustrated by augmented prompt 230, [In view of the plurality of generated augmented prompts 230, 240, 250 indicating a plurality of test prompts]); and at the machine learning model, processing the test prompts to compute a plurality of test outputs ([Fig. 5, Generative AI Model 503], [0089] This augmented prompt may then be provided to the generative AI model 503 as an input, which generates an output sequence in response, [In view of the plurality of test prompts 230, 240, 250 indicating a plurality of test outputs]); and computing the evaluation scores based at least in part on the test outputs ([Fig. 5, Prompt Importance Analysis 504], [0090] Module 504 may further be configured to determine if, and how many augmented prompts need to be obtained from respective reference prompts, as well as for how many target output sequences the prompt importance scores are to be determined. At least one prompt input sequence and target output sequence are obtained when determining the scores. Multiple target output sequences may be selected during the same analysis to smoothen the prompt importance scores. The obtained prompt importance scores allow evaluating the effectiveness of the input segments in the prompt 501, 502). Jia and Vandeputte are considered analogous art within prompt augmentation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Vandeputte, because of the novel way to supplement prompts with high-impact information (as determined through quantitative metrics of augmentation effectiveness), resulting in fewer required iterations of a generative AI model to arrive at a final output, reducing required computing resources.

Regarding claim 18, Jia discloses: the method of claim 11. 
Jia does not disclose: wherein: the initial prompt is structured as a plurality of prompt chunks; and the method further comprises, in the prompt generation loop, generating the candidate prompts as candidate orderings of the prompt chunks. Vandeputte discloses: wherein: the initial prompt is structured as a plurality of prompt chunks ([Fig. 2, Reference Prompt 220 comprising prompt “chunks” 222, 224, 226], [In a first iteration, the reference prompt will be an initial prompt which has not had any augmentations performed]); and the method further comprises, in the prompt generation loop, generating the candidate prompts as candidate orderings of the prompt chunks ([Fig. 2, Augmented Prompt 230], [0065] Adjusting the reference prompt 220 may thus comprise selecting an input segment from the set of possible input segments and subsequently adding it at any position within the ordered reference prompt 220. Adding a sampled input segment can be achieved by appending the selected input segment, e.g. 231, to the reference prompt 220 as illustrated by augmented prompt 230, [In view of the plurality of generated augmented prompts 230, 240, 250 indicating a plurality of candidate prompts with candidate orderings based on the total number of segments]). Jia and Vandeputte are considered analogous art within prompt augmentation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Vandeputte, because of the novel way to supplement prompts with high-impact information (as determined through quantitative metrics of augmentation effectiveness), resulting in fewer required iterations of a generative AI model to arrive at a final output, reducing required computing resources.

Claim(s) 6, 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jia in view of Shea et al. (US-20250110975-A1), hereinafter Shea. 
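For orientation (an illustrative annotation, not part of the Office Action record): the loop the rejection maps across claims 11, 14, and 18 can be sketched in Python as below. The function names, the permutation-based candidate generation, and the length-based scoring stub are assumptions for illustration, not disclosures of Jia or Vandeputte.

```python
import itertools

def evaluate(candidate, model, test_inputs):
    """Claim 14 sketch: insert test input portions into the candidate
    prompt to form test prompts, run them through the model, and reduce
    the test outputs to one evaluation score (stubbed as mean length)."""
    test_prompts = [candidate + "\n" + t for t in test_inputs]
    test_outputs = [model(p) for p in test_prompts]
    return sum(len(o) for o in test_outputs) / len(test_outputs)

def prompt_generation_loop(chunks, model, test_inputs, iterations=3):
    """Claims 11/18 sketch: in each iteration, generate candidate prompts
    as candidate orderings of the prompt chunks, score each candidate,
    and carry the best-scoring candidate forward as the final prompt."""
    best_score, final_prompt = float("-inf"), "\n".join(chunks)
    for _ in range(iterations):
        # Candidate prompts as candidate orderings of the chunks.
        candidates = ["\n".join(p) for p in itertools.permutations(chunks)]
        scored = [(evaluate(c, model, test_inputs), c) for c in candidates]
        score, best_candidate = max(scored)
        if score > best_score:
            best_score, final_prompt = score, best_candidate
    return final_prompt
```

With a real generative model in place of the stub, the returned string would correspond to the "final prompt" the claims recite.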
Regarding claim 6, Jia discloses: the computing system of claim 1. Jia does not disclose: wherein the final prompt includes one or more non-ASCII characters. Shea discloses: wherein the final prompt includes one or more non-ASCII characters ([0043] The string prompt may include letters, numbers, whitespace, punctuation, and in some cases formatting. Similarly, the generative output of a generative output engine as described herein can be formatted/encoded according to any suitable encoding (e.g., ISO, Unicode, ASCII as examples), [Wherein Unicode includes encodings for symbols and emojis, non-ASCII characters]). Jia and Shea are considered analogous art within prompt engineering. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Shea, because of the novel way to leverage retained data (wherein the data includes non-ASCII characters) including past prompt results for training and functionality improvement in the context of prompt engineering resulting in further customized prompts for particular users, sessions, or use histories (Shea, [0187]).

Regarding claim 15, Jia discloses: the method of claim 11. Jia does not disclose: wherein the final prompt includes one or more non-ASCII characters. Shea discloses: wherein the final prompt includes one or more non-ASCII characters ([0043] The string prompt may include letters, numbers, whitespace, punctuation, and in some cases formatting. Similarly, the generative output of a generative output engine as described herein can be formatted/encoded according to any suitable encoding (e.g., ISO, Unicode, ASCII as examples), [Wherein Unicode includes encodings for symbols and emojis, non-ASCII characters]). Jia and Shea are considered analogous art within prompt engineering. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Shea, because of the novel way to leverage retained data (wherein the data includes non-ASCII characters) including past prompt results for training and functionality improvement in the context of prompt engineering resulting in further customized prompts for particular users, sessions, or use histories (Shea, [0187]).

Claim(s) 7, 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jia in view of Tu et al. (US-20240330603-A1), hereinafter Tu.

Regarding claim 7, Jia discloses: the computing system of claim 1. Jia further discloses: wherein the prompt generation instructions further specify a machine learning model task ([Col. 3, Table 2] {TASK} Answer: Is the submission concise and to the point? {TASK} Answer: Is the submission coherent, well-structured, and organized? {TASK} Answer: [customized question] {DATA} {LLM output}, [Having LLM output as data indicates a machine learning model performing a task as is necessarily required to produce output]). Jia does not disclose: wherein in the prompt generation loop, the one or more processing devices are configured to generate the candidate prompts such that the candidate prompts include one or more few-shot examples of the machine learning model task. Tu discloses: wherein in the prompt generation loop, the one or more processing devices are configured to generate the candidate prompts such that the candidate prompts include one or more few-shot examples of the machine learning model task ([0073] 5-shots, 15-shots, and all-shots for vanilla classifiers on intent tasks…As illustrated, aligned prompts can further improve performance, with the best results obtained in few-shot settings. 
Additionally, the variances in task performance across all languages with aligned prompts are significantly smaller than those observed with fine-tuning and prompt tuning only. Although prompt tuning achieves higher accuracies on few-shot settings, [Intent classification, i.e. a machine learning model task, in a few-shot learning setting for aligning prompts indicates generation of candidate, i.e. aligned prompts using few-shot intent classification examples]). Jia and Tu are considered analogous art within prompt fine-tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Tu, because of the novel way to adapt prompts for performing tasks which will function well across different languages (modifying the prompt to be compatible with multiple languages), improving prompt tuning methods across different language domains which will result in more accurate cross-lingual tasks with the same prompt (Tu, [0017]-[0018]).

Regarding claim 16, Jia discloses: the method of claim 11. Jia further discloses: wherein the prompt generation instructions further specify a machine learning model task ([Col. 3, Table 2] {TASK} Answer: Is the submission concise and to the point? {TASK} Answer: Is the submission coherent, well-structured, and organized? {TASK} Answer: [customized question] {DATA} {LLM output}, [Having LLM output as data indicates a machine learning model performing a task as is necessarily required to produce output]). Jia does not disclose: the method further comprises, in the prompt generation loop, generating the candidate prompts such that the candidate prompts include one or more few-shot examples of the machine learning model task. 
Tu discloses: the method further comprises, in the prompt generation loop, generating the candidate prompts such that the candidate prompts include one or more few-shot examples of the machine learning model task ([0073] 5-shots, 15-shots, and all-shots for vanilla classifiers on intent tasks…As illustrated, aligned prompts can further improve performance, with the best results obtained in few-shot settings. Additionally, the variances in task performance across all languages with aligned prompts are significantly smaller than those observed with fine-tuning and prompt tuning only. Although prompt tuning achieves higher accuracies on few-shot settings, [Intent classification, i.e. a machine learning model task, in a few-shot learning setting for aligning prompts indicates generation of candidate, i.e. aligned prompts using few-shot intent classification examples]). Jia and Tu are considered analogous art within prompt fine-tuning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Tu, because of the novel way to adapt prompts for performing tasks which will function well across different languages (modifying the prompt to be compatible with multiple languages), improving prompt tuning methods across different language domains which will result in more accurate cross-lingual tasks with the same prompt (Tu, [0017]-[0018]).

Claim(s) 8, 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jia in view of Qadrud-Din et al. (US-20240289561-A1), hereinafter Qadrud-Din.

Regarding claim 8, Jia discloses: the computing system of claim 1. Jia does not disclose: wherein: the prompt generation instructions further specify a structured input format; and in the prompt generation loop, the one or more processing devices are configured to generate the candidate prompts in the structured input format. 
Qadrud-Din discloses: wherein: the prompt generation instructions further specify a structured input format ([Fig. 10, generated Timeline prompt 1008 based on Chunk 1010 of input text], [0225] determines one or more timeline generation prompts 1008 based on the timeline generation request message 1004. In some embodiments, the determination of the one or more timeline prompts may involve processing one or more input documents via the chunker. As discussed herein, for instance with respect to the methods 500 and 600 shown in FIG. 5 and FIG. 6, the chunker may perform one or more operations such as pre-processing, sharding, and/or chunking the documents into manageable text, [Developing prompts for each chunk of received text indicates the input format to be specified to be in chunks]); and in the prompt generation loop, the one or more processing devices are configured to generate the candidate prompts in the structured input format ([0332] The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list). Jia and Qadrud-Din are considered analogous art within prompt augmentation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Qadrud-Din, because of the novel way to automatically evaluate documents against a policy through use of a document pipeline which identifies text that should be kept together and extraneous text, allowing for segmentation of input text with more accurate responses generated through removal of extraneous input (Qadrud-Din, [0023]-[0028]).

Regarding claim 17, Jia discloses: the method of claim 11. 
Jia does not disclose: wherein: the prompt generation instructions further specify a structured input format; and the method further comprises, in the prompt generation loop, generating the candidate prompts in the structured input format. Qadrud-Din discloses: wherein: the prompt generation instructions further specify a structured input format ([Fig. 10, generated Timeline prompt 1008 based on Chunk 1010 of input text], [0225] determines one or more timeline generation prompts 1008 based on the timeline generation request message 1004. In some embodiments, the determination of the one or more timeline prompts may involve processing one or more input documents via the chunker. As discussed herein, for instance with respect to the methods 500 and 600 shown in FIG. 5 and FIG. 6, the chunker may perform one or more operations such as pre-processing, sharding, and/or chunking the documents into manageable text, [Developing prompts for each chunk of received text indicates the input format to be specified to be in chunks]); and the method further comprises, in the prompt generation loop, generating the candidate prompts in the structured input format ([0332] The prompt template may also include one or more additional instructions, such as an instruction to format the text generated by the text generation model as structured text. For instance, the structured text may be implemented as a JSON list). Jia and Qadrud-Din are considered analogous art within prompt augmentation. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Jia to incorporate the teachings of Qadrud-Din, because of the novel way to automatically evaluate documents against a policy through use of a document pipeline which identifies text that should be kept together and extraneous text, allowing for segmentation of input text with more accurate responses generated through removal of extraneous input (Qadrud-Din, [0023]-[0028]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Li et al. (CN-117312523-A) (machine translation attached) discloses “a prompt information generation method, device, computer equipment and storage medium, wherein the method comprises: obtaining initial prompt information; performing intention classification to the initial prompt information to obtain at least one intention type; generating strategy based on the prompt information respectively corresponding to at least one intention type, generating multiple prompt information to be screened under each intention type; aiming at each said intention category, screening the middle prompting information from multiple prompting information to be screened under the intention category, and taking said middle prompting information as said initial prompting information to return to the step of executing said intention category until the target prompting information meeting the preset prompting information requirement is obtained. Through said method, the target prompt information with more detailed description and higher quality can be generated, so the model can be generated based on the target prompt information and content, and the high quality content capable of satisfying the user demand can be further obtained” (abstract). See entire document. Shen et al. 
(US-20250156638-A1) discloses “Prompt embedding samples are drawn from a prior distribution and are passed into a pretrained model to receive a corresponding token label prediction for a batch of text data. Prompt embedding samples are accepted from a distribution of a first iteration; the accepted samples satisfy a condition of a distance function between a ground truth label and the corresponding token label prediction being less than a first tolerance. Embeddings are resampled from the accepted prompt embedding samples with probability proportional to weights and the resampled embeddings are perturbed via a perturbation kernel to obtain a new sample. The perturbed resampled embeddings are propagated through the pretrained model, and those that satisfy a condition are projected, where the second tolerance is decayed by one step per iteration. The projected resampled embeddings are concatenated with an embedding of a given input and inferencing is performed” (abstract). See entire document. Hellman et al. (US-20190259293-A1) discloses “Systems and methods for automated custom training of a scoring model are disclosed herein. 
The method include: receiving a plurality of responses received from a plurality of students in response to providing of a prompt; identifying an evaluation model relevant to the provided prompt, which evaluation model can be a machine learning model trained to output a score relevant to at least portions of a response; generating a training indicator that provides a graphical depiction of the degree to which the identified evaluation model is trained; determining a training status of the model; receiving at least one evaluation input when the model is identified as insufficiently trained; updating training of the evaluation model based on the at least one received evaluation input; and controlling the training indicator to reflect the degree to which the evaluation model is trained subsequent to the updating of the training of the evaluation model” (abstract). See entire document. Ma et al. (“An Iterative Optimizing Framework for Radiology Report Summarization With ChatGPT”) discloses “large language models (LLMs) like Chat Generative Pre-trained Transformer (ChatGPT) have shown strong generalization capabilities and performance, but their performance in specific domains, such as radiology, remains under-investigated and potentially limited. To address this limitation, we propose ImpressionGPT, leveraging the contextual learning capabilities of LLMs through our dynamic prompt and iterative optimization algorithm to accomplish the AIG task. ImpressionGPT initially employs a small amount of domain-specific data to create a dynamic prompt, extracting contextual semantic information closely related to the test data. Subsequently, the iterative optimization algorithm automatically evaluates the output of LLMs and provides optimization suggestions, continuously refining the output results” (abstract). See entire document. Cameron et al. 
(US-12493772-B1) discloses “Systems and methods for constructing layered prompts to operate as input into a pre-trained large language model (LLM). The method involves obtaining a set of application domains in which the LLM will be used. Using these application domains, a set of guidelines is determined, defining operation boundaries for the LLM. A set of layers is determined, each associated with the guidelines and including variables representing attributes identified within those guidelines. Using these layers, a first layered prompt is constructed to test the initial operation boundaries of the guidelines and is supplied to the LLM to generate a set of responses. Based on the responses, a second layered prompt is dynamically constructed to test additional operation boundaries, ensuring iterative refinement and contextual relevance.” (abstract). See entire document.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THEODORE JOHN WITHEY whose telephone number is (703)756-1754. The examiner can normally be reached Monday - Friday, 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders can be reached at (571) 272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THEODORE WITHEY/
Examiner, Art Unit 2655

/ANDREW C FLANDERS/
Supervisory Patent Examiner, Art Unit 2655

Prosecution Timeline

Jun 26, 2024
Application Filed
Feb 17, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591744
METHOD FOR TRAINING SEMANTIC REPRESENTATION MODEL, DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12536994
APPARATUS FOR CLASSIFYING SOUNDS BASED ON NEURAL CODE IN SPIKING NEURAL NETWORK AND METHOD THEREOF
2y 5m to grant Granted Jan 27, 2026
Patent 12475330
METHOD FOR IDENTIFYING NOISE SAMPLES, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Nov 18, 2025
Patent 12417759
SPEECH RECOGNITION USING CADENCE PATTERNS
2y 5m to grant Granted Sep 16, 2025
Patent 12412580
Sound Extraction System and Sound Extraction Method
2y 5m to grant Granted Sep 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
44%
Grant Probability
90%
With Interview (+46.9%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 23 resolved cases by this examiner. Grant probability derived from career allow rate.
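The projection figures above appear to compose additively. A hedged reconstruction (the additive model and the cap at 100% are assumptions about how the page derives its numbers, not documented methodology):

```python
# Figures taken from the examiner statistics shown above.
granted, resolved = 10, 23        # career record: 10 granted / 23 resolved
interview_lift = 0.469            # observed +46.9% lift with interview

career_allow_rate = granted / resolved            # about 0.435, shown as 44%
# Assumed derivation: base rate plus interview lift, capped at 100%.
with_interview = min(career_allow_rate + interview_lift, 1.0)

print(round(with_interview * 100))  # → 90, consistent with the 90% shown
```

Under this assumption the 90% "with interview" figure falls out of the two published statistics directly.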
