Prosecution Insights
Last updated: April 19, 2026
Application No. 18/908,222

RAPID INTERACTIVE ITERATION FOR PROMPT DESIGN

Non-Final OA: §101, §102
Filed
Oct 07, 2024
Examiner
KHAKHAR, NIRAV K
Art Unit
2163
Tech Center
2100 — Computer Architecture & Software
Assignee
Scaled Cognition Inc.
OA Round
1 (Non-Final)
78%
Grant Probability
Favorable
1-2
OA Rounds
3y 6m
To Grant
72%
With Interview

Examiner Intelligence

Grants 78% — above average
78%
Career Allow Rate
345 granted / 444 resolved
+22.7% vs TC avg
Minimal -6% lift
Without
With
-5.9%
Interview Lift
resolved cases with interview
Typical timeline
3y 6m
Avg Prosecution
4 currently pending
Career history
448
Total Applications
across all art units

Statute-Specific Performance

§101
22.1%
-17.9% vs TC avg
§103
39.0%
-1.0% vs TC avg
§102
25.6%
-14.4% vs TC avg
§112
6.5%
-33.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 444 resolved cases

Office Action

§101 §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Examiner acknowledges applicant's claim of priority to U.S. Provisional Application No. 63/551,535, filed February 9, 2024.

Remarks

Claims 1–20 are currently pending, of which claims 1, 8, and 15 are independent.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1–6, 8–13, and 15–20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite mental processes capable of being performed in the human mind. This judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are analyzed per MPEP 2106.

Step 1: Are the Claims in One of the Four Categories of Statutory Subject Matter?

Independent claim 1 is directed to a method; claim 8 is directed to a non-transitory computer readable storage medium; and claim 15 is directed to a system including a memory and a processor. Step 1: Yes.

Step 2A, Prong 1: Are the Claims Directed to a Judicial Exception?

All three independent claims recite: rendering the prompt from the set of parameters with at least one modified parameter, wherein a plurality of the parameters are used to format the prompt but are not directly entered into the prompt. Giving this step its broadest reasonable interpretation, it is a mental process capable of being performed in the human mind. For example, a person could be instructed to draft a question about a particular topic using a particularly rude tone; the tone influences the format of the question without being an explicit part of the question. Because this step is capable of being performed in the human mind, it is an abstract idea and therefore a judicial exception. Step 2A, Prong 1: Yes.

Step 2A, Prong 2: Is the Judicial Exception Integrated into a Practical Application?

All three independent claims recite: providing, by a server, an interface for configuring a set of parameters used to render a prompt, the prompt to be submitted to a machine learning model provided by one or more remote servers; [and] receiving a selection to modify at least one parameter within the interface. These steps amount only to insignificant extra-solution activity: the provision of an interface for the collection of user input, and the receipt of that input, amount only to mere data gathering per MPEP 2106.05(g). Insignificant extra-solution activity cannot integrate a judicial exception into a practical application. Claims 8 and 15 also recite preambular limitations that indicate only which category of invention these claims fall under; these limitations do not integrate the identified judicial exception into a practical application. Step 2A, Prong 2: No.

Step 2B: Do the Claims Amount to Significantly More?

All three independent claims recite: providing, by a server, an interface for configuring a set of parameters used to render a prompt, the prompt to be submitted to a machine learning model provided by one or more remote servers; [and] receiving a selection to modify at least one parameter within the interface. These elements are well-understood, routine, and conventional per MPEP 2106.05(d), because they are equivalent to the presentation of offers and the recording of a customer's order, both of which are examples of activity that the courts have found to be well-understood, routine, and conventional when they are claimed in a merely generic manner or as insignificant extra-solution activity. Therefore, they do not amount to significantly more than the identified judicial exception. Step 2B: No.

The dependent claims are analyzed as follows. Claims 2, 4, 9, 11, 16, and 18 recite additional details on the user input. The collection of user input is insignificant extra-solution activity and therefore does not integrate the judicial exception into a practical application; it is also well-understood, routine, and conventional (MPEP 2106.05(d)) and therefore does not amount to significantly more. Claims 3, 10, and 17 recite a separate and distinct judicial exception, because a person could follow the script of a customer service agent; these claims are directed to mental processes capable of being performed in the human mind. Claims 5, 12, and 19 recite the retrieval of information in memory. This act is insignificant extra-solution activity and therefore does not integrate the judicial exception into a practical application; it is also well-understood, routine, and conventional (MPEP 2106.05(d)) and therefore does not amount to significantly more. Claims 6, 13, and 20 recite a separate and distinct judicial exception, because the act of evaluating a prompt is a mental process.

For these reasons, claims 1–6, 8–13, and 15–20 are rejected under 35 USC 101 for being directed to a judicial exception without significantly more.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1–20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Thatisetti et al., U.S. PG-Pub. No. 20250005020, having a filing date of June 28, 2023 (hereafter, "Thatisetti").

As to Claim 1, Thatisetti discloses: a method for automatically rendering a prompt, comprising: providing, by a server, an interface for configuring a set of parameters used to render a prompt, the prompt to be submitted to a machine learning model provided by one or more remote servers (Fig. 23, described at [0342], showing a user's natural language input to a prompt); receiving a selection to modify at least one parameter within the interface ([0372], "… the automated chat service may conduct a search of a knowledge base or other content store using the original user input or a modified user input resulting from an exchange …"); and rendering the prompt from the set of parameters with at least one modified parameter, wherein a plurality of the parameters are used to format the prompt but are not directly entered into the prompt ([0059] – [0061], referring to prompt generation on the basis of input, including prompt formatting).
As to Claim 2, Thatisetti discloses: the method of claim 1, further comprising receiving a selection of a state within a state machine having a plurality of states, the interface provided from a template associated with the selected state, wherein two or more of the states of the plurality of states have different interface templates ([0064] – [0065], a preconditioning service may insert the user's raw input into a template prompt selected from a set of prompts. [0098], "… a particular engineered prompt template can be selected based on a desired task for which output of the generative output engine may be useful to assist." [0342], "… content region 2302 may be used to create new tasks or issues, arrange tasks or issues in accordance with column categories or states, view issue content, or perform other tasks…").

As to Claim 3, Thatisetti discloses: the method of claim 2, wherein the state machine is associated with logic followed by an automated agent during an interaction with a customer associated with a remote device ([0053], "… a trouble ticket system (e.g., an information technology service management or "ITSM" system) may include an interface for a service agent to chat with or exchange information with a customer experiencing a problem.").
As to Claim 4, Thatisetti discloses: the method of claim 1, wherein rendering the prompt includes retrieving one or more instructions to include in the rendered prompt based on two or more parameters related to selecting the instructions ([0065], "The preconditioning service can, without limitation: append additional context to the user's raw input; may insert the user's raw input into a template prompt selected from a set of prompts; replace ambiguous references in the user's input with specific references… Thereafter, optionally, the modified/supplemented/hydrated user input can be provided as input to a secondary queue that meters and orders requests from one or more software platforms to a generative output system…").

As to Claim 5, Thatisetti discloses: the method of claim 1, wherein rendering the prompt includes retrieving one or more examples to include in the rendered prompt based on two or more parameters related to selecting the examples (Fig. 19C, described at [0325], "The prompt 1950 also includes a set of structured query examples 1958 that provide demonstrative input-output pairs. Specifically, the input-output pairs include an example natural language input or prompt paired with an example schema-formatted output.").

As to Claim 6, Thatisetti discloses: the method of claim 1, further comprising evaluating the rendered prompt and an output by an LLM that processed the prompt to generate the output ([0372], "The automated chat service may obtain a set of results and evaluate each of the results to compute a score or content metric.").
As to Claim 7, Thatisetti discloses: the method of claim 6, further comprising automatically optimizing the rendered prompt in response to an evaluation report generated from the evaluating and one or more pairs of an example prompt and an output of an LLM that processed the example prompt ([0372], "Other content metrics may include a Jaccard similarity, cosine similarity, confidence score or other metric representing an analysis of the user input with respect to content of the identified content item.").

As to Claim 8, Thatisetti discloses: a non-transitory computer readable storage medium having embodied thereon a program, the program being executable by a processor (Fig. 34, showing Processing Unit 3402 in communication with Memory 3404 via system bus 3414) to automatically render a prompt, the method comprising: providing, by a server, an interface for configuring a set of parameters used to render a prompt, the prompt to be submitted to a machine learning model provided by one or more remote servers (Fig. 23, described at [0342], showing a user's natural language input to a prompt); receiving a selection to modify at least one parameter within the interface ([0372], "… the automated chat service may conduct a search of a knowledge base or other content store using the original user input or a modified user input resulting from an exchange …"); and rendering the prompt from the set of parameters with at least one modified parameter, wherein a plurality of the parameters are used to format the prompt but are not directly entered into the prompt ([0059] – [0061], referring to prompt generation on the basis of input, including prompt formatting).
As to Claim 9, Thatisetti discloses: the non-transitory computer readable storage medium of claim 8, the method further comprising receiving a selection of a state within a state machine having a plurality of states, the interface provided from a template associated with the selected state, wherein two or more of the states of the plurality of states have different interface templates ([0064] – [0065], a preconditioning service may insert the user's raw input into a template prompt selected from a set of prompts. [0098], "… a particular engineered prompt template can be selected based on a desired task for which output of the generative output engine may be useful to assist." [0342], "… content region 2302 may be used to create new tasks or issues, arrange tasks or issues in accordance with column categories or states, view issue content, or perform other tasks…").

As to Claim 10, Thatisetti discloses: the non-transitory computer readable storage medium of claim 9, wherein the state machine is associated with logic followed by an automated agent during an interaction with a customer associated with a remote device ([0053], "… a trouble ticket system (e.g., an information technology service management or "ITSM" system) may include an interface for a service agent to chat with or exchange information with a customer experiencing a problem.").
As to Claim 11, Thatisetti discloses: the non-transitory computer readable storage medium of claim 8, wherein rendering the prompt includes retrieving one or more instructions to include in the rendered prompt based on two or more parameters related to selecting the instructions ([0065], "The preconditioning service can, without limitation: append additional context to the user's raw input; may insert the user's raw input into a template prompt selected from a set of prompts; replace ambiguous references in the user's input with specific references… Thereafter, optionally, the modified/supplemented/hydrated user input can be provided as input to a secondary queue that meters and orders requests from one or more software platforms to a generative output system…").

As to Claim 12, Thatisetti discloses: the non-transitory computer readable storage medium of claim 8, wherein rendering the prompt includes retrieving one or more examples to include in the rendered prompt based on two or more parameters related to selecting the examples (Fig. 19C, described at [0325], "The prompt 1950 also includes a set of structured query examples 1958 that provide demonstrative input-output pairs. Specifically, the input-output pairs include an example natural language input or prompt paired with an example schema-formatted output.").

As to Claim 13, Thatisetti discloses: the non-transitory computer readable storage medium of claim 8, the method further comprising evaluating the rendered prompt and an output by an LLM that processed the prompt to generate the output ([0372], "The automated chat service may obtain a set of results and evaluate each of the results to compute a score or content metric.").
As to Claim 14, Thatisetti discloses: the non-transitory computer readable storage medium of claim 13, the method further comprising automatically optimizing the rendered prompt in response to an evaluation report generated from the evaluating and one or more pairs of an example prompt and an output of an LLM that processed the example prompt ([0372], "Other content metrics may include a Jaccard similarity, cosine similarity, confidence score or other metric representing an analysis of the user input with respect to content of the identified content item.").

As to Claim 15, Thatisetti discloses: a system for automatically rendering a prompt, comprising: one or more servers, wherein each server includes a memory and a processor; and one or more modules stored in the memory and executed by at least one of the one or more processors (Fig. 1, showing Host Server(s) 102, and Fig. 34, showing Processing Unit 3402 in communication with Memory 3404 via system bus 3414) to provide, by a server, an interface for configuring a set of parameters used to render a prompt, the prompt to be submitted to a machine learning model provided by one or more remote servers (Fig. 23, described at [0342], showing a user's natural language input to a prompt), receive a selection to modify at least one parameter within the interface ([0372], "… the automated chat service may conduct a search of a knowledge base or other content store using the original user input or a modified user input resulting from an exchange …"); and render the prompt from the set of parameters with at least one modified parameter, wherein a plurality of the parameters are used to format the prompt but are not directly entered into the prompt ([0059] – [0061], referring to prompt generation on the basis of input, including prompt formatting).
As to Claim 16, Thatisetti discloses: the system of claim 15, the modules further executable to receive a selection of a state within a state machine having a plurality of states, the interface provided from a template associated with the selected state, wherein two or more of the states of the plurality of states have different interface templates ([0064] – [0065], a preconditioning service may insert the user's raw input into a template prompt selected from a set of prompts. [0098], "… a particular engineered prompt template can be selected based on a desired task for which output of the generative output engine may be useful to assist." [0342], "… content region 2302 may be used to create new tasks or issues, arrange tasks or issues in accordance with column categories or states, view issue content, or perform other tasks…").

As to Claim 17, Thatisetti discloses: the system of claim 16, wherein the state machine is associated with logic followed by an automated agent during an interaction with a customer associated with a remote device ([0053], "… a trouble ticket system (e.g., an information technology service management or "ITSM" system) may include an interface for a service agent to chat with or exchange information with a customer experiencing a problem.").
As to Claim 18, Thatisetti discloses: the system of claim 15, wherein rendering the prompt includes retrieving one or more instructions to include in the rendered prompt based on two or more parameters related to selecting the instructions ([0065], "The preconditioning service can, without limitation: append additional context to the user's raw input; may insert the user's raw input into a template prompt selected from a set of prompts; replace ambiguous references in the user's input with specific references… Thereafter, optionally, the modified/supplemented/hydrated user input can be provided as input to a secondary queue that meters and orders requests from one or more software platforms to a generative output system…").

As to Claim 19, Thatisetti discloses: the system of claim 15, wherein rendering the prompt includes retrieving one or more examples to include in the rendered prompt based on two or more parameters related to selecting the examples (Fig. 19C, described at [0325], "The prompt 1950 also includes a set of structured query examples 1958 that provide demonstrative input-output pairs. Specifically, the input-output pairs include an example natural language input or prompt paired with an example schema-formatted output.").

As to Claim 20, Thatisetti discloses: the system of claim 15, the modules further executable to evaluate the rendered prompt and an output by an LLM that processed the prompt to generate the output ([0372], "The automated chat service may obtain a set of results and evaluate each of the results to compute a score or content metric.").

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIRAV K KHAKHAR whose telephone number is (571)270-1004. The examiner can normally be reached Monday through Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert W Beausoliel, Jr., can be reached at 571-272-3645. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NIRAV K KHAKHAR/
Examiner, Art Unit 2167

/ROBERT W BEAUSOLIEL JR/
Supervisory Patent Examiner, Art Unit 2167

Prosecution Timeline

Oct 07, 2024
Application Filed
Oct 31, 2025
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602500
ENCAPSULATING ACCESS ALGORITHMS FOR DATA PROCESSING ENGINES
2y 5m to grant Granted Apr 14, 2026
Patent 12585656
MULTIDIMENSIONAL ANALYSIS OF COMMUNICATION RECORDS USING LLMS
2y 5m to grant Granted Mar 24, 2026
Patent 12572507
DATA PROCESSING SYSTEM, DATA PROCESSING METHOD, AND COMPUTER PROGRAM FOR EXECUTING DATA PROCESSING METHOD USING INFORMATION PROCESSING DEVICE
2y 5m to grant Granted Mar 10, 2026
Patent 12572614
AUTOMATED GENERATION OF PROMPTS FOR RESEARCH SUMMARIES USING GENERATIVE ARTIFICIAL INTELLIGENCE
2y 5m to grant Granted Mar 10, 2026
Patent 12566793
MULTIMEDIA CONTENT PUBLISHING METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Study what these applicants changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
72%
With Interview (-5.9%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 444 resolved cases by this examiner. Grant probability derived from career allow rate.
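The projections above follow directly from the career figures reported earlier. As a minimal sketch, the headline numbers can be reproduced like this; the simple additive combination of the interview lift with the base rate is an assumption about how the dashboard computes "With Interview":

```python
# Reproducing the report's projections from its stated career data.
# Inputs come from the examiner stats above; the additive interview
# adjustment is an assumed model, not a documented formula.

granted = 345          # "345 granted / 444 resolved"
resolved = 444
interview_lift = -5.9  # percentage-point lift with an interview

grant_probability = 100 * granted / resolved          # career allow rate, in %
with_interview = grant_probability + interview_lift   # assumed additive model

print(round(grant_probability))  # 78
print(round(with_interview))     # 72
```

Rounding 345/444 = 77.7% to the nearest whole percent gives the 78% grant probability shown, and applying the -5.9 point interview lift yields the 72% "With Interview" figure.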
