DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/21/2025 was filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3, 5-13, and 15-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claims 1 and 11 relate to the statutory categories of method/process and machine/apparatus. Independent claims 1 and 11 recite “…storing a prompt library including a plurality of prompt fragments and a plurality of prompt templates; … at a prompt compiler: receiv(ing) a prompt generation input including prompt input data; based at least in part on the prompt input data, select(ing) a prompt template and one or more of the prompt fragments from the prompt library; and fill(ing) the selected prompt template with the prompt input data and the one or more selected prompt fragments to compute a compiled prompt; at a first machine learning model, process(ing) the compiled prompt to compute a machine learning model output; and output(ting) the machine learning model output”.
The limitations of claims 1 and 11 of “…storing…”, “receiv(ing)…”, “…select(ing)…”, “…fill(ing)…”, “…process(ing)…”, and “…output(ting)…”, as drafted, cover mental activity. More specifically, for claim 1, a human having a group of forms to be filled out can determine the necessary information from a table that lists prompt words/phrases depending on what information is needed to fill out the form. By looking at both the form and the list of prompt words/phrases, the human can determine the information needed to fill out the form.
This judicial exception is not integrated into a practical application. In particular, claim 1 recites the additional elements of “memory” and “processing devices,” which are recited generally in the specification. For example, paragraph [0016] of the as-filed specification describes using a general purpose computing system. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Claims 1 and 11 recite the additional element of a “machine learning model,” which is also recited generally in the specification. For example, paragraph [0016] of the as-filed specification describes using a general purpose computing system to compile a prompt using a machine learning model. As discussed above, a human can, by comparing the list of prompt words/phrases and the form to be filled out, determine the prompt to be compiled. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer is noted as a general purpose computer. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
With respect to claims 2 and 12, the claims relate to the prompts being related to a particular subject/topic. The claims relate to a mental activity of generating a prompt which is related to the subject/topic in the form to be filled. No additional elements are present.
With respect to claims 3 and 13, the claims relate to the prompts being related to a particular activity. The claims relate to a mental activity of generating a prompt which is related to the activity in the form to be filled. No additional elements are present.
With respect to claims 6 and 16, the claims relate to choosing a prompt fragment that describes the actual prompt. The claims relate to choosing the information from the list of words/phrases that describes the actual prompt to be generated. No additional elements are present.
With respect to claims 7, 8, and 17, the claims relate to deciding if the list of most important/highly used words/phrases related to the prompt generation are listed in the table. The claims relate to a mental activity of determining if the table lists the highly used prompts. No additional elements are present.
With respect to claims 9 and 18, the claims relate to tagging the information about the prompt in the table and including the information in the generated prompt. The claims relate to a mental activity of generating a prompt that includes the tagged information. No additional elements are present.
With respect to claims 10 and 19, the claims relate to determining whether the table of prompt words/phrases includes instructions to follow a group of steps which need to be performed. The claims relate to a mental activity of following the instructions listed in the list of prompt words/phrases. No additional elements are present.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6, 9-16, and 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Scheuermann et al. (US 2025/0348331).
Regarding Claim 1, Scheuermann et al. discloses a computing system comprising: memory storing a prompt library including a plurality of prompt fragments and a plurality of prompt templates (The user interface element template storage 214 stores a plurality of user interface element templates or types (interpreted by the examiner as data to be included in the prompt). Each template in the user interface element template storage 214 may define a structure, content, format, attributes, or combination thereof, for a particular user interface element that is supported by the digital assistant) (page 6, paragraph [0066]); and one or more processing devices (The query and response processing component 406 may process both user inputs and backend responses. For example, the query and response processing component 406 can work with the LLM 118 to process a user query to determine a function that is of interest to the user 108) (page 9, paragraph [0097]) configured to: at a prompt compiler (wherein the prompt data comprises at least a subset of a predetermined vocabulary for generating the condensed metalanguage representation, and the predetermined vocabulary is used to compile the metalanguage of the condensed metalanguage representation) (page 14, paragraph [0148]): receive a prompt generation input including prompt input data (The method 600 commences at opening loop operation 602 and proceeds to operation 604, where the digital assistant service system 126 receives user input via user interface associated with a digital assistant) (page 13, paragraph [0128]); based at least in part on the prompt input data, select a prompt template and one or more of the prompt fragments from the prompt library (The method may include providing, by the digital assistant service system, the prompt data to the generative machine learning model to obtain the intermediate representation, and then processing the intermediate representation to obtain the output data structure) (page 3, 
paragraph [0030]); and fill the selected prompt template (The method 600 proceeds to operation 612, where the digital assistant service system 126 generates prompt data) (page 13, paragraph [0134]) with the prompt input data and the one or more selected prompt fragments to compute a compiled prompt (The method may include providing, by the digital assistant service system, the prompt data to the generative machine learning model to obtain the intermediate representation, and then processing the intermediate representation to obtain the output data structure) (page 3, paragraph [0030]); at a first machine learning model, process the compiled prompt to compute a machine learning model output (in some cases, the LLM 118 might be utilized to perform direct template filling. In such cases, the LLM 118 is prompted to utilize the selected user interface element template and directly generate the output data structure (e.g., JSON format message for downstream rendering).) (page 13, paragraph [0134]); and output the machine learning model output (The method 600 proceeds to operation 618, where the digital assistant service system 126 causes the rendering of one or more user interface elements based on the generated output data structure) (page 14, paragraph [0139]).
Regarding Claim 2, Scheuermann et al. discloses the computing system, wherein: the prompt library includes a plurality of domain-based prompt fragments among the plurality of prompt fragments (The user interface element template storage 214 stores a plurality of user interface element templates or types. Each template in the user interface element template storage 214 may define a structure, content, format, attributes, or combination thereof, for a particular user interface element that is supported by the digital assistant) (page 6, paragraph [0066]); and at the prompt compiler, the one or more processing devices are further configured to: identify a prompt domain associated with the prompt input data (In the context of digital assistants, RAG can be employed to preselect relevant functions by retrieving a subset of information from a large database or knowledge base that is pertinent to the user's query or context (e.g., functions that relate to a conversation topic or are semantically similar to a query based on vector similarity). The retrieval acts as a filtering mechanism, narrowing down the potential functions that the generative machine learning model should consider. For instance, if a user asks about retrieving a list of employees, the digital assistant service system 126 might retrieve, from the database 130 of FIG. 1, functions related to data aggregation, human resources, and report generation, while omitting unrelated functions such as financial, scheduling, or email management functions) (page 13, paragraph [0130]); and select one or more of the domain-based prompt fragments that match the prompt domain for inclusion in the compiled prompt (The method may include providing, by the digital assistant service system, the prompt data to the generative machine learning model to obtain the intermediate representation, and then processing the intermediate representation to obtain the output data structure) (page 3, paragraph [0030]).
Regarding Claim 3, Scheuermann et al. discloses the computing system, wherein: the prompt library includes a plurality of few-shot task examples among the plurality of prompt fragments (The user interface element template storage 214 stores a plurality of user interface element templates or types. Each template in the user interface element template storage 214 may define a structure, content, format, attributes, or combination thereof, for a particular user interface element that is supported by the digital assistant) (page 6, paragraph [0066]); and at the prompt compiler, the one or more processing devices are further configured to: determine a task specified by the prompt input data (In the context of digital assistants, RAG can be employed to preselect relevant functions by retrieving a subset of information from a large database or knowledge base that is pertinent to the user's query or context (e.g., functions that relate to a conversation topic or are semantically similar to a query based on vector similarity). The retrieval acts as a filtering mechanism, narrowing down the potential functions that the generative machine learning model should consider. For instance, if a user asks about retrieving a list of employees, the digital assistant service system 126 might retrieve, from the database 130 of FIG. 1, functions related to data aggregation, human resources, and report generation, while omitting unrelated functions such as financial, scheduling, or email management functions) (page 13, paragraph [0130]); and select one or more of the few-shot task examples associated with the task for inclusion in the compiled prompt (The digital assistant service system 126 processes the backend response (e.g., using the bot component 204) to select one or more user interface element templates at operation 610) (page 13, paragraph [0133]).
Regarding Claim 4, Scheuermann et al. discloses the computing system, wherein, at the prompt compiler, the one or more processing devices are further configured to: retrieve a database record from a database via retrieval-augmented generation (RAG) (In some examples, retrieval-augmented generation (RAG) techniques are implemented by the digital assistant service system 126 (e.g., by the bot component 204 of FIG. 2). RAG is a technique that combines the capabilities of a retrieval system with a generative machine learning model. In the context of digital assistants, RAG can be employed to preselect relevant functions by retrieving a subset of information from a large database or knowledge base that is pertinent to the user's query or context (e.g., functions that relate to a conversation topic or are semantically similar to a query based on vector similarity)) (page 13, paragraph [0130]); and insert the database record into the prompt template (Where one or more user interface element templates are identified in the prompt data provided to the generative machine learning model, the one or more user interface elements rendered via the user interface may each correspond to one of the one or more user interface element templates) (page 3, paragraph [0031]).
Regarding Claim 5, Scheuermann et al. discloses the computing system, wherein: at least one prompt fragment of the one or more selected prompt fragments includes a tokenized indicator (For preprocessing, the processing engine 116 may tokenize, compress, or format the data to optimize it for the LLM 118) (page 5, paragraph [0055]) that encodes (Variational autoencoders (VAEs): VAEs may encode input data into a latent space (e.g., a compressed representation) and then decode it back into output data) (page 16, paragraph [0182]) image data, video data, or audio data (generative AI can produce text, images, video, audio, code, or synthetic data) (page 16, paragraph [0177]); and at the prompt compiler, the one or more processing devices are further configured to: decode (Variational autoencoders (VAEs): VAEs may encode input data into a latent space (e.g., a compressed representation) and then decode it back into output data) (page 16, paragraph [0182]) the tokenized indicator (For postprocessing, it may format the LLM 118 response, perform detokenization or decompression, and prepare the response for sending back to the requesting system (e.g., the digital assistant service system 126)) (page 5, paragraph [0055]) to obtain the image data, video data, or audio data (generative AI can produce text, images, video, audio, code, or synthetic data) (page 16, paragraph [0177]); and insert the image data, video data, or audio data into the prompt template (The intermediate representation processing component 412 operates to process the intermediate representation generated by the generative machine learning model in order to obtain the final output data structure to transmit from the bot component 204) (page 9, paragraph [0103]).
Regarding Claim 6, Scheuermann et al. discloses the computing system, wherein, at the prompt compiler, the one or more processing devices are further configured to: receive temporal metadata associated with the prompt input data (Example prompt data for metalanguage generation is included below merely to illustrate certain aspects of the disclosure) (page 10, paragraph [0111]); and select the one or more prompt fragments based at least in part on the temporal metadata (“metadata” in programming language) (pages 10-11, TABLE-US-00004).
Regarding Claim 9, Scheuermann et al. discloses the computing system, wherein the one or more processing devices are further configured to: at the prompt compiler, assign prompt fragment metadata to the plurality of prompt fragments, wherein the prompt fragment metadata distinguishes the prompt fragments from the prompt input data (Example prompt data for metalanguage generation is included below merely to illustrate certain aspects of the disclosure) (page 10, paragraph [0111]); and at the first machine learning model, process the prompt fragments in a manner that differs from the processing of the prompt input data, as indicated by the prompt fragment metadata (Prompt formats or styles may be varied and do not necessarily include such explicit role identifiers. In the above examples, the content associated with the “system” role refers to general personality and instructions provided to an LLM, the content associated with the “user” role may include actual user utterances or input messages, while the content associated with the “assistant” role can include messages generated by a digital assistant) (page 11, paragraph [0112]).
Regarding Claim 10, Scheuermann et al. discloses the computing system, wherein the compiled prompt includes an instruction to perform chain-of-thought generation when computing the machine learning model output (For example, and as described in greater detail elsewhere, after a function has been triggered and a response has been received by the digital assistant service system 126 from a backend service, the machine learning model can be used to generate an intermediate representation that is then resolved, compiled, or interpreted by the digital assistant service system 126 to generate a suitable output data structure for presenting response information to the user 108 via the web interface 132 or the app interface 134. A generative machine learning model may be used to generate such intermediate representations) (page 5, paragraph [0049]).
Claim 11 is rejected for the same reason as claim 1.
Claim 12 is rejected for the same reason as claim 2.
Claim 13 is rejected for the same reason as claim 3.
Claim 14 is rejected for the same reason as claim 4.
Claim 15 is rejected for the same reason as claim 5.
Claim 16 is rejected for the same reason as claim 6.
Claim 18 is rejected for the same reason as claim 9.
Claim 19 is rejected for the same reason as claim 10.
Regarding Claim 20, Scheuermann et al. discloses a computing system comprising: memory storing a prompt library including a plurality of prompt fragments and a plurality of prompt templates (The user interface element template storage 214 stores a plurality of user interface element templates or types. Each template in the user interface element template storage 214 may define a structure, content, format, attributes, or combination thereof, for a particular user interface element that is supported by the digital assistant) (page 6, paragraph [0066]); and one or more processing devices (The query and response processing component 406 may process both user inputs and backend responses. For example, the query and response processing component 406 can work with the LLM 118 to process a user query to determine a function that is of interest to the user 108) (page 9, paragraph [0097]) configured to: generate a compiled prompt (wherein the prompt data comprises at least a subset of a predetermined vocabulary for generating the condensed metalanguage representation, and the predetermined vocabulary is used to compile the metalanguage of the condensed metalanguage representation) (page 14, paragraph [0148]) as an input to a first machine learning model (For example, and as described in greater detail elsewhere, after a function has been triggered and a response has been received by the digital assistant service system 126 from a backend service, the machine learning model can be used to generate an intermediate representation that is then resolved, compiled, or interpreted by the digital assistant service system 126 to generate a suitable output data structure for presenting response information to the user 108 via the web interface 132 or the app interface 134) (page 5, paragraph [0049]), wherein generating the compiled prompt includes, at a prompt compiler: receiving a prompt generation input including prompt input data, wherein the prompt input data is received as user 
input to a graphical user interface (GUI) (The method 600 commences at opening loop operation 602 and proceeds to operation 604, where the digital assistant service system 126 receives user input via a user interface associated with a digital assistant) (page 13, paragraph [0128]); selecting a prompt template and one or more of the prompt fragments from the prompt library (The method may include providing, by the digital assistant service system, the prompt data to the generative machine learning model to obtain the intermediate representation, and then processing the intermediate representation to obtain the output data structure) (page 3, paragraph [0030]), wherein the prompt template and the one or more prompt fragments are selected at least in part by processing the prompt generation input at a second machine learning model (As mentioned, in some cases, the LLM 118 might be utilized to perform direct template filling. In such cases, the LLM 118 is prompted to utilize the selected user interface element template and directly generate the output data structure (e.g., JSON format message for downstream rendering)) (page 13, paragraph [0134]) (While a single LLM 118 is shown in FIG. 
1, it will be appreciated that multiple generative machine learning models may be used (e.g., a first model may be used to trigger function calls and a section model may be used to generate intermediate representations)) (page 5, paragraph [0050]); and filling the selected prompt template (The method 600 proceeds to operation 612, where the digital assistant service system 126 generates prompt data) (page 13, paragraph [0134]) with the prompt input data and the one or more selected prompt fragments to compute a compiled prompt (The method may include providing, by the digital assistant service system, the prompt data to the generative machine learning model to obtain the intermediate representation, and then processing the intermediate representation to obtain the output data structure) (page 3, paragraph [0030]); at the first machine learning model, process the compiled prompt to compute a machine learning model output (in some cases, the LLM 118 might be utilized to perform direct template filling. In such cases, the LLM 118 is prompted to utilize the selected user interface element template and directly generate the output data structure (e.g., JSON format message for downstream rendering).) (page 13, paragraph [0134]); and output the machine learning model output for display at the GUI (The method 600 proceeds to operation 618, where the digital assistant service system 126 causes the rendering of one or more user interface elements based on the generated output data structure) (page 14, paragraph [0139]).
Allowable Subject Matter
Claims 7, 8, and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the 35 U.S.C. 101 rejections above are overcome.
Cited Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Malhotra et al. (US 11,294,939) discloses automatically detecting and documenting privacy-related aspects of computer software.
Pu et al. (US 2021/0042662) discloses interactive information capture and retrieval with user-defined and/or machine intelligence augmented prompts and prompt processing.
Lee et al. (US 2024/0386040) discloses building management system with building equipment service and parts recommendations.
Cai et al. (US 2025/0028967) discloses training few-shot event detection based on multilingual prompt learning.
Saleh (US 2025/0373458) discloses artificial intelligence driven leading and templatizing of ideation sessions.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SATWANT K SINGH whose telephone number is (571)272-7468. The examiner can normally be reached Monday thru Friday 9:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SATWANT K SINGH/Primary Examiner, Art Unit 2653