Prosecution Insights
Last updated: April 19, 2026
Application No. 18/670,369

Prompt Generation

Non-Final OA: §101, §102, §103, §112
Filed: May 21, 2024
Examiner: ROBERTS, SHAUN A
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: Sage Global Services Limited
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 76%, above average (491 granted / 647 resolved; +13.9% vs TC avg)
Interview Lift: +10.3% among resolved cases with an interview (a moderate lift)
Avg Prosecution: 2y 10m typical timeline; 31 applications currently pending
Total Applications: 678 across all art units (career history)

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§102: 29.5% (-10.5% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Comparisons are against the Tech Center average estimate • Based on career data from 647 resolved cases

Office Action

DETAILED ACTION

1. This action is responsive to Application No. 18/670,369, filed 5/21/2024. All claims have been examined and are currently pending.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

3. The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 101

4. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

5. Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim recites "a computer program", which is software per se and does not fall within at least one of the four categories of patent eligible subject matter.

Claim Rejections - 35 USC § 112

6. The following is a quotation of 35 U.S.C. 112(d): (d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph: Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

7. Claim 16 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 16 recites "A method…according to claim 12", where claim 12 is a system claim. Further, should claim 16 recite "A system…according to claim 12", it would then be a duplicate of claim 15. Applicant may cancel the claim(s), amend the claim(s) to place them in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) comply with the statutory requirements.

Claim Rejections - 35 USC § 102

8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

9. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

10. Claims 1-9, 12, and 17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Zha et al. (US 2024/0202458).

Regarding claim 1, Zha teaches a computer implemented method of generating validated prompt-templates for generating prompts for instructing large language models (LLMs) to perform specific tasks (Abstract: prompt discovery is performed for identifying prompts to natural language processing machine learning models; a request to determine a prompt for a natural language processing task performed by a pre-trained natural language processing machine learning model may be received; a task classification for the natural language processing task may be determined and candidate prompts for the natural language processing prompt task collection selected; respective prompt results for the candidate prompts are evaluated to generate a prompt recommendation for the natural language processing task. Figures 1, 4, 9, 11: computer system, processor. [0020]: prompt. [0025]: NLP ML model(s) 124 may be pre-trained or custom NLP ML models and may be based on various different ML model architectures (e.g., Generative Pre-trained Transformer (GPT)-based ML models) and frameworks (e.g., PyTorch, TensorFlow, etc.); these NLP models may be trained to perform various NLP processing tasks, such as document or dialogue summarization, paraphrasing, structure to text, relation extraction and/or coreference resolution, among others), said method comprising the steps of:

generating an initial prompt instructing an LLM to produce a plurality of candidate prompt-templates for generating test prompts (Figure 1; [0038]: prompt ingestion; a prompt submission may include the prompt and may also include various other information for the prompt, such as the NLP processing task, a description, information that can be included in an entry for the prompt, and sample output; [0043]: prompt and NLP ML candidate selection 420 may utilize an NLP task classification, as well as other information from discovery request(s) 400, to select candidate prompts), said initial prompt defining:

an input data type to be included with a prompt generated using a candidate prompt-template ([0038]: processing task, task categories; [0042]: prompt task classification; [0043]: prompt and NLP ML candidate selection 420), and

output data to be produced by an LLM that has processed a prompt generated using a candidate prompt-template ([0038]: sample output; [0041]: the sample output may be an example of expected results);

passing the initial prompt through an LLM to generate a plurality of candidate prompt-templates (Figure 1; [0025]; [0038]: interactions to submit prompts for discovery, selection, and development through a machine learning service; [0043]: prompt and NLP ML candidate selection 420);

generating a plurality of test prompts, each test prompt constructed from one of the candidate prompt-templates using input data from a set of pre-labelled input data comprising a plurality of items of input data and corresponding labels (Figure 4, test data 435; [0039]: prompt validation tests, performance evaluations; [0046]: test data 435 may be specified or identified as part of discovery request 410 (e.g., by identifying a file, storage location, or other path for accessing test data 435), or test data 435 may be sample input 413; in some embodiments, test data 435 may be maintained by machine learning service 210 for evaluating the identified NLP task; when test data 435 is obtained, the candidate prompts 432 may be used to generate inferences using the candidate NLP ML model(s) 433 on the test data; labeled or ground truth data for test data 435 is used to score the accuracy of the candidate prompt);

passing each test prompt through a further LLM to generate an output ([0039]: using test data for an NLP processing task, like task 313, inferences may be made using the prompt; [0046]);

assessing the output data produced by each test prompt with respect to the corresponding label associated with the input data with which the test prompt was passed through the further LLM ([0039]: inferences may be made using the prompt 311 and obtained for validation, as indicated at 353; these inferences may then be compared with the sample output 317 and ground truth labels for the test data to determine whether the prompt's claimed sample output 317 is achieved; [0046]: results 434 for candidate prompts may be collected; labeled or ground truth data for test data 435 is used to score the accuracy of the candidate prompt); and

selecting, based on the assessing, one or more of the candidate prompt-templates for subsequent generation of prompts ([0046]: a test data sample output may be obtained for inclusion in a prompt recommendation; [0047]: recommendation).

Regarding claim 2, Zha teaches the method according to claim 1, wherein the output data defined in the initial prompt comprises property data associated with a property of the input data defined in the initial prompt (Figure 4, 410; [0040]: description, task; [0042]: prompt task classification).

Regarding claim 3, Zha teaches the method according to claim 2, wherein the initial prompt further defines a constraint instruction to be applied by each test prompt which constrains the property data generated by each test prompt (Figure 4, 410; [0041]: the sample output may be an example of expected results (e.g., "The most frequent sentiment of commenters on the post is [sentiment]"); [0038]: sample output).

Regarding claim 4, Zha teaches the method according to claim 3, wherein the constraint instruction specifies a plurality of predetermined properties of which the output data must comprise one (Figure 4, 410; [0038]; [0041]: the sample output may be an example of expected results (e.g., "The most frequent sentiment of commenters on the post is [sentiment]")).

Regarding claim 5, Zha teaches the method according to claim 2, wherein the property is one of a qualitative property of the input data or a quantitative property of the input data (Figure 4, 410; [0040]: description, task; [0042]: prompt task classification; [0041]: the sample output may be an example of expected results (e.g., "The most frequent sentiment of commenters on the post is [sentiment]")).

Regarding claim 6, Zha teaches the method according to claim 1, wherein the input data type defined by the initial prompt is text data ([0041]: sample input may be a sample document, file, or other text; [0043]).

Regarding claim 7, Zha teaches the method according to claim 6, wherein the input data type defined by the initial prompt is unstructured text data from a received message ([0019]-[0020]; [0042]; [0043]: text summarization; [0019]: NLP ML models, for example, may be trained using training data sets of documents or other sets of natural language (e.g., human language) to perform various natural language processing tasks, including, but not limited to, information extraction (e.g., named entity recognition, relation extraction, coreference resolution, events extraction and joint entity relation extraction), text classification (e.g., classification, sentiment, relation classification, topic classification, paraphrase identification, word sense disambiguation, and natural language inference), question answering (e.g., extractive QA and close-book QA), summarization (e.g., extractive summarization and abstractive summarization), and generation (e.g., sentence completion and structure to text), among others; i.e., making natural language decisions for unstructured text).

Regarding claim 8, Zha teaches the method according to claim 2, wherein each label associated with each item of pre-labelled data specifies property data associated with a property of the item of pre-labelled data ([0043]: NLP task classification, as well as other information from discovery request(s); prompts may be organized or identified with different prompt task collections 421, and each prompt task collection 421 may correspond to a task classification; [0046]: test data 435 may be specified or identified as part of discovery request 410, or may be sample input 413, or may be maintained by machine learning service 210 for evaluating the identified NLP task; when test data 435 is obtained, the candidate prompts 432 may be used to generate inferences using the candidate NLP ML model(s) 433 on the test data; labeled or ground truth data for test data 435 is used to score the accuracy of the candidate prompt).

Regarding claim 9, Zha teaches the method according to claim 8, wherein the property data specified by each label associated with each item of pre-labelled data specifies one of a plurality of predetermined properties ([0043]; [0046], as cited for claim 8).

Regarding claim 12, Zha teaches a system for generating validated prompt-templates for generating prompts for instructing large language models (LLMs) to perform specific tasks, said system comprising a prompt-template generation instruction module configured to generate an initial prompt instructing an LLM to produce a plurality of candidate prompt-templates for generating test prompts, said initial prompt defining: an input data type to be included with a prompt generated using a candidate prompt-template, and output data to be produced by an LLM that has processed a prompt generated using a candidate prompt-template, said prompt-template generation instruction module configured to communicate the initial prompt to a first LLM system to generate a plurality of candidate prompt-templates, wherein said system further comprises a test prompt generation module configured to generate a plurality of test prompts, each test prompt constructed from one of the candidate prompt-templates generated by the prompt-template generation instruction module using input data from a set of pre-labelled input data comprising a plurality of items of input data and corresponding labels, said test prompt generation module configured to communicate each test prompt through a further LLM system to generate an output, and said system further comprising a prompt-template assessment unit configured to assess the output data produced by each test prompt with respect to the corresponding label associated with input data with which the test prompt was passed through the further LLM, and select, based on the assessing, one or more of the candidate prompt-templates for subsequent generation of prompts. The claim recites limitations similar to claim 1 and is rejected for similar rationale and reasoning.

Regarding claim 17, Zha teaches a computer program providing instructions which, when implemented on a computing device, implements a method according to claim 1. The claim recites limitations similar to claim 1 and is rejected for similar rationale and reasoning.

Claim Rejections - 35 USC § 103

11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

12. Claims 10-11 and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Zha et al. (US 2024/0202458) in view of Zeng et al. (US 2005/0234955).

Regarding claim 10, Zha does not specifically teach, but Zeng teaches, the method according to claim 1, further comprising generating the set of pre-labelled input data by: retrieving labelled data samples from a labelled data samples data store ([0018]: labeled data); retrieving unlabelled data from an unlabelled-data data store ([0018]: unlabeled data); labelling the unlabelled data using an AI process guided by the labelled data ([0018]: unlabeled data is then labeled), generating labelled data ([0018]: The following systems and methods for clustering based text classification (CBC) utilize both labeled and unlabeled data in semi-supervised learning operations. The systems and methods first cluster training data, which includes labeled and unlabeled data, with guidance of the labeled data. At least a portion of the unlabeled data is then labeled based on the obtained clusters to generate an expanded labeled dataset. In one implementation, discriminative classifiers are then trained with the expanded labeled dataset. In this manner, the systems and methods provide for semi-supervised learning treated as clustering aided by labeled data. Such labeled data may provide important information for latent class variables, assisting in the determination of parameters associated with clustering operations to affect final clustering results. By latent class variables we mean the variables used to generate the data samples.), and {storing the labelled data as pre-labelled input data for use in generating the test prompts}.

It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Zeng for an improved system to generate the pre-labelled (test) data for proper testing of the candidate prompts. Zha already teaches labelled test data for use in testing candidate prompts based on a specific natural language task classification; one could look to Zeng to further generate the labelled data, with Zha allowing for storing the labelled data as pre-labelled input data for use in generating the test prompts.

Regarding claim 11, Zha does not specifically teach, but Zeng teaches, the method according to claim 10, wherein the AI process is one of a semi-supervised learning process, an active learning process or a clustering process ([0018]: clustering; semi-supervised learning). Rejected for similar rationale and reasoning as claim 10.

Claims 13-16 recite limitations similar to claims 10-11 and are rejected for similar rationale and reasoning.

Conclusion

13. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: see PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN A ROBERTS, whose telephone number is (571) 270-7541. The examiner can normally be reached Monday-Friday, 9-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SHAUN ROBERTS/
Primary Examiner, Art Unit 2655
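The method mapped in the claim 1 rejection above describes a testable loop: one LLM drafts candidate prompt-templates, test prompts are built from pre-labelled data, a further LLM runs them, and templates are scored against the labels. Below is a minimal illustrative sketch of that loop with both LLM calls stubbed out so the control flow runs standalone; every function name, helper, and data shape here is a hypothetical assumption, not the applicant's or Zha's actual implementation.

```python
def generate_candidate_templates(llm, input_type, output_desc, n=3):
    """Build the initial prompt and ask an LLM for n candidate prompt-templates."""
    initial_prompt = (
        f"Write {n} prompt templates containing a {{text}} slot. "
        f"Input type: {input_type}. Desired output: {output_desc}."
    )
    return llm(initial_prompt)  # expected to return a list of template strings


def validate_templates(generator_llm, task_llm, labelled_data, input_type, output_desc):
    """Score each candidate template by label accuracy; return the best and all scores."""
    candidates = generate_candidate_templates(generator_llm, input_type, output_desc)
    scores = {}
    for template in candidates:
        correct = 0
        for text, label in labelled_data:
            test_prompt = template.format(text=text)  # test prompt = template + input item
            output = task_llm(test_prompt)            # pass through the further LLM
            correct += (output == label)              # assess output against the label
        scores[template] = correct / len(labelled_data)
    best = max(scores, key=scores.get)                # select the highest-scoring template
    return best, scores


# Stub stand-ins so the sketch executes without a model behind it.
def stub_generator(prompt):
    return ["Sentiment of: {text}", "Classify: {text}", "Label the text: {text}"]

def stub_classifier(prompt):
    return "positive" if "great" in prompt else "negative"

data = [("great product", "positive"), ("awful service", "negative")]
best, scores = validate_templates(stub_generator, stub_classifier, data,
                                  "text", "a sentiment label")
```

A real system would replace the two stubs with model API calls and would likely score with a task-appropriate metric rather than exact string match against the label.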

Prosecution Timeline

May 21, 2024: Application Filed
Jan 05, 2026: Non-Final Rejection under §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586599: AUDIO SIGNAL PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM WITH MACHINE LEARNING AND FOR MICROPHONE MUTE STATE FEATURES IN A MULTI PERSON VOICE CALL (2y 5m to grant; granted Mar 24, 2026)
Patent 12586568: SYNTHETICALLY GENERATING INNER SPEECH TRAINING DATA (2y 5m to grant; granted Mar 24, 2026)
Patent 12573376: Dynamic Language and Command Recognition (2y 5m to grant; granted Mar 10, 2026)
Patent 12562157: GENERATING TOPIC-SPECIFIC LANGUAGE MODELS (2y 5m to grant; granted Feb 24, 2026)
Patent 12555562: VOICE SYNTHESIS FROM DIFFUSION GENERATED SPECTROGRAMS FOR ACCESSIBILITY (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 86% (+10.3%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 647 resolved cases by this examiner. Grant probability is derived from the career allow rate.
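The with-interview figure above is consistent with treating the interview lift as additive percentage points on the base grant probability; this combination rule is an assumption about how the dashboard derives its number, not a documented formula.

```python
base_grant_probability = 76.0  # career allow rate, in percent
interview_lift = 10.3          # lift among resolved cases with an interview, in points

# Assuming an additive percentage-point adjustment, the estimate rounds to 86.
with_interview = round(base_grant_probability + interview_lift)
```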
