Prosecution Insights
Last updated: April 19, 2026
Application No. 18/490,426

CONDITIONING PROMPTS FOR GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEMS FOR PRODUCTION OF STRUCTURED OUTPUT

Status: Non-Final OA (§103)
Filed: Oct 19, 2023
Examiner: SONIFRANK, RICHA MISHRA
Art Unit: 2654
Tech Center: 2600 — Communications
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)

Grant Probability: 66% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 66% (above average; +4.0% vs TC avg; 250 granted / 379 resolved)
Interview Lift: +24.9% on resolved cases with interview
Avg Prosecution: 3y 3m; 29 applications currently pending
Career History: 408 total applications across all art units
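The "interview lift" figure above is presumably the difference in allowance rate between resolved cases with and without an examiner interview. The underlying with/without counts are not shown on this page, so the counts below are hypothetical, chosen only to illustrate the calculation:

```python
def interview_lift(allowed_with: int, resolved_with: int,
                   allowed_without: int, resolved_without: int) -> float:
    """Allowance-rate difference, in percentage points, between resolved
    cases that had an examiner interview and those that did not."""
    rate_with = allowed_with / resolved_with
    rate_without = allowed_without / resolved_without
    return round(100 * (rate_with - rate_without), 1)

# Hypothetical counts (not this examiner's actual with/without split):
lift = interview_lift(allowed_with=80, resolved_with=100,
                      allowed_without=170, resolved_without=279)  # 19.1
```

A lift computed this way is descriptive, not causal: applicants who request interviews may differ systematically from those who do not.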

Statute-Specific Performance

§101: 16.6% (-23.4% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 379 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/9/2025 has been entered.

Response to Amendment

Claims 1, 11 and 20 are amended. Claims 2, 6, 12 and 16 are cancelled. Claims 1, 3-5, 7-11, 13-15 and 17-24 are presented for examination.

Response to Arguments

Applicant's arguments with respect to claims 1, 11 and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4-5, 7-11, 14-15, 17-20 and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Ghosh (US 20250124022) in view of Zha (US 20240202458).

Regarding claim 1, Ghosh teaches a method, comprising: receiving a user prompt via a user interface, wherein the user prompt is for a generative artificial intelligence system and is specified as natural language (user prompt 402; prompt includes natural language; Para 0030, 0051-0052, RAG method for received prompt); choosing a selected prompt class (intent) from a plurality of prompt classes by matching a natural language processing analysis of the user prompt to a prompt template of the selected prompt class (select a prompt template based on the intent, Para 0042; conditioned prompt, Fig 6; the system prompt constructor 416 uses the classified intent to look up system-provided prompts in a prompt template library; for example, the system prompt constructor 416 can search a prompt template library based on the intent and identify the system-provided prompts from the library that correspond to the intent; Para 0042, 0060), wherein the prompt template specifies an expected prompt structure (structured prompt, Para 0066), and wherein the selected prompt class includes one or more predefined conditioning instructions specific to the selected prompt class (given the user prompt 402, the task of predicting an intent (represented by a text-class label) for the user prompt 402 is transformed to generating a predefined textual response (e.g., positive, negative, etc.), Para 0041); creating a well-structured prompt by transforming the user prompt based on the expected prompt structure of the selected prompt class (step 604, conditioned prompts with the received prompts, Fig 6); generating a conditioned prompt by adding the one or more predefined conditioning instructions to the well-structured prompt (step 604, Fig 6); and submitting the conditioned prompt to the generative artificial intelligence system (step 604, Fig 6), wherein the conditioned prompt is configured to evoke a response including a structured output from the generative artificial intelligence system (output the response, Fig 4-6).

Ghosh does not explicitly teach choosing a selected prompt class from a plurality of prompt classes by matching a natural language processing analysis of the user prompt to a prompt of the selected prompt class; a prompt model parameter specific to the selected prompt class, wherein the prompt model parameter specifies a generative artificial intelligence system from a plurality of generative artificial intelligence models; or selecting the generative artificial intelligence system of the plurality of generative artificial intelligence models using the prompt model parameter.

However, Zha teaches choosing a selected prompt class from a plurality of prompt classes by matching a natural language processing analysis of the user prompt to a prompt of the selected prompt class (prompt and NLP ML candidate selection 420 may utilize an NLP task classification, as well as other information from discovery request(s) 400, to select candidate prompts, Para 0043); a prompt model parameter specific to the selected prompt class (prompt submission 310 may include an identifier of the NLP ML model 319 that the prompt 311 is for, Para 0038-0040), wherein the prompt model parameter specifies a generative artificial intelligence system from a plurality of generative artificial intelligence models (prompt recommendation generation 440 may generate prompt recommendation 450 including various information, such as prompt(s) 451, sample output 453 for the prompt(s), performance metric(s) 455 for the prompts (e.g., inference time, resources/cost, etc.), the NLP ML model(s) 457 used, and the task identified 459 (e.g., at 410); prompt recommendations 450 may be presented in various ways, using various visualization techniques, including, but not limited to, ranking, charts, graphs, output displays, etc.; Para 0047, Fig 4 and Fig 5; hence the prompt includes the ML model to be used, which is embodied within the prompt discovery engine); and selecting the generative artificial intelligence system of the plurality of generative artificial intelligence models using the prompt model parameter (candidate prompt result using a model, Para 0072, 0076).

It would have been obvious, having the teachings of Ghosh, to further include the concept of Zha before the effective filing date because different NLP ML models may achieve different results for NLP tasks. For example, some NLP ML models may produce better results for text summarization, as they may have been trained or developed specifically for that task (Para 0021, Zha).

Regarding claim 4, Ghosh, as applied to claim 1, teaches wherein the generative artificial intelligence system is a large language model (LLM, Para 0041).

Regarding claim 5, Ghosh, as applied to claim 1, teaches wherein the structured output of the generative artificial intelligence system is compatible with a particular computer system and is specific to the selected prompt class (renderable form based on intent, Para 0091, 0042, 0050).

Regarding claim 7, Ghosh, as applied to claim 1, teaches wherein the conditioned prompt is configured to reduce non-determinism in the response generated by the generative artificial intelligence system (the prompt identifying operation 604 searches a prompt template library based on the intent and identifies the system-provided prompts that correspond to the intent, Para 0060; since it is based on intent, it will reduce the non-determinism).

Regarding claim 8, Ghosh modified by Zha, as applied to claim 1, teaches submitting the response from the generative artificial intelligence system to an evaluation system with an evaluator question extracted from the selected prompt class (prompt discovery based on the sample output, etc.; performance criteria 415 may include information such as a range or limit on time for returning a response with an inference, and a range or limit on resources used to host an NLP ML model to generate inferences, expected size of input, among others, Para 0041; evaluation results and adaptation, Para 0065).

Regarding claim 9, Zha, as applied to claim 8, teaches wherein the evaluation system is a generative artificial intelligence system (evaluation based on system response, where the system can be a generative system, Para 0025).

Regarding claim 10, Zha, as applied to claim 8, teaches modifying the one or more predefined conditioning instructions based on an evaluation result generated by the evaluation system (adaptation, Para 0063-0065).

Regarding claim 11, arguments analogous to claim 1 are applicable. In addition, Ghosh teaches a system comprising one or more processors configured to execute operations as described in claim 1 (Para 0003).

Regarding claims 14, 15, 17, 18 and 19, arguments analogous to claims 4, 5, 7, 8 and 9, respectively, are applicable.

Regarding claim 20, arguments analogous to claim 1 are applicable. In addition, Ghosh teaches a computer program product comprising one or more computer readable storage media having program instructions embodied therewith, wherein the program instructions are executable by one or more processors to cause the one or more processors to execute operations as described in claim 1 (Para 0004).

Regarding claims 22, 23 and 24, arguments analogous to claims 4, 5 and 7, respectively, are applicable.

Claims 3, 13 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Ghosh (US 20250124022) in view of Heller (US 20240273309).

Regarding claim 3, Zha, as applied to claim 1, teaches that a description includes a keyword (Para 0040). Ghosh modified by Zha does not explicitly teach wherein the prompt template includes a field that is populated by a keyword obtained from the natural language processing analysis of the user prompt. However, Heller teaches wherein the prompt template includes a field that is populated by a keyword obtained from the natural language processing analysis of the user prompt (a prompt template may also include one or more fillable portions that may be filled based on information determined by the orchestrator 230, Para 0051). It would have been obvious, having the teachings of Ghosh and Zha, to further include the concept of Heller before the effective filing date so as to improve the quality of the text being inputted (Para 0003, Heller).

Regarding claims 13 and 21, arguments analogous to claim 3 are applicable.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Richa Sonifrank, whose telephone number is (571) 272-5357. The examiner can normally be reached M-T 7AM - 5:30PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Phan Hai, can be reached at (571) 272-6338. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Richa Sonifrank/
Primary Examiner, Art Unit 2654
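The claim 1 method mapped above describes a concrete pipeline: classify the user prompt into a prompt class (intent), look up that class's template and conditioning instructions, restructure the prompt to match the expected structure, append the conditioning instructions, and submit the result to the model the class specifies. A minimal illustrative sketch follows; all class names, templates, model identifiers, and the keyword-based classifier are hypothetical stand-ins, not taken from the application or the cited references:

```python
from dataclasses import dataclass, field

@dataclass
class PromptClass:
    # Hypothetical stand-in for a "prompt class" as recited in claim 1.
    name: str
    template: str                          # expected prompt structure
    conditioning: list[str] = field(default_factory=list)
    model: str = "default-model"           # prompt model parameter

PROMPT_CLASSES = {
    "summarize": PromptClass(
        name="summarize",
        template="Summarize the following text:\n{user_prompt}",
        conditioning=['Respond only with valid JSON of the form {"summary": "<text>"}.'],
        model="summarizer-llm",
    ),
}

def classify(user_prompt: str) -> PromptClass:
    # Stand-in for the NLP intent analysis; a real system would use an
    # intent classifier rather than substring matching.
    if "summarize" in user_prompt.lower():
        return PROMPT_CLASSES["summarize"]
    raise ValueError("no matching prompt class")

def condition(user_prompt: str) -> tuple[str, str]:
    cls = classify(user_prompt)                            # choose prompt class
    well_structured = cls.template.format(user_prompt=user_prompt)
    conditioned = "\n".join([well_structured, *cls.conditioning])
    return conditioned, cls.model                          # prompt + selected model

prompt, model = condition("Please summarize this article about RCE practice.")
```

The conditioned prompt would then be submitted to whichever generative model `cls.model` names; that submission step is omitted here since it depends entirely on the deployment.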

Prosecution Timeline

Oct 19, 2023: Application Filed
Jul 11, 2025: Non-Final Rejection (§103)
Aug 14, 2025: Interview Requested
Sep 16, 2025: Applicant Interview (Telephonic)
Sep 16, 2025: Examiner Interview Summary
Sep 30, 2025: Response Filed
Oct 15, 2025: Final Rejection (§103)
Nov 17, 2025: Interview Requested
Dec 02, 2025: Applicant Interview (Telephonic)
Dec 04, 2025: Examiner Interview Summary
Dec 09, 2025: Response after Non-Final Action
Dec 31, 2025: Request for Continued Examination
Jan 20, 2026: Response after Non-Final Action
Feb 14, 2026: Non-Final Rejection (§103)
Mar 31, 2026: Interview Requested
Apr 07, 2026: Applicant Interview (Telephonic)
Apr 16, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602552: Machine-Learning-Based OKR Generation
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12603085: ENTITY LEVEL DATA AUGMENTATION IN CHATBOTS FOR ROBUST NAMED ENTITY RECOGNITION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12585883: COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585877: GROUPING AND LINKING FACTS FROM TEXT TO REMOVE AMBIGUITY USING KNOWLEDGE GRAPHS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579988: METHOD AND APPARATUS FOR CONTROLLING AUDIO FRAME LOSS CONCEALMENT
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 66% (91% with interview, a +24.9% lift)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 379 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month