Prosecution Insights
Last updated: April 19, 2026
Application No. 18/339,694

CONTEXTUAL QUERY GENERATION

Final Rejection — §101, §103, §112
Filed: Jun 22, 2023
Examiner: MCCORD, PAUL C
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: Adobe Inc.
OA Round: 2 (Final)
Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 69% (above average; 393 granted / 569 resolved; +7.1% vs TC avg)
Interview Lift: +26.6% for resolved cases with interview (strong lift)
Avg Prosecution: 3y 5m typical timeline (41 currently pending)
Total Applications: 610 across all art units (career history)
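The headline numbers in this panel reduce to simple arithmetic. A minimal sketch using only the figures reported above (the +26.6% lift is taken as reported; the dashboard's exact methodology is not published here):

```python
# Figures reported in the Examiner Intelligence panel above
granted, resolved = 393, 569

# Career allow rate: granted / resolved
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # Career allow rate: 69.1%

# Interview lift as reported (+26.6 percentage points); adding it
# to the baseline reproduces the "96% with interview" figure
interview_lift = 0.266
print(f"With interview: {allow_rate + interview_lift:.0%}")  # With interview: 96%
```

The displayed 69% is the rounded career allow rate; the 96% with-interview figure is that baseline plus the reported lift, rounded.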

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)
Tech Center averages are estimates; figures based on career data from 569 resolved cases.
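Each delta in this panel is stated relative to the Tech Center average. Assuming the straightforward reading that examiner rate = TC average + delta (an interpretation of the chart, not documented here), the implied TC baseline can be recovered as a sanity check:

```python
# (examiner rejection rate %, delta vs TC average %) as listed above
stats = {
    "101": (10.5, -29.5),
    "103": (54.0, +14.0),
    "102": (6.8, -33.2),
    "112": (20.9, -19.1),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # invert: examiner rate = TC avg + delta
    print(f"Section {statute}: implied TC average = {tc_avg:.1f}%")
```

All four statutes recover the same ~40.0% baseline, consistent with the single Tech Center average estimate the panel describes.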

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 101

Applicant's amendments suffice to obviate the 35 U.S.C. 101 rejection of Claims 1, 2, 4-20.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 2, 4-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 9, and 16 recite the term "including, respectively" when referencing the one or more demonstrations as comprising a text snippet, a reference query, and a training contextual query; the claimed "respectively" leaves it unclear how the recited subject matter applies to the demonstrations. It will be considered that the claimed "including, respectively" recites that the one or more demonstrations "each include" the text snippet, reference query, and training contextual query.
In claim 1, rather than implying an order relative to a non-extant referent, the recited "respectively" is more simply construed to connote that a text snippet, reference query, and contextual query each exist in each of the one or more demonstrations; nevertheless, the absent referent may be presumed to be the one or more demonstrations. This ambiguity multiplies with respect to the expanded limitations placed upon the queries in claims 9 and 16. As such, claims 1, 9, and 16 are considered indefinite. Claims 2, 4-8, 10-15, and 17-20 do not remedy the deficiency and are similarly rejected. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shwartz, "Unsupervised Commonsense Question Answering with Self-Talk" (copy provided by Examiner; available 1/17/21; hereinafter Shw), in view of Liu (US 2024/0249113), and further in view of Chen (US 2024/0249077; hereinafter Che).

Regarding claim 1, Shw teaches: A method comprising: receiving, by a processing device, an input including a document having text and a reference query (Shw: § 2, 3, 3.3: such as by utilizing question answering task datasets and instances thereof to operate with respect to a model, such as COPA and a variety of question answering (QA) datasets; wherein instances in the datasets consist of a context, one or more answers, and a reference question, such that the question portions are used to resolve the answer portions); conditioning a trained language model based on contextual learning (Shw: Abstract; § 1, 2, 3-3.3, 6.3: pretrained models discussed as well known to be fine-tuned or improved, such as by generating clarifying questions to construct answers using self-talk), such as with respect to instances comprising a portion of text, a reference query which is fixed, and a refinement query which varies (Shw: § 2, 3-3.3; Fig. 3: beginning with an instance comprising context, question, and answer choices, the system refines by generating various clarification questions used to generate an enhanced answer); generating, by the processing device, a contextual query using the trained language model based on a semantic context of the text of the document and the reference query (Shw: § 1, 3-3.3; Fig. 3: the system operates by reifying knowledge based on generation of clarifying questions with respect to instance data, wherein the instance includes context, a reference question, and plural potential answers; in this way the system iteratively generates a plurality of clarifying questions to arrive at a correct answer for the reference question); outputting, by the processing device, the contextual query and the document having text to a question answering machine learning model (Shw: § 1, 3.3; Fig. 1, 3: the system performs iteratively by concatenating the context, clarification question, and answer and outputting same as a prompt to the model); generating, by the processing device, a response as an answer to the contextual query by the question answering machine learning model based on the contextual query and the document (Shw: § 1, 3.3, 5; Fig. 3: the system predicts an appropriate answer and evaluates the relevance, correctness, and helpfulness of the answer); and outputting, by the processing device, the model response for display in a user interface (Shw: § 5; Table 2; Fig. 3-6: the system determines results and reports same, such as by printing results to a screen or paper for presentation).

Shw does not explicitly teach training a language model using in-context learning, nor generating an answer using a model distinct from the first model, comprising training, by the processing device, a language model using in-context learning using one or more demonstrations to condition the language model, the one or more demonstrations each including a text snippet, the reference query, and a training contextual query; nor does Shw directly address outputting, by the processing device, a determined response for display in a user interface.
In a related field of endeavor, Liu teaches a system and method for parsing a question and generating an answer thereto based on distinct processing models, comprising: receiving, by a processing device, a language model configured using in-context learning to generate queries based on semantic contexts of input documents (Liu: ¶ 56, 74-81; Fig. 4, 9: a parser utilizing in-context learning on an LLM receives an input question and a text document); receiving, by the processing device, an input including a document having text and a reference query (id.); training, by the processing device, a language model using in-context learning to condition the language model (Liu: ¶ 49-52: the system iteratively updates the underlying model based on generated model outputs), the training data including a text snippet, the reference query, and a training contextual query (Liu: ¶ 49-52, 74-81, etc.; Fig. 10-12: the training data comprises textual questions comprising a reference query corresponding to a correct answer, and additional questions); generating, by the processing device, a response as an answer by a question answering machine learning model (Liu: ¶ 74-100; Fig. 4, 9, 16: an executor comprising a model distinct from that of the parser, operating in concert with a RoBERTa model used to generate an answer); and outputting, by the processing device, the response for display in a user interface (Liu: ¶ 23, 31, 32, 50, etc.: the system outputs an answer, such as in the form of a displayed message in a user interface configured to allow a user to view the answer).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to utilize a language model trained as taught or suggested by Liu to parse a received document and reference query as taught by each of Shw and Liu, and to thereby generate the contextual query of Shw for output to a separate question answering model such as that of Liu, for at least the purpose of allowing conditioned processing based on the contextual query and of generating, for displayed output to a user, an answer comprising a robust likelihood of correctness; one of ordinary skill in the art would have expected only predictable results therefrom. Furthermore, the generation of a pipeline of processing models, modules, stages, etc., such as to accomplish or improve the accomplishment of a particular task or tasks in a machine learning environment, must be considered obvious to try inasmuch as the available learning models, modules, stages, etc. comprise a finite set; finite solutions in the machine learning domain are accomplished by variously combining the available models; and one of ordinary skill in the domain engaged in the pursuit of solutions would have a reasonable expectation of success, in the form of predictable results, arrived at by routine experimentation. Shw in view of Liu thus teaches and/or strongly suggests training, by the processing device, a language model using in-context learning to condition the language model, the one or more demonstrations each including a text snippet, the reference query, and a training contextual query.
Shw in view of Liu does not explicitly teach training a language model using in-context learning using one or more demonstrations to condition the language model, the one or more demonstrations each including a text snippet, the reference query, and a training contextual query. In a related field of endeavor, Che teaches a system and method comprising an in-context learning framework operable for receiving, by a processing device, an input including a text and a reference query (Che: ¶ 42, 53, 66-71: the system operates to receive, based on an input from a user, an input dataset, such as from a training dataset, comprising text in the form of a sentence or query, and operates to provide an answer thereto for display to a user); such as by retrieving demonstration examples (Che: Abstract); and comprising training, by the processing device, a language model using in-context learning using one or more demonstrations to condition the language model, the one or more demonstrations including, respectively, a text snippet, the reference query, and a training contextual query (Che: ¶ 3, 20, 57, 66-71, etc.; Fig. 7: conditioning a model based on demonstration examples with respect to an input context, comprising utilizing a training dataset of demonstration examples comprising at least a textual query, context data, an answer, etc.); generating, by the processing device, a contextual query using the trained language model based on a semantic context of the text of the document and the reference query (Che: ¶ 20, 66-71, etc.; Fig. 7: the system generates a text sequence using a model based on training on a demonstration input and output); outputting, by the processing device, the contextual query and the text to a question answering machine learning model (Che: ¶ 27, 37, 44, 53: the query input, concatenated with contextual data, is fed as input to a model to generate a final prediction, such as by providing an answer to a question); generating, by the processing device, a response as an answer to the contextual query by the question answering machine learning model based on the contextual query and the input (Che: ¶ 27, 37, 44, 53: such as using the final prediction model to generate and output an answer); and outputting, by the processing device, the response for display in a user interface (Che: ¶ 53: such as by outputting the answer for display to a user).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to condition the Shw-in-view-of-Liu system and method based on the demonstrations taught or suggested by Che, for at least the purpose of iteratively improving or fine-tuning a question answering pipeline such as that detailed by Shw in view of Liu; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 2, Shw in view of Liu and Che teaches or suggests: The method as described in claim 1, wherein the contextual query is a paraphrased version of the reference query, such as based on one or more linguistic cues from the text, tokens therein, etc. of the document (Shw: § 1, 3.3; Fig. 3: the system self-asks questions which iterate and re-pose the contextual query based on the content and context of the input instance); (Liu: Fig. 6, 9, etc.: the system determines the query based on parsed components of the input, which are variously passed to the execution module to determine an answer); and/or configured to extract one or more key terms from the document for input to the question answering machine learning model (Shw: Fig. 1-3: the system utilizes salient terms within the input, generated questions, generated answers, etc., such as to encapsulate information of the input text, document, etc.); (Liu: Fig. 6: the system utilizes salient terms within the input, generated questions, generated answers, etc., such as for relevantly parsing the input documents and text thereof). The claim is considered obvious over Shw as modified by Liu and Che as addressed in the base claim, as it would have been obvious to apply the further teaching of Shw, Liu, and/or Che to the modified device of Shw, Liu, and Che; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 4, Shw in view of Liu and Che teaches or suggests: The method as described in claim 1, wherein the in-context learning includes using three or fewer demonstrations (Che: ¶ 66: such as by using one, two, or three of the disclosed one or more demonstration examples).
The claim is considered obvious over Shw as modified by Liu and Che as addressed in the base claim, as it would have been obvious to apply the further teaching of Shw, Liu, and/or Che to the modified device of Shw, Liu, and Che; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 5, Shw in view of Liu and Che teaches or suggests: The method as described in claim 3, wherein the text snippets of the one or more demonstrations include a particular structure and the generating the contextual query includes transforming the text of the document to match the particular structure of the text snippets of the one or more demonstrations. Examiner has taken official notice, which Applicant has failed to timely and explicitly traverse, and it is thus accepted as Admitted Prior Art (APA; see MPEP 2144.03) that generation of demonstrations based on the formatting or other structural characteristics of the input documents, training set, etc. would have comprised an obvious inclusion, for at least the purpose of utilizing particularly formatted sets of data (such as derived from particular communications media, e.g., emails, texts, etc.), particular parsers or parse structures, or data structures particular to specific frameworks, code bases, etc., to arrive at learning germane to a domain thereof; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 6, Shw in view of Liu and Che teaches or suggests: The method as described in claim 1, wherein the language model is a GPT-2 model and/or a RoBERTa model (Shw: § 6.2, 6.3; Table 2, etc.: GPT-2 and RoBERTa are tractable for generating knowledge from an LLM and for generating a fine-tuned question answering model); (Che: ¶ 3, 18: GPT shows remarkable in-context learning ability); and the question answering machine learning model is a RoBERTa model (Liu: ¶ 85, 100).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to utilize the recited models in the recited manner as taught or suggested by Shw in view of Liu and Che, as the permutation of available algorithms in such a way must be considered obvious to try inasmuch as the available learning models comprise a finite set; finite solutions in the domain are arrived at by variously combining the available models; and one of ordinary skill in the domain engaged in the pursuit of solutions would have a reasonable expectation of success, in the form of predictable results, arrived at by routine experimentation with permutations of models, data flow among models, etc. The claim is thus considered obvious over Shw as modified by Liu and Che as addressed in the base claim, as it would have been obvious to apply the further teaching of Shw, Liu, and/or Che to the modified device of Shw, Liu, and Che; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 7, Shw in view of Liu and Che teaches or suggests: The method as described in claim 1, wherein the response includes one or more key terms extracted from the document based on the contextual query, and an additional response generated by the question answering machine learning model based on the reference query does not include the one or more key terms (Shw: Fig. 1-3: the system utilizes salient terms within the input, generated questions, generated answers, etc., wherein the determined answer "help people find jobs" does not comprise a previously generated answer keyword such as "internship"); (Liu: Fig. 6, 8: the system utilizes salient terms within the input, generated questions, generated answers, etc., wherein keywords utilized in determining an answer are not necessarily included in the output answer).
The claim is considered obvious over Shw as modified by Liu and Che as addressed in the base claim, as it would have been obvious to apply the further teaching of Shw, Liu, and/or Che to the modified device of Shw, Liu, and Che; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 8, Shw in view of Liu and Che teaches or suggests: The method as described in claim 1, wherein the semantic context includes one or more domain-specific text strings that represent key terms of the document (Shw: Fig. 1-3; Table 6: the system utilizes salient terms within the input, generated questions, generated answers, etc., such as to encapsulate information of the input text, document, etc., including addressing the situation, domain, etc. of the subject, and concerns and specialties thereof); (Liu: ¶ 3, 53, 56, 73, 95; Fig. 6, 8: the system utilizes salient terms within the input, generated questions, generated answers, etc., such as for relevantly parsing the input documents and text thereof, e.g., by utilizing a textual domain, symbolic domain, etc., and particular language, symbols, etc. thereof, such as to resolve additional knowledge by generalizing upon an emergent domain). The claim is considered obvious over Shw as modified by Liu and Che as addressed in the base claim, as it would have been obvious to apply the further teaching of Shw, Liu, and/or Che to the modified device of Shw, Liu, and Che; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claims 9, 16, and 17: the claims are considered to recite substantially similar subject matter to that of claim 1 supra and are similarly rejected.

Regarding claims 10 and 18: the claims are considered to recite substantially similar subject matter to that of claim 2 supra and are similarly rejected.

Regarding claim 11: the claim is considered to recite substantially similar subject matter to that of claim 7 supra and is similarly rejected.
Regarding claims 12 and 20: the claims are considered to recite substantially similar subject matter to that of claim 8 supra and are similarly rejected.

Regarding claims 13 and 15: the claims are considered to recite substantially similar subject matter to that of claim 5 supra and are similarly rejected.

Regarding claims 14 and 19: the claims are considered to recite substantially similar subject matter to that of claims 1 and 3 supra and are similarly rejected.

Response to Arguments

Applicant's arguments, in concert with claim amendments (see Remarks and Claims, filed 12/11/25), with respect to the rejections of claims 1, 2, 7-12, 16-18, and 20 under 35 USC 103 over Shwartz and Liu, and of claims 3-6, 13-15, and 19 under 35 USC 103 over Shwartz, Liu, and Jiachang Liu, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Shwartz, Liu, and Chen.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL C MCCORD, whose telephone number is (571) 270-3701. The examiner can normally be reached 7:30-6:30 M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, CAROLYN EDWARDS, can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAUL C MCCORD/
Primary Examiner, Art Unit 2692

Prosecution Timeline

Jun 22, 2023
Application Filed
Sep 29, 2025
Non-Final Rejection — §101, §103, §112
Dec 09, 2025
Examiner Interview Summary
Dec 09, 2025
Applicant Interview (Telephonic)
Dec 11, 2025
Response Filed
Mar 16, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603094
ADAPTIVE PROCESSING WITH MULTIPLE MEDIA PROCESSING NODES
2y 5m to grant Granted Apr 14, 2026
Patent 12592238
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM
2y 5m to grant Granted Mar 31, 2026
Patent 12593192
MEDIA PLAYBACK BASED ON SENSOR DATA
2y 5m to grant Granted Mar 31, 2026
Patent 12572323
DYNAMIC AUDIO CONTENT GENERATION
2y 5m to grant Granted Mar 10, 2026
Patent 12567003
TECHNOLOGIES FOR DECENTRALIZED FLEET ANALYTICS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
96%
With Interview (+26.6%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
