Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/22/2026 has been entered.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7 and 11-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Khemka et al (20230409615).
As per claim 1, Khemka et al (20230409615) teaches a computer-implemented method of performing contrastive in-context learning on a large language model, said method comprising:
inputting into the large language model a question associated with a contrastive in-context learning protocol for the large language model (as, using in-context learning for large language models – para 0204, which can be used to enhance the model’s generalizability – para 0204, first sentence); the contrastive in-context learning protocol being based on a user preference (as, the user enters information into the context engine – fig 2, subblock 220b; para 0052; and performing tasks relevant to user interests/preferences – para 0046);
inputting into the large language model a first answer for the question, the first answer forming a positive example of the contrastive in-context learning protocol (as, using, from the samples in the training set, the top 10 samples as hard positives, with contrastive loss – para 0178);
instructing the large language model to analyze why the first answer forms the positive example (as, the large language model uses the “x%” of examples as to “why” the first answer forms the positive example – in other words, the criterion of ‘why’ is the measured “x%” of examples – see para 0178). Examiner notes that, in applicant’s spec, the only recitations of “analyze why” are ‘why’ and ‘reason’ – see para 0037 of the pgpub, “(e.g., preferred length and style) text. It should be understood that these are just example deployments and should not be considered limiting…”; see further para 0043 of the pgpub, “and then use the learned reasoning to find good answers. Oracles are upper-bounds of the embedding similarities where the correct/desired answers are fed in response to a current question to the large language model.”; and see further Table III (contrastive, reasoning Oracle): one shot 0.819 (upper bound), two shot 0.911, three shot 0.939. Clearly, the claim scope toward ‘why’ in the reasoning of the good/bad answers is the use of some measure/probability to label the result as good/bad. Clearly, Khemka et al (20230409615) uses such a measure, namely the top/bottom % of answers, as the ‘reasoning’ for the selection.
inputting into the large language model a second answer for the question, the second answer forming a negative example of the contrastive in-context learning protocol (as, using, from the samples in the training set, the bottom 10 samples as hard negatives, with contrastive loss – para 0178);
instructing the large language model to analyze why the second answer forms the negative example (as, the large language model uses the “x%” of examples as to “why” the second answer forms the negative example – in other words, the criterion of ‘why’ is the measured “x%” of examples – see para 0178);
generating by the large language model an output describing why the first answer forms the positive example and why the second answer forms the negative example as a part of the contrastive in-context learning protocol (see above as to the claim scope of “why/reasoning” – applicant’s spec points to using some measure/probability as the reason “why” – as noted above, Khemka et al (20230409615) teaches using the top/bottom % for both categories; furthermore, para 0177 uses a similar Oracle scoring as disclosed by applicant’s spec, and furthermore in para 0177 – “the Oracle score may be useful for training a retriever that can learn to identify similar…the upper bound for DST-EQQA”);
and deploying the large language model, after the contrastive in-context learning (as, deploying the model to train the remaining separate retrievers for each of the domain orderings – para 0178),
to generate additional answers based on the user preference and responsive to receiving additional questions (the trained retrievers, for each domain, as noted in para 0178, are now used as assistants/bots – para 0048 – assistant systems handling user input on client devices).
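For clarity of the record, the following is a minimal, non-limiting sketch of the kind of contrastive in-context learning protocol recited in claim 1 (a question, a positive example answer, a negative example answer, instructions to analyze why each example is positive/negative, and deployment on additional questions). The sketch is hypothetical: the query_llm helper, the prompt wording, and the example user preference are placeholders supplied for illustration only and are not drawn from Khemka et al (20230409615) or from applicant’s specification.

# Hypothetical sketch of the claim 1 protocol; query_llm is a placeholder,
# not an API from any cited reference.
def query_llm(prompt: str) -> str:
    """Stand-in for a call to a deployed large language model."""
    return f"[model output for a {len(prompt)}-character prompt]"

def build_contrastive_prompt(question: str, positive: str, negative: str,
                             user_preference: str) -> str:
    # The protocol is based on a user preference (e.g., preferred length or style).
    return (
        f"User preference: {user_preference}\n"
        f"Question: {question}\n"
        f"Answer A (positive example): {positive}\n"
        "Analyze why Answer A is a good answer for this user.\n"
        f"Answer B (negative example): {negative}\n"
        "Analyze why Answer B is a bad answer for this user.\n"
        "Use this reasoning when answering future questions."
    )

# In-context learning step: the model outputs reasoning about the two examples.
context = build_contrastive_prompt(
    question="Summarize the meeting notes.",
    positive="A three-bullet summary of the decisions made.",
    negative="A ten-paragraph verbatim transcript of the meeting.",
    user_preference="concise answers",
)
reasoning = query_llm(context)

# Deployment step: additional questions are answered using the learned context.
additional_answer = query_llm(context + "\n" + reasoning +
                              "\nNew question: Summarize today's standup.")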
As per claims 2 and 3, Khemka et al (20230409615) teaches the computer-implemented method of claim 1, further comprising: generating the positive example responsive to the user providing a high rating to the first answer and generating the negative example responsive to the user providing a low rating to the second answer (as, the samples, positive and negative, are marked by the user for objects they interact with to train the personalized models – para 0076, in view of the ranking of positive/negative pairings – para 0178).
As per claim 4, Khemka et al (20230409615) teaches the computer-implemented method of claim 1, further comprising: performing an additional contrastive in-context learning on the large language model by inputting a first subset of additional answers with high ratings as additional positive examples and a second subset of additional answers with low ratings as additional negative examples (as, in the overall 200 sample data set, taking the top 10 in each category – additional positive examples and additional negative examples – para 0178, with the additional option of the user adding more examples – para 0076, last two sentences).
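As a non-limiting illustration of the subset selection discussed for claim 4 (and of the top/bottom selection described in para 0178 of Khemka et al (20230409615)), the following sketch uses hypothetical ratings and an illustrative cutoff of 10; none of the data or names are taken from the cited reference.

# Hypothetical sketch: selecting additional positive/negative examples from
# rated answers. The ratings, sample count, and cutoff of 10 are illustrative.
rated_answers = [(f"answer text {i}", float(i)) for i in range(200)]  # (text, user rating)

ranked = sorted(rated_answers, key=lambda pair: pair[1], reverse=True)
additional_positives = [text for text, _ in ranked[:10]]   # highest-rated subset
additional_negatives = [text for text, _ in ranked[-10:]]  # lowest-rated subset

# These subsets would then be fed back into the contrastive in-context
# learning protocol as further positive and negative examples.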
As per claim 5, Khemka et al (20230409615) teaches the computer-implemented method of claim 1, the deploying the large language model comprising: deploying the large language model as a chatbot agent (as, using the models in assistant systems – para 0048 – and assistant xbots – para 0031).
As per claim 6, Khemka et al (20230409615) teaches the computer-implemented method of claim 1, the deploying the large language model comprising: deploying the large language model as an e-mail generator (as, using the assistant systems/xbots accessing the user’s information, including email servers – para 0325).
As per claim 7, Khemka et al (20230409615) teaches the computer-implemented method of claim 1, the deploying the large language model comprising: deploying the large language model as a text generator (as, the assistant system sends the generated responses to the assistant application, and such output can be in various modalities, including text – para 0031, “The assistant system 140 may send…to the user at the client system 130 via various modalities (e.g., audio, text, image, and video).”).
Claims 11-17 are system claims that perform the steps found in claims 1-7 above; as such, claims 11-17 are similar in scope and content to claims 1-7 and are therefore rejected under a similar rationale as presented against claims 1-7 above. Furthermore, Khemka et al (20230409615) teaches a processor executing steps stored in memory – para 0331.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Khemka et al (20230409615) in view of Andersen et al (20190122667).
As per claims 9 and 19, Khemka et al (20230409615) teaches the computer-implemented method of claim 1 (as mapped above), but does not explicitly teach the user preference being associated with a length of answers, the inputting of the first answer and the second answer comprising: inputting to the large language model the first answer of a first length; and inputting to the large language model the second answer of a second length, the first length being shorter than the second length (Khemka et al (20230409615) teaches the first answer and the second answer being favorable/unfavorable, as explained above for claims 1-7, but does not teach using answer length toward favorable/unfavorable status). Andersen et al (20190122667) teaches that a short, concise answer from a set of candidate answers is more favorable (see para 0038, last 2 sentences). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the favorable/unfavorable decision in Khemka et al (20230409615) to include the length of the answer, as taught by Andersen et al (20190122667), because it would advantageously allow for quicker answers to have a higher priority, especially in emergency situations (see Andersen et al (20190122667), para 0038).
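As a non-limiting illustration of the length-based preference feature addressed for claims 9 and 19, the following sketch labels the shorter of two candidate answers as the positive example and the longer as the negative example; the answers and the helper name are hypothetical and are not taken from either cited reference.

# Hypothetical sketch: labeling two candidate answers by length, with the
# shorter answer treated as the positive (favorable) example.
def label_by_length(answer_a: str, answer_b: str):
    """Return (positive_example, negative_example), shorter answer as positive."""
    if len(answer_a) <= len(answer_b):
        return answer_a, answer_b
    return answer_b, answer_a

positive_example, negative_example = label_by_length(
    "Turn the valve clockwise to shut off the gas.",
    "There are many considerations one might weigh before deciding how to "
    "approach the valve, including its history, its manufacturer, and the weather.",
)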
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Khemka et al (20230409615) in view of Cohen et al (20060172275).
As per claims 10 and 20, Khemka et al (20230409615) teaches the computer-implemented method of claim 1 (as mapped against claim 1 above), but does not explicitly teach the user preference being associated with a style of answers, the inputting of the first answer and the second answer comprising: inputting to the large language model the first answer of a first style preferred by the user; and inputting to the large language model the second answer of a second style not preferred by the user (Khemka et al (20230409615) teaches the first answer and the second answer being favorable/unfavorable, as explained above for claims 1-7, but does not teach using answer style toward favorable/unfavorable status). Cohen et al (20060172275) teaches scoring answers based on correct/incorrect style (para 0189, 0190). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the favorable/unfavorable decision process of Khemka et al (20230409615) with the correct/incorrect style decision process, as taught by Cohen et al (20060172275), because it would advantageously detect possible errors due to the style change (Cohen et al (20060172275), para 0194).
Response to Arguments
Applicant's arguments filed 01/22/2026 have been fully considered but they are not persuasive. As per applicant's arguments toward the amended claim language, i.e., “why” the first and second answers form the positive/negative examples, examiner notes the explanation of the claim scope above, as well as how Khemka et al (20230409615) meets the claim scope by using a scoring/probability as to “why” a label of good/bad is made. Furthermore, examiner notes the further references below that teach few-shot and zero-shot contrastive models.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see related art listed on the PTO-892 form.
Further, the following references were found to be applicable to certain claim features and applicant’s specification:
Menon et al (20250014373) teaches positive and negative labels using contrastive learning (para 0044) in a few-shot learning setting (para 0045).
Hyland et al (20240256796) teaches positive and negative examples by contrastive learning (para 0039, 0050) in a zero-shot type of loop (para 0039).
OU et al (20240177462) teaches few-shot contrastive learning of global and local features (para 0030) using fine-grained positive/negative sample pairs.
Ghose et al (20140358631) teaches a system that generates FAQs (abstract), clustering based on user parameters/preferences, wherein the clusters (user preferences/parameters) are tied to the question-answer pairs (para 0042).
Gadamsetty et al (20050144090) teaches user-activated preferences that select the question/answer pairs in the preference set.
Xu et al (20240095460) teaches large language models (para 0003) and generative language models (para 0029) operating on question/answer pairs (para 0036) as prompts into the LLMs (para 0029), with scoring/ranking of the pairs (para 0059).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Opsasnick, telephone number (571)272-7623, who is available Monday-Friday, 9am-5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Richemond Dorvil, can be reached at (571)272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/Michael N Opsasnick/Primary Examiner, Art Unit 2658 01/30/2026