Prosecution Insights
Last updated: April 19, 2026
Application No. 18/140,658

Interacting with a Language Model using External Knowledge and Feedback

Final Rejection — §101, §102, §103
Filed: Apr 28, 2023
Examiner: VO, HUYEN X
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (869 granted / 1043 resolved; +21.3% vs TC avg; above average)
Interview Lift: +19.9% (strong; measured over resolved cases with interview)
Typical Timeline: 2y 10m average prosecution; 17 applications currently pending
Career History: 1060 total applications across all art units
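The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, where only the raw counts (869 granted / 1043 resolved) and the +21.3% delta come from this page and the function name is illustrative:

```python
# Illustrative arithmetic behind the examiner stats above.
# Only the counts come from the page; the function name is hypothetical.

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate_pct(869, 1043)
print(f"Career allow rate: {rate:.1f}%")            # 83.3%, shown as 83% above
print(f"Implied TC average: {rate - 21.3:.1f}%")    # back-solved from the +21.3% delta
```

The interview-lift figure is presumably computed the same way over the subset of resolved cases where an interview occurred; the page does not state that subset's size.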

Statute-Specific Performance

§101: 24.9% (-15.1% vs TC avg)
§103: 33.0% (-7.0% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 1043 resolved cases.

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments have been fully considered but they are not persuasive. Applicant essentially argues that the prior art of record fails to disclose “generating a usefulness measure” of the response for use in deciding whether or not to generate revised model-input information for the language model (REMARKS, pages 11-14). Absent specifics in the claims, the examiner maintains that the prior art of record fully anticipates all limitations of the independent claims. Specifically, an input query is received and processed to generate embedding information for comparison against embeddings of example user inputs (figure 7; the “knowledge information” is equated to the example user inputs) to generate a “few-shot prompt” (see figure 7; the “original model-input information” is equated to the few-shot prompt), which is input to a language model (figure 7, step 708). The language model returns a canonical form input (figure 7, step 708), which is used to decide whether the canonical form input is similar to a predefined dialog flow (figure 6, step 606; the “usefulness measure” is equated to this similarity measure). If the canonical form input fails to satisfy a certain condition, a new “few-shot prompt” is generated (figure 6, step 608 and/or the process in figure 8; the revised model-input information is equated to this new “few-shot prompt”) and input to the language model. The new claim amendments would not overcome the prior art of record. For the above reasons, the examiner maintains the previous grounds of rejection.

Regarding the §101 issue, the whole process is an abstract idea: a human being can perform these steps mentally or on a piece of paper.
Specifically, one can receive an input query, interpret the query as belonging to a certain topic, generate a question to ask someone else, receive back a response, determine whether the response is useful or related to the query, and, if not, ask a new question. Absent specifics, one can interpret the language model as another person who can provide an answer to the question asked. Also, “programmatically generating …” fails to provide meaningful significance that goes beyond generally linking the use of an abstract idea to a particular technological environment. For these reasons, the examiner maintains the §101 rejection.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claims 1, 14, and 19 recite “generating original model-input information …”, “generating a usefulness measure …”, and “generating revised model-input information …”. These limitations, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components; that is, other than the recited “processor”. For example, but for the “processor” language, these steps in the context of the claim encompass the user manually generating information based on the input, generating a usefulness score, and revising the model-input information. All of these steps can be performed in the mind and/or using pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas.
Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. In particular, the claim recites only one additional element: using a processor to perform the steps. The processor is recited at a high level of generality (i.e., as a generic computer device performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a “data store” and “computing system” and the additional steps of “receiving …” and “providing …” serve merely for data gathering and/or insignificant extra-solution activity that amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Similar to independent claims 1, 14, and 19, dependent claims 2-4, 6-13, 15-18, and 20 include the additional steps and/or elements “generating and presenting …”, “the generating …”, “policy is chosen …”, “retrieving …”, “identifying …”, “validating …”, and “generating feedback …”, which in the context of the claims encompass the user manually performing these steps. All of these steps can be performed in the mind and/or using pen and paper.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas.

Similar to independent claims 1, 14, and 19, dependent claim 5 includes the additional element of a “state machine”, which is considered insignificant extra-solution activity to the judicial exception because it fails to provide meaningful significance that goes beyond generally linking the use of an abstract idea to a particular technological environment. Therefore, these claims are also not patent eligible.

Regarding the CRM issue, the original disclosure explicitly indicates that a “computer-readable storage medium” or “storage device” “expressly excludes propagated signals per se in transit, while including all other forms of computer-readable media; a computer-readable storage medium or storage device is ‘non-transitory’ in this regard” (paragraph 123). Therefore, there is no CRM issue.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5, 11-14, and 17-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Dinu et al. (USPG 2024/0354319).
Regarding claims 1, 14, and 19, Dinu discloses a computer-implemented method, system, and CRM for interacting with a machine-trained language model, comprising: computer-readable storage media including an instruction data store for storing computer-readable instructions and a state data store for storing state information, the state information describing aspects of a current dialogue state; and a processing system for executing the computer-readable instructions based on the state information in the state data store (figures 1-2 and 11-12), to perform operations including: receiving an input query (figure 6, step 602, receiving input query); providing knowledge information based on the input query (figure 6, step 604, discussed in detail in figure 7, step 702 and/or paragraph 70, “the dialog engine 130 generates an embedding of the user input in a semantic or latent space”); generating original model-input information that includes the input query and the knowledge information (figure 7, steps 704-706, generate “few-shot prompt”), and presenting the original model-input information to the language model (figure 7, step 708 and/or paragraph 73, input the few-shot prompt into the language model to generate a canonical form input); receiving an original response from the language model, the original response being generated using the original model-input information as input (figure 6, step 606, comparing the canonical form input, discussed in detail in figure 7, with a predefined dialog flow to determine if they match; also see paragraphs 65-66; the few-shot prompt is generated based on the input query and example user inputs, and the response is equated to the canonical form); programmatically generating a usefulness measure that identifies usefulness of the original response (figure 6, step 606 and/or paragraphs 65-66; the “usefulness measure” is treated as a determination of whether the canonical form input is similar to the predefined dialog flow); and in response to determining that the usefulness measure does not satisfy a prescribed test, generating revised model-input information that includes feedback information (figure 6, step 606: if the comparison fails to satisfy a certain condition, proceed to step 608, discussed in figure 8, which generates a “few-shot prompt” to input to the model; also see paragraphs 74-77), presenting the revised model-input information to the language model, and receiving a revised response from the language model in response to the revised model-input information (figure 8, step 808, inputting the “few-shot prompt” into the language model).

Regarding claims 2-3, Dinu further discloses the method of claim 1, wherein the language model includes weights that are produced in a pre-training operation, and wherein the weights of the language model remain fixed during training of other machine-trained logic used by the method (paragraphs 95 and 97, storing “weight”); and wherein the language model includes attention logic for assessing relevance to be given to a part of input information fed to the attention logic when interpreting another part of the input information (paragraph 111, “recurrent and/or attention-based neural networks”).

Regarding claim 5, Dinu further discloses the method of claim 1, wherein the generating of the revised model-input information is performed upon receiving a user request to generate the revised response (process in figure 6, merely a subsequent input from the user).
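The independent-claim limitations mapped above describe a build-prompt, score-response, revise-with-feedback loop. A minimal sketch of that flow, where every name, the feedback string, and the word-overlap usefulness heuristic (loosely motivated by the claim 11 mapping below) are invented for illustration and are neither the application's nor Dinu's actual implementation:

```python
# Hypothetical sketch of the claimed loop: assemble model input from the query
# plus knowledge information, score the response's usefulness, and present
# revised model-input information with feedback if the score fails a
# prescribed test. All names and the heuristic are illustrative only.

def usefulness(response: str, knowledge: str) -> float:
    """Toy usefulness measure: fraction of response words found in the knowledge."""
    r, k = set(response.lower().split()), set(knowledge.lower().split())
    return len(r & k) / max(len(r), 1)

def interact(query: str, knowledge: str, model, threshold: float = 0.5,
             max_rounds: int = 3) -> str:
    model_input = f"{knowledge}\n\nQuery: {query}"      # original model-input information
    response = model(model_input)                       # original response
    for _ in range(max_rounds):
        if usefulness(response, knowledge) >= threshold:  # prescribed test
            break
        feedback = "Your previous answer ignored the provided knowledge."
        model_input = f"{feedback}\n\n{model_input}"    # revised model-input information
        response = model(model_input)                   # revised response
    return response
```

Here `model` is any callable that takes a prompt string and returns text, so a real LLM client or a stub can be dropped in for experimentation.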
Regarding claims 11-13 and 17-18, Dinu further discloses wherein the generating of the usefulness measure includes assessing an extent of overlap between the original response and the knowledge information (paragraph 46, similarity indicates overlap between the original response and the knowledge information); generating the feedback information by retrieving pre-generated prompt information from a data store (figure 6, step 606: if the comparison fails to satisfy a certain condition, proceed to step 608, discussed in figure 7, which generates a “few-shot prompt” to input to the model; also see paragraphs 72-73); and further comprising generating the feedback information using a generative machine-trained model, based on state information that describes aspects of a current dialogue state (paragraph 64, based on conversation history).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6-9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Dinu in view of Boies et al. (USPG 2014/0379353, hereinafter Boies).
Regarding claims 6-9 and 15, Dinu fails to explicitly disclose, but Boies teaches, wherein different actions performed by the method are chosen by a state machine based on state information and a policy, the state information describing aspects of a current dialogue state, and the policy expressing logic for mapping different instances of state information to the different actions (paragraph 14, “Machine action generator 120 uses dialog policy 130 when determining the machine action. The dialog policy 130 includes different rules, including rules that use environmental conditions 150 and other dialog state information, to adjust the machine action that is generated”); wherein the state information describes aspects of a current dialogue turn, including at least: the query; the knowledge information; and a last-received response from the language model (paragraph 14, input, context, and result); wherein the state information also describes a history of previous dialogue turns, prior to the current dialogue turn (paragraph 14, taking the previous dialog turn into consideration); and wherein the policy is chosen to maximize attainment of an objective, and wherein an extent to which an action advances the objective is expressed by a reward signal (paragraphs 14-16, “Environmental conditions 150 may affect how the machine action is provided to the user (e.g., speech, visual . . . ). For example, the response generated by response generator 140 may be a visual response when environmental conditions 150 indicate that the user's environment is noisy. The response generated by response generator 140 may be an auditory response when environmental conditions 150 indicate that the user's environment is very bright and it is unlikely that a display may be seen clearly”; the “reward signal” can be construed as the best output for the user).
Since Dinu and Boies are analogous art from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to use the known technique of using a state machine to choose an action. One of ordinary skill in the art would have recognized that the results of the combination were predictable, since the use of that known technique provides the rationale to arrive at a conclusion of obviousness. See KSR International Co. v. Teleflex Inc., 82 USPQ2d 1385 (U.S. 2007).

Allowable Subject Matter

Claims 4, 10, 16, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and resolving the §101 issue.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Heller et al. (USPN 12067366) teach a generative text model query method. Qiu et al. (USPG 2018/0004729, hereinafter Qiu) teach a state-machine-based context-sensitive system for managing multi-round dialog. These references are considered pertinent to the claimed invention.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUYEN X VO, whose telephone number is (571) 272-7631. The examiner can normally be reached M-F, 8-4.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUYEN X VO/
Primary Examiner, Art Unit 2656

Prosecution Timeline

Apr 28, 2023 — Application Filed
Jul 24, 2025 — Non-Final Rejection (§101, §102, §103)
Oct 28, 2025 — Response Filed
Jan 21, 2026 — Final Rejection (§101, §102, §103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603083 — ESTIMATION DEVICE, ESTIMATION METHOD, AND RECORDING MEDIUM (2y 5m to grant; granted Apr 14, 2026)
Patent 12596873 — OPTIMIZATION OF RETRIEVAL AUGMENTED GENERATION USING DATA-DRIVEN TEMPLATES (2y 5m to grant; granted Apr 07, 2026)
Patent 12586594 — GUIDING AMBISONIC AUDIO COMPRESSION BY DECONVOLVING LONG WINDOW FREQUENCY ANALYSIS (2y 5m to grant; granted Mar 24, 2026)
Patent 12579990 — ENCODING DEVICE, DECODING DEVICE, ENCODING METHOD, AND DECODING METHOD (2y 5m to grant; granted Mar 17, 2026)
Patent 12572755 — SYSTEM AND METHOD FOR AUGMENTING TRAINING DATA FOR NATURAL LANGUAGE TO MEANING REPRESENTATION LANGUAGE SYSTEMS (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
Grant Probability With Interview: 99% (+19.9%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate

Based on 1043 resolved cases by this examiner. Grant probability derived from career allow rate.
