Prosecution Insights
Last updated: April 19, 2026
Application No. 18/783,023

DYNAMICALLY ADJUSTING RESPONSE PARAMETERS OF A LARGE LANGUAGE MODEL DURING AN INTERACTION WITH A USER

Status: Non-Final OA (§103)
Filed: Jul 24, 2024
Examiner: Serrou, Abdelali
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Red Hat Inc.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (above average), 437 granted / 587 resolved, +12.4% vs TC avg
Interview Lift: +30.4% (resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution, 23 currently pending
Career History: 610 total applications across all art units

Statute-Specific Performance

§101: 19.7% (-20.3% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 587 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Maschmeyer (US 20250094025) in view of Goldshtein (US 20240194180).
As per claim 1, Maschmeyer teaches a system comprising: one or more processors; and one or more memories storing program code that is executable by the one or more processors for causing the one or more processors to perform operations ([0069]) comprising:

inputting a first system prompt to a large language model, wherein the first system prompt includes a first set of response parameters, and wherein the large language model is configured to enter a first functional state that conforms to the first set of response parameters based on receiving the first system prompt (Fig. 1 and [0024], wherein the text customization system 120 generates a prompt 125 to the LLM 130; and Figs. 4A-4B, [0033]-[0034], wherein the prompt includes response parameters such as text properties; the selected parameters enable the LLM to enter the state of generating text that conforms with the selected parameters);

while the large language model is in the first functional state, operating the large language model to engage in an interaction with a user to thereby generate first interaction content (Figs. 5A-5B, [0035]-[0036], wherein the user is presented with a second set of parameters to select from);

determining that a condition is satisfied based on the first interaction content ([0035]-[0036], identifying the selected parameters and determining that the range of degrees of some properties is within a composable range);

based on determining that the condition is satisfied, inputting a second system prompt to the large language model, wherein the second system prompt includes a second set of response parameters that is different from the first set of response parameters, and wherein the large language model is configured to enter a second functional state that conforms to the second set of response parameters based on receiving the second system prompt ([0030]-[0036], inputting to the LLM the second set of parameters (regional dialect and traits), which is different from the first set of parameters (text properties such as concise, direct, friendly…)); and

while the large language model is in the second functional state, operating the large language model to continue the interaction with the user to thereby generate second interaction content ([0040], wherein the computer system can iteratively interact with the user to update the controls that are displayed or the properties of the controls and thereby generate second interaction content; see also [0041], wherein the computer system provides the user a preview of the LLM output; at each iteration of providing the prompt to the LLM, the output of the LLM is modified based on one or more of the selected parameters).

Maschmeyer may not explicitly disclose entering a first and a second functional state. However, the computer system of Maschmeyer necessarily enters different functional states as it interacts with the user (Figs. 3-6). Nevertheless, in order to expedite prosecution, the examiner refers to the prior art of Goldshtein.
Goldshtein, in the same field of endeavor, teaches a chatbot corresponding to a trained large language model (LLM) receiving unstructured free-form natural language input from a user of a client device (first system prompt). The unstructured free-form natural language input includes a natural language description of a dialog state map corresponding to the LLM ([0042], [0046], [0056]-[0057], and [0075]); and based on the nature of the unstructured free-form natural language input provided by the user of the client device ([0078]-[0079]), the automated assistant enables the chatbot/LLM to enter a given dialog state (such as the ones shown in Fig. 5B) and generate a corresponding response accordingly (Figs. 5A-5C and [0078]-[0079]). Therefore, it would have been obvious at the time the application was filed to use the above feature of Goldshtein with the system of Maschmeyer, in order to provide robust interaction devices.

As per claim 2, Maschmeyer teaches wherein the first set of response parameters includes a first role to be played by the large language model, and wherein the second set of response parameters includes a second role to be played by the large language model, the second role being different from the first role (Maschmeyer, Figs. 4A-4B, [0030]-[0036], wherein the LLM at each iteration enters a different functional state conforming with the received different response parameters; at each functional state, the LLM performs a different role. See also Goldshtein, Figs. 5A-5B, for the different dialog states where the LLM performs different roles based on the received response parameters, i.e., greeting, checking inventory, providing location information, store hours information, confirmation…).
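The mechanism recited in claims 1-2 — an LLM whose "functional state" is set by whichever system prompt is currently in effect, swapped mid-interaction when a condition is satisfied by the interaction content — can be sketched as follows. This is a minimal, hypothetical illustration of the claim language only, not an implementation from Maschmeyer or Goldshtein; all function names, parameter keys, and prompt wording are assumptions.

```python
def make_system_prompt(params):
    """Render a set of response parameters (role, tone, length) into a system prompt."""
    return (f"You are a {params['role']}. Use a {params['tone']} tone "
            f"and give {params['length']} replies.")

def run_interaction(llm, user_messages, first_params, second_params, condition):
    """Drive the interaction, inputting a second system prompt once the condition is met."""
    system_prompt = make_system_prompt(first_params)  # first functional state
    transcript = []
    for msg in user_messages:
        reply = llm(system_prompt, msg)
        transcript.append((system_prompt, msg, reply))
        # Determine whether the condition is satisfied based on interaction content
        if condition(transcript):
            # Enter the second functional state by swapping the system prompt
            system_prompt = make_system_prompt(second_params)
    return transcript
```

Here `llm` stands in for any callable that takes a system prompt and a user message; the point is only that subsequent turns are generated under the new prompt, matching the claimed first/second functional states.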
As per claim 3, Maschmeyer teaches wherein the operations comprise executing a rule engine to determine whether the condition is satisfied, the rule engine being configured to apply a predefined set of rules against the first interaction content to determine whether one or more conditions are satisfied ([0035], wherein, based on the dialect that is selected, there may be a limited range of degrees of some traits that are suitable to combine with the selected dialect, and if a user attempts to select a value outside this range, the system takes no action in response to the user's input, or otherwise notifies the user that the user attempted to select a non-selectable value).

As per claim 4, Maschmeyer teaches wherein the operations comprise: based on determining that the condition is satisfied, selecting the second system prompt based on a correlation between the condition and the second system prompt in a predefined mapping, wherein the predefined mapping includes correlations between a plurality of conditions and a plurality of system prompts; and based on selecting the second system prompt, inputting the second system prompt to the large language model ([0035]-[0036], wherein, based on determining the condition that a particular dialect is selected, identifying a range of degrees of a second text property that are suitable to combine with the selected dialect; and if the conditions conform with the system prompts, a text with American English and a specific trait is produced).
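Claims 3-4 recite two cooperating structures: a rule engine that applies a predefined set of rules to the interaction content, and a predefined mapping that correlates each satisfied condition with the next system prompt to input. A minimal, hypothetical sketch of that pairing is below; the rule names, rule logic, and prompt strings are illustrative assumptions, not taken from the cited references.

```python
# Predefined set of rules: each rule maps a condition name to a predicate
# applied against the interaction content (claim 3's rule engine).
RULES = {
    "escalation_requested": lambda text: "speak to a manager" in text.lower(),
    "refund_mentioned": lambda text: "refund" in text.lower(),
}

# Predefined mapping correlating conditions with system prompts (claim 4).
PROMPT_MAP = {
    "escalation_requested": "You are a supervisor. Be formal and thorough.",
    "refund_mentioned": "You are a refunds specialist. Be concise and precise.",
}

def evaluate_rules(interaction_content):
    """Return the names of all conditions satisfied by the interaction content."""
    return [name for name, rule in RULES.items() if rule(interaction_content)]

def select_next_prompt(interaction_content):
    """Select the system prompt correlated with the first satisfied condition, if any."""
    for condition in evaluate_rules(interaction_content):
        return PROMPT_MAP[condition]
    return None  # no condition satisfied: keep the current functional state
```

Keeping the rules and the condition-to-prompt mapping as separate tables mirrors the claim language, where determining that a condition is satisfied and selecting the correlated prompt are distinct operations.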
As per claim 5, Maschmeyer teaches wherein operating the large language model in the first functional state to engage in the interaction with the user involves: receiving messages from the user; providing the messages as input prompts to the large language model, the input prompts being distinct from the first system prompt and the second system prompt; receiving responses to the messages as output from the large language model; and providing the responses to the user, wherein the messages and the responses constitute the first interaction content (Fig. 1, [0024]-[0027], wherein the text customization system generates a prompt to the LLM, which outputs generated text in response to the prompt; and [0041], wherein the computer system iteratively provides a prompt to the LLM and outputs the output of the LLM responsive to that prompt for each iteration; for each iteration, the output of the LLM is modified based on one or more LoRA models of the plurality of LoRA models selected based on the current state).

As per claim 6, Maschmeyer teaches dynamically adjusting one or more response parameters of the large language model during the interaction by providing different system prompts as input to the large language model during the interaction in response to different conditions being satisfied during the interaction (Maschmeyer, [0047], wherein, if the value outputted by the ML model is excessively high, the parameters may be adjusted so as to lower the output value in future training iterations, and [0049], wherein backpropagation is used to adjust the parameters. See also Goldshtein, [0007]-[0011], wherein a trained chatbot corresponding to a large language model is fine-tuned based on unstructured free-form natural language input; the chatbot is capable of generating conversational outputs that are attentioned to the state(s)/transition(s) implicitly and/or explicitly defined by the unstructured free-form natural language input).
As per claim 7, Goldshtein may not explicitly disclose wherein the condition is a first condition, and wherein the operations comprise: determining that a second condition is satisfied based on the second interaction content, the second condition being different from the first condition; based on determining that the second condition is satisfied, inputting a third system prompt to the large language model, wherein the third system prompt includes a third set of response parameters that is different from the first set of response parameters and the second set of response parameters, and wherein the large language model is configured to enter a third functional state that conforms to the third set of response parameters in response to receiving the third system prompt; and while the large language model is in the third functional state, operating the large language model to continue the interaction with the user to thereby generate third interaction content. However, Maschmeyer iteratively receives a user input including a set of response parameters, provides a prompt to the LLM based on the user input, and outputs the output of the LLM responsive to that prompt for each iteration; for each iteration, the output of the LLM is modified based on one or more LoRA models of the plurality of LoRA models selected based on the current state. Therefore, it would have been obvious at the time the application was filed for the system of Maschmeyer to determine that a second condition is satisfied; input a third system prompt to the large language model, wherein the large language model is configured to enter a third functional state that conforms to the third set of response parameters in response to receiving the third system prompt; and, while the large language model is in the third functional state, operate the large language model to continue the interaction with the user to thereby generate third interaction content, as claimed.
This would improve quality assurance and enhance users' experience.

As per claims 8 and 10-14, method claims 8 and 10-14 and apparatus claims 1-7 are related as method and apparatus of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claims 8 and 10-14 are similarly rejected under the same rationale as applied above with respect to apparatus claims 1-7.

As per claim 9, Maschmeyer teaches wherein the first set of response parameters includes a role parameter, a tone parameter, or a length parameter ([0072], wherein, among the parameters used for adjusting outputs generated by the language model or LLM, is the length of the output).

As per claims 15 and 17-20, Maschmeyer teaches a computer readable medium ([0069], [0078]). The remaining steps are rejected under the same rationale as applied to the method steps of rejected claims 1-7.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Maschmeyer (US 20250094025) in view of Goldshtein (US 20240194180), and further in view of Shah (US 2024/0387025).

As per claim 16, Maschmeyer teaches wherein the first set of response parameters includes a role parameter, a tone parameter, or a length parameter ([0072], wherein, among the parameters used for adjusting outputs generated by the language model or LLM, is the length of the output). Maschmeyer in view of Goldshtein may not explicitly disclose wherein the first set of response parameters includes a first length parameter, a first tone parameter, and a first role parameter; and the second set of response parameters includes a second length parameter, a second tone parameter, and a second role parameter. Shah, in the same field of endeavor, teaches a multi-turn conversational system wherein the response parameters include a role which the LLM takes during a conversation ([0045]-[0048]) and a tone which the LLM may generate during a conversation ([0072]).
Therefore, it would have been obvious at the time the application was filed to use the above role and tone features of Shah along with the length feature of Maschmeyer, in order to make use of a first set of response parameters that includes a first length parameter, a first tone parameter, and a first role parameter, and a second set of response parameters that includes a second length parameter, a second tone parameter, and a second role parameter, and interact with the user. This would improve quality assurance and enhance users' experience.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELALI SERROU, whose telephone number is (571) 272-7638. The examiner can normally be reached M-F, 9 AM - 5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre-Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ABDELALI SERROU/Primary Examiner, Art Unit 2659

Prosecution Timeline

Jul 24, 2024 — Application Filed
Mar 13, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602544 — INFORMATION PROCESSING APPARATUS, OPERATION METHOD, AND RECORDING MEDIUM
2y 5m to grant; granted Apr 14, 2026

Patent 12596875 — TECHNIQUES FOR ADAPTIVE LARGE LANGUAGE MODEL USAGE
2y 5m to grant; granted Apr 07, 2026

Patent 12597417 — EXPORTING MODULAR ENCODER FEATURES FOR STREAMING AND DELIBERATION ASR
2y 5m to grant; granted Apr 07, 2026

Patent 12596889 — GENERATION OF NATURAL LANGUAGE (NL) BASED SUMMARIES USING A LARGE LANGUAGE MODEL (LLM) AND SUBSEQUENT MODIFICATION THEREOF FOR ATTRIBUTION
2y 5m to grant; granted Apr 07, 2026

Patent 12591603 — AUTOMATED KEY-VALUE EXTRACTION USING NATURAL LANGUAGE INTENTS
2y 5m to grant; granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+30.4%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 587 resolved cases by this examiner. Grant probability derived from career allow rate.
