Prosecution Insights
Last updated: April 19, 2026
Application No. 18/300,930

Efficiently Extendable In-Interpreter Natural Language Agent

Non-Final OA §102
Filed
Apr 14, 2023
Examiner
MCLEAN, IAN SCOTT
Art Unit
2654
Tech Center
2600 — Communications
Assignee
ServiceNow Inc.
OA Round
3 (Non-Final)
43%
Grant Probability
Moderate
3-4
OA Rounds
3y 2m
To Grant
74%
With Interview

Examiner Intelligence

Grants 43% of resolved cases
43%
Career Allow Rate
19 granted / 44 resolved
-18.8% vs TC avg
Strong +31% interview lift
+31.0%
Interview Lift
resolved cases with vs. without interview
Typical timeline
3y 2m
Avg Prosecution
40 currently pending
Career history
84
Total Applications
across all art units

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 60.0% (+20.0% vs TC avg)
§102: 27.2% (-12.8% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 44 resolved cases

Office Action

§102
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

2. Claims 1-13 are drawn to a method. Thus, they are not interpreted under 35 U.S.C. 112(f). The Examiner has corrected this below.

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are: a first module in claims 14, 16 and 18-19.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Response to Arguments

3. Applicant's arguments filed 1/15/2026 have been fully considered but they are not persuasive. Applicant argues that Lee cannot teach newly amended claim 1.
In particular, Applicant argues that Lee does not disclose “updating the history by adding a representation of the first interpreter output to the history, wherein the first interpreter output is received from the interpreter in response to applying the first textual output thereto.” Applicant argues instead that “Rather, these portions of Lee relate to using feedback to re-train a model. Such re-training can be a very computationally expensive task. In contrast, amended claim 1 does not recite retraining the ‘natural language model,’ but rather ‘updating the history by adding a representation of the first interpreter output to the history’ and then ‘applying the updated history to the input of the trained natural language model,’ in order to allow the ‘second textual output’ thusly generated by the model to be improved by access to the information in the ‘first interpreter output.’ This avoids the extreme computational expense of re-training the model by, instead, providing the model with documentation or other context information in the ‘first interpreter output’ by applying that information, as part of the ‘updated history,’ ‘to the input of the natural language model.’”

The examiner respectfully disagrees. The amended claim does not require that the model is not retrained.
The amended claim requires a runtime inference loop with the following elements:
- a trained natural language model;
- a history provided as input to the model;
- a first textual output that requests documentation for a module in a particular language;
- an interpreter response;
- updating the history by adding a representation of that interpreter response;
- re-applying the same trained model to the updated history;
- generating a second textual output that is a function call; and
- executing the function call on the interpreter.

The claim does not recite that:
- retraining is prohibited;
- retraining cannot occur elsewhere;
- retraining cannot be asynchronous; or
- retraining cannot also happen using the same data.

The claim is agnostic with regard to retraining. It merely requires using a trained model twice, with different inputs.

Additionally, the factual answer to whether Lee teaches “updating history and re-applying the model” is yes. Lee repeatedly describes generating a command, receiving system output, storing that output, and using that stored information in subsequent model applications. Even when Lee later uses the same information for retraining, it is first stored and reused as input context. That satisfies “updating the history by adding a representation of the first interpreter output to the history and applying the updated history to the input of the trained natural language model.” The fact that Lee also retrains does not negate that it performs runtime reuse of outputs. While Lee discloses optional retraining of machine learning models based on accumulated user feedback, Lee also clearly distinguishes such retraining from the runtime execution path in which a trained model infers intent, generates commands and executes tasks without retraining. As shown in Fig. 4, retraining steps 480-482 are conditional; in particular, 481 and 482 need not occur at all. Step 482 occurs after execution of the final query 479 and is not a prerequisite for generation of subsequent outputs.
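For illustration only, the runtime inference loop enumerated above (apply the model, execute its output on the interpreter, append the interpreter output to the history, then re-apply the same frozen model) can be sketched in code. All names (`StubModel`, `StubInterpreter`, `run_agent_turn`) and the example commands are hypothetical, invented for this sketch; they are not drawn from the claims or from Lee.

```python
# Hypothetical sketch of the claimed runtime loop. The stub model and
# interpreter are illustrative stand-ins, not Lee's system or the claimed
# implementation. No retraining occurs anywhere in this loop.

class StubModel:
    """Frozen model: emits a documentation request first, then a function call."""
    def generate(self, history):
        if any(turn["role"] == "interpreter" for turn in history):
            # Second pass: documentation is now in context.
            return "reservations.book('table', 2)"
        # First pass: request documentation for a module.
        return "help(reservations)"

class StubInterpreter:
    """Executes commands written in the interpreter's language."""
    def execute(self, command):
        if command.startswith("help("):
            return "reservations.book(what, n): books a reservation"
        return "booked"

def run_agent_turn(model, interpreter, history, user_query):
    history.append({"role": "user", "content": user_query})
    first_output = model.generate(history)            # first textual output
    first_interp_out = interpreter.execute(first_output)
    # Update the history with a representation of the interpreter output;
    # the same trained model is then simply re-applied to the updated history.
    history.append({"role": "interpreter", "content": first_interp_out})
    second_output = model.generate(history)           # a function call
    return interpreter.execute(second_output)
```

The point of the sketch is the examiner's distinction: the model's weights never change between the two `generate` calls; only its input (the history) does.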
The trained ML model infers intent and produces executable commands based on available context regardless of whether retraining occurs. Fig. 4 block 479 and Fig. 11 explicitly disclose applying outputs generated during execution as inputs to subsequent processing steps without retraining, in particular providing the final command to the management system to implement the identified task following inference of the command by a trained ML model. The figures taken together show that the system does this prior to any retraining. In view of the arguments above, the rejection of claims 1-20 is maintained.

Claim Rejections - 35 USC § 102

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

5. Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lee (US 2023/0315856).
Regarding Claim 1: A method comprising:

applying a history to an input of a trained natural language model to generate a first textual output, wherein the history contains a first user query, and wherein the first textual output includes, as a command in a language of an interpreter, a request for documentation regarding a first module of a plurality of application modules (Lee: p[0046-0069], p[0075] and p[0078-0084] disclose multiple machine learning models trained on corpora and context; the system receives user queries via natural language input and generates a template query through intent inference based on the natural language request; the ML model operates based on a signal indicating completion of the query and maintains user corrections and selections as part of system history);

applying the first textual output to the interpreter and receiving, from the interpreter, a first interpreter output as a response to the first textual output (Lee: p[0075], p[0085] and p[0095] disclose that the finalized query is provided to a system which executes the task);

updating the history by adding a representation of the first interpreter output to the history (Lee: p[0075], p[0077], p[0086] and p[0094] discuss how results from execution are stored and inform further interactions; user adjustments based on system outputs are captured and used to retrain the models, effectively updating the system’s memory/history);

applying the updated history to the input of the trained natural language model to generate a second textual output that comprises, in the language of the interpreter, a first function call to a first function of the first module (Lee: p[0075], p[0077], p[0085] and p[0094] disclose that the revised or re-submitted final query is executed by invoking a command to the system); and

applying the first function call of the second textual output to the interpreter (Lee: p[0075], p[0077], p[0086] and p[0094] disclose that the revised or re-submitted final query is executed via a backend system).

Regarding Claim 2: Lee further discloses the method of claim 1, further comprising receiving the first user query by: generating, using the trained natural language model, a third textual output (Lee: p[0064-0069] discloses that trained ML models can generate multiple different outputs (queries or commands) based on different prompts); applying the third textual output to the interpreter, wherein the third textual output comprises a command to return at least one prior user input (Lee: p[0095] and p[0096] disclose commands that interact with the system context and memory; a finalized query can be converted to a complex SQL query or invoke identified functions, such as obtaining a list of HTTP connections with sources, modifying a security setting, etc. Since users interactively refine commands based on past queries and result context in p[0075-0077], and the system maintains dialog context/history, a function that queries or surfaces previous input or system state is reasonably encompassed); and receiving a second interpreter output from the interpreter in response to applying the third textual output thereto, wherein the second interpreter output is representative of the first user query (Lee: p[0075-0077], p[0095]: the output of interpreter-executed queries may include representations of prior query content or system history, e.g., summaries of prior actions or visualized parameter settings, which could include data reflective of prior user inputs).
Regarding Claim 3: Lee further discloses the method of claim 1, wherein the first module is at least one of a knowledgebase query module, a reservation module, a server management module, a database management or access module, a user privileges modification or query module, a user biographical information modification or query module, a telecommunications module, a commercial services query module, or a map query module (Lee: p[0044-0057] discloses a body of information and returning summaries and reports about system state, which reasonably teaches a knowledgebase query module, a server management module, a database access module, and a user privileges module).

Regarding Claim 4: Lee further discloses the method of claim 1, further comprising: adding a representation of a second interpreter output to the history, wherein the second interpreter output is received from the interpreter in response to applying the second textual output thereto (Lee: p[0075-0077] and p[0085] disclose “the NL analysis device receives the finalized query and generates a system command… such that the tasks may be performed to return results” and “the user can view the results and adjust… to generate an updated finalized query, any number of times…”; this means the returned results from the second textual output (a finalized query) inform what gets stored and fed back into future inference); subsequent to adding the representation of the second interpreter output to the history, applying the history to the input of the trained natural language model to generate third textual output (Lee: p[0077], p[0094] and p[0095]: in multiple cases the history (which now includes the second interpreter output) is implicitly or explicitly reused to produce new text, i.e., the third textual output); and presenting a representation of the third textual output (Lee: p[0092-0093] discloses that after each ML inference step the generated query is presented to the user interface).
Regarding Claim 5: Lee further discloses the method of claim 4, wherein presenting the representation of the third textual output comprises: applying the third textual output to the interpreter, wherein the third textual output comprises a command to provide a representation of the second interpreter output to a user (Lee: p[0095]: the finalized queries are converted into commands that return results to the user, satisfying the requirement that the third output retrieves and presents previous output).

Regarding Claim 6: Lee further discloses the method of claim 1, further comprising: adding a representation of a second interpreter output to the history, wherein the second interpreter output is received from the interpreter in response to applying the second textual output thereto, and wherein the second interpreter output includes an error message (Lee: p[0086] discloses error feedback as part of execution results); subsequent to adding the representation of the second interpreter output to the history, applying the history to the input of the trained natural language model to generate third textual output, wherein the third textual output represents a request for additional information related to the first function call (Lee: p[0075]: the user provides clarifying follow-up, and the model generates a refined query); presenting a representation of the third textual output (Lee: p[0093] presents results via editable templates or suggestions); responsive to presenting the representation of the third textual output, receiving a first user response (Lee: p[0075] discloses the system receiving new input from the user after viewing output); in response to receiving the first user response, adding a representation of the first user response to the history (Lee: p[0075] and p[0086] disclose updating history and retraining based on it); subsequent to adding the representation of the first user response to the history, applying the history to the input of the trained natural language model to generate fourth textual output (Lee: p[0070-0085]: the machine learning model generates further query output based on the newly extended history); and applying the fourth textual output to the interpreter, wherein the fourth textual output comprises a second function call to the first function of the first module (Lee: p[0095]: the system re-executes the updated query with modified parameters any number of times).

Regarding Claim 7: Lee further discloses the method of claim 6, wherein presenting a representation of the third textual output comprises applying the third textual output to the interpreter, wherein the third textual output comprises a command to provide a representation of the second interpreter output to a user, and wherein receiving the first user response comprises: generating, using the trained natural language model, a fifth textual output (Lee: p[0095] supports this by disclosing presentation of previous results to the user); applying the fifth textual output to the interpreter, wherein the fifth textual output comprises a command to return at least one prior user input (Lee: p[0075]: iterative query refinement is supported); and receiving a third interpreter output from the interpreter in response to applying the fifth textual output thereto, wherein the third interpreter output is representative of the first user response (Lee: p[0095]: the system returns updated results based on this response).
Regarding Claim 8: Lee further discloses the method of claim 1, further comprising: receiving the first user query (Lee: p[0072]: the user inputs a natural language phrase via the NL interface, interpreted as the first query); adding a representation of the first user query to the history (Lee: p[0075], p[0085] and p[0094] disclose that the natural language query and subsequent actions are stored in the system’s context/history for further inference and learning); and prior to receiving the first user query: generating, using the trained natural language model, a third textual output (Lee: p[0075] and p[0086]: the model can generate textual output based on initial prompts or historical interaction even before receiving a new user query); applying the third textual output to the interpreter, wherein the third textual output comprises a request to return information about a set of modules that are usable by the interpreter, wherein the first module is a member of the set of modules (Lee: p[0095-0096] discloses that the system can be queried about capabilities of modules and supported operations, including requesting lists of actions, query fields, etc., and their relationship to backend modules); receiving a second interpreter output from the interpreter in response to applying the third textual output thereto, wherein the second interpreter output is representative of capabilities of each module of the set of modules (Lee: p[0095]: the management system returns structured data describing the capabilities of each queried module); and adding a representation of the second interpreter output to the history (Lee: p[0094-0095]: the results returned are retained in history for further model training and contextual refinement).
Regarding Claim 9: Lee further discloses the method of claim 1, further comprising, prior to applying the history to the input of the trained natural language model to generate the second textual output and subsequent to adding the representation of the first interpreter output to the history: applying the history to the input of the trained natural language model to generate third textual output (Lee: p[0094-0095]: a new query is generated by the ML model using updated history after storing the first interpreter output); applying the third textual output to the interpreter, wherein the third textual output comprises a request for at least one of information about the first module or information about the first function (Lee: p[0096] discloses the system requesting function/module details using ML-inferred textual commands, such as supported actions, schema or descriptions); and adding a representation of a second interpreter output to the history, wherein the second interpreter output is received from the interpreter in response to applying the third textual output thereto (Lee: p[0075] and p[0094] disclose that the output from this metadata/function query is stored in the system’s history).

Regarding Claim 10: Lee further discloses the method of claim 1, wherein the trained natural language model includes more than a billion parameters and has been trained on a corpus of generic speech, and wherein the history includes, prior to adding a representation of the first user query thereto, representations of at least two examples of goal-oriented dialog using the interpreter (Lee: p[0066] explicitly discloses GPT-3, GPT-Neo 2.7B and similar large models with billions of parameters trained on general language corpora).
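Claim 10's requirement that the history include, before the first user query is added, representations of at least two examples of goal-oriented dialog can be pictured as a few-shot prompt seed. The following sketch is purely illustrative; the roles, module names, and dialog content are invented here and appear in neither the claims nor Lee.

```python
# Hypothetical few-shot seed for the model's history, illustrating claim 10's
# requirement of at least two example goal-oriented dialogs preceding the
# first live user query. All content below is invented for illustration.

FEW_SHOT_HISTORY = [
    # Example dialog 1: load a module and call one of its functions.
    {"role": "user",        "content": "Which servers are down?"},
    {"role": "model",       "content": "import server_mgmt"},
    {"role": "model",       "content": "server_mgmt.list_down()"},
    {"role": "interpreter", "content": "['db-03']"},
    # Example dialog 2: request documentation, then issue the function call.
    {"role": "user",        "content": "Reserve a meeting room."},
    {"role": "model",       "content": "help(reservations)"},
    {"role": "interpreter", "content": "reservations.book(room): books a room"},
    {"role": "model",       "content": "reservations.book('A-12')"},
]

def seed_history():
    """Return a fresh copy so a live session cannot mutate the seed examples."""
    return list(FEW_SHOT_HISTORY)

def example_dialog_count(history):
    """Count seeded example dialogs (each begins with a user turn)."""
    return sum(1 for turn in history if turn["role"] == "user")
```

Under this reading, the live first user query would be appended after the seeded examples, which is the sense in which the history exists "prior to adding a representation of the first user query thereto."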
Regarding Claim 11: Lee further discloses the method of claim 1, wherein the trained natural language model has been trained using a plurality of representations of goal-oriented dialog using the interpreter (Lee: p[0072-0074] and p[0077] disclose training the model with feedback data, examples of user-to-system interactions and generated queries, all tied to performing backend actions, representing goal-oriented dialog).

Regarding Claim 12: Lee further discloses the method of claim 11, wherein the plurality of representations of goal-oriented dialog using the interpreter used to train the trained natural language model comprises a representation of at least one instance of each of: calling a function, receiving an exception in response to calling a function, loading a module, loading documentation about a module or a function, receiving a user query, and generating a user response (Lee: p[0095] and p[0074] disclose various tasks performed via management modules: function invocation, search queries, module capabilities, user responses and error handling).

Regarding Claim 13: Lee further discloses the method of claim 1, wherein the history includes, prior to adding a representation of the first user query thereto, a representation of at least one of a past user interaction, information about a user, or a list of modules accessible by the interpreter (Lee: p[0094] and p[0074] disclose the system history as storing user profiles, prior actions, and available system modules for contextual refinement).

Regarding Claim 14: Claim 14 has been analyzed with regard to claim 1 (see rejection above) and is rejected for the same reasons of anticipation used above.

Regarding Claim 15: Claim 15 has been analyzed with regard to claim 4 (see rejection above) and is rejected for the same reasons of anticipation used above.

Regarding Claim 16: Claim 16 has been analyzed with regard to claim 6 (see rejection above) and is rejected for the same reasons of anticipation used above.
Regarding Claim 17: Claim 17 has been analyzed with regard to claim 7 (see rejection above) and is rejected for the same reasons of anticipation used above.

Regarding Claim 18: Claim 18 has been analyzed with regard to claim 8 (see rejection above) and is rejected for the same reasons of anticipation used above.

Regarding Claim 19: Claim 19 has been analyzed with regard to claim 9 (see rejection above) and is rejected for the same reasons of anticipation used above.

Regarding Claim 20: Claim 20 has been analyzed with regard to claim 13 (see rejection above) and is rejected for the same reasons of anticipation used above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IAN SCOTT MCLEAN, whose telephone number is (703) 756-4599. The examiner can normally be reached Monday - Friday 8:00-5:00 EST, off every 2nd Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan, can be reached at (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IAN SCOTT MCLEAN/
Examiner, Art Unit 2654

/HAI PHAN/
Supervisory Patent Examiner, Art Unit 2654

Prosecution Timeline

Apr 14, 2023
Application Filed
May 23, 2025
Non-Final Rejection — §102
Aug 27, 2025
Applicant Interview (Telephonic)
Aug 27, 2025
Examiner Interview Summary
Aug 28, 2025
Response Filed
Oct 27, 2025
Final Rejection — §102
Jan 15, 2026
Request for Continued Examination
Jan 26, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602553
SPEECH TRANSLATION METHOD, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12494199
VOICE INTERACTION METHOD AND ELECTRONIC DEVICE
2y 5m to grant Granted Dec 09, 2025
Patent 12443805
Systems and Methods for Multilingual Data Processing and Arrangement on a Multilingual User Interface
2y 5m to grant Granted Oct 14, 2025
Patent 12437144
Content Recommendation Method and User Terminal
2y 5m to grant Granted Oct 07, 2025
Patent 12400644
DYNAMIC LANGUAGE MODEL UPDATES WITH BOOSTING
2y 5m to grant Granted Aug 26, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
43%
Grant Probability
74%
With Interview (+31.0%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 44 resolved cases by this examiner. Grant probability derived from career allow rate.
