Prosecution Insights
Last updated: April 19, 2026
Application No. 18/544,609

LANGUAGE MODEL SPECIALIZATION VIA PROMPT ANALYSIS

Status: Final Rejection — §103, §112
Filed: Dec 19, 2023
Examiner: YAMAMOTO, JOSEPH JEREMY
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: Cisco Technology Inc.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 72% — above average (31 granted / 43 resolved; +10.1% vs TC avg)
Interview Lift: +21.2% in resolved cases with interview
Avg Prosecution: 3y 0m (17 applications currently pending)
Total Applications: 60 across all art units

Statute-Specific Performance

§101: 23.1% (-16.9% vs TC avg)
§103: 47.6% (+7.6% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 19.7% (-20.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 43 resolved cases.

Office Action

Final Rejection — §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-20 are pending. Claims 1, 11, and 20 are independent. Claims 2-10 depend from claim 1. Claims 12-19 depend from claim 11. This application was published as U.S. 2025/0200298.

Response to Amendment

The Examiner thanks Applicant for the response filed on 2 Dec 2025, which has been entered and considered in this Office action. Claims 1-20 are pending.

Response to Arguments

Applicant's arguments filed 2 Dec 2025 have been fully considered but are not persuasive. Each of Applicant's arguments is addressed in turn.

With regard to 35 U.S.C. § 101: Applicant's arguments filed 2 Dec 2025 have been fully considered and are persuasive. As a result, the 35 U.S.C. § 101 rejection is withdrawn.

With regard to 35 U.S.C. § 103: Applicant's arguments filed 2 Dec 2025 have been fully considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claims 7-8 and 17-18 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which they depend, or for failing to include all the limitations of the claim upon which they depend.

The criteria for a dependent claim are that it contain a reference to a previous claim in the same application, specify a further limitation of the subject matter claimed, and necessarily include all the limitations of the previous claim. For example, a dependent claim must be rejected under § 112, fourth paragraph, if it omits an element from the claim upon which it depends or fails to add a limitation to the claim upon which it depends.

As described in the table below, claims 7 and 17 are broader than the independent claims from which they depend. Specifically, the independent claims refer to each of the prompt-response pairs, while the dependent claims refer to at least one of the prompt-response pairs, which is a broader limitation. Furthermore, the independent claims specify how the label is assigned to the respective prompt-response pair of the particular task, while the dependent claims require only that the particular task is included, which is a broader limitation. Claims 8 and 18 are rejected because they depend on claims 7 and 17, respectively.

Claim 1: "classifying, by the device, each of the prompt-response pairs as relating to one or more tasks by applying a machine-learning classifier to a respective prompt-response pair to assign one or more task labels to that respective prompt-response pair;"

Claim 7 (depends on claim 1): "wherein the device classifies at least one of the prompt-response pairs as relating to a plurality of tasks that include the particular task."
Claim 11: "classify each of the prompt-response pairs as relating to one or more tasks by applying a machine-learning classifier to a respective prompt-response pair to assign one or more task labels to that respective prompt-response pair;"

Claim 17 (depends on claim 11): "wherein the apparatus classifies at least one of the prompt-response pairs as relating to a plurality of tasks that include the particular task."

Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 7-13, 15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Carbune et al. (US 2025/0069617, hereinafter Carbune) in view of Foley et al.
(US 2025/0028992, hereinafter Foley).

With regard to claim 1, Carbune teaches:

A method comprising: obtaining, by a device, prompt-response pairs of prompts for input to a language model and their corresponding responses from the language model;

[Carbune Fig. 1, Par [0030] teaches that user device 110 obtains prompt (152) and response (162), which are prompt-response pairs from a language model (business LLM, item 160) used for input into language model (150)]

training, by the device, a specialized language model to perform a particular task using a training set comprising the prompt-response pairs assigned a particular task label corresponding to the particular task; and

[Carbune Fig. 1 teaches a specialized language model (150) that uses business LLMs (160) that "perform the action/task on behalf of the user 10" (Par [0038]), which are particular tasks. The specialized LLM (150) can be trained using "positive examples" (Par [0061]) and "negative examples" (Par [0063]) via a "training mode" (Par [0061]), where positive examples include a training set comprising the "respective prompts 152 created and issued to the business LLMs, the respective response content 162" (Par [0059]). These are the prompt-response pairs assigned to the particular task, which can be labeled "successful interactions" (Par [0061]), labeling the particular business LLM task as successful or not. Similarly, negative labels (Par [0060, 63]) are task labels for unsuccessful business LLM tasks.]

causing, by the device, the specialized language model to be deployed for use to perform the particular task.

[Carbune Fig. 1 teaches that "user 10 inputs, via a user device 110, a natural language query 116 to the assistant interface 150 specifying a particular action the user 10 wants the assistant interface 150 to perform on behalf of the user 10" (Par [0030]) to perform the particular task of the associated business LLM]

With regard to claim 1, Carbune fails to teach:

classifying, by the device, each of the prompt-response pairs as relating to one or more tasks by applying a machine-learning classifier to a respective prompt-response pair to assign one or more task labels to that respective prompt-response pair;

With regard to claim 1, Foley teaches:

classifying, by the device, each of the prompt-response pairs as relating to one or more tasks by applying a machine-learning classifier to a respective prompt-response pair to assign one or more task labels to that respective prompt-response pair;

[Foley Fig. 3 teaches classifying by a device (Par [0052, 59]), where module (330) uses "prompts and their corresponding responses" (Par [0063]), which are prompt-response pairs related to the task of training a model, and "Module 330 labels a generated prompt response with the model that generated the prompt response." (Par [0062]), where module (330) is a classifier model such as an LLM (Par [0063]). It would have been obvious to one of ordinary skill in the art at the time of Applicant's filing to combine the method of enabling multiple business LLMs from a user as taught by Carbune with the method of training models using prompt-response pairs as taught by Foley.
The motivation to combine the teachings of Carbune with Foley is that the "model is being trained to classify LLMs, … [and] image processing models" (Par [0063]), which increases the capabilities of the invention of Carbune to train new models based on the new prompting functions]

With regard to claim 2, Carbune in view of Foley teaches: All the limitations of claim 1, wherein the language model is a large language model trained to perform a plurality of tasks.

[Carbune Fig. 1 teaches that language model (150) is an assistant LLM trained to perform tasks using business LLMs (160a-n), where "each multiple different business LLMs 160 that span a diverse set of LLM capabilities" (Par [0036]), which means each LLM can perform a plurality of tasks]

With regard to claim 3, Carbune in view of Foley teaches: All the limitations of claim 1, wherein obtaining the prompt-response pairs comprises: intercepting, by the device, the prompts for input to the language model and their corresponding responses from the language model.

[Carbune Fig. 1 teaches that the device has a user interface (170) that interacts with the assistant LLM (150) to receive prompts for input to the language model (items 152 and 160) and corresponding responses (item 162)]

With regard to claim 5, Carbune in view of Foley teaches: All the limitations of claim 1, wherein classifying each of the prompt-response pairs as relating to one or more tasks comprises: identifying a relationship between two or more distinct tasks; and

[Foley teaches "fine-tuned model adapts a trained LLM to a specific task" (Par [0002]), where training a model involves "one or more training prompt responses, or combinations of training prompts and their corresponding responses" (Par [0063]), and the relationship between tasks is identified by the training prompts, responses, and combinations thereof]

merging the two or more distinct tasks into a single task label to be assigned to corresponding prompt-response pairs.

[Foley Fig. 3 teaches "application 300 concatenates a prompt and its prompt response(s) together, then uses an embedding model to generate an embedding" (Par [0063]), where concatenation is merging the tasks, and the embeddings assign the label to the corresponding pairs by "finding the closest embedding from the prompt responses in the training set and using the label of the training sentence as a prediction" (Par [0064])]

With regard to claim 7, Carbune in view of Foley teaches: All the limitations of claim 1, wherein the device classifies at least one of the prompt-response pairs as relating to a plurality of tasks that include the particular task.

[Foley Fig. 3 teaches classifying by a device (Par [0052, 59]), where module (330) uses "prompts and their corresponding responses" (Par [0063]), which are prompt-response pairs related to the task of training a model, and "Module 330 labels a generated prompt response with the model that generated the prompt response." (Par [0062]), where module (330) is a classifier model such as an LLM (Par [0063])]

With regard to claim 8, Carbune in view of Foley teaches: All the limitations of claim 7, further comprising: training, by the device, a plurality of language models that include the specialized language model to each perform one of the plurality of tasks.

[Carbune Fig. 1 teaches business LLMs (items 160a-n), where "each multiple different business LLMs 160 that span a diverse set of LLM capabilities" (Par [0036]), which means each specialized LLM can perform one of the plurality of tasks]

With regard to claim 9, Carbune in view of Foley teaches: All the limitations of claim 1, wherein the prompts for input to the language model are received via a user interface.

[Carbune Fig. 1, item 170]

With regard to claim 10, Carbune in view of Foley teaches: All the limitations of claim 9, wherein the language model is cloud-hosted.
[Carbune teaches a "business LLM 160 that is backed by a particular cloud service provider" (Par [0035])]

With regard to claim 11, Carbune teaches:

An apparatus, comprising: one or more network interfaces;

[Carbune Fig. 1 teaches that the "network 130 may be wired, wireless, or a combination thereof, and may include private networks and/or public networks, such as the Internet" (Par [0033])]

a processor coupled to the one or more network interfaces and configured to execute one or more processes; and

[Carbune Fig. 1 teaches a remote computing system which includes "data processing hardware", and Fig. 5 teaches a processor (510) for the computing device (Par [0072])]

a memory configured to store a process that is executable by the processor, the process when executed configured to:

[Carbune Fig. 5 teaches memory (520) executable by processor (510) (Par [0072])]

obtain prompt-response pairs of prompts for input to a language model and their corresponding responses from the language model;

[Carbune Fig. 1, Par [0030] teaches that user device 110 obtains prompt (152) and response (162) from a language model (business LLM, item 160)]

train a specialized language model to perform a particular task using a training set comprising the prompt-response pairs assigned a particular task label corresponding to the particular task; and

[Carbune Fig. 1 teaches a specialized language model (150) that uses business LLMs (160) that "perform the action/task on behalf of the user 10" (Par [0038]), which are particular tasks. The specialized LLM (150) can be trained using "positive examples" (Par [0061]) and "negative examples" (Par [0063]) via a "training mode" (Par [0061]), where positive examples include a training set comprising the "respective prompts 152 created and issued to the business LLMs, the respective response content 162" (Par [0059]). These are the prompt-response pairs assigned to the particular task, which can be labeled "successful interactions" (Par [0061]), labeling the particular business LLM task as successful or not. Similarly, negative labels (Par [0060, 63]) are task labels for unsuccessful business LLM tasks.]

cause the specialized language model to be deployed for use to perform the particular task.

[Carbune Fig. 1 teaches "presentation content 180 based on the response content 162 returned provided by each business LLM 160 that performed a corresponding portion of the action on behalf of the user 10"]

With regard to claim 11, Carbune fails to teach:

classify each of the prompt-response pairs as relating to one or more tasks by applying a machine-learning classifier to a respective prompt-response pair to assign one or more task labels to that respective prompt-response pair;

With regard to claim 11, Foley teaches:

classify each of the prompt-response pairs as relating to one or more tasks by applying a machine-learning classifier to a respective prompt-response pair to assign one or more task labels to that respective prompt-response pair;

[Foley Fig. 3 teaches classifying by a device (Par [0052, 59]), where module (330) uses "prompts and their corresponding responses" (Par [0063]), which are prompt-response pairs related to the task of training a model, and "Module 330 labels a generated prompt response with the model that generated the prompt response." (Par [0062]), where module (330) is a classifier model such as an LLM (Par [0063]). It would have been obvious to one of ordinary skill in the art at the time of Applicant's filing to combine the method of enabling multiple business LLMs from a user as taught by Carbune with the method of training models using prompt-response pairs as taught by Foley. The motivation to combine the teachings of Carbune with Foley is that the "model is being trained to classify LLMs, … [and] image processing models" (Par [0063]), which increases the capabilities of the invention of Carbune to train new models based on the new prompting functions]

Claim 12 is a system claim with limitations corresponding to the limitations of method claim 2 and is rejected under similar rationale. Claim 13 is a system claim with limitations corresponding to the limitations of method claim 3 and is rejected under similar rationale. Claim 15 is a system claim with limitations corresponding to the limitations of method claim 5 and is rejected under similar rationale. Claim 17 is a system claim with limitations corresponding to the limitations of method claim 7 and is rejected under similar rationale. Claim 18 is a system claim with limitations corresponding to the limitations of method claim 8 and is rejected under similar rationale. Claim 19 is a system claim with limitations corresponding to the limitations of method claim 9 and is rejected under similar rationale.

With regard to claim 20, Carbune teaches:

A tangible, non-transitory, computer-readable medium storing program instructions that cause a device to execute a process comprising:

[Carbune Fig. 5, Par [0073] teaches that the "memory 520 may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s).
The non-transitory memory 520 may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device 500"]

obtaining, by the device, prompt-response pairs of prompts for input to a language model and their corresponding responses from the language model;

[Carbune Fig. 1, Par [0030] teaches that user device 110 obtains prompt (152) and response (162) from a language model (business LLM, item 160)]

training, by the device, a specialized language model to perform a particular task using a training set comprising the prompt-response pairs assigned a particular task label corresponding to the particular task; and

[Carbune Fig. 1 teaches a specialized language model (150) that uses business LLMs (160) that "perform the action/task on behalf of the user 10" (Par [0038]), which are particular tasks. The specialized LLM (150) can be trained using "positive examples" (Par [0061]) and "negative examples" (Par [0063]) via a "training mode" (Par [0061]), where positive examples include a training set comprising the "respective prompts 152 created and issued to the business LLMs, the respective response content 162" (Par [0059]). These are the prompt-response pairs assigned to the particular task, which can be labeled "successful interactions" (Par [0061]), labeling the particular business LLM task as successful or not. Similarly, negative labels (Par [0060, 63]) are task labels for unsuccessful business LLM tasks.]

causing, by the device, the specialized language model to be deployed for use to perform the particular task.

[Carbune Fig. 1 teaches "presentation content 180 based on the response content 162 returned provided by each business LLM 160 that performed a corresponding portion of the action on behalf of the user 10"]

With regard to claim 20, Carbune fails to teach:

classifying, by the device, each of the prompt-response pairs as relating to one or more tasks by applying a machine-learning classifier to a respective prompt-response pair to assign one or more task labels to that respective prompt-response pair;

With regard to claim 20, Foley teaches:

classifying, by the device, each of the prompt-response pairs as relating to one or more tasks by applying a machine-learning classifier to a respective prompt-response pair to assign one or more task labels to that respective prompt-response pair;

[Foley Fig. 3 teaches classifying by a device (Par [0052, 59]), where module (330) uses "prompts and their corresponding responses" (Par [0063]), which are prompt-response pairs related to the task of training a model, and "Module 330 labels a generated prompt response with the model that generated the prompt response." (Par [0062]), where module (330) is a classifier model such as an LLM (Par [0063]). It would have been obvious to one of ordinary skill in the art at the time of Applicant's filing to combine the method of enabling multiple business LLMs from a user as taught by Carbune with the method of training models using prompt-response pairs as taught by Foley. The motivation to combine the teachings of Carbune with Foley is that the "model is being trained to classify LLMs, … [and] image processing models" (Par [0063]), which increases the capabilities of the invention of Carbune to train new models based on the new prompting functions]

Claims 4, 6, 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Carbune et al. (US 2025/0069617) in view of Foley et al. (US 2025/0028992), in further view of Mimassi (US 2022/0383859, hereinafter Mimassi).
With regard to claim 4, Carbune in view of Foley teaches: All the limitations of claim 1.

With regard to claim 4, Carbune in view of Foley fails to teach: wherein the specialized language model is deployed to an edge node for execution.

With regard to claim 4, Mimassi teaches: wherein the specialized language model is deployed to an edge node for execution.

[Mimassi Fig. 2, Par [0049] teaches that "edge devices may operate a default local version of the RNN 205 based upon the received model parameters." It would have been obvious to one of ordinary skill in the art at the time of Applicant's filing to combine the method of enabling multiple business LLMs from a user as taught by Carbune and Foley with execution of a local version of the LLM on an edge device as taught by Mimassi. The motivation to combine the teachings of Carbune and Foley with the teachings of Mimassi is that the "system 100 can leverage the computing power of edge devices to train the local models operating on the edge devices, and update the central language models 204 periodically" (Mimassi Par [0049]), which increases the capabilities of the invention of Carbune and Foley to adapt to new training based on the new prompting functions]

With regard to claim 6, Carbune in view of Foley teaches: All the limitations of claim 1.

With regard to claim 6, Carbune in view of Foley fails to teach: wherein the particular task comprises generating a configuration or script for use by a networking device.

With regard to claim 6, Mimassi teaches: wherein the particular task comprises generating a configuration or script for use by a networking device.

[Mimassi Fig. 11 teaches that computing device (10) uses program instructions, "for example, scripts written in Python, Perl, Ruby, Groovy, or any other scripting language" (Par [0089]), for use by a networking device. It would have been obvious to one of ordinary skill in the art at the time of Applicant's filing to combine the method of enabling multiple business LLMs from a user as taught by Carbune and Foley with execution of a local version of the LLM on an edge device using scripts as taught by Mimassi. The motivation to combine the teachings of Carbune and Foley with the teachings of Mimassi is that the "system 100 can leverage the computing power of edge devices to train the local models operating on the edge devices, and update the central language models 204 periodically" (Mimassi Par [0049]), which increases the capabilities of the invention of Carbune in view of Foley to adapt to new training based on the new prompting functions]

Claim 14 is a system claim with limitations corresponding to the limitations of method claim 4 and is rejected under similar rationale. Claim 16 is a system claim with limitations corresponding to the limitations of method claim 6 and is rejected under similar rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Joseph J Yamamoto, whose telephone number is (571) 272-4020. The examiner can normally be reached M-F 1000-1800 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JOSEPH J. YAMAMOTO
Examiner, Art Unit 2656

/BHAVESH M MEHTA/
Supervisory Patent Examiner, Art Unit 2656
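For orientation, the claim 1 method at the center of the §103 rejection, together with the nearest-neighbor labeling scheme the rejection attributes to Foley (concatenate a prompt and its response, embed, and take the label of the closest training example), can be sketched as follows. This is an illustrative reconstruction only, not code from the application or either reference; `embed` is a toy word-count stand-in for a real embedding model, and the example labels and sentences are hypothetical.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy word-count 'embedding' -- a hypothetical stand-in for the
    embedding model Foley is cited for, just to make the sketch runnable."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify_pair(prompt, response, labeled_examples):
    """Assign a task label to one prompt-response pair by nearest neighbor:
    concatenate prompt and response, embed, and return the label of the
    closest labeled example (the scheme described in the rejection)."""
    query = embed(prompt + " " + response)
    return max(labeled_examples, key=lambda ex: cosine(query, embed(ex[0])))[1]

def build_training_set(pairs, labeled_examples, particular_task):
    """Claim 1, step by step: classify each prompt-response pair, then keep
    only the pairs assigned the particular task label as the training set
    for the specialized language model."""
    return [(p, r) for p, r in pairs
            if classify_pair(p, r, labeled_examples) == particular_task]
```

Under this sketch, the filtered pairs would then feed whatever fine-tuning routine trains the specialized model; that step is outside the scope of the classification limitation at issue.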

Prosecution Timeline

Dec 19, 2023 — Application Filed
Aug 23, 2025 — Non-Final Rejection (§103, §112)
Nov 17, 2025 — Interview Requested
Nov 25, 2025 — Applicant Interview (Telephonic)
Nov 25, 2025 — Examiner Interview Summary
Dec 01, 2025 — Response Filed
Jan 27, 2026 — Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602546 — KEY POINTS EXTRACTION FOR UNIFORM RESOURCE LOCATORS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602377 — SYSTEMS AND METHODS FOR QUESTION ANSWERING WITH DIVERSE KNOWLEDGE SOURCES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592220 — DEEPFAKE DETECTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585875 — DEVICE AND METHOD FOR PROCESSING TEMPORAL EXPRESSIONS FROM UNSTRUCTURED TEXTS FOR FILLING A KNOWLEDGE DATABASE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12566888 — MULTI-LINGUAL NATURAL LANGUAGE GENERATION (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 93% (+21.2%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 43 resolved cases by this examiner. Grant probability derived from career allow rate.
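The interview-adjusted figure appears to be a simple additive combination of the career allow rate and the interview lift (72% + 21.2% ≈ 93%). A minimal sketch, assuming that additive model; how the dashboard actually combines these numbers is not stated:

```python
def interview_adjusted(base_rate_pct, interview_lift_pct, cap=100.0):
    """Assumed additive model: base grant probability plus interview lift,
    capped at 100%. This is an illustration of the displayed arithmetic,
    not the dashboard's documented formula."""
    return min(base_rate_pct + interview_lift_pct, cap)
```

With the displayed inputs, `interview_adjusted(72.0, 21.2)` gives 93.2, consistent with the 93% shown.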
